You are probably one of the loudest Waymo cheerleaders on TMC. I sometimes wonder why you don't go to a Waymo forum.
The Ignore feature is your friend. Click on their name on the left of their post (on desktop, anyway), then click Ignore. You can also ignore all the people who feel obliged to rant over each public statement by Elon (who I also have on Ignore).
 
I can see that Dr. Know It All isn't a reliable source, but James Douma is very impressive as a speaker. Can you think of any examples of stuff that he's gotten wrong? Are we talking "horses have five legs" wrong or "a horse's canter and a lope are not the same speed" wrong?
JD has adopted the Elon Musk playbook: pseudo-scientific nonsense such as “humans can drive with only eyes”. He has zero experience with ML in general and AVs in particular, and seems to know nothing about the field outside of Tesla land.
 
There is zero chance this is end-to-end (i.e. a full rewrite) and/or mind-blowing.
The rewrite won't have taken much time, as there is nothing to write...
Elon will have his mind blown partly because, as always, it will have been fitted to his routes (to some degree, versus roads in the middle of nowhere). Also, even if it didn't work as well as 11.4.6, it is still mind-blowing that an early version has performed this well.
 
  • Like
Reactions: diplomat33 and goRt
End-to-end autonomous driving is a completely different architecture from Tesla's current modular architecture: instead of having tests and simulation on the individual components, you need to move those regression tests to the output of the monolithic black box and do all your testing there. That typically involves rewriting a lot of code (or just reducing test coverage and safety by an order of magnitude).

Not that end-to-end is ready for deployment anytime soon, other than in specific situations. Right now it's an area of interest for researchers and not ready for wide-scale deployment. There was a panel discussion on this recently at a conference (ICRA'23 or CVPR'23, can't remember which). I doubt it's a viable architecture in the end because of the black-box aspect of it. It's harder to guarantee safety between versions if all you have is a big blob of statistical "neurons". As in traditional software, architectural tradeoffs exist in ML systems too.
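
To make that concrete, here is a rough sketch of where the regression tests attach in each architecture. Every class and function name below is made up for illustration; this is not Tesla's code, just the shape of the testing problem:

Code:
# Illustrative sketch only: component-level tests in a modular stack
# versus output-level tests against a monolithic end-to-end model.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    distance_m: float

class ModularStack:
    """Perception -> planning as separate, individually testable stages."""
    def perceive(self, frame) -> list[Detection]:
        ...  # vision model that outputs labeled objects

    def plan(self, detections: list[Detection]) -> dict:
        ...  # planner that turns objects into a trajectory

class EndToEndStack:
    """One network: raw frames in, controls out.
    There are no intermediate outputs to assert on."""
    def drive(self, frame) -> dict:
        ...

def test_perception_component(stack: ModularStack, frame, ground_truth):
    # Component-level regression test against labeled ground truth.
    assert stack.perceive(frame) == ground_truth

def test_end_to_end_behaviour(stack: EndToEndStack, scenario_frames, scorer):
    # With the monolith you can only score the final driving output,
    # scenario by scenario, against a behavioural threshold.
    controls = [stack.drive(f) for f in scenario_frames]
    assert scorer(controls) >= 0.99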

Regardless of this, Elon has branded himself as "the boy who cried FSD" and I don't believe a word he says anymore on that subject. A monkey throwing darts would have a higher statistical chance of being right. It's not bad luck or optimism if you're off by a decade every time you say "I'd be surprised or shocked if it's not ready in 6-12 months".
 
Last edited:
V12's main defining feature, imo, is not a driving feature but a new training feature. It can finally train against itself, AI vs. AI, without human input. No more labeling, and it's the path to being 10x safer than a human. This has been the holy grail the AI self-driving community has been chasing, but it was difficult to translate image space into vector space. Tesla's occupancy network was just a building-block phase to get it done.

This is the AlphaGo Zero of FSD.

"AlphaGo Zero, a version created without using data from human games, and stronger than any previous version.[1] By playing games against itself, AlphaGo Zero surpassed the strength of AlphaGo Lee in three days by winning 100 games to 0, reached the level of AlphaGo Master in 21 days, and exceeded all the old versions in 40 days.[2]"

 
  • Like
Reactions: Captkerosene
Man, I must be dreaming when my car does these intervention-free drives. Will report back if it was a dream or spacecoin is full of (moderator edit).
 
Last edited by a moderator:

It is entirely possible that you have intervention-free drives on your routes but someone else does not on theirs. There are many variables that can affect whether a drive is zero-intervention or not. That's the challenge of doing non-geofenced autonomous driving. There are literally millions of different edge cases to solve, and the US has millions of miles of roads. Roads are different, lane markings can be different, intersections can be different from place to place. Traffic can be different. Environmental conditions can be different too. There is a lot of variation that FSD beta has to be able to handle. Even the same route can produce different results (sun in a different position that adds glare, busier traffic, etc.).

And we know Tesla overfits their training models. So if you happen to live in an area where Tesla has a lot of data, FSD beta will work better for you. If you happen to live in an area where Tesla does not have as much data, FSD beta will likely perform worse. That's why you cannot extrapolate FSD beta performance from a small sample. Just because you have zero-intervention drives does not mean it will be zero-intervention elsewhere.

I use FSD beta every day and it still struggles in many areas. FSD beta handles highways pretty well; I can usually get a zero-intervention drive on highways, but I almost never get one on city streets where I live.
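
To put a rough number on the small-sample point: by the statistical "rule of three", seeing zero interventions in n drives still only bounds the true per-drive intervention rate at about 3/n with 95% confidence. The drive counts below are purely illustrative:

Code:
# Back-of-envelope illustration (my numbers): why a handful of clean
# drives tells you little about the true intervention rate.

def upper_bound_intervention_rate(clean_drives: int) -> float:
    """Approximate 95% upper confidence bound on the per-drive
    intervention probability after clean_drives drives with none."""
    return 3.0 / clean_drives

for n in (10, 50, 1_000, 100_000):
    print(f"{n:>7} clean drives -> rate could still be "
          f"~{upper_bound_intervention_rate(n):.3%} per drive")
# 10 clean drives still allows a ~30% per-drive intervention rate; it takes
# on the order of 100,000 clean drives before the bound gets small.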
 
Last edited by a moderator:
Well, that's a great summary of what a work-in-progress FSD beta is.

Calling vision-based FSD pseudo-science is a bit more... definitive, which leaves no room for nuance. But I appreciate the like you gave to that statement.
 
  • Like
Reactions: Artful Dodger
I didn't call vision-only pseudo-science. But saying stuff like "humans can drive with two eyes, so therefore computers will be able to" is. A computer is not a brain, and ML is not capable of reasoning or intelligence at this point in time. Perhaps some day vision-only will pan out, but likely not in the coming 3-5 years.
The difference between what Tesla has currently got and a robotaxi is reliability. It's not enough to have 90% disengagement-free drives. You need something like 99.999999% of drives to be without human intervention or failures, in most weather and regardless of time of day, for a system to be viable. Based on the current rate of progress, it doesn't seem likely that this ever happens on current hardware.

I don't understand why you and so many people seem to have such a hard time understanding the difference. Elon has used the terms "march of nines" and "stacked logarithmic curves". This means that each "9" is exponentially harder/slower than the previous one. Progress should be fast now and slow later. It's slow now. That's not a promising sign.
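
The arithmetic behind that, with illustrative numbers only:

Code:
# Illustrative arithmetic only: what each extra "nine" of drive-level
# reliability means. 99.999999% (the eight nines above) allows one failed
# drive per hundred million.

drives = 100_000_000  # a hundred million drives, purely for illustration
for nines in range(1, 9):
    reliability = 1 - 10 ** (-nines)
    failures_allowed = drives * 10 ** (-nines)
    print(f"{reliability:.8f} reliability -> {failures_allowed:>12,.0f} "
          f"failed drives allowed per {drives:,}")
# Each added nine cuts the allowed failures by 10x, and the fleet data needed
# just to demonstrate that level grows roughly 10x with it. That's why the
# curve looks logarithmic and the last nines take the longest.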

Do you still believe that I am full of (moderator edit), or did you learn something?
 
Last edited by a moderator:
  • Like
Reactions: diplomat33

Two things:

First, the "two eyes" concept is a bad analogy for multiple reasons. HW3 has 8 eyes, some of which are wide-angle, and all of which have better night-vision than me. FSD Beta can pick out a pedestrian wearing dark clothes walking along the side of a poorly lit road way before I can; I'm always incredulous when their icon appears in the visualization, and always surprised when I can finally see the pedestrian a second or two later.

Second, we have no idea what the disengagement rate of FSD Beta would be if it were autonomous. It's like asking what the disengagement rate of Waymo would be if passengers were able to hit a button and disengage it. They're not allowed to do that, just as FSD Beta cars are not allowed to run without a supervising driver, so any guess at those hypothetical disengagement rates is just a guess.

The vast majority of my FSD Beta disengagements are for choosing a wrong lane. If an autonomous vehicle like Waymo or Cruise chose a wrong lane, the passenger wouldn't be able to disengage it. They would just have to sit there and let the AV re-route. And in many cases, I could simply not disengage FSD Beta, let it make the wrong turn, and let it re-route as well. But since I'm in the driver's seat and typically have places to be, I don't let it do that.
 
Last edited:
  • Like
Reactions: GSP and spacecoin
Computers and humans have different failure modes and strengths. A computer may think that a stop sign on a bus ad is a real stop sign. Or that a partially occluded stop sign isn't a stop sign. These are two examples of failures that humans would never make. Humans may be distracted/inattentive or have poor eyesight. Computers without sensor cleaning may also be degraded for extended periods of time.

The conclusion that replacing humans with computers on the road will decrease the number of accidents is not proven to be true. A human is at fault for causing an accident about two times per lifetime on average. That is a reliability level that computers are not yet anywhere near demonstrating in an "L5" ODD.
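
Rough arithmetic to put that in per-mile terms. The lifetime mileage below is my own assumption; only the two-accidents-per-lifetime figure comes from the paragraph above:

Code:
# Back-of-envelope under assumed figures: turning "two at-fault accidents
# per lifetime" into a per-mile rate a replacement system would have to beat.
LIFETIME_MILES = 700_000               # assumed: ~45 driving years x ~15,000 miles/year
AT_FAULT_ACCIDENTS_PER_LIFETIME = 2    # figure quoted above

miles_per_at_fault_accident = LIFETIME_MILES / AT_FAULT_ACCIDENTS_PER_LIFETIME
print(f"~{miles_per_at_fault_accident:,.0f} miles per at-fault accident")
# ~350,000 miles between at-fault accidents. A system claiming to be safer
# than that has to demonstrate a longer interval than this, with evidence,
# across its whole ODD.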

What many researchers believe is that in order to compensate for the lack of intelligence and reasoning you need to add superhuman sensing, in vision and otherwise. Remember that humans have more senses than just their eyes, too.

At the end of the day, there are ZERO vision-only cars among the 10,000+ robotaxis deployed in traffic right now.

I think it's clear that even if FSDb's disengagement (or failure) rate were 10x as good, or whatever you claim, it is nowhere near the reliability level needed to remove the human fallback.

The community-run FSD beta tracker tracks "critical disengagements" separately, and that number is way too low as well, by a factor of at least 1000x.

Also, autonomy is more than reliability. It's about being able to recognize that you need to hand over 5-10 seconds before you reach your limit. It's about defining ODDs: you cannot jump from nothing to handling anything; no engineering works like that. And this is safety-critical stuff. Passive cameras are clearly not safe enough at night when oncoming cars are blinding them.
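
As a quick aside, what that 5-10 second handover budget means in road distance (the 70 mph figure is my assumption):

Code:
# Quick arithmetic aside (70 mph is an assumed highway speed):
speed_mph = 70
speed_m_per_s = speed_mph * 1609.344 / 3600   # about 31.3 m/s
for seconds in (5, 10):
    print(f"{seconds} s of warning at {speed_mph} mph ~ "
          f"{speed_m_per_s * seconds:.0f} m of road ahead")
# Roughly 156 m and 313 m: the system has to see its own limit coming that
# far ahead for a handover (or a safe fallback maneuver) to be possible.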

Edit:
I'm not claiming that FSDb will never be useful. It clearly is to some, and will be to a growing number of users the better it gets. Will it get to a point where it's worth $15k? Likely not for most people. If it gets to eyes-off reliability on the highway with vision only, I'd be very, very surprised. If it gets to "safer than a human" in any ODD, I'd be equally surprised.
 
Last edited:
  • Like
Reactions: diplomat33
I learned that you changed what you said and then started moving the goalposts.

It's a lot of text just to say... vision-only FSD may work, just not right now. The opposite of what you said earlier: "pseudo-scientific nonsense such as 'humans can drive with only eyes'".

Anyways, if it takes 5 years then it takes 5 years. It really doesn't matter how long it takes, because lidar/geofence-based robotaxis can never make a profit. So either it takes 5 years or it doesn't happen at all. Until Waymo/Cruise figure out how to not lose over $2.5B a year, they will always be on the chopping block at Google and GM if the economy goes south. Currently they are science projects at best.


From the year I've had FSDb, I have a different opinion, because 10.3 or whatever I started on was terrible. Intervention-free drives were not common and unprotected turns were not even functioning. The car used to ping-pong around when trying to merge into a lane with cars close together, which is now a problem of the past since v11. Today, besides crappy map data that screws around with lane selection, it pretty much drives you from point A to B. If I didn't care, I would just let it reroute and it would still get to the destination faster than a Waymo, and that's accounting for the fact that it misses a turn. The only area that still needs a lot of work is construction zones. So I don't see "progress is slow now".
 
Last edited:
Haha very true.

However, none of us were beta testers before v10.3. Reports from earlier beta testers, if you watch their videos, showed lots of comfort and safety issues, like the car weaving in and out while driving next to a bunch of parked cars. There were some major improvements only they understood, and we customers didn't experience the truly awful FSDb from the early days. For me, V11 was a major step change from V10, and someone who got V11 as their first version would never know.
 
  • Like
Reactions: AZRI11
Elon has driven them all from Day 1 and is still mind-blown every time!

There have definitely been major improvements from the early days of testing, when the system struggled to execute a simple turn around a corner smoothly.
 
  • Like
Reactions: GSP
He was driving in an overfitted Bay Area, though, and every early beta tester who tried out FSDb in the Bay Area said it does indeed work better there.

I do remember that on the earnings call Elon said there was a "non-0% chance it can make a full intervention-free drive". That didn't sound very mind-blowing.

Also, we don't exactly know what part of V12 is mind-blowing. If it's an AI-vs-AI-trained FSDb, without any labeling, that has matched the ability of V11, then that's pretty mind-blowing.