
FSD Level 5

Hey guys,

I was curious if anyone has any information regarding full autonomy, specifically level 5 FSD.

Obviously it will take some time to hit this accolade, but when it does happen, will Tesla make their programs open source?

I'm envisioning something where an address gets entered, the vehicle drives itself there and returns, basically like a robotaxi, but for companies other than Tesla.

Take Uber, for example: a passenger enters the pickup/dropoff locations in the Uber app, the app sends those locations to the Tesla, and the Tesla drives itself there, picks up, and drops off.
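Nothing like this exists today, but just to make the idea concrete, here's a toy sketch of what a third-party dispatch hand-off could look like. Every name in it (RideRequest, dispatch_ride, the field names) is hypothetical and invented for illustration:

```python
# Purely hypothetical sketch of a third-party robotaxi dispatch hand-off.
# No such Tesla interface exists; all names and fields are made up.
from dataclasses import dataclass

@dataclass
class RideRequest:
    vehicle_id: str                # which fleet vehicle takes the job
    pickup: tuple[float, float]    # (latitude, longitude) of the rider
    dropoff: tuple[float, float]   # (latitude, longitude) of the destination

def dispatch_ride(request: RideRequest) -> None:
    """Pretend handler: the ride-hailing backend hands the trip to the car."""
    print(f"Sending {request.vehicle_id} to pick up at {request.pickup} "
          f"and drop off at {request.dropoff}")

# The Uber-style app would build and send something like this.
dispatch_ride(RideRequest(
    vehicle_id="fleet-car-042",
    pickup=(37.7749, -122.4194),
    dropoff=(37.8044, -122.2712),
))
```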

I would love to hear everyone’s thoughts on this!
 
  • Funny
Reactions: dbldwn02
Some (like Omar) seem to already think this ChatGPT-moment has happened. I do not. So true L5 will be some years after the "FSD-ChatGPT-moment".
Omar is a shill, by definition. He's marketing for Tesla; he says things and curates his videos to promote FSD far beyond its actual ability. He had videos on 10.3 where he hid that he was touching the wheel and passed them off as zero-intervention drives. He games "races" between FSD and Waymo/Cruise to favor FSD. He's been claiming it's almost finished for over 2 years, and anyone who actually has FSD Beta knows he's full of *sugar*.
 
Upvote 0
Omar is a shill, by definition. He's marketing for Tesla; he says things and curates his videos to promote FSD far beyond its actual ability. He had videos on 10.3 where he hid that he was touching the wheel and passed them off as zero-intervention drives. He games "races" between FSD and Waymo/Cruise to favor FSD. He's been claiming it's almost finished for over 2 years, and anyone who actually has FSD Beta knows he's full of *sugar*.
I know, that's why I brought him up.

I should've said: Omar says he believes the FSD-ChatGPT-moment has already happened.
His inner beliefs, I of course do not know.
 
Upvote 0
I know, that's why I brought him up.

I should've said: Omar says he believes the FSD-ChatGPT-moment has already happened.
His inner beliefs, I of course do not know.
His presentation of FSD just bothers me and sometimes I just need to rant about how dishonest it is.

He's almost worse than Elon in how he portrays FSD. There are so many people that watch his videos and then buy/subscribe to FSD and are severely disappointed. Of course, Elon enables him because that's their form of marketing, but it's misleading at best...and outright lying at worst.

I've seen other YT testers call him out with, "If it's already there, why is Tesla going through an entire re-write", which has been the case 2 or 3 times since Omar has said it's basically delivered.

Like all things, there are extremes. He's on one extreme and Dan O'Dowd is on the other...the truth, as usual, is right in the middle.
 
  • Like
Reactions: jeewee3000
Upvote 0
Therefore I see the whole "dumb people will cover up the cameras" argument as only a minor occurrence, like vandalism. (People thought charging your car in a public place would immediately result in vandalism to the charger/cable/car, but those fears have not panned out. There are cases of vandalism, but they are about as rare as vandalism on ICE cars.)

Well, we saw idiots put cones on Cruise (and Waymo?) vehicles to make them stop as a protest against robotaxis because of the "stalls" that have been happening in SF. If Tesla did deploy FSD Beta as a driverless robotaxi and it had "stalls" or caused issues, don't be surprised if idiots decide to block the Tesla cameras to make them stop too.

TL;DR: this is the least of our worries. The main question is: can vision + machine learning pull it off? George Hotz seems to think so, but said "it could be in 1 year, 10 years or 30 years". (See the latest George Hotz interview by Lex Fridman, a worthwhile listen. He is very confident that Tesla FSD is on the right path and that they know exactly how to achieve FSD (which architecture), but that they now just have to build it and train it.)

Vision+ML will likely solve autonomous driving eventually. But I think we should be skeptical of timelines, especially when estimates vary so widely, from 1 year to 30 years. That's pretty big uncertainty. I would also be skeptical of anyone saying that they know how to solve FSD and that it's just a matter of doing the training.

I think Elon and Hotz are counting their chickens before they hatch when they say that we know how to solve FSD, it is just a matter of training now. We can't know which architecture will ultimately prevail until we actually achieve L5. The fact is that Elon was convinced he had the right architecture before and yet Tesla FSD has been through several rewrites. Now, Elon is convinced end-to-end is the answer. Maybe it is. But there is a big gap between thinking you found the right solution and turning that solution into a viable commercial product. And there are still open questions about what architecture will ultimately prevail. It is possible that we will discover some new challenge that will require further architectural changes.
 
Upvote 0
Yup.

FSD will see a "ChatGPT-moment" where it suddenly works much better than people ever thought it would/could, possibly within the next few years.

However, like with ChatGPT (which brought GPT into the media and put GPT-1 and GPT-2 to shame), people will realise soon after that - even though the newest iteration is a great step forward - a lot of work lies ahead for true autonomy.

Some (like Omar) seem to already think this ChatGPT-moment has happened. I do not. So true L5 will be some years after the "FSD-ChatGPT-moment".

Let's say 2 years for the FSD ChatGPT moment and then 3 years after that for L5; I'm leaning towards 2028 for Tesla Vision L5 (most likely HW5 or 6).

As an investor I just hope there will be more/grander profits from FSD before that time.

People are seeing the impressive demos of ChatGPT, so it is understandable that they are excited about what it could do for FSD. After all, many did not believe that AI would ever be able to drive a car, so seeing AI learn end-to-end how to drive is incredible. It's why Elon is so bullish that it is the right approach. But Mobileye's CTO, Shai Shalev-Shwartz, pointed out recently that while GPT has produced some impressive demos, it still makes some big mistakes. So he is skeptical of a ChatGPT approach to FSD right now, because you cannot have FSD that randomly makes big mistakes. So yes, I think we will see FSD have a ChatGPT moment (I don't think it has happened yet), but the real challenge will be the AI driving consistently and reliably. That will take time.
 
Last edited:
  • Like
Reactions: jeewee3000
Upvote 0
We can't know which architecture will ultimately prevail until we actually achieve L5.
Not with 100% certainty, no. I agree with you that much.

However, Elon and George are not idiots. They are more knowledgeable than most regarding the capabilities of ML.

The biggest "issues" with Tesla Vision are two fold:
1) can a vision only system detect distance to objects accurately enough (over time, so I'm including estimating their velocity in this question)?
TL;DR: perception accuracy

2) can an ML system "reason" well enough to, for example, determine whether the car can drive through/over certain objects (smoke, debris, a plastic bag vs a concrete block), predict future behaviour of other road users based on their past and current behaviour and their characteristics, etc.?
TL;DR: having a world model (and "understanding" it)
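Just to make question 1 concrete, here's a minimal sketch that assumes, purely for illustration, that a vision network already produces a noisy per-frame range to a tracked object; the numbers and the smoothing constant are made up:

```python
# Toy illustration of question 1: turning noisy per-frame range estimates
# from a vision system into a smoothed distance and closing-speed estimate.
# All measurements and constants here are invented for illustration.

FRAME_DT = 1 / 36      # assumed camera frame interval (seconds)
ALPHA = 0.3            # exponential-smoothing factor (made up)

def track_object(range_measurements_m):
    """Yield (smoothed_range_m, est_velocity_mps) for each new frame."""
    smoothed = range_measurements_m[0]
    previous = smoothed
    for raw in range_measurements_m[1:]:
        smoothed = ALPHA * raw + (1 - ALPHA) * smoothed
        velocity = (smoothed - previous) / FRAME_DT   # negative = closing
        previous = smoothed
        yield smoothed, velocity

# Noisy ranges (metres) to a lead vehicle that is slowly closing on us.
noisy_ranges = [40.0, 39.6, 39.9, 39.1, 38.8, 39.0, 38.3, 38.1]
for dist, vel in track_object(noisy_ranges):
    print(f"range ~ {dist:5.1f} m, closing speed ~ {-vel:4.1f} m/s")
```

The filtering is the trivial part; the whole debate is about whether the raw ranges coming out of a camera-only network are accurate and reliable enough in the first place.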

Most discussion in the TMC FSD threads is about one of these two issues. I think most agree that, if the answer to both questions is positive, then planning/controls is a fixable problem, given enough time/training.

The thing is that Elon and George truly believe perception won't be an issue, and neither will the world model be.

Most of us are not (yet?) on the same page. We want to see it happen first before accepting it/"believing" it.

I'm not throwing stones, deep down I'm also skeptical and think Elon/George are counting their chickens before they hatch.
On the other hand I know how capable these guys are and don't want to dismiss their theories entirely.

Therefore I'm guessing vision only is perfectly possible, but nobody knows the timeframe. In that sense Hotz is on the same page as us: could be 30 years. Elon is always saying "end of this year". Impossible to predict of course.
 
Upvote 0
Not with 100% certainty, no. I agree with you that much.

However, Elon and George are not idiots. They are more knowledgeable than most regarding the capabilities of ML.
Stop buying into the cult of personality. If you want to know what people who are actually knowledgeable about ML think, read research papers. Watch CV presentations.
 
Upvote 0
People are seeing the impressive demos of ChatGPT, so it is understandable that they are excited about what it could do for FSD. After all, many did not believe that AI would ever be able to drive a car, so seeing AI learn end-to-end how to drive is incredible. It's why Elon is so bullish that it is the right approach. But Mobileye's CTO, Shai Shalev-Shwartz, pointed out recently that while GPT has produced some impressive demos, it still makes some big mistakes. So he is skeptical of a ChatGPT approach to FSD right now, because you cannot have FSD that randomly makes big mistakes. So yes, I think we will see FSD have a ChatGPT moment (I don't think it has happened yet), but the real challenge will be the AI driving consistently and reliably. That will take time.
Cost of error is indeed the _general_ issue with AI. With AVs people often focus on safety, but stopping confused in the middle of an intersection is as much a high cost epic failure as a crash.
 
Upvote 0
We've never seen Tesla with this type of processing power, so I'm wondering if Elon's latest prediction of Lvl 4/5 FSD later this year will actually pan out.
I can guarantee you it won't.

Elon has been predicting the same thing for 6 or 7 years running. He has literally used the phrase "mind blowing" at least twice now. Maybe his mind is easy to blow, but I have not seen any mind blowing improvements to FSD. As others have said, he is essentially using marketing-speak, which is sad, because I do think he's a smart guy and probably really does know that there are significant barriers to full autonomy. But, he also got into the position he's in by making bold statements, so I guess that's just his way.

I wasn’t saying license it completely, obviously they aren’t going to give away their technology for free.
Well, you did use the term "open source". When you open source something, you are essentially making it freely available to everyone, with certain restrictions. You can make money on your own enhancements and sell them, but by definition, you are not going to get any money from those who take whatever it is you are open sourcing. That is the difference between "open source" and "license". I guess maybe you meant to say "license" in the first place.

What I am getting at, however, is letting other companies with fleets of Teslas utilize FSD to scale their own operations.
This is exactly Tesla's intent, and Elon just recently stated that he would entertain licensing FSD to third parties. But to reiterate, this is all just hypothetical and best case, quite a ways off.
 
  • Like
Reactions: GalacticHero
Upvote 0
However, Elon and George are not idiots. They are more knowledgeable than most regarding the capabilities of ML.

Elon and Hotz are not more knowledgeable about ML than most. They have very little ML experience actually. There are folks like Drago Anguelov who are far more knowledgeable.

The biggest "issues" with Tesla Vision are two fold:
1) can a vision only system detect distance to objects accurately enough (over time, so I'm including estimating their velocity in this question)?
TL;DR: perception accuracy

Yes, but the issue is reliability. Vision-only is not reliable enough in all conditions (rain, snow, etc.). This issue has already been solved with cameras+radar+lidar.
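As a back-of-the-envelope illustration of why adding radar/lidar helps, here's a minimal inverse-variance fusion sketch; the noise figures are invented, and real AV stacks use full Kalman-style filters rather than anything this simple:

```python
# Toy inverse-variance fusion of independent range estimates from camera,
# radar and lidar. The standard deviations are invented for illustration;
# real stacks use proper Kalman/particle filters, not a one-shot average.

def fuse(estimates):
    """estimates: list of (range_m, std_dev_m); returns (fused_range, fused_std)."""
    weights = [1.0 / (std ** 2) for _, std in estimates]
    total = sum(weights)
    fused_range = sum(w * r for w, (r, _) in zip(weights, estimates)) / total
    fused_std = (1.0 / total) ** 0.5
    return fused_range, fused_std

camera_only = [(41.0, 2.0)]                            # noisy vision range
all_sensors = [(41.0, 2.0), (39.8, 0.5), (40.1, 0.2)]  # + radar + lidar

print(fuse(camera_only))   # (41.0, 2.0)
print(fuse(all_sensors))   # ~ (40.1, 0.19): tighter, dominated by lidar
```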

2) can an ML system "reason" well enough to, for example, determine whether the car can drive through/over certain objects (smoke, debris, a plastic bag vs a concrete block), predict future behaviour of other road users based on their past and current behaviour and their characteristics, etc.?
TL;DR: having a world model (and "understanding" it)

Probably, but this will require a lot of training of the AI. But this issue has already been solved with cameras+radar+lidar.

The thing is that Elon and George truly believe perception won't be an issue, and neither will the world model be.

Elon and Hotz have it completely backwards. They think FSD is just a matter of solving perception, and that once that is done with vision-only, you are basically done. But perception is just the foundation; the real challenge is planning, or decision-making.

Perception is the easier problem because you can measure the position and velocity of an object. It is an objective truth. With vision you can measure position and velocity, and if you add lidar and radar, you can make those measurements even more accurate and more reliable.

The true challenge of autonomous driving is planning, because planning requires intelligence. You need to understand what other objects will do, how they will react to you, and make intelligent decisions. You can have the best perception, but if your car lacks the planning to make the right decisions, it will not drive correctly. And there is no objective truth; that's why even human drivers will disagree on the right action. One driver might say you should slow down and wait, another might say you should gun it and make the turn. Both actions might be acceptable if they don't cause a crash. So which action was "best"? It might really depend on the situation.

So the AV needs the intelligence to reason about a situation and make the right decision. And it is not just about avoiding collisions; AVs need to drive respectfully. We need to solve planning not just for safety, but also so AVs can drive "nice" with other drivers while still being assertive when needed. It is a hard problem. The reason we don't have L5 yet is not that perception is so hard; it's that AVs lack the general intelligence to reason about new situations and edge cases.
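To illustrate the "no objective truth" point about the wait-vs-go example, here's a deliberately toy cost-based chooser. Every number and weight is made up, and real planners score thousands of candidate trajectories rather than two labels; the only point is that which action comes out "best" depends on subjective weights, not on anything you can measure the way you measure position or velocity:

```python
# Deliberately toy planner for the "slow down and wait" vs "gun it and make
# the turn" example. All weights and inputs are invented; the point is only
# that the "best" action depends on subjective cost weights.

def score(action, gap_s, time_pressure, comfort_weight, risk_weight):
    """Lower score = preferred. gap_s: estimated gap to oncoming traffic."""
    if action == "go":
        collision_risk = max(0.0, 1.0 - gap_s / 6.0)   # smaller gap -> riskier
        delay = 0.0
        discomfort = 0.6                               # hard acceleration
    else:  # "wait"
        collision_risk = 0.02                          # waiting is nearly safe
        delay = 1.0                                    # blocks traffic behind
        discomfort = 0.1
    return (risk_weight * collision_risk
            + time_pressure * delay
            + comfort_weight * discomfort)

for weights in [dict(comfort_weight=0.3, risk_weight=5.0, time_pressure=0.2),
                dict(comfort_weight=0.1, risk_weight=2.0, time_pressure=1.0)]:
    best = min(["go", "wait"], key=lambda a: score(a, gap_s=4.0, **weights))
    print(weights, "->", best)
```

Two perfectly defensible weightings pick different actions for the same perception input, which is exactly why perfect perception alone doesn't settle the planning question.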
 
Last edited:
Upvote 0
...(I think of the traffic cones on Cruise robotaxis)....
Got me wondering... since we humans are somewhat analogous to L5 (I know there is NO WAY to make a direct comparison), if someone put a cone on our car we would simply get out, move it off, set it on the curb, and continue. Since an L5 by definition MUST have an unlimited ODD, how will it deal with this or similar situations? Would it be considered a "mechanical" breakdown or a DDT failure and require service? Seems the J3016 spec doesn't cover these types of situations.

EDIT: I know we are 99.9% likely at least a decade or more away from anyone having an L5 system on the market, so just musing. Of course the whole thread is an exercise in unknowable-future futility anyway, so might as well play.
 
Last edited:
Upvote 0
Elon and Hotz are not more knowledgeable about ML than most. They have very little ML experience actually. There are folks like Drago Anguelov who are far more knowledgeable.



Yes, but the issue is reliability. Vision-only is not reliable enough in all conditions (rain, snow, etc.). This issue has already been solved with cameras+radar+lidar.



Probably, but this will require a lot of training of the AI. But this issue has already been solved with cameras+radar+lidar.



Elon and Hotz have it completely backwards. They think FSD is just a matter of solving perception, and that once that is done with vision-only, you are basically done. But perception is just the foundation; the real challenge is planning, or decision-making.

Perception is the easier problem because you can measure the position and velocity of an object. It is an objective truth. With vision you can measure position and velocity, and if you add lidar and radar, you can make those measurements even more accurate and more reliable.

The true challenge of autonomous driving is planning, because planning requires intelligence. You need to understand what other objects will do, how they will react to you, and make intelligent decisions. You can have the best perception, but if your car lacks the planning to make the right decisions, it will not drive correctly. And there is no objective truth; that's why even human drivers will disagree on the right action. One driver might say you should slow down and wait, another might say you should gun it and make the turn. Both actions might be acceptable if they don't cause a crash. So which action was "best"? It might really depend on the situation.

So the AV needs the intelligence to reason about a situation and make the right decision. And it is not just about avoiding collisions; AVs need to drive respectfully. We need to solve planning not just for safety, but also so AVs can drive "nice" with other drivers while still being assertive when needed. It is a hard problem. The reason we don't have L5 yet is not that perception is so hard; it's that AVs lack the general intelligence to reason about new situations and edge cases.
Fair points. Thanks for taking the time to write it out.

Lex Fridman is of similar opinion: planning/reasoning is the hard part, since it requires intelligence.

Jim Keller, on the other hand, dismisses this completely and believes that the characteristics we (humans) consider important in a good driver are not necessary for an autonomous vehicle, basically because human perception/response time is orders of magnitude slower than a bot's.

Elon alludes to this as well when he talks about how everything is "slow" to the robotaxi since it perceives and plans many times a second, compared to us squishy brains.

I don't know what will turn out to be "the hardest part" (perception/planning/controls); they're all insanely hard for 100% reliability. Getting 80-90% of the way is doable, but getting rid of all edge cases (either perceiving them or planning for them) is what will take us years if not decades to get to true L5.

But I do follow Jim Keller in the sense that "reading" a human's or dog's movement shouldn't get as much emphasis as some give it. Let's say a human stands still at the side of the road and the car wants to drive past him. If the car can perceive (1) that this is a human and knows (2) a human's max speed/acceleration is so-and-so, it can compute (3) how close it can get before having to make an eventual braking manoeuvre at (4) speed X. Therefore the autonomous vehicle can choose to slow down to a certain speed depending on how close the (stationary) human is to the side of the road. The instant the human starts moving, the car has to re-evaluate and slow down/stop. By doing so 100 times a second, the car shouldn't have to read/understand whether the human "wants to cross" or has the intention or whatever. Just knowing it is a human pedestrian (not on a hoverboard or something) seems enough for a decent autonomous vehicle to manage this situation.
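A rough sketch of that worst-case calculation; the pedestrian sprint speed, reaction time, braking rate and gaps below are all assumptions picked purely for illustration:

```python
# Rough sketch of the "how fast can I pass a stationary pedestrian" logic
# described above. Every constant is an assumption for illustration only;
# a real planner would also consider stopping distance along the road,
# occlusions, road surface, and so on.

PED_MAX_SPEED = 3.0      # m/s, assumed worst-case lunge speed toward the road
REACTION_TIME = 0.3      # s, assumed perception + replanning latency
MAX_BRAKING = 6.0        # m/s^2, assumed hard-braking deceleration

def max_passing_speed(lateral_gap_m: float) -> float:
    """Max speed at which the car can still stop before the pedestrian
    could reach the car's path, under the worst-case assumptions above."""
    time_for_ped_to_reach_path = lateral_gap_m / PED_MAX_SPEED
    usable_braking_time = time_for_ped_to_reach_path - REACTION_TIME
    return max(0.0, MAX_BRAKING * usable_braking_time)

for gap in (0.5, 1.0, 2.0, 4.0):
    v = max_passing_speed(gap)
    print(f"pedestrian {gap:3.1f} m from path -> pass at <= {v * 3.6:5.1f} km/h")
```

Re-running something like this every frame is what stands in for "reading intent" in this view; the open question is whether worst-case bounds like these end up too conservative to drive naturally around people.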

I could be wrong, but I do think some of the planning genius required is overstated. If you follow the rules of the road strictly (and autonomous vehicles do this), then you rarely get into dangerous situations that you cannot get out of by accelerating towards X or stopping on the spot. An autonomous vehicle IMO should first be held to regular human standards (i.e. obeying the rules of the road). If a crash occurs with an autonomous vehicle because a pedestrian crossed in an unsafe manner or another driver was speeding, the autonomous vehicle shouldn't be at fault and therefore shouldn't be expected to prevent that crash.

Let's keep responsibility with those making errors in traffic. Of course autonomous vehicles will TRY to mitigate accidents as much as possible, but let's not impose unreasonably high standards.

Most countries now understand that to reduce fatalities amongst pedestrians and cyclists, the road network must be rethought and VRUs (vulnerable road users) must be separated from motor vehicles. This will do as much for traffic safety as creating AVs.

Anywho, not here to agree or disagree with anyone, just musing on the state of the art and the direction we are headed.
 
  • Like
Reactions: spacecoin
Upvote 0
Got me wondering... since we humans are somewhat analogous to L5 (I know there is NO WAY to make a direct comparison), if someone put a cone on our car we would simply get out, move it off, set it on the curb, and continue. Since an L5 by definition MUST have an unlimited ODD, how will it deal with this or similar situations? Would it be considered a "mechanical" breakdown or a DDT failure and require service? Seems the J3016 spec doesn't cover these types of situations.

EDIT: I know we are 99.9% likely at least a decade or more away from anyone having an L5 system on the market, so just musing. Of course the whole thread is an exercise in unknowable-future futility anyway, so might as well play.
I imagine if Tesla gets there, the Tesla name alone will invite this kind of behavior. The word Tesla generates clicks in whatever it's part of (when a vehicle catches fire, the news has two possible headlines: "Tesla caught fire" for a Tesla and "vehicle caught fire" for every other car). So I imagine some type of protocol will have to be put in place, and a fine or similar to deter it; otherwise I can see folks doing it for social media attention, or, just like random car keying, because people can be jerks.
 
Upvote 0
Yea I was thinking about that. I imagine trying to find a spot at a supercharger and seeing my wife take up the last spot in her NON-Tesla. :oops:
Do you guys feel the same way when someone pulls up in a Tesla built after yours?

Tesla themselves, by building more gigafactories and producing tons more EVs, are by far the biggest culprit in increased Supercharger usage. The idea that it's okay for a Tesla-branded vehicle to block you from charging, but not a non-Tesla vehicle, seems a bit elitist, or at least illogical. If that does become a problem (and at this point, it's mainly just speculation), then the problem is not that Tesla opened up the NACS standard, but rather that they didn't adequately expand the network. And that's true whether there are just Teslas, or both Teslas and non-Teslas, at Superchargers.
 
  • Disagree
Reactions: T-Mom
Upvote 0
I'm even annoyed NACS was rolled out, and we'll be seeing loads of non-Teslas at Superchargers soon. But I get the economics of it, and the reasoning behind it. Still annoying. :)
For DCFC it just depends on whether it rescues or just enhances the economics of Supercharger deployment.

But for Level 2, it's a different matter.

Tesla says "Hey landlord, if you install 6 or more of our chargers* and hook them up with Internet access, your tenants and/or owners and any visitors will have charging available handle all the billing."

Landlord says "But that's only if people have Teslas."

Tesla says "About that, did you know that other manufacturers are adopting our plug? OK, well we've been talking to them about more than just our fast chargers ..."

* Because the Landlord won't know what connector or EVSE means.
 
Upvote 0
FSD will see a "ChatGPT-moment" where it suddenly works much better than people ever thought it would/could, possibly within the next few years.
ChatGPT wasn't really a moment. It was perhaps "a moment" to people who didn't work in or follow ML/AI. The main "invention" of ChatGPT was the chat interface on top of next-word prediction.
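In code terms, that claim is roughly: a loop that keeps predicting the next word and feeding it back in, wrapped in a prompt. The toy bigram "model" below is nothing like a real LLM and every entry in it is made up; it only shows the shape of the interface:

```python
# Toy illustration of "a chat interface on top of next-word prediction".
# The "model" is a hand-written bigram table, invented for illustration;
# a real LLM replaces the table with a neural network over tokens.

BIGRAMS = {
    "full": "self", "self": "driving", "driving": "is",
    "is": "hard", "hard": ".",
}

def predict_next(word: str) -> str:
    """Stand-in for the model: return the most likely next word."""
    return BIGRAMS.get(word, ".")

def chat(prompt: str, max_words: int = 10) -> str:
    """The 'chat interface': keep appending the predicted next word."""
    words = prompt.lower().split()
    while len(words) < max_words and words[-1] != ".":
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(chat("full"))   # -> "full self driving is hard ."
```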

Also chatbots are hardly safety- and time-critical. People don't die when an LLM gets it wrong.

The most likely outcome of FSD on current cars is a useful driver-assist system that works well in a wide ODD in both the US and the EU in a few years. Perhaps some future hardware suite will be autonomous, but likely not HW3/4. Vision-only might get there in 3-4 years, but more likely not at this point in time.

Level 5 autonomy is an aspirational level that will likely not happen in the coming 10-20 years.
 
Last edited:
Upvote 0