
Autonomous Car Progress

What level of autonomous driving is required to go "from LA to NYC with no driver intervention"?…
Could be L2 or L3 if you have someone in the driver's seat!
Now, summoning your car from LA to NYC would be L4 or L5, but I think a system that could do that reliably without getting stuck would probably work anywhere, under all human-manageable weather conditions, and would therefore be L5.
 
No. The question was about Level 4, and he talks about 10,000 times safer than humans. He could try to dodge the question, of course, but that seems a strange way to do it.

"Very small number of interventions pr mile"

Tesla (NASDAQ:TSLA)
Q4 2021 Earnings Call
Jan 26, 2022, 5:30 p.m. ET

Martin Viecha

Thank you. And the last question from investors is, Elon mentioned Level 4 autonomy could be achieved this year. Is it based off initial FSD beta rollout experience, or is Level 4 ability predicated on Dojo being completed online?

Elon Musk -- Chief Executive Officer and Product Architect

As mentioned earlier, Dojo is not required for full self-driving. You know, it should have a positive effect on the cost of training networks. You know, it's not just a question like does it get to full self-driving but really kind of like the "march of nines" of reliability, is it 99.999% reliable or 99.999999% reliable. This is -- it gets nutty.

So, obviously, we want to get as close to perfection as possible. So, frankly, being safer than human is a low standard, not a high standard. People are very, very lossy, often distracted, tired, you know, texting. Anyway, it's remarkable that we don't have more accidents.

So, it's -- yeah. So actually being better than a human, I think, is relatively straightforward, frankly. How do you be 1,000% better or 10,000% better? Yeah, that's what, you know, gets much harder. But I think anyone who's been in the FSD beta program, I mean, if they were just to plot the progress of the beta interventions per mile, it's obviously trending to, you know, a very small number of interventions per mile, and the pace of improvement is fast.

And there's several profound improvements to the FSD stack that are coming, you know, in the next few months. So, yeah, I would be shocked if we do not achieve full self-driving safer than human this year. I would be shocked.
Yes, he dodges the question; he never directly says he'll be able to release an L4 car by year end. And there is the 10x or even 100x he mentioned for safety.

Him saying just "safer than humans" is a very low bar, not enough for regulators to allow an L3+ release (I don't believe any of the companies still testing out there are below that bar; rather, they are significantly above it in accident rates, much less those releasing an L4 solution to the public). From this I don't see a possibility of anything other than a general L2 release at most, especially if it doesn't even reach at least 2x (which is what they had on the order page back in 2016).
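To put rough numbers on the "march of nines", here is a minimal Python sketch. The human baseline of roughly one crash per 500,000 miles is an assumed figure for scale, not one from the call:

```python
# Back-of-the-envelope: what each "nine" of per-mile reliability implies.
# HUMAN_CRASH_RATE is an assumed illustrative baseline, not thread data.

HUMAN_CRASH_RATE = 1 / 500_000  # assumed crashes per mile

for nines in range(3, 9):
    failures_per_mile = 10 ** -nines        # 99.9% reliable -> 1e-3, etc.
    miles_between = 1 / failures_per_mile   # expected miles per failure
    vs_human = HUMAN_CRASH_RATE / failures_per_mile
    print(f"{1 - failures_per_mile:.7%} reliable -> "
          f"one failure per {miles_between:,.0f} miles "
          f"({vs_human:.3g}x the assumed human rate)")
```

Under this assumed baseline, 99.999% per-mile reliability is still five times worse than a human, while 99.9999% is about twice as good; each extra nine is a 10x jump, which is why the tail end is the hard part.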
 
So what is he talking about here?
Is there anything that Elon could say that would convince you he's talking about robotaxis?
He did explicitly use that term in the past. This time he didn't; he only mentioned FSD (whose price rose by another $2k a few months after that tweet, although it's arguable whether that is really appreciation in value). Not sure what market value an end-to-end L2 solution has.

The full transcript was posted by @daktari, and Elon dodges the question about L4. Even if, as part of FSD Beta testing, they achieve a steady accident/intervention probability better than humans by the end of this year (still a long shot, as it relies on the same exponential improvement that Elon has been wrong about many times in the past), that is still way below what others have achieved in L4 testing. There is no way that means a wide release of an L4 system to the general public by the end of the year (at minimum the CA DMV wouldn't allow it, and that is most of Tesla's market).

That may be good enough for a general City Streets release though. The current 60k beta testers are about 23% of the roughly 265k cars with FSD. It's not out of the realm of possibility to reach close to 100% by the end of the year (and thus allow it to be a general release). I'm not sure it necessarily means a complete removal of the scoring system (or a similar system) though.
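For what it's worth, the fleet arithmetic above checks out:

```python
beta_cars = 60_000    # FSD Beta testers, figure from the post
fsd_fleet = 265_000   # cars with FSD purchased, figure from the post
print(f"{beta_cars / fsd_fleet:.0%} of the FSD fleet")  # -> 23%
```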
Survey Reveals Tesla's Full Self-Driving Take Rate Is Declining
 
the "march of nines" of reliability, is it 99.999% reliable or 99.999999% reliable
Assuming that Tesla sticks to Level 2 for the foreseeable future, what could Tesla do to counter the effect that people become less attentive as they see fewer Autopilot/FSD mistakes?

It seems to me that people have a poor grip on very low probabilities. People even play the lottery. Presumably they would play the lottery with their own lives, leaving control to the car once it is more than 99.99% reliable. What could be done about that?

Is there a reasonable way forward short of Level 3 or 4? Of course, Tesla aims for Level 4 or 5, but that may lie farther in the future than what an ever-optimistic Elon Musk says. (We don't know what he really thinks.) So what could Tesla do in the meantime? What could they do during the long march to full autonomy to keep drivers safe and happy?

I think that the continuous steering-wheel torque requirement is not a very good solution. I'm pondering the idea of precautionary warnings when approaching a situation that could be difficult for the Autopilot, demanding or forcing the driver to direct full attention to the situation before the car is in it. Do you think that is possible and sufficient? The old Autopilot already does something like this, but only once the car is already in a critical situation.

In effect, the Autopilot would say, "We are approaching a difficult situation. I am not entirely sure I can master it. I will try, but please pay full attention right now and take over when needed."

Another possibility would be for the Autopilot to slow down or even stop before any difficult situation, inducing the driver to use the accelerator and presumably pay attention to the whole situation.
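A minimal sketch of what that proposed escalation could look like; every name and threshold here is hypothetical, and a real system would also need a well-calibrated confidence estimate, which is itself a hard problem:

```python
from dataclasses import dataclass

# Hypothetical thresholds, purely to illustrate the proposal above.
EARLY_WARN_CONF = 0.95   # below this, warn the driver well in advance
CRITICAL_CONF = 0.80     # below this, slow down until the driver engages

@dataclass
class UpcomingSituation:
    confidence: float       # system's self-assessed confidence, 0..1
    driver_attentive: bool  # from hypothetical driver monitoring

def precautionary_action(s: UpcomingSituation) -> str:
    if s.confidence >= EARLY_WARN_CONF:
        return "continue"
    if s.confidence >= CRITICAL_CONF:
        # The early, pre-emptive warning proposed in the post.
        return "warn: difficult situation ahead, please pay full attention"
    # Very low confidence: slow down (or stop) so the driver takes over.
    return "slow down" if s.driver_attentive else "slow down and escalate"

print(precautionary_action(UpcomingSituation(0.90, True)))
```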
 
He did explicitly use that term in the past. This time he didn't; he only mentioned FSD (whose price rose by another $2k a few months after that tweet, although it's arguable whether that is really appreciation in value). Not sure what market value an end-to-end L2 solution has.
He did!

So over time, we think Full Self-Driving will become the most important source of profitability for Tesla. If you run the numbers on robotaxis, it's kind of nutty. It's nutty good from a financial standpoint. I think we are completely confident at this point that it will be achieved. My personal guess is that we'll achieve Full Self-Driving this year at a safety level significantly greater than a person. The cars in the fleet essentially becoming Self-Driving by a software update might end up being the biggest increase in asset value of any asset class in history. We shall see. It will also have a profound impact on improving safety and on accelerating the world towards sustainable energy through vastly better asset utilization.​
 
"The trade association that was formerly known as the Self-Driving Coalition for Safer Streets has changed its name to the Autonomous Vehicle Industry Association... The change seems to be aimed directly at Tesla which is notably not a part of the group and the only major automaker to use the term “self-driving” to describe its advanced driver-assist system."

 
Is there anything that Elon could say that would convince you he's talking about robotaxis?
2030.

The earnings call just reconfirms that:
- Elon still believes FSD will be achieved
- Robotaxi will mint money
- Cars with FSD will appreciate in value

BUT - unlike earlier years, Elon is not saying when they will have FSD good enough to be running Robotaxis. He only said getting to 10x/100x is hard.
 
When Elon talks about "achieving Full Self-driving", I'm quite sure he's referring to the plan that was originally laid out to the California DMV. Everything seems to be progressing exactly as outlined in those emails: Tesla is currently in the stage of improving FSD prior to general release as a Level 2 ADAS, and I believe that general release is what's meant by "achieving Full Self-driving". After FSD is through the general release, there will still be a ton of work to do and they'll be working with the regulators on another similar iterative process for Level 3+.

Elon thinks Robotaxis could exacerbate traffic issues and that boring tunnels could be part of the solution, and to me it sounds like he's talking about a relatively distant future. The problems that could be caused by Robotaxis are many years away and so is any solution like boring tunnels beneath our cities to alleviate them.
 
He did!

So over time, we think Full Self-Driving will become the most important source of profitability for Tesla. If you run the numbers on robotaxis, it's kind of nutty. It's nutty good from a financial standpoint. I think we are completely confident at this point that it will be achieved. My personal guess is that we'll achieve Full Self-Driving this year at a safety level significantly greater than a person. The cars in the fleet essentially becoming Self-Driving by a software update might end up being the biggest increase in asset value of any asset class in history. We shall see. It will also have a profound impact on improving safety and on accelerating the world towards sustainable energy through vastly better asset utilization.​
He's clearly talking about robotaxis in general in that statement, not about Tesla being able to achieve robotaxi functionality by the end of the year. He's saying that "over time" Tesla will get there. Note there is the same repetition of "this year at a safety level significantly greater than a person." Not 2x, not 3x, much less 10x. That is not good enough for a wide release of robotaxis, even under his own standards, which he laid out late last year in the Lex Fridman interview:

"Interventions per million miles have been dropping dramatically. At some point, and that trend looks like it happens next year, the probability of an accident on FSD is less than that of an average human, and then significantly less than that of an average human. Then there’s going to be a case where we now have to prove this to regulators. We want a standard that is not just equivalent to a human but much better than the average human. I think it got to be at least 2-3 times higher safety than a human"

So even if, by the end of the year, Tesla reaches a level of safety that is steadily above humans, they still have a ways to go before reaching a level good enough that regulators would allow a wide release of L4.

Again, keep in mind that when Elon says "feature complete" he simply means end-to-end L2 (he considers a system "feature complete" even at a level where the driver still needs to pay attention, i.e. L2). He considers the issue "solved" once he reaches that point; the later work is simply reducing the probability of accidents until it gets to driverless robotaxi levels. Of course, that approach is not how others in the industry are doing it: they have completely skipped any L2 steps (most have even skipped L3). Nor do people necessarily agree that a system capable of end-to-end L2 is necessarily good enough to reach L4.
I pointed this out to you before here:
Autonomous Car Progress
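To see why "significantly less than the average human" and the 2-3x regulator bar can sit years apart even on a fast exponential trend, here is a toy extrapolation. Every number is an assumed placeholder, not measured FSD Beta data, and it deliberately treats interventions as a proxy for accidents, the same loose way the quote does:

```python
# Toy extrapolation of the exponential improvement trend in the Fridman
# quote above. All values are assumed placeholders for illustration.

current_rate = 1 / 10_000    # assumed interventions per mile today
halving_months = 4           # assumed: rate halves every 4 months
human_rate = 1 / 500_000     # assumed human-equivalent incident rate
target_multiple = 3          # the "2-3 times higher safety" bar

months = 0
rate = current_rate
while rate > human_rate / target_multiple:
    months += 1
    rate = current_rate * 0.5 ** (months / halving_months)

print(f"Crosses {target_multiple}x better-than-human after ~{months} months")
```

With these placeholder numbers the gap to the bar is about 150x, which takes roughly seven halvings, i.e. around 29 months; merely matching the average human would arrive much sooner.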
 
Question on a point of self-driving jargon here:
A number of comments have referred to the term "end-to-end", apparently to mean what I would call "door-to-door", i.e. covering the entire drive from start to finish. Is this meaning of "end-to-end" part of a recognized ODD definition?

In my understanding of the terminology of machine learning for self-driving, "end-to-end" refers to a system in which the ML/NN stack accepts sensor input at the front end and produces vehicle control signals for steering, acceleration, braking etc at the back end.

I bring this up because end-to-end is a very relevant topic in the discussion of self-driving architectures, and although it may seem like the natural end goal, most systems including Tesla's aren't doing this today - and in fact there may be very good reasons to implement something less than end-to-end. In the comma.ai presentation last year, they noted that while they're trying to put as much as possible into the NN, they're deliberately leaving the driving control as a back-end procedural programming task, because they want to serve a wide variety of vehicle models and it's impractical to train an end-to-end NN for each specific platform. Similar considerations would arise if Tesla really expects to license their FSD architecture to other car manufacturers.

So, I'm really not trying to police terminology, but I'm mildly suggesting that when we're talking about a start-to-finish ODD, perhaps we should call it "door to door" (or "start to finish" :)) rather than "end-to-end", which addresses a different specific topic in the self-driving world.
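To make the two senses concrete, a schematic sketch with invented names and no real ML framework; this illustrates the distinction, it is not anyone's actual stack:

```python
# "End-to-end" in the ML sense is about what sits between sensors and
# actuators, not about trip coverage.

def end_to_end_nn(sensor_frames):
    """One learned model: raw sensors in, actuator commands out.
    Effectively needs retraining per vehicle platform, since the
    outputs are vehicle-specific."""
    return {"steering": 0.0, "accel": 0.0, "brake": 0.0}

def perception_and_planning_nn(sensor_frames):
    """Learned front end: sensors in, abstract scene/trajectory out."""
    return {"desired_path": [(0.0, 0.0), (1.0, 0.2)], "obstacles": []}

def procedural_controller(scene, vehicle_params):
    """Hand-written back end that converts the abstract plan into
    commands for a specific vehicle -- the part comma.ai keeps outside
    the NN so one NN can serve many car models."""
    return {"steering": 0.0, "accel": 0.0, "brake": 0.0}

# "Door to door" is orthogonal: either architecture above could cover
# an entire start-to-finish trip, or only part of one.
```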
 
Question on a point of self-driving jargon here:
A number of comments have referred to the term "end-to-end", apparently to mean what I would call "door-to-door", i.e. covering the entire drive from start to finish. Is this meaning of "end-to-end" part of a recognized ODD definition?

I don't think "end to end" is part of a recognized ODD definition. It's just a colloquial synonym for "door to door", referring to a driver assist that can drive an entire drive from start to finish.

You make a good point about "end to end" ADAS being potentially confused with an "end to end" NN. I agree we should use "door to door" to refer to an ADAS that can do an entire trip and keep "end to end" to refer to the NN architecture.
 
Assuming that Tesla sticks to Level 2 for the foreseeable future, what could Tesla do to counter the effect that people become less attentive as they see fewer Autopilot/FSD mistakes?

It seems to me that people have a poor grip on very low probabilities. People even play the lottery. Presumably they would play the lottery with their own lives, leaving control to the car once it is more than 99.99% reliable. What could be done about that?

Is there a reasonable way forward short of Level 3 or 4? Of course, Tesla aims for Level 4 or 5, but that may lie farther in the future than what an ever-optimistic Elon Musk says. (We don't know what he really thinks.) So what could Tesla do in the meantime? What could they do during the long march to full autonomy to keep drivers safe and happy?

I think that the continuous steering-wheel torque requirement is not a very good solution. I'm pondering the idea of precautionary warnings when approaching a situation that could be difficult for the Autopilot, demanding or forcing the driver to direct full attention to the situation before the car is in it. Do you think that is possible and sufficient? The old Autopilot already does something like this, but only once the car is already in a critical situation.

In effect, the Autopilot would say, "We are approaching a difficult situation. I am not entirely sure I can master it. I will try, but please pay full attention right now and take over when needed."

Another possibility would be for the Autopilot to slow down or even stop before any difficult situation, inducing the driver to use the accelerator and presumably pay attention to the whole situation.

Tesla doesn't really have to do much about it as long as the collision rate of smart car + complacent human is better than that of dumb car + not-so-complacent human. Not-so-complacent humans are already responsible for a lot of collisions, injuries and deaths.

But, if they ever reach expert Level 2, they'll need to have improved driver monitoring.
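That condition can be written as a one-line inequality. A toy model with assumed placeholder numbers; with these particular values, complacency flips the sign, which is exactly the worry:

```python
# Toy model of the trade-off above; every number is an assumed placeholder.
# An assisted crash requires the system to err AND the complacent human
# to miss the error; compare against the unassisted baseline.

p_system_error = 1e-4     # assumed ADAS errors per mile
p_complacent_miss = 0.2   # assumed chance a complacent driver misses one
p_baseline_crash = 2e-6   # assumed unassisted crashes per mile

assisted = p_system_error * p_complacent_miss  # 2e-5 crashes per mile
print(f"assisted: {assisted:.1e}/mile, baseline: {p_baseline_crash:.1e}/mile"
      f" -> {'better' if assisted < p_baseline_crash else 'worse'}")
```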
 
Assuming that Tesla sticks to Level 2 for the foreseeable future, what could Tesla do to counter the effect that people become less attentive as they see fewer Autopilot/FSD mistakes?

It seems to me that people have a poor grip on very low probabilities. People even play the lottery. Presumably they would play the lottery with their own lives, leaving control to the car once it is more than 99.99% reliable. What could be done about that?

Is there a reasonable way forward short of Level 3 or 4? Of course, Tesla aims for Level 4 or 5, but that may lie farther in the future than what an ever-optimistic Elon Musk says. (We don't know what he really thinks.) So what could Tesla do in the meantime? What could they do during the long march to full autonomy to keep drivers safe and happy?

I think that the continuous steering-wheel torque requirement is not a very good solution. I'm pondering the idea of precautionary warnings when approaching a situation that could be difficult for the Autopilot, demanding or forcing the driver to direct full attention to the situation before the car is in it. Do you think that is possible and sufficient? The old Autopilot already does something like this, but only once the car is already in a critical situation.

In effect, the Autopilot would say, "We are approaching a difficult situation. I am not entirely sure I can master it. I will try, but please pay full attention right now and take over when needed."

Another possibility would be for the Autopilot to slow down or even stop before any difficult situation, inducing the driver to use the accelerator and presumably pay attention to the whole situation.
I agree. There are professors in the field of human-machine interaction who study complacency issues. They should know what works.

In my experience, coming from Tesla's torque-sensor "car has control" interface to the European touch-sensor "blended human-machine driving" was very strange at first, especially when the cars disengaged Autosteer at random. But one certainly pays a lot more attention then and drives with the car. With the Tesla, I looked out of the window too much.

So a real hands-on sensor, frequent nags, sudden disengagement and better IR eye tracking would help. IMO Autopilot is designed to be hands-off, since it has this binary on/off system.

With a touch sensor, the car could demand one hand on the wheel at all times, and nag for two hands after, let's say, 5 seconds.
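As a sketch, that policy is a tiny state machine; the 5-second grace period comes from the post, everything else is hypothetical:

```python
import time

# Hypothetical capacitive-wheel nag policy illustrating the post above.
TWO_HAND_GRACE_S = 5.0  # from the post: nag for two hands after ~5 s

def nag_level(hands_on: int, one_hand_since: float, now: float) -> str:
    if hands_on == 0:
        return "nag now: at least one hand on the wheel at all times"
    if hands_on == 1 and now - one_hand_since > TWO_HAND_GRACE_S:
        return "soft nag: please put both hands on the wheel"
    return "ok"

t0 = time.monotonic()
print(nag_level(hands_on=1, one_hand_since=t0, now=t0 + 6.0))  # soft nag
```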
 
He's clearly talking about robotaxis in general in that statement, not about Tesla being able to achieve robotaxi functionality by the end of the year. He's saying that "over time" Tesla will get there. Note there is the same repetition of "this year at a safety level significantly greater than a person." Not 2x, not 3x, much less 10x. That is not good enough for a wide release of robotaxis, even under his own standards, which he laid out late last year in the Lex Fridman interview:

"Interventions per million miles have been dropping dramatically. At some point, and that trend looks like it happens next year, the probability of an accident on FSD is less than that of an average human, and then significantly less than that of an average human. Then there's going to be a case where we now have to prove this to regulators. We want a standard that is not just equivalent to a human but much better than the average human. I think it's got to be at least 2-3 times higher safety than a human."

So even if, by the end of the year, Tesla reaches a level of safety that is steadily above humans, they still have a ways to go before reaching a level good enough that regulators would allow a wide release of L4.

Again, keep in mind that when Elon says "feature complete" he simply means end-to-end L2 (he considers a system "feature complete" even at a level where the driver still needs to pay attention, i.e. L2). He considers the issue "solved" once he reaches that point; the later work is simply reducing the probability of accidents until it gets to driverless robotaxi levels. Of course, that approach is not how others in the industry are doing it: they have completely skipped any L2 steps (most have even skipped L3). Nor do people necessarily agree that a system capable of end-to-end L2 is necessarily good enough to reach L4.
I pointed this out to you before here:
Autonomous Car Progress
Ok, so significantly better than a human but less than 2-3 times by the end of the year. That's pretty darn close to robotaxi performance, especially relative to where FSD Beta is now. Anyway, I'll definitely buy it when it gets to that level.
 
Question on a point of self-driving jargon here:
A number of comments have referred to the term "end-to-end", apparently to mean what I would call "door-to-door", i.e. covering the entire drive from start to finish. Is this meaning of "end-to-end" part of a recognized ODD definition?

In my understanding of the terminology of machine learning for self-driving, "end-to-end" refers to a system in which the ML/NN stack accepts sensor input at the front end and produces vehicle control signals for steering, acceleration, braking etc at the back end.

I bring this up because end-to-end is a very relevant topic in the discussion of self-driving architectures, and although it may seem like the natural end goal, most systems including Tesla's aren't doing this today - and in fact there may be very good reasons to implement something less than end-to-end. In the comma.ai presentation last year, they noted that while they're trying to put as much as possible into the NN, they're deliberately leaving the driving control as a back-end procedural programming task, because they want to serve a wide variety of vehicle models and it's impractical to train an end-to-end NN for each specific platform. Similar considerations would arise if Tesla really expects to license their FSD architecture to other car manufacturers.

So, I'm really not trying to police terminology, but I'm mildly suggesting that when we're talking about a start-to-finish ODD, perhaps we should call it "door to door" (or "start to finish" :)) rather than "end-to-end", which addresses a different specific topic in the self-driving world.
I have no dog in this fight; that was how I saw it described by others a long while ago, and it made perfect sense as a distinction from typical L2 (which cannot finish the whole trip). I have no problem using "door to door" if that avoids confusion with the machine learning terminology.
I will note, however, that not very far upthread, Mobileye uses the same "end to end" terminology.
Autonomous Car Progress
 
Ok, so significantly better than a human but less than 2-3 times by the end of the year. That's pretty darn close to robotaxi performance, especially relative to where FSD Beta is now. Anyway, I'll definitely buy it when it gets to that level.
That's why I made my comment that if they do manage to achieve that (which is a very long shot), it might be good enough for a general City Streets release, but definitely not good enough for robotaxi or driverless operation. Of course, the usual suspects just resorted to name-calling as always instead of looking deeper into what Elon actually meant.
 
That's why I made my comment that if they do manage to achieve that (which is a very long shot), it might be good enough for a general City Streets release, but definitely not good enough for robotaxi or driverless operation. Of course, the usual suspects just resorted to name-calling as always instead of looking deeper into what Elon actually meant.
Well, I think significantly safer than a human is good enough for robotaxi; after all, it would save lives. Elon is a very rational person, so I actually think he agrees, and the only reason he hedges is concern about liability. Anyway, he should be clearer, because with all the talk of robotaxis and asset values it seems like he's trying to mislead on the timeline.
 
It doesn't seem like Elon has worked out any specifics wrt robotaxis. He just makes it up as he goes. As of late, he's been more enthusiastic about it because they are seeing some light at the end of the FSD tunnel.

FSD is an engineering problem, which Tesla is good at. After FSD is "solved", robotaxis are more of a human relations and user-experience problem. Tesla isn't great at this, as demonstrated by their poor car service.
 