> Is there anything that Elon could say that would convince you he's talking about robotaxis?

Nothing.
> What level of Autonomous driving is required to go "from LA to NYC with no driver intervention"?…

Could be L2 or L3 if you have someone in the driver's seat!
> No. The question was on Level 4 and he talks about 10,000 times safer than humans. He could try to dodge the question of course, but that seems a strange way to do it.

Yes, he dodges the question; he never directly says he'll be able to release an L4 car by year's end. And there is the 10x or even 100x mentioned for safety.
"Very small number of interventions per mile"
Tesla (NASDAQ:TSLA)
Q4 2021 Earnings Call
Jan 26, 2022, 5:30 p.m. ET
Martin Viecha
Thank you. And the last question from investors is, Elon mentioned Level 4 autonomy could be achieved this year. Is it based off initial FSD beta rollout experience, or is Level 4 ability predicated on Dojo being completed online?
Elon Musk -- Chief Executive Officer and Product Architect
As mentioned earlier, Dojo is not required for full self-driving. You know, it should have a positive effect on the cost of training networks. You know, it's not just a question like does it get to full self-driving but really kind of like the "march of nines" of reliability, is it 99.999% reliable or 99.999999% reliable. This is -- it gets nutty.
So, obviously, we want to get as close to perfection as possible. So, frankly, being safer than human is a low standard, not a high standard. People are very, very lossy, often distracted, tired, you know, texting. Anyway, it's remarkable that we don't have more accidents.
So, it's -- yeah. So actually being better than a human, I think, is relatively straightforward, frankly. How do you be 1,000% better or 10,000% better? Yeah, that's what, you know, gets much harder. But I think anyone who's been in the FSD beta program -- I mean, if they were just to plot the progress of the beta interventions per mile, it's obviously trending to, you know, a very small number of interventions per mile, and the pace of improvement is fast.
And there's several profound improvements to the FSD stack that are coming, you know, in the next few months. So, yeah, I would be shocked if we do not achieve full self-driving safer than human this year. I would be shocked.
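The "march of nines" is easier to feel with rough arithmetic. Below is an illustrative sketch (the fleet-mileage figure is hypothetical, and reliability is treated as a per-mile failure probability purely for illustration; that is an assumption, not Tesla's actual metric) showing how each added nine changes expected failure counts:

```python
# Illustrative arithmetic for the "march of nines".
# Assumption: "99.999% reliable" is read as P(no failure in a given mile).
FLEET_MILES_PER_YEAR = 1e9  # hypothetical fleet mileage, chosen only for scale

for nines, reliability in [(3, 0.999), (5, 0.99999), (8, 0.99999999)]:
    expected_failures = FLEET_MILES_PER_YEAR * (1 - reliability)
    print(f"{nines} nines: ~{expected_failures:,.0f} expected failures per 1e9 miles")
```

Under these assumptions, going from five nines to eight nines is the difference between thousands of failures and a handful across the same mileage, which is why each extra nine is so much harder to demonstrate.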
> So what is he talking about here?

He did use that term in the past explicitly. This time he didn't; he only mentioned FSD (which rose in price by another $2k a few months after that tweet, although it's arguable whether that is really appreciation in value). Not sure what market value an end-to-end L2 solution is worth.
Is there anything that Elon could say that would convince you he's talking about robotaxis?
> the "march of nines" of reliability, is it 99.999% reliable or 99.999999% reliable

Assuming that Tesla sticks to level 2 for the foreseeable future, what could Tesla do against the effect that people become less attentive when they see fewer autopilot/FSD mistakes?
> He did use that term in the past explicitly. This time he didn't …

He did!
> Is there anything that Elon could say that would convince you he's talking about robotaxis?

2030.
> He did!

He's clearly talking about robotaxis in general in that statement, not about Tesla being able to achieve robotaxi functionality by the end of the year. He's saying "over time" Tesla will get there. Note there is the same repetition of "this year at a safety level significantly greater than a person." Not 2x, not 3x, much less 10x. That is not good enough for wide release of robotaxis, even under his own standards, which he laid out late last year in the Lex Fridman interview:
So over time, we think Full Self-Driving will become the most important source of profitability for Tesla. If you run the numbers on robotaxis, it's kind of nutty. It's nutty good from a financial standpoint. I think we are completely confident at this point that it will be achieved. My personal guess is that we'll achieve Full Self-Driving this year at a safety level significantly greater than a person. The cars in the fleet essentially becoming Self-Driving by a software update might end up being the biggest increase in asset value of any asset class in history. We shall see. It will also have a profound impact on improving safety and on accelerating the world towards sustainable energy through vastly better asset utilization.
Question on a point of self-driving jargon here:
A number of comments have referred to the term "end-to-end", apparently to mean what I would call "door-to-door", i.e. covering the entire drive from start to finish. Is this meaning of "end-to-end" part of a recognized ODD definition?
Assuming that Tesla sticks to level 2 for the foreseeable future, what could Tesla do against the effect that people become less attentive when they see fewer autopilot/FSD mistakes?
It seems to me that people have a poor grip on very low probabilities. People even play the lottery. Presumably they would also play the lottery with their own death, handing control to the car once it is already more than 99.99% reliable. What could be done against that?
Is there a reasonable way forward short of level 3 or 4? Of course, Tesla aims for level 4 or 5, but that may lie farther in the future than what an ever optimistic Elon Musk says. (We don't know what he really thinks.) So what could Tesla do in the meantime? What could they do during the long march to full autonomy to keep drivers safe and happy?
I think that requiring continuous steering-wheel torque is not a very good solution. I'm pondering the thought of precautionary warnings when approaching a situation that could be difficult for the autopilot, demanding or forcing the driver to direct full attention to the situation before the car is in it. Do you think that is possible and sufficient? The old autopilot already does something like this, but only once the car is in a critical situation.
In effect, the autopilot would say, "We are approaching a difficult situation. I am not entirely sure I can master it. I will try, but please pay full attention right now and take over when needed."
Another possibility would be that the autopilot slows down or even stops before any difficult situation, inducing the driver to use the accelerator and presumably pay attention to the whole situation.
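The "poor grip on very low probabilities" point can be made concrete: a per-drive failure chance that sounds negligible compounds over many drives. The numbers below are hypothetical, chosen only to illustrate the compounding:

```python
# Why "99.99% per drive" is less reassuring than it sounds:
# the chance of at least one failure compounds over repeated drives.
p_ok_per_drive = 0.9999      # assumed per-drive success probability
drives_per_year = 2 * 365    # assumed two drives a day

p_at_least_one_failure = 1 - p_ok_per_drive ** drives_per_year
print(f"Risk of at least one failure in a year: {p_at_least_one_failure:.1%}")
# With these assumptions, roughly 7% per year of driving.
```

That yearly figure is exactly the kind of number an inattentive driver never sees, which is why complacency sets in at reliability levels that are still far from robotaxi-grade.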
> Assuming that Tesla sticks to level 2 for the foreseeable future, what could Tesla do against the effect that people become less attentive when they see fewer autopilot/FSD mistakes? …

I agree. There are professors in the field of human-machine interaction and complacency issues. They should have knowledge of what works.
> He's clearly talking about robotaxis in general in that statement, not about Tesla being able to achieve robotaxi functionality by the end of the year. …

Ok, so significantly better than a human but less than 2-3 times better by the end of the year. That's pretty darn close to robotaxi performance, especially relative to where FSD Beta is now. Anyway, I'll definitely buy it when it gets to that level.
"Interventions per million miles have been dropping dramatically. At some point, and that trend looks like it happens next year, the probability of an accident on FSD is less than that of an average human, and then significantly less than that of an average human. Then there’s going to be a case where we now have to prove this to regulators. We want a standard that is not just equivalent to a human but much better than the average human. I think it got to be at least 2-3 times higher safety than a human"
So even if Tesla reaches a level of safety steadily above humans by the end of the year, they still have a ways to go before reaching a level good enough that regulators would allow a wide release of L4.
Again, keep in mind that when Elon says "feature complete" he simply means end-to-end L2 (he considers "feature complete" a level at which the driver still needs to pay attention, AKA L2). He considers the issue "solved" once he reaches that point; later work is simply reducing the probability of accidents until it gets to driverless robotaxi levels. Of course, that approach is not how others in the industry are doing it. They have completely skipped any L2 steps (most have even skipped L3). Nor do people necessarily agree that a system capable of end-to-end L2 is necessarily good enough to reach L4.
I pointed this out to you before here:
Autonomous Car Progress
> Question on a point of self-driving jargon here: A number of comments have referred to the term "end-to-end", apparently to mean what I would call "door-to-door" …

I have no dog in the fight; that was how I saw it described by others a long while ago, and it made perfect sense as a separation from typical L2 (which cannot finish the whole trip). I have no problem using "door to door" if that avoids confusion with the machine-learning terminology.
In my understanding of the terminology of machine learning for self-driving, "end-to-end" refers to a system in which the ML/NN stack accepts sensor input at the front end and produces vehicle control signals for steering, acceleration, braking etc at the back end.
I bring this up because end-to-end is a very relevant topic in the discussion of self-driving architectures, and although it may seem like the natural end goal, most systems including Tesla's aren't doing this today - and in fact there may be very good reasons to implement something less than end-to-end. In the comma.ai presentation last year, they noted that while they're trying to put as much as possible into the NN, they're deliberately leaving the driving control as a back-end procedural programming task, because they want to serve a wide variety of vehicle models and it's impractical to train an end-to-end NN for each specific platform. Similar considerations would arise if Tesla really expects to license their FSD architecture to other car manufacturers.
So, I'm really not trying to police terminology, but I'm mildly suggesting that when we're talking about a start-to-finish ODD, perhaps we should call it "door-to-door" (or "start-to-finish") rather than "end-to-end", which addresses a different specific topic in the self-driving world.
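The architectural distinction above can be sketched in a few lines. This is a toy illustration, not any real system: "end-to-end" means one learned mapping from raw sensor input to actuator commands, while the modular (comma.ai-style) split keeps the final vehicle-control step as ordinary procedural code that can be retuned per platform. Every function and weight here is hypothetical.

```python
# Toy contrast between an end-to-end learned driver and a modular one.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))  # stand-in for trained network weights

def end_to_end(pixels: np.ndarray) -> tuple[float, float]:
    """One learned mapping: pixels in, (steering, accel) out."""
    steering, accel = np.tanh(W @ pixels)
    return float(steering), float(accel)

def perception(pixels: np.ndarray) -> float:
    """Learned component only: estimate lateral lane offset."""
    return float(np.tanh(W[0] @ pixels))

def controller(lane_offset: float, gain: float = 0.5) -> float:
    """Procedural, per-platform control: a simple proportional steer.
    The gain can be retuned per vehicle model without retraining."""
    return -gain * lane_offset

frame = rng.normal(size=64)           # fake camera frame
print(end_to_end(frame))              # learned all the way to actuation
print(controller(perception(frame)))  # learned perception + procedural control
```

The modular version is why a single trained perception stack can serve many vehicle platforms: only `controller` changes per car. Note this "end-to-end" is the ML sense (sensors to controls), distinct from the "door-to-door" ODD sense discussed above.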
> Ok, so significantly better than a human but less than 2-3 times by the end of the year. That's pretty darn close to robotaxi performance …

That's why I made my comment that if they do manage to achieve that (which is a very long shot), it might be good enough for a general City Streets release, but definitely not good enough for robotaxi or driverless operation. Of course, the usual subjects just resorted to name-calling as always instead of looking deeper into what Elon actually meant.
> That's why I made my comment that if they do manage to achieve that (which is a very long shot), it might be good enough for a general City Streets release, but definitely not good enough for robotaxi or driverless operation. …

Well, I think significantly safer than a human is good enough for robotaxi; after all, it would save lives. Elon is a very rational person, so I actually think he agrees, and the only reason he hedges is concern about liability. Anyway, he should be clearer, because with all the talk of robotaxis and asset values it seems like he's trying to mislead on the timeline.