Welcome to Tesla Motors Club

"Elon’s tweet does not match engineering reality per CJ." - CJ Moore, Tesla's Director of Autopilot Software

I wish Tesla would release its current miles-per-interaction figure so that we could get a sense of how far away Tesla is from its L5 goal. Tesla could release FSD Beta mileage and the number of driver interventions every quarter, and we could see how much progress it is making.
Not all interactions are alike. As the system improves, the vast majority of interactions/disengagements will become "optional"; the human driver may take over out of impatience or feeling like they can do a slightly better job, or if they see a situation they're not 100% sure the car can handle, but not because the car wouldn't be able to navigate the situation safely. Distinguishing "optional" disengagements from "necessary" disengagements (where the car would likely experience a collision or failure without intervention) is far from trivial. In practice (with FSD 8.0 on the highway), I disengage on average every 10 miles or so, but I'd guess only 5-10% of these disengagements are because I'm worried about safety, and maybe 1% of those would actually lead to a crash if I didn't disengage. Presumably FSD 9.0 will be an order of magnitude safer, though maybe still not safe enough for L4. We will see.
 

I agree. And the CA DMV publishes a disengagement report that includes the cause of each disengagement.

But that's why I wish Tesla would release some data on this. Let's say for Q1, we saw that the optional disengagements for FSD Beta were 1 per 10 miles and the safety disengagements were 1 per 30 miles. And then for Q2, the optional disengagements for FSD Beta were 1 per 50 miles and the safety disengagements were 1 per 100 miles. We could see a trend and get a sense of how FSD Beta is improving in both the safety areas and the convenience areas. I know disengagement rates are not a hard and fast metric, but it would at least give us a general sense of the progress.
 
Depends on which L3 definition you're using. If by L3 you mean "The car is capable of initiating and completing the full set of driving maneuvers [e.g. lane changes] but still requires constant supervision" (one definition), then FSD 9.0 will be L3. If you take it to mean "Under specific limited driving conditions, drivers can take their eyes off the road, unless the car specifically requests intervention" (the other definition), then agreed it is still L2, and won't be L3 for a couple more years.

Japan's law requires the L3 system (and its manufacturer) to take responsibility while it is active. If the system needs to hand responsibility back, it must give the human driver adequate time, such as 45 seconds, to take over.

The European Union is adopting the same principle. Whenever the car is operating at L3 or above, the manufacturer is responsible when something goes wrong (such as a failure to brake), not the human driver.

L3 and above shift accountability from the human driver to the machine and its manufacturer while the car is operating at that SAE level.

L3 has been commercially available in Japan since 3/5/2021. While it's active, drivers are free to read newspapers, watch videos, and so on; if there's a collision while the human driver was watching a video in L3 mode, the manufacturer is responsible.

There's no such law in the US, and Tesla has not volunteered to take responsibility for accidents that occur in L3-and-above modes.

That, in itself, is a hint about the rate of progress and how soon it will actually arrive.


I think their vision approach can achieve this, because it can overcome the radar-signal ambiguities that led to these problems in the first place. (Radar detects an object, but assumes it is an overhead sign or part of the landscape.) Unfortunately I think the only way to prove it is by releasing it and seeing how many counterexamples there are. But before that, I wonder if they have adversarial internal teams trying to design or stage tests for this? The real world has a long long tail of weird weird stuff, and it will be hard to anticipate the failure modes that actually occur.

You are overthinking it. There's nothing weird about a freeway coming to a standstill in a traffic jam or because of an accident ahead.

If there's a huge stationary fire truck parked straddling the shoulder and part of a lane, and your Autopilot/EAP/FSD Beta is running at 65 MPH, can the Tesla automation system brake in time to avoid a collision, or swerve out of the way to avoid it?

The pro-LIDAR camp says stationary obstacles are not a problem at all. Waymo has handled them since 2009 with no deaths and no accidents, even with stationary obstacles on freeways. There's nothing weird about it: LIDAR detects the obstacle every time, knows exactly where it is, and Waymo's system can either brake or steer around it.
 
That said, I've seen nothing that Tesla or Elon Musk has reported on self driving that is inaccurate.
So him reporting that we'd have self driving in 2017, 2018, 2019, 2020, 2021... Those aren't inaccurate? That "self driving is a solved problem" like he said in 2015?

I mean, if you fail to hold him to any kind of timeframe, then you can always say that he's never inaccurate; it's just that whatever he says will come true at some point in the future. By that definition, me saying I'm richer than Elon Musk is also not inaccurate, nor is it inaccurate to say you have been to space. I just have the time estimates a bit off and am failing to meet my goals.

Perhaps that's his way of driving himself and his team to work harder, because they're always falling short of their goals.
If only he didn't use these timeframes to tell the public that a release of a software package was only days, weeks, or months away and they should pay $10,000 for it. It's pretty convenient that these messages don't just push his team, but also get Tesla major revenue and keep the stock price up. I'm sure that's just a secondary side effect though.
 
Presumably FSD 9.0 will be an order of magnitude safer, though maybe still not safe enough for L4.
Elon has tweeted that V9 is not yet as good as highway AP, and CJ has said it's nowhere near L3, but you're both suggesting it "may not" be L4 and will only be an order of magnitude safer?

Current FSD Beta requires intervention every ~3 miles. Elon says we need 100M miles to get to human-level safety. That's 6-7 orders of magnitude off. And we know the city-streets code doesn't even change highway behavior right now, and it's nowhere near L4.

Let's see if we can even start considering L3 in a single environment like a low density highway before we worry about L4.
 
When I see some FOIA request from reliable party...
Are you saying this is faked? It is irrelevant who requested the FOIA; the document is what matters.
And of course, Tesla has not refuted this in any way. All it would take is one tweet.

But I can see you are someone who cares a lot more about the messenger than the message:

[attached screenshot]
 
Three weeks ago. He clearly says V9 FSD is not as good as "production" highway AP (which itself is beta, not production).

I interpreted his quote differently. He says three things:

1. FSD 9.0 is still being improved.
2. The ultimate goal is to get FSD to 99.999999% on city streets at L4/L5.
3. Autopilot is already above 99.999999% on highways at L2.

This does not mean that existing AP is "better" than FSD in any given scenario, because the last two statements are about very different scenarios. It also doesn't mean that Autopilot would currently function at 99.999999% on highways at L4. Most of its failure modes would be averted by an alert L2 driver, and that's how 99.999999% is currently achieved.
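To illustrate that last point with a rough back-of-the-envelope (all numbers below are made-up assumptions, not Tesla data): a system whose solo failure rate is nowhere near L4-grade can still look ~99.999999% reliable when an attentive L2 driver catches nearly every error.

# Hypothetical sketch of how L2 supervision masks the machine's solo failure rate.
# Both inputs are illustrative assumptions, not measured data.
machine_failures_per_mile = 1 / 1_000    # assume the system alone errs once per 1,000 miles
driver_miss_rate = 1 / 100_000           # assume the supervising driver misses 1 in 100,000 errors

combined_failures_per_mile = machine_failures_per_mile * driver_miss_rate
print(f"Combined failure rate: ~1 per {1 / combined_failures_per_mile:,.0f} miles")
print(f"Per-mile success rate: {1 - combined_failures_per_mile:.8%}")
# With these assumed numbers, the supervised system is ~99.999999% reliable per mile,
# even though the machine on its own fails far too often for unsupervised (L4) use.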
 
Current FSD Beta requires intervention every ~3 miles. Elon says we need 100M miles to get to human-level safety. That's 6-7 orders of magnitude off. And we know the city-streets code doesn't even change highway behavior right now, and it's nowhere near L4.
Every ~3 miles in city driving; much better on highways. FSD 8 causes me to intervene "for safety" every couple hundred miles on the highway in my experience, with perhaps 1% of those interventions "actually needed" (i.e. would avoid a crash), so I'd estimate FSD 8's L4 mean-distance-between-crashes on highways is currently once per 20k miles or so. FSD 9 will undoubtedly do far better, maybe 200k miles. That's one order of magnitude away from CJ's goal for highway L4, not 6-7 orders of magnitude. (The "once per 100M miles" figure is for L5, which I agree is very far away.)
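For what it's worth, here's the back-of-the-envelope arithmetic behind that estimate, with the assumptions spelled out (the intervention spacing and the 1% figure are my own guesses, not measured data):

# Infer an implied "miles between would-be crashes" from how often I intervene
# and what fraction of those interventions I believe were truly needed.
# Both inputs are personal guesses, not official numbers.
miles_per_safety_intervention = 200   # assumed: one "for safety" intervention per ~couple hundred highway miles
fraction_actually_needed = 0.01       # assumed: ~1% of those would really have ended in a crash

implied_miles_between_crashes = miles_per_safety_intervention / fraction_actually_needed
print(f"Implied mean distance between crashes: ~{implied_miles_between_crashes:,.0f} miles")
# -> ~20,000 miles under these assumptions; one order-of-magnitude improvement would be ~200,000.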
 
FSD 8 causes me to intervene "for safety" every couple hundred miles on the highway in my experience, with perhaps 1% of those interventions "actually needed" (i.e. would avoid a crash), so I'd estimate FSD 8's L4 mean-distance-between-crashes on highways is currently once per 20k miles or so. FSD 9 will undoubtedly do far better, maybe 200k miles.
There is no such thing as FSD 8 on the highway, so I don't really know what you are talking about. Given that my car absolutely would have crashed twice today on AP on the highway, in 50 miles, without my intervention, there is zero way the "production" AP code in cars today achieves 20k miles between incidents with no human present.
What makes you think FSD 9 is better? There is zero evidence they are updating the highway code at all, just as they didn't update it for FSD 8. The FSD code only affects city streets for beta testers.
CJ's goal did not say it was needed for highway L4. He said it was L2 currently, and that 2 million miles was needed to move to the "next level", which is L3. Which is perfectly in line with needing 1:100M before they go L4. Even if you think it's currently 1:20k miles, that's still 5 orders of magnitude off for L4.
There is zero difference between L4 and L5 in terms of risk; it's just a matter of where the car can drive. More than 1:100M is needed for highway L4 because the current human fatality rate on highways is lower than that, and nobody is going to use or insure an FSD system that is only equivalent to an average human.
 
... And some have argued that L5 is an asymptote, something that AVs can get really close to without ever actually reaching. Perhaps replacing L3-L5 with a more gradual L4 would be more useful.
This is a great topic. In my view, L4 with pre-drive calculated restrictions would be a major asset to those who cannot drive for reasons of disability, age, state of sobriety, past-record legal restrictions, etc.

Of course it is necessary that the projected probability of an accident be extremely low. However, it's not necessary that the projected probability of a controlled disengagement (autonomously stopping to park, or at least pulling safely out of the roadway) also be incredibly low. This is an important distinction, and it mitigates some of the "march of nines" performance goals that people bring up when discussing L4/L5 autonomy.

The idea is that the requested L4 drive is quite likely, though maybe not extremely likely, to be fully successful; say 99% or 99.9%. We can debate the threshold, and it could even be user-specified*. This pre-estimate would be based on knowledge of the available routes, traffic conditions, construction projects, weather forecast, etc.

In L4 operation with an unlicensed or currently driving-incapable passenger, the manual controls (if even present) would be offline. Again, the expectation is reasonably high that the drive will be autonomous and is unlikely to encounter a disengagement. But in the event of one, there is a set of recovery options:

  • Whatever situation that forced the disengagement is likely temporary and L4 re-engagement can be requested after a time.
  • If the car does have controls, a human driver can be called upon to take over:
    • Any licensed passenger if present,
    • A field service agent,
    • Law enforcement/responder personnel (but perhaps only to move the vehicle from a "relatively safe" position to a "very safe" position well away from traffic),
    • Any trusted Good Samaritan (by judgment of the adult passengers and/or a contacted remote guardian).
  • If no controls, a remote-assistance operator can take over:
    • if necessary to the completion of the drive,
    • or more typically just long enough to clear the scene of the disengagement, enabling L4 re-engagement.

These points apply both to personally-owned L4 vehicles and to fleet RoboTaxis.

*The probability-of-success threshold can depend to a large degree on the tolerance of the user to a potential disengagement. A RoboTaxi fleet company and its customers presumably have a low tolerance for delays or problems. OTOH a blind or otherwise disabled adult, riding in their own AV, may be quite willing to accept an occasional delay event in exchange for the independence of life enabled by the AV technology.

I think these considerations should serve to broaden the definition of what L4 needs to achieve and, per the quoted point, allow for gradated and more quantifiable rating levels within L4.
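To make the pre-drive success estimate a bit more concrete, here is a minimal sketch (with made-up numbers) of how an assumed per-mile disengagement probability would translate into the whole-trip completion probability discussed above:

# Minimal sketch: probability that a requested L4 drive completes with no disengagement,
# treating each mile as independent. All numbers are illustrative assumptions.
def trip_completion_probability(disengagements_per_mile: float, trip_miles: float) -> float:
    return (1.0 - disengagements_per_mile) ** trip_miles

# Example: an assumed rate of one disengagement per 2,000 miles, on a 20-mile trip.
p = trip_completion_probability(1 / 2_000, 20)
print(f"Estimated completion probability: {p:.3%}")  # ~99.0%: meets a 99% threshold, not a 99.9% one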
 
You love to play dumb, but you've been shown this information about plain shite on another thread here, and we went through this discussion before.

Just because it fits your narrative doesn't mean that the person behind the site is not a complete psychopath that is literally in the process of trying to destroy people's lives.

I am anti-plainsite. Don't try to lump me in with the pro-plainsite side.