
Waymo One launches

When did Elon say it was going to be L5 in 2019?

L4 covers the base no-driver FSD functionality, but doesn't require as much one-off event handling.

Elon Musk clarifies Tesla’s plan for level 5 fully autonomous driving: 2 years away from sleeping in the car


So far nothing points to FSD being anything more than Level 2, and his tweet "with no driver input" confirms this. He should have written "eyes off" if this was a Level 3 system. Please don't use this expression in a misleading way. Elon's misleading terms are causing enough misinterpretation in public; no need to spread it. Level 4 can drive from start to finish without a driver and can handle all situations, except when someone else is at fault.

Did you know he calls the coolant reservoir "superbottle"?
 

Cars delivering themselves or summoning out of eyesight/direct control would be Level 4.
Level 4 can also be geofenced/road-type limited.
What you describe as 'all situations' is Level 5.
 

All situations in the geofenced area.
To be more precise, it either drives from start to finish or does not drive at all. It may reject the trip if the weather is bad, for example.
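To keep the terminology in this argument straight, here is a minimal sketch (Python, purely illustrative; the field names are my own, simplified from SAE J3016) of who is responsible at each level and whether a geofence is allowed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SaeLevel:
    level: int
    name: str
    responsible: str   # who is responsible for the drive
    odd_limited: bool  # may the system be geofenced / road-type limited?

# Simplified summary of SAE J3016 as discussed in this thread.
SAE_LEVELS = [
    SaeLevel(2, "Partial automation", "driver", True),   # hands on, eyes on
    SaeLevel(3, "Conditional automation", "car", True),  # "eyes off" within its domain
    SaeLevel(4, "High automation", "car", True),         # "mind off"; may refuse a trip (e.g. bad weather)
    SaeLevel(5, "Full automation", "car", False),        # all conditions, no geofence
]

for lvl in SAE_LEVELS:
    scope = "limited domain" if lvl.odd_limited else "everywhere"
    print(f"L{lvl.level} {lvl.name}: {lvl.responsible} responsible, {scope}")
```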
 
When did Elon say it was going to be L5 in 2019?

L4 covers the base no-driver FSD functionality, but doesn't require as much one-off event handling.

Speaking of levels, I believe Tesla has never mentioned any levels other than Level 5, which Elon used upon the Autopilot 2 launch (YouTube at the end). They have never mentioned Level 3 or Level 4, as far as I remember, in the context of coming Autopilot 2 features. Anyone have sources otherwise? Specifically, Tesla has also declined to take responsibility for the autonomous drive, unlike European brands that are working on autonomous systems they will be responsible for. Elon mentioned your insurance being responsible instead.

Level 4 already allows mind-off driving, which means you can sleep in the driving scenarios supported by the car. Level 3 is car-responsible driving too, allowing things like reading emails. Can anyone really see Autopilot 2 maturing to that car-responsible level anytime soon? So far every issue with Autopilot 2 can be excused with driver responsibility, but can you see how large a leap it would be from that to car responsibility...

Think of how every single Autopilot conversation on TMC would change dramatically if a mistake by Autopilot were really on Tesla instead of the driver. The sea change in how we perceive this would be huge, and the quality required of the product massively different.

My expectation is that Autopilot 2 will remain Level 2 (driver responsible) for a very long time to come, and car-responsible solutions (Level 3 and above) will come first from other companies. Indeed, the big difference for Tesla is that they seem to be working on iterating a Level 2 driver's aid as their way to autonomy. The upside is we get interesting driver's aid features beyond what others are shipping.

The downside is that Tesla does not seem to be focusing much on car-responsible driving at the moment, which suggests reading a book or emails responsibly on the highway may happen on other brands much sooner than on a Tesla. One reason is the maturity of Tesla's vision stack overall, which seems to be behind the current leaders, but more importantly, without lidar Tesla needs an even more mature vision system than others to move to car-responsible driving. (Unless Tesla changes their mind on that in future hardware suites.)

I just can't see it. I can see Tesla releasing interesting driver's aid updates in 2019, and I look forward to them, but I can't see how or where their car-responsible driving would come at this stage for the Autopilot 2 hardware suite (even CPU/GPU upgraded).

 


My feels:
Tesla is working the car-responsible route, but you can't say a software/hardware set is safer than a driver until the set has been tested a sufficient amount. To do that, they'll need human-in-charge driving that is basically real-world testing of the AP software. Once the set reaches x million miles with a low rate of disengagements/at-fault accidents, then they can say the SW/HW set is good to go for primary control.

This stage of development won't kick into place until after HW 3 rolls out to support the NN in development. EAP-only 2.x HW will likely support only a downsized driver-assist feature set (AK's Software 2.0 adjusted-requirements model of things).
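A back-of-the-envelope version of that "x million miles with a low at-fault rate" gate might look like the sketch below; every threshold here is made up for illustration, not a real Tesla or regulatory number:

```python
def ready_for_primary_control(miles: float, at_fault_events: int,
                              min_miles: float = 100e6,
                              human_miles_per_event: float = 500_000) -> bool:
    """Toy readiness gate: enough real-world exposure AND an at-fault event
    rate better than an assumed human baseline (all numbers illustrative)."""
    if miles < min_miles:
        return False  # not enough human-in-charge testing yet
    return miles / max(at_fault_events, 1) > human_miles_per_event

print(ready_for_primary_control(miles=150e6, at_fault_events=400))  # False: ~375k miles/event
print(ready_for_primary_control(miles=150e6, at_fault_events=100))  # True: 1.5M miles/event
```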
 
My feels:
Tesla is working the car-responsible route, but you can't say a software/hardware set is safer than a driver until the set has been tested a sufficient amount. To do that, they'll need human-in-charge driving that is basically real-world testing of the AP software. Once the set reaches x million miles with a low rate of disengagements/at-fault accidents, then they can say the SW/HW set is good to go for primary control.

Yes, that is how Tesla expresses their plan. What I find we have very little visibility on is whether Tesla will approach this in stages and if so how, or if not in stages, how far they are from it. We know many other car companies are looking at Level 3 or 4 on the highway in the rather near term, which means solving a hard but relatively specific problem. If Tesla's approach is to build a driver's aid for everything and then validate that everything to an autonomous level, that sounds like a long road indeed.

They could do both of course: work on the long term and still release something car-responsible near term, as I feel others are planning to. But just looking at the highway, it seems hard to see how Tesla's approach would produce autonomous (car-responsible) driving even there anytime soon, given how expansive a feature set they are working on for the highway. Everything to me has the feel of them working on this from a driver's aid mentality at this stage: making feature-rich stuff but not yet worrying about the car being responsible.

Contrast this with the likes of Audi, who deliberately limited their feature set and plans but at the same time have offered a vision of how exactly that will translate into car-responsible driving. The thing about car-responsible driving is that it is not enough to drive within normal lanes and sense traffic perfectly (which Autopilot 2 is far from doing anyway); you also have to be prepared for every eventuality beyond that, because you can no longer rely on a driver (excluding some things like a tire exploding, perhaps, depending on the level). That is a difficult task. It is even more difficult for Tesla, which eschewed traditional approaches like cross-traffic radars and lidar.

It is possible looks are deceiving, but I have a hard time seeing where the car-responsible effort is at Tesla. I would go as far as to say we have seen no sign of it at all yet. How will the Autopilot 2 progress turn into actual car-responsible autonomous driving, and when? Does anyone see something I am missing?
 
Staying at Level 2 and continually improving the system is dangerous.
People will get used to the system becoming better and better, and they will pay less and less attention, which will result in big accidents.

Versus Level 3, where the car tells you: hey, you can read emails now, but once we reach such-and-such conditions, I'll switch back to Level 2 and you will need to pay attention.
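That handover behavior can be pictured as a tiny state machine. The sketch below (Python; the states and the `conditions_ok` predicate are hypothetical, not any manufacturer's actual logic) shows why Level 3 is qualitatively different from an ever-improving Level 2:

```python
from enum import Enum, auto

class Mode(Enum):
    LEVEL2 = auto()    # driver responsible, must watch the road
    LEVEL3 = auto()    # car responsible, driver may read emails
    TAKEOVER = auto()  # car has asked the driver to resume control

def next_mode(mode: Mode, conditions_ok: bool, driver_resumed: bool) -> Mode:
    """Toy transition function for the Level 2 <-> Level 3 handover."""
    if mode is Mode.LEVEL2 and conditions_ok:
        return Mode.LEVEL3    # "hey, you can read emails now"
    if mode is Mode.LEVEL3 and not conditions_ok:
        return Mode.TAKEOVER  # warn the driver; the car stays responsible meanwhile
    if mode is Mode.TAKEOVER and driver_resumed:
        return Mode.LEVEL2    # "you will need to pay attention"
    return mode

# Leaving the supported domain while at Level 3 triggers a takeover request:
assert next_mode(Mode.LEVEL3, conditions_ok=False, driver_resumed=False) is Mode.TAKEOVER
```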
 
One more thing about responsibility. So far the manual has limited Autopilot to highway and limited-access-road use only. With the features Elon lists in his tweet, like roundabouts, traffic lights and stop signs, I wonder what happens to such limitations.

This means eventually even more responsibility will fall on the driver, who will have to responsibly monitor an even wider range of traffic situations while on Autopilot.

Indeed this change will also alter the Autopilot conversation a little, because once these features ship we can no longer dismiss non-highway Autopilot use as irresponsible. Instead it will become normal, and not just an ill-advised exception, to use Autopilot on many kinds of streets.

It is easy to see Tesla adding functionality, but harder to see how it would be any more mature than what we have seen so far for a long time to come. At the same time Tesla is approaching ever more challenging situations while keeping the responsibility on the driver to handle any eventuality, including any mistake by the system.
 
Just a note: Elon never uses Lx to describe the FSD capability. Those definitions are too broad and can't characterize the usefulness and difficulties of each individual system. No two systems are the same, whether one puts them on the same level or not. This could of course also indicate a difference in thinking between Elon and others about how this is approached.
 

Elon did call Autopilot 2 "Level 5 capable" at launch. It is true the talk of Levels has subsided since then, but the association between FSD and Level 5 did not come out of nowhere. Elon of course also famously used the summon-your-car-from-the-other-coast example, heavily implying driverless autonomous driving, which would be Level 4 or Level 5.

The press call where Elon makes this "Level 5" statement, which was then widely quoted, is in the YouTube link in my post #65 above.

As for the Levels in general, when it comes to autonomy, reaching Level 3 matters a lot. That is the point where the car becomes responsible for the drive. So the distinction between being at or above Level 3 and being beneath Level 3 is probably the most significant one.
 
My feels:
Tesla is working the car-responsible route, but you can't say a software/hardware set is safer than a driver until the set has been tested a sufficient amount. To do that, they'll need human-in-charge driving that is basically real-world testing of the AP software. Once the set reaches x million miles with a low rate of disengagements/at-fault accidents, then they can say the SW/HW set is good to go for primary control.

This stage of development won't kick into place until after HW 3 rolls out to support the NN in development. EAP-only 2.x HW will likely support only a downsized driver-assist feature set (AK's Software 2.0 adjusted-requirements model of things).

Agree completely. Once Tesla has full self-driving software that it believes is ready for production deployment, based on internal testing and whatever fleet data can be usefully gathered (e.g. "shadow mode"), the logical thing to do is to release the full self-driving software initially as a Level 2 product that requires human monitoring and occasional intervention. In effect, the Tesla owner becomes a safety driver.

Once Teslas drive 6 billion to 11 billion miles in safety driver mode, there will be enough statistical evidence to benchmark Tesla's AI against the average for human drivers. At that point, if the AI is safer, Tesla can decide to relinquish the safety driver requirement.

This process can also, in theory, work in a more incremental, modular way. Say, with stop sign detection: benchmark the rate of stop sign detection failures against the rate at which humans drive through stop signs without stopping. Or benchmark highway Autosteer against human lane keeping on highways.

HW2 Teslas currently drive something like 7 million to 10 million miles on Autopilot per week. The HW2 fleet is growing by 6,000 to 9,000 cars per week, so the number of Autopilot miles per week is also growing. As the production rate increases, the fleet will grow even faster. And as more features are released, the rate of Level 2 miles per car will also go up (due to increased purchases of the software packages and increased use). So Tesla is in a good position to collect a lot of statistical data very quickly.

According to RAND Corp., 275 million miles would be enough to show that self-driving cars match human safety. So if regulators are OK with that, Tesla could launch after only a few months of safety driving.
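Putting the numbers in this post together, a rough projection (the ~50 Autopilot miles per new car per week is my assumption; the other figures are the midpoints quoted above, held constant) suggests the 275-million-mile bar is indeed a matter of months:

```python
# Rough projection: ~8.5M Autopilot miles/week today (midpoint of 7-10M),
# ~7,500 new HW2 cars/week (midpoint of 6,000-9,000), and an ASSUMED
# ~50 Autopilot miles per week contributed by each new car.
weekly_miles = 8.5e6
new_cars_per_week = 7_500
ap_miles_per_new_car = 50  # assumption, not from the post

RAND_PARITY_MILES = 275e6  # RAND Corp. figure cited above
total, weeks = 0.0, 0
while total < RAND_PARITY_MILES:
    total += weekly_miles
    weekly_miles += new_cars_per_week * ap_miles_per_new_car
    weeks += 1

print(f"~{weeks} weeks (~{weeks / 4.3:.1f} months) to reach 275M miles")
# The 6-11 billion mile benchmark would take far longer at these rates.
```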
 
Elon did call Autopilot 2 "Level 5 capable" at launch. It is true the talk of Levels has subsided since then, but the association between FSD and Level 5 did not come out of nowhere. Elon of course also famously used the summon-your-car-from-the-other-coast example, heavily implying driverless autonomous driving, which would be Level 4 or Level 5.

The press call where Elon makes this "Level 5" statement, which was then widely quoted, is in the YouTube link in my post #65 above.

As for the Levels in general, when it comes to autonomy, reaching Level 3 matters a lot. That is the point where the car becomes responsible for the drive. So the distinction between being at or above Level 3 and being beneath Level 3 is probably the most significant one.

OK, he probably was using L5 interchangeably with FSD, which is fine. He always uses more detailed descriptions like "freeway on-ramp to off-ramp", "garage at home to parking lot at work" or "summon the car from the other coast" to indicate the capability.

The different approaches I mentioned: Elon started by thinking only about FSD and how to get there, and EAP is just an FSD-light, not-strictly-needed interim step. Others all seem to work toward one level at a time, whatever that is, until it leads to the next level.
 

The way it looks to me currently is that everyone basically has a self-driving car in mind as the goal, but there are roughly three different roads to get there:

1. A Level 2 implementation much if not all of the way (driver responsible), until Level 5 (or advanced generic Level 4) can be offered: Tesla
2. A combination of Level 2 (driver responsible) plus Level 3 and 4 (car responsible) features for limited scenarios, until Level 5 (or advanced generic Level 4) can be offered: many brands
3. Straight to Level 4/5, but sticking to a controlled fleet until broader deployment is possible: Waymo, Mobileye's taxi fleet, etc.

I am simplifying of course, but there really do seem to be three different approaches. Tesla seems to be avoiding or delaying the question of true autonomous driving by developing more and more advanced driver's aids that will one day hopefully turn into autonomous driving, while others are attacking the autonomy question sooner through limited scenarios (Level 3/4 pilots for certain scenarios in consumer cars) or fleet deployment (like taxis).

I do believe generic autonomy is the goal for everyone, though. The ways there differ.
 
Regarding the overall plan: I think we can agree that Tesla's development of AP has had a few reboots in terms of approach.
My take on the current setup (based mostly on the Karpathy talk) is that Tesla is doing something like this:
1. Acquire a large compute cluster for NN training and testing
2. Put all data (labels, training images, test cases, NN shape guidance) in their repository
3. Retrain the entire NN on every check-in, training against every source, validating against every test case
4. Testing includes running the NNs against labeled real-world data at silicon speed
5. Possibly putting a bunch of copies in a simulated driving environment for further testing

This setup would be why Elon says no one (that he knows of) is working on the level of generalized solution that Tesla is: Tesla has only one NN/code base. It is not the normal SW route of adding features as separate modules.

To add stop signs, for instance, they would take the current code base, which trains an NN that handles NoA and such, then add in the stop-sign positive and negative reference images along with test cases, and rebuild the entire thing. Based on the results, the training data and parameters get updated to correct the new behavior and address any issues introduced in the original feature set.

As each feature is added, each previous feature gets better due to the increased library of things it 'understands' and things it is tested against (now that you know what a stop sign is, ignore stop signs not on your section of the road).

The net result is that progress comes in big chunks (when all tests pass), and we won't see the big step change until the 3.0 HW is installed.
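A caricature of that one-code-base workflow, with everything reduced to toy stand-ins (the `Repo` structure and the "training" below are mine, not Tesla's actual tooling), might look like this; the point is that a feature is new data plus new test cases, not a new module:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Repo:
    """One repository: all training data plus every test case, old and new."""
    training_data: List[str] = field(default_factory=list)  # stand-in for labeled images
    test_cases: Dict[str, Callable[[set], bool]] = field(default_factory=dict)

def retrain_on_checkin(repo: Repo) -> set:
    # "Training" is reduced here to memorizing labels; the workflow is the
    # point: one model, rebuilt from everything, gated on every test case.
    model = set(repo.training_data)
    failures = [name for name, case in repo.test_cases.items() if not case(model)]
    if failures:
        # Data and parameters get adjusted and the whole thing rebuilds, which
        # is why progress lands in big chunks, only once all tests pass.
        raise RuntimeError(f"regressions or gaps in: {failures}")
    return model

# Adding stop signs = adding reference data and test cases, not a module:
repo = Repo(training_data=["lane_line", "vehicle", "stop_sign"],
            test_cases={"sees_lanes": lambda m: "lane_line" in m,
                        "sees_stop_signs": lambda m: "stop_sign" in m})
model = retrain_on_checkin(repo)  # succeeds only when every case passes
```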
 
According to RAND Corp., 275 million miles would be enough to show that self-driving cars match human safety. So if regulators are cool with that, Tesla could launch after only a few months of safety driving.

How can a company prove that they have driven x miles and had no incidents? I wonder if sometime in the future all these systems will have to pass a specific number of miles in an independent simulator like the one AImotive has.
 
By the way, one of the guys mentioned that they are having difficulties figuring out which lane the vehicles are in on a curve.
 

The net result is that progress comes in big chunks (when all tests pass), and we won't see the big step change until the 3.0 HW is installed.

One fallacy in predicting progress is when people use human ability to assess a machine's. Progress in machine learning is exponential instead of linear. It does not follow the human learning process, even if NNs operate on a similar principle. That's why, when Elon talked about the danger of AI, he said machines can go from sub-human-level intelligence to super-human without us even noticing it.
 

Yar. There is also the software side of things: it doesn't work until it does... Humans are good at doing things mostly right, but if the software has a root issue, it just sucks until that issue is resolved, and then it is great. A human baseball player's batting average rises and plateaus; a software batter could bat .000 and then .900.

I should also clarify my previous statement: HW 2.x can grow using the same setup and different NN training parameters (with 1/10 the available processing power). So we should be able to see jumps in current performance.
 
One fallacy in predicting progress is when people use human ability to assess a machine's. Progress in machine learning is exponential instead of linear. It does not follow the human learning process, even if NNs operate on a similar principle. That's why, when Elon talked about the danger of AI, he said machines can go from sub-human-level intelligence to super-human without us even noticing it.

Earlier you criticized me for not understanding software. Now I think you are the one who doesn't get it. A NN is trained to reach a well-specified behavior. It can't do more than what the target is. It can do a task better, but it can't outsmart humans. Maybe in 100 years, on quantum computers.