Elon: "Feature complete for full self driving this year"

I saw it. I also know that it's not a single huge NN. It's several NNs running in separate threads / tasks. So I had a small hope it could be compiled for HW2+ and consume < 20% with sign reading.
My impression is that the NNs are in series. They consume output of other NNs. So, I don’t think an individual NN can be taken from HW3 and made to work in HW2 like it’s plug and play.

The whole architecture has to work together. For example, HW3 has a single NN that takes all camera inputs and does some basic processing to extract the edges etc. Since there is no NN like that in HW2, the NNs that come after this all-camera NN won't work in HW2.
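The series dependency can be illustrated with a toy pipeline (pure Python; the stage names, thresholds and numbers are invented for illustration, not Tesla's actual architecture). Each stage is trained against the previous stage's output format, so no single stage is swappable in isolation:

```python
# Toy illustration (not Tesla's actual architecture): three "NNs"
# chained in series, each consuming the previous stage's output.

def camera_fusion(frames):
    # Stage 1: stand-in for the single NN that takes all camera
    # inputs and extracts basic features (edges etc.).
    return [sum(f) / len(f) for f in frames]

def object_detector(features):
    # Stage 2: trained against Stage 1's output format, so it is
    # useless without a Stage 1 that produces exactly that format.
    return [f > 0.5 for f in features]

def planner(detections):
    # Stage 3: consumes Stage 2's detections.
    return "brake" if any(detections) else "cruise"

frames = [[0.2, 0.9, 0.7], [0.1, 0.1, 0.2]]
print(planner(object_detector(camera_fusion(frames))))  # brake
```

Porting only one stage to different hardware breaks every stage downstream of it, which is the "not plug and play" point above.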
 
That said, I had a discussion with a colleague over lunch about this very thing. And, I personally don't believe Elon will roll this out until they hit six 9's of reliability. The cost of being wrong is just too high. Not just for Tesla, but for autonomous cars in general... Cue Good Morning America, "Killer Teslas" - Done, over...
They have definitely rolled out features that are less than 6 9s.

Though, it is an interesting point. When Tesla says they will be FC this year but will continue to improve reliability - will they release the FC features to the fleet? My guess is they will. In fact, they will probably rely on disengagement and other data from the fleet to improve city NOA.

They do want to be quite reliable on individual tasks, maybe even 6 nines. But overall city NOA will be unreliable because they will address only MVP features at first.
 
They do want to be quite reliable on individual tasks, maybe even 6 nines. But overall city NOA will be unreliable because they will address only MVP features at first.

A lot of people have a pretty predictable commute, maybe that will lend itself to city NOA, with the driver knowing when and where the car has issues, and where it is reliable. We'll see soon I guess...
 
A lot of people have a pretty predictable commute, maybe that will lend itself to city NOA, with the driver knowing when and where the car has issues, and where it is reliable. We'll see soon I guess...
Yes, that's what I do even now. I know where to engage and when to disengage. Currently I disengage for roundabouts, stop signs, traffic lights and turns. The MVP they are working on will actually cover my commute.

In fact, I'd say that, despite the name, city NOA will work well in suburbs with better roads and few, if any, pedestrians and cyclists. It will probably not work well in downtown city centers.

Of course, it initially won't cover exceptions like construction zones, school buses and emergency vehicles.
 
Simulation vs real-world testing is not an either-or affair. Simulation greatly reduces development time, but in the end it's in the real world where you learn if your simulation was valid. Everything designed nowadays goes through simulation first, then real-world testing. Once it's considered ready for market a few adventurous souls jump at the chance. A few more jump in when the first bunch survive. And then more and more until it goes mainstream.
 
A lot of people have a pretty predictable commute, maybe that will lend itself to city NOA, with the driver knowing when and where the car has issues, and where it is reliable. We'll see soon I guess...
The more I read stuff like this the more I'm convinced the city NoA will be horribly unsafe as a level 2 system. Let's say Tesla releases city NoA that is 4 9's of safety (one accident every 10,000 miles). So it might work great on your commute 500 times in a row, no disengagements. Obviously you know the car has no issues on your commute, it just did it 500 times in a row! On the 501st time, when you have to be paying attention to avoid an accident, will you be?
 
The more I read stuff like this the more I'm convinced the city NoA will be horribly unsafe as a level 2 system. Let's say Tesla releases city NoA that is 4 9's of safety (one accident every 10,000 miles). So it might work great on your commute 500 times in a row, no disengagements. Obviously you know the car has no issues on your commute, it just did it 500 times in a row! On the 501st time, when you have to be paying attention to avoid an accident, will you be?

Therein lies the rub. Everyone becomes complacent on the 501st trip. Human nature. Let's hope that scenario doesn't happen...
 
The more I read stuff like this the more I'm convinced the city NoA will be horribly unsafe as a level 2 system. Let's say Tesla releases city NoA that is 4 9's of safety (one accident every 10,000 miles). So it might work great on your commute 500 times in a row, no disengagements. Obviously you know the car has no issues on your commute, it just did it 500 times in a row! On the 501st time, when you have to be paying attention to avoid an accident, will you be?
Don’t worry- it will need attention and disengagement often enough to keep you on your toes.

The problem comes when they actually get good but not very good.
 
The more I read stuff like this the more I'm convinced the city NoA will be horribly unsafe as a level 2 system. Let's say Tesla releases city NoA that is 4 9's of safety (one accident every 10,000 miles). So it might work great on your commute 500 times in a row, no disengagements. Obviously you know the car has no issues on your commute, it just did it 500 times in a row! On the 501st time, when you have to be paying attention to avoid an accident, will you be?

Wouldn’t worry; my guess is we’re about 5 years from it being this good. And it’ll all be banned well before then.
 
My impression is that the NNs are in series. They consume output of other NNs. So, I don’t think an individual NN can be taken from HW3 and made to work in HW2 like it’s plug and play.

The whole architecture has to work together. For example, HW3 has a single NN that takes all camera inputs and does some basic processing to extract the edges etc. Since there is no NN like that in HW2, the NNs that come after this all-camera NN won't work in HW2.

Yeah if they use a specific NN or even a different set of algos in classical CV for edge detection, and the NNs are trained on that basis, then we are screwed.


It uses map data to do FSD!
 
I partially agree. You are right, you cannot contrive a test for every situation. But I'd bet that you can test to exceed the 150,000-mile rate... Here's how.

Now, this is going out on a limb; I know more than the average bear about software and neural nets, but just enough to be dangerous about vehicle autonomy.

At the end of the day, it's all software - some procedural code, some NN - and all of it fed a stream of input data that controls servos. Devs already write tests for all of the procedural stuff daily. That's easy... But I've recently read that the engineers at Waymo are now creating virtual environments used to train their NN systems so they can time-compress the training process...

Here is the limb... I bet that they have also devised their own internal test jigs for these virtual environments too. My prediction is that this training tool evolves into a test platform, one where real-world scenario data - say, 150,001 miles' worth, or billions in Tesla's case - would be captured and fed into these simulation jigs, and then the various manufacturers' computers and software could consume this artificial environment and be scored on how well they do.

Just a theory...

Simulations have an important role to play for sure. They are great when you need to do very repetitive tests in a safe environment. For example, if I am trying to validate that my self-driving car can respond to pedestrians who are jaywalking, a simulation can probably run a million scenarios per hour without endangering any real people. A real world test on the other hand, would require taking the car on a track or street and have real people walk in front of the car over and over again. It would be time consuming and risky.

But I am skeptical that simulations alone are enough to completely validate an entire FSD car. The reason I am skeptical is that I doubt simulations can completely capture the randomness of the real world. And ultimately, no matter how good simulations are, the FSD cars will operate in the real world, so they should be tested in the real world. The cars should be tested in the same environment that they need to operate in.
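The jaywalking example is exactly the kind of thing a simulator can grind through cheaply. A toy Monte Carlo sketch (all distances, speeds and the braking model are invented for illustration, not any vendor's actual simulator):

```python
import random

# Toy Monte Carlo: rerun a randomized jaywalking scenario many times
# and count how often a simple braking policy fails to stop in time.

def stops_in_time(ped_dist_m, speed_mps, decel_mps2=6.0, react_s=0.5):
    # Distance travelled during the reaction time, plus the
    # kinematic braking distance v^2 / (2a).
    stop_dist = speed_mps * react_s + speed_mps ** 2 / (2 * decel_mps2)
    return stop_dist < ped_dist_m

def run_trials(n, seed=0):
    rng = random.Random(seed)  # seeded so runs are repeatable
    collisions = 0
    for _ in range(n):
        ped_dist = rng.uniform(5, 60)   # pedestrian appears 5-60 m ahead
        speed = rng.uniform(5, 20)      # car travelling 5-20 m/s
        if not stops_in_time(ped_dist, speed):
            collisions += 1
    return collisions / n

print(f"collision rate over 100k trials: {run_trials(100_000):.3f}")
```

A loop like this runs millions of scenario variations per hour with nobody at risk - which is the upside; the downside, as above, is that it only explores the randomness you thought to model.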
 
Yeah if they use a specific NN or even a different set of algos in classical CV for edge detection, and the NNs are trained on that basis, then we are screwed.
They should be able to put the entire HW3 NN in HW2 after cutting some layers (the accuracy goes down a bit). But they will have to retrain to optimize, which they will probably do after FC (and after major releases, perhaps).

It uses map data to do FSD!
Yes, they use normal map data (actually a slightly more detailed one, but still a publicly available commercial map, not HD map).
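The "cutting some layers" idea above can be sketched with a toy model (invented numbers, pure Python - nothing here reflects Tesla's actual networks): dropping later layers keeps the model runnable on a smaller compute budget, but the output drifts from the full model's, which is the accuracy hit retraining would then try to recover.

```python
# Toy sketch: a model as a stack of layers; "cutting layers" keeps
# the early stages so the model fits a smaller compute budget.

def make_layer(weight):
    return lambda x: [weight * v for v in x]

full_model = [make_layer(w) for w in (1.1, 0.9, 1.05, 0.98)]  # "HW3-sized"
small_model = full_model[:2]                                  # cut for "HW2"

def run(model, x):
    for layer in model:
        x = layer(x)
    return x

x = [1.0, 2.0]
print(run(full_model, x))   # later layers refine the output
print(run(small_model, x))  # drifts from the full model's output
```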
 
The more I read stuff like this the more I'm convinced the city NoA will be horribly unsafe as a level 2 system. Let's say Tesla releases city NoA that is 4 9's of safety (one accident every 10,000 miles). So it might work great on your commute 500 times in a row, no disengagements. Obviously you know the car has no issues on your commute, it just did it 500 times in a row! On the 501st time, when you have to be paying attention to avoid an accident, will you be?

Don’t worry- it will need attention and disengagement often enough to keep you on your toes.

The problem comes when they actually get good but not very good.

So at first this will not be an issue, but as they get close to the tipping point where FSD approaches human-level competency it could become an issue.

For AP at present, they have not reached this point. Right now we have to disengage AP often enough that only aspirants of the Darwin award take their attention away from the road. Highway AP will almost certainly reach 500 commutes without an intervention before NoA/city does. One fatal accident will have everybody paying attention again. For a while. It's a question of how long it takes them to get from one accident every 501 commutes to one accident every 50,001 commutes.

Simulations have an important role to play for sure. They are great when you need to do very repetitive tests in a safe environment. For example, if I am trying to validate that my self-driving car can respond to pedestrians who are jaywalking, a simulation can probably run a million scenarios per hour without endangering any real people. A real world test on the other hand, would require taking the car on a track or street and have real people walk in front of the car over and over again. It would be time consuming and risky.

But I am skeptical that simulations alone are enough to completely validate an entire FSD car. The reason I am skeptical is that I doubt simulations can completely capture the randomness of the real world. And ultimately, no matter how good simulations are, the FSD cars will operate in the real world, so they should be tested in the real world. The cars should be tested in the same environment that they need to operate in.

In addition to the above, I think you need real-world testing so the software gets real-world input from the sensors rather than simulated input. Simulation will get you part, or most, of the way there, but the final testing has to be real world.
 
For AP at present, they have not reached this point. Right now we have to disengage AP often enough that only aspirants of the Darwin award take their attention away from the road. Highway AP will almost certainly reach 500 commutes without an intervention before NoA/city does. One fatal accident will have everybody paying attention again. For a while. It's a question of how long it takes them to get from one accident every 501 commutes to one accident every 50,001 commutes.
Chances of fatal accidents are much lower than of other kinds of crashes. So, likely there will be small fender benders rather than fatal accidents.

If the commute is 20 miles - 500 commutes is 10k miles. 1 L4+ crash per 10k miles is the human average.

Now, if Tesla sends city NOA to 500,000 cars and even if 100k of them use city NOA once a day for a 20-mile trip, that would be 100k commutes/trips (2 million miles) every day. If people are not paying attention when the reliability is 4 nines (1 crash per 10k miles, or one per 500 20-mile commutes) - you would expect 200 crashes every day ;)

[Attached image: crash-rates.PNG]
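The back-of-envelope fleet arithmetic above, spelled out (all three inputs are the post's assumptions, not measured figures):

```python
# Fleet-wide crash arithmetic under the post's assumptions.
active_cars = 100_000        # cars using city NOA once a day (assumed)
trip_miles = 20              # one commute (assumed)
miles_per_crash = 10_000     # "4 nines": one crash per 10k miles (assumed)

fleet_miles_per_day = active_cars * trip_miles
expected_crashes_per_day = fleet_miles_per_day / miles_per_crash

print(f"{fleet_miles_per_day:,} fleet miles/day")     # 2,000,000
print(f"{expected_crashes_per_day:.0f} crashes/day")  # 200
```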
 
If the commute is 20 miles - 500 commutes is 10k miles. 1 L4+ crash per 10k miles is the human average.

Now, if Tesla sends city NOA to 500,000 cars and even if 100k of them use city NOA once a day for a 20-mile trip, that would be 100k commutes/trips (2 million miles) every day. If people are not paying attention when the reliability is 4 nines (1 crash per 10k miles, or one per 500 20-mile commutes) - you would expect 200 crashes every day ;)

I like that you're a numbers person ;)

I think attention monitoring may be key to a large rollout like that. The Model 3 has an interior camera. Musk stated that it is for monitoring robotaxi passengers, but possibly it could be used in the interim to help drivers stay attentive?
 
But I am skeptical that simulations alone are enough to completely validate an entire FSD car.
There are two aspects - simulation and training data. Simulation is good to figure out whether the model is good for the training data they have - but it can't generate new training data.

For example, if Waymo has never thought of "blue" traffic lights - they just won't be there in simulation, and Waymo won't even know they are missing something. So, whether they do 10B miles of simulation or 20B, they won't know how they will work in the real world.
 
I think attention monitoring may be key to a large rollout like that. The Model 3 has an interior camera. Musk stated that it is for monitoring robotaxi passengers, but possibly it could be used in the interim to help drivers stay attentive?
How long will they take to train the model to understand the attentiveness of drivers? How many 9s in how many months?

I'm not sure people will be ok with Tesla using the camera to monitor their attentiveness ...
 
For example, if Waymo has never thought of "blue" traffic lights - they just won't be there in simulation, and Waymo won't even know they are missing something. So, whether they do 10B miles of simulation or 20B, they won't know how they will work in the real world.

Hell, I wouldn't know what to do with a blue traffic light o_O

That said, I think collecting and using fleet data to dynamically create simulations is the best solution. When drivers frequently take control at a location, a blue light for instance, that's a good signal for the engineers to take a look and simulate it.