
Impossibility of L5 using any level of new hardware

Still digesting the info from Autonomy Day. Tesla explained how they train their cars, and I think that unless Tesla changes how they teach their neural nets, that level of L5 autonomy is impossible with any amount of future computing power, IMHO.

Why? Well, think about it. The way Tesla teaches cars to drive is to split driving into features and train a NN to implement each feature. Some examples (see the toy sketch after the list):

1. Traffic lights. Ok, let’s teach the car to figure them out. One more NN.
2. Need to be able to interpret human hand signals? Ok, one more NN; feed it lots of footage of humans waving hands to train it.
3. Cars with bicycles? No need for a new NN, but people have to manually label data to mark ‘cars with bicycles’ as just cars.
4. Junk on the road? One more NN; feed lots of junk images to it.
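Here's a toy Python sketch of that "one NN per feature" pattern. The feature names and stub detectors are my own hypothetical stand-ins, not Tesla's actual architecture; the point is just that every new scenario bolts on another separately trained model:

```python
# Toy sketch of the "one NN per feature" pattern described above.
# Feature names and detectors are hypothetical stand-ins, not Tesla's
# actual architecture: each feature gets its own separately trained model.

from typing import Callable, Dict, List

Frame = List[float]  # stand-in for a camera frame / feature vector

def make_detector(name: str) -> Callable[[Frame], bool]:
    """Stand-in for a separately trained per-feature network."""
    threshold = (len(name) % 5) / 10 + 0.3  # arbitrary stub "weights" per feature
    def detector(frame: Frame) -> bool:
        # A real system would run a trained NN here; this is just a stub.
        return sum(frame) > threshold
    return detector

# Every new scenario means another per-feature model,
# each needing its own labeled training data.
detectors: Dict[str, Callable[[Frame], bool]] = {
    "traffic_light": make_detector("traffic_light"),
    "hand_signal":   make_detector("hand_signal"),
    "road_debris":   make_detector("road_debris"),
    # ...one more entry (and one more labeling effort) per corner case
}

frame: Frame = [0.2, 0.7, 0.1]
results = {name: det(frame) for name, det in detectors.items()}
print(results)
```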

So you get to 99%. Maybe 99.99%. And then you have to start implementing very weird, rare features, which maybe only 1 in 1,000,000 people will ever encounter. The cost of teaching the cars becomes prohibitive.
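Some rough back-of-the-envelope numbers on why that cost explodes (the sample counts and event rates here are my own assumptions, purely illustrative): to collect K labeled examples of a scenario that occurs once every N miles, you need on the order of K * N miles of fleet data.

```python
# Back-of-the-envelope long-tail arithmetic (my own assumptions, purely
# illustrative): collecting `examples_needed` samples of a scenario that
# occurs once every `miles_per_event` miles takes on the order of
# examples_needed * miles_per_event miles of fleet driving, and the
# labeling effort scales along with it.

examples_needed = 10_000  # hypothetical sample count to train one feature NN

for miles_per_event in (1_000, 100_000, 10_000_000):
    fleet_miles = examples_needed * miles_per_event
    print(f"event every {miles_per_event:>12,} mi -> "
          f"~{fleet_miles:,} fleet miles for {examples_needed:,} examples")
```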

So to go beyond that, someone has to figure out how to let the cars learn at a more abstract level on their own: basically feed in the traffic code plus raw video feeds, without any labeling, with triggers that feed corner cases back to Tesla for humans to figure out.

Right now, there is no solution for that. At a high level, the car is constrained by a set of ‘features’ hard-coded by Tesla. This will stop the exponential improvement somewhere around 99.99%. This is not a Tesla-specific problem, of course. But this is why we are talking about 10+ years, until a more fundamental way for cars to learn how to drive by themselves is invented.

LOL....then you should be working for Tesla or Waymo or someone......

And not sit around here posting in a thread.
 
NoAP drove me last month on a 400 mile one-way trip from Northern to Southern California. 3 different highways. On those 400 highway miles, I personally drove for less than 1 minute - basically just got the car onto the freeway and also manually drove through a very short construction zone. The other 99.99% of the time, AP was driving....and most importantly I only had one single disengagement. One disengagement after about 395 of the roughly 400 miles. First...that is amazing. Literally (not figuratively!) four hours and 45 minutes of continuous driving by AP with no disengagements on the highways.

Now...the single disengagement was interesting...and probably a corner case: it happened on a short connection between one highway and another. As AP exited the highway I was on and was preparing to take a corner and then accelerate to merge into the second highway, I saw a broken down truck sitting in my (connection) lane. Because there wasn't really a shoulder to speak of on that short (⅛ mile-ish) connector, the truck was sticking out maybe a foot or so into my actual lane.

Honestly I didn't even look at the IC to see if AP "saw" the obstacle (truck). I just immediately took over and steered wide to make sure I wouldn't clip the truck. AP might have handled that situation and also steered wide to avoid the truck. Or it might have failed to see it and collided with it, as it reportedly has with some of those parked firetrucks. I wasn't going to take the chance. But barring that one outlier case (it even surprised me as a human, since you could only see the parked truck after you started to come around a curve - it was just suddenly there), it was an exceptional NoAP-controlled trip. Smooth and confident. It does seem to be getting better and better. I look forward to seeing HW3/FSD and what it brings to the table.
So you drove 400 miles and if you were not paying attention it is quite possible that you would have been killed (small offset crashes are among the most deadly). It only needs to get about 200,000 times better*! :p

*US death rate is 1.16 per 100 million miles
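For anyone checking the footnote's arithmetic, here's the rough calculation behind the "about 200,000 times better" figure, using the 1.16 deaths per 100 million miles number above:

```python
# Rough check of the "200,000 times better" footnote, using the post's
# 1.16 deaths per 100 million miles figure.

deaths_per_100m_miles = 1.16
miles_per_death = 100_000_000 / deaths_per_100m_miles  # ~86.2 million miles

trip_miles = 400  # one potentially fatal miss in a 400-mile trip
improvement_factor = miles_per_death / trip_miles

print(f"{miles_per_death:,.0f} miles per death on average")
print(f"-> roughly {improvement_factor:,.0f}x better needed")  # ~215,000
```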
 

It's ok, this is just part of the long tail. As long as it's still much safer than a human driver on average, it's all cool.*
The fear of accidents is certainly understandable. But if you dissect that fear, it tends to come from fear of personal (or your family's) bodily harm; from the financial burden, headache, and possible liability; and probably lastly, from causing harm to others.

If an accident is merely an inconvenience that you walk away from and hail another Robo, in which nobody is seriously harmed because of the system's ability to at least reduce collision severity, and from which you have no resulting headaches (insurance, repair, etc.) to deal with...it kind of becomes less frightening.

"Yeah my Robo hit another car on the way to work. Sorry I'm 10 minutes late..."

Any accident, even with reduced force, is terribly dangerous. I once accelerated too hard without appropriate warning (I did warn, but apparently didn't provide enough lead time - no matter what, it was my fault ;) ) and my wife was complaining about her neck for a day. That was probably only 0.8 G.

My idea of autonomy utopia generally doesn't include a bunch of robotaxis running into each other.

* I'm not cool with it.
 
Actually, you can. That IS what NNs do, always. They extrapolate. They don't learn by rote; they learn the rules, then apply the rules to every novel situation.

ALL of the driving that the NN does is new to it. It has never seen that exact scene/situation before.

It's a matter of speed. Some new situations will appear so fast that the computer cannot incrementally learn, and then execute in fractions of a second, whatever keeps it safe.

While a NN, or any scoring function, can extrapolate, the error grows as you diverge from the training data. The NN has actually been pre-trained before you get into the car, i.e. the starting set of coefficients is not random. Each frame is transformed into a set of measurable features (object recognition; distance, speed, and acceleration between objects; etc.) that are used to incrementally train the network. Think of the analogy to facial recognition, where we identify the eyes, nose, ears, lips, chin, temples, and forehead, and the size of and distances between each feature.

Computing those features and running the exceptionally complicated learning and scoring function in real time, at UHD 30+ fps, takes a huge amount of image processing and matrix calculation, thus the GPU. We can see even now that the object recognition models in AP 2.5 improve over time, but it does not seem like an incredibly fast learning rate. I would love to see the inner workings of AP 3 for the details on the learning rates over time; however, you always want to be making decisions in the interpolation region.
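A minimal numpy sketch of that interpolation-vs-extrapolation point (toy function and toy polynomial fit, nothing to do with Autopilot's actual networks): fit on data from one region, and watch the error grow as you query farther outside it.

```python
# Minimal numpy illustration of interpolation vs extrapolation: a model
# fit on x in [0, 1] predicts well there, but its error grows as you move
# away from the training region. Toy function and fit, not Autopilot.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=5)  # fit a degree-5 polynomial

for x in (0.5, 1.0, 1.5, 2.0):  # move from interpolation to extrapolation
    err = abs(np.polyval(coeffs, x) - np.sin(2 * np.pi * x))
    print(f"x = {x:.1f}  |error| = {err:.3f}")
```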

This is like the ultimate entertainment, right?
 

This kind of description is what gives me a lot of doubt when applied to what they say they're doing now and will do in the future. My NoAP experience is literally night and day different in the Houston area. I've never gotten through a single highway interchange without very dangerous behavior that caused me to intervene. Seriously, 75+ attempts across a number of highways and interchanges all across a 100 mile diameter metropolitan area. I can only assume that, even though they deny it, there's a bunch of HD maps and hardcoding that applies only to California. NoAP is unusable even in light traffic on my drives.
 
I've used it across an even larger area than you, in North Carolina, and it works very well.... so it's not a CA thing.

Not perfect, but good enough that I spent probably 95% of my highway time with it on.


NoA should observe my driving and mimic it.
I take the same route every day to and from work.
After about a month, NoA should know where I change lanes, at what speed, and in what traffic volume.


AP does not change its behavior based on individual drivers.

If it did, it'd make troubleshooting and overall NN improvements nearly impossible.
 
This kind of description is what gives me a lot of doubt when applied to what they say they're doing now and will do in the future. My NoAP experience is literally night and day different in the Houston area. I've never gotten through a single highway interchange without very dangerous behavior that caused me to intervene. Seriously, 75+ attempts across a number of highways and interchanges all across a 100 mile diameter metropolitan area. I can only assume that, even though they deny it, there's a bunch of HD maps and hardcoding that applies only to California. NoAP is unusable even in light traffic on my drives.

Sorry to hear that. You aren't the first person to say that AP/NoAP isn't working well for you, of course. And I do agree with you that the lack of consistency in its operation is a concern. I'd like to see more consistency between cars and between owners' reports.

That said, I can only report my personal observations. It did that trip with only one (edge case) disengagement required over 400 miles. So that was pretty amazing. At the same time, I have a local road (which I've commented about on multiple threads here on TMC!) that has an S curve (though it's actually quite subtle) in the middle and a pretty sharp corner at the end - neither of which AutoPilot can consistently handle. Sometimes it does pretty well. Sometimes it's not great. And sometimes it is "take over or I'll crash" bad. Sometimes it can be all three of these in the same day if I run it multiple times. Inconsistent.

On the other hand...the other day, I had NoAP on and it took me to a local (Bay Area) offramp. There was a stop light at the end of the offramp. I was the first car. So I assumed that I would of course need to brake for that stoplight now that NoAP would end after it got me to the offramp. But to my surprise....the car started slowing on its own and actually came to a complete stop at the light, right behind the white line. And then the car said that NoAP had ended and to press the accelerator to continue. Never had that happen before. So that was a little glimpse into the future I guess even if it never happens again (haven't tried to reproduce it yet).
 
400ish miles (410 I think) in just under 5 hours is an average of almost 80mph. So not really.

It was a math joke.

If you were on AutoPilot 99.99% of the time (as you wrote) that means that for every second you were driving, AutoPilot was driving for 9,999 seconds, which is 2 hours, 46 minutes and 39 seconds. From your description you were driving for many seconds.
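For the curious, the joke's arithmetic, plus what the actual AP share works out to given roughly 1 minute of manual driving over a ~4h45m trip (my own rounding of the numbers in the post):

```python
# The joke's arithmetic: at 99.99% AP time, each second of manual driving
# implies 9,999 seconds of AP driving. Also the actual AP share given
# ~1 minute of manual driving over a ~4h45m trip (my rounding).

ratio = round(0.9999 / 0.0001)     # AP seconds per manual second -> 9999
h, rem = divmod(ratio, 3600)
m, s = divmod(rem, 60)
print(f"{ratio} s = {h}h {m}m {s}s")        # 9999 s = 2h 46m 39s

trip_s = (4 * 60 + 45) * 60        # ~4h45m trip
manual_s = 60                      # ~1 minute of manual driving
print(f"actual AP share ~ {1 - manual_s / trip_s:.2%}")  # ~99.65%
```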
 
Just curious, on your trip, did you kind of check out and just let it get you from point A to point B or were you watching carefully?

I have yet to take a loooong NoA trip - hope to do Norcal to SoCal later this summer. Usually when I use NoA, I am watching everything like a hawk and get freaked out by small situations - mostly related to how I see other drivers interacting with my AP-driven car (e.g. passing it angrily because AP was brake checking a little early, passing it on the right because it wasn't picking up speed quite as quickly as it could when the lane opened up, etc.). In other words, it would probably do a very fine and very safe job if I wasn't bothered by how other drivers were perceiving its/my driving.

Definitely watching carefully! Especially going down over mountains with curves. And even though I only had that one real disengagement, I did also have two instances of phantom braking - not severe, but I still had to punch the accelerator a little. I didn't have anybody on my tail, so no other drivers were impacted.

There were several times when the car got a little close to neighboring cars in adjacent lanes and I almost pulled the wheel to take over - but then either my car or their car moved a little and there was again enough space, so I was able to continue on without a disengagement. But, yeah, I am always watching out for other cars. This day there wasn't that much traffic. It certainly wasn't empty (California highways rarely are) but I'd describe it as light-to-moderate for most of the trip with a few patches of heavy traffic interspersed.
 
What does it even take to get "regulatory approval"? Seems to me like this is undefined.

Great question that I have never heard anybody ask. Someone should ask Elon sometime to explain which bodies Tesla has to talk to (I am sure it varies by state and country) and what the output is. Do they get a verbal "Ok, you can do that", or is a lot of documentation produced, and do there need to be follow-ups specified to ensure it is working as Tesla indicated? Would be interesting to know.
 
It was a math joke.

If you were on AutoPilot 99.99% of the time (as you wrote) that means that for every second you were driving, AutoPilot was driving for 9,999 seconds, which is 2 hours, 46 minutes and 39 seconds. From your description you were driving for many seconds.

Ha! Well that totally went over my head at the time. But makes sense and is humorous after you walked through it :)
 
Great question that I have never heard anybody ask. Someone should ask Elon sometime to explain which bodies Tesla has to talk to (I am sure it varies by state and country) and what the output is. Do they get a verbal "Ok, you can do that", or is a lot of documentation produced, and do there need to be follow-ups specified to ensure it is working as Tesla indicated? Would be interesting to know.

Current traffic codes prohibit human drivers from being uninvolved in the driving process, so to speak. So at a minimum, to allow someone to not pay attention, or, for instance, to use a phone in states where that is prohibited, laws need to be changed. At least this part of the approval is required.

More specific questions to ask:
1. Who, apart from Tesla, reviews new FSD features before deployment to general Tesla owners?
2. Is approval required for each update, or per feature?
3. Assuming all liability remains with the driver (i.e. they hold the steering wheel and watch the road), does every new feature still need to be approved?

Something like that.
 
What does it even take to get "regulatory approval"? Seems to me like this is undefined.
In California there are already regulations in place for autonomous vehicles. Right now it costs about $3k to register. I'm sure this is primarily because most people think that autonomous vehicles will initially be used in small numbers for commercial purposes. I would imagine that Tesla could lobby to get that fee reduced for private use.
Here is the webpage with the regulations for California.
Deployment of Autonomous Vehicles for Public Operation
Every state has different rules. There will probably be some standardization of rules at the federal level at some point.