Welcome to Tesla Motors Club

Firmware 9 in August will start rolling out full self-driving features!!!

Thanks for the link. If you are expected to be the safety fallback and take over within 10 seconds if necessary, then you still have to be paying attention. So maybe you are not directly supervising the system, but you are still paying close attention to what it is doing. How could you be the safety fallback if you are not paying close attention? That's what I mean by "supervise". The driver is still expected to watch what the car is doing so as to intervene if needed.

Nope. The SAE document clearly makes multiple statements against this, and I will quote some of them below.
Basically, this is why you are able to read a book, watch a movie, or play games. You can't do all those things while "still paying close attention to what it is doing." The system alerts the driver when it needs them to attend to a situation; otherwise, their attention is back on Netflix.

The SAE document makes it clear

"Recognizing requests to intervene issued by a driving automation system is not a form of monitoring driving automation system performance, but rather a form of receptivity."

"At levels 1-2, the driver monitors the driving automation system’s performance. A conventional driver verifies that an engaged ACC system is maintaining an appropriate gap while following a preceding vehicle in a curve...monitors the pathway of the vehicle to ensure that it is free of pedestrians and obstacles."

"At higher levels of driving automation (levels 3-5), the ADS monitors its own performance of the complete DDT."

"The driver state or condition of being receptive to alerts or other indicators of a DDT performance-relevant system failure, as assumed in level 3, is not a form of monitoring. The difference between receptivity and monitoring is best illustrated by example: A person who becomes aware of a fire alarm or a telephone ringing may not necessarily have been monitoring the fire alarm or the telephone. Likewise, a user who becomes aware of a trailer hitch falling off may not necessarily have been monitoring the trailer hitch. By contrast, a driver in a vehicle with an active level 1 ACC system is expected to monitor the driving environment and the ACC performance and otherwise not to wait for an alert to draw his/her attention to a situation requiring a response"
 
The SAE document makes it clear

Thanks for the link to the full document. This had been my understanding of L3 but recently people have been posting these summaries that are very ambiguous and make it sound like L3 can rely on the driver to take over quickly. It is good to have the more detailed source.

Basically, if you're allowed to read a book, then you need something close to a 10s margin because that's how long it takes to regain situational awareness if you haven't been paying attention at all. A lot can happen on the road in 10s. This is why L3 is basically a non-starter; by the time you can do this you have pretty much achieved L4 anyway.
 
Thanks for the link. If you are expected to be the safety fallback and take over within 10 seconds if necessary, then you still have to be paying attention. So maybe you are not directly supervising the system, but you are still paying close attention to what it is doing. How could you be the safety fallback if you are not paying close attention? That's what I mean by "supervise". The driver is still expected to watch what the car is doing so as to intervene if needed.

I think the idea with L3 -- which is not a very realistic idea -- is that the car will alert you when you need to pay attention, and will alert you 10s before it can no longer handle the situation. So it needs to know 10s in advance that it's going to need your help, or else it needs to be able to do something safe for 10s after realizing it's out of its operating envelope. Neither of these is very realistic or practical. By the time you can do this, you have an L4 car.

It also means that you can have a very, very good driver assistance system and it's still not L3 -- it doesn't matter how confident and smooth it is, it's still L2 if it ever requires you to take over immediately in any circumstance.
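The distinction being drawn here can be sketched as a toy classification rule. This is purely illustrative: the 10-second figure comes from this discussion, and the type and field names are my own assumptions, not anything from the SAE standard.

```python
# Toy sketch: classify a driving automation feature as L2 vs L3
# based on the takeover guarantees discussed above. Purely
# illustrative; the threshold and field names are assumptions.

from dataclasses import dataclass

TAKEOVER_MARGIN_S = 10.0  # rough time a distracted human needs to regain awareness

@dataclass
class Feature:
    advance_warning_s: float  # guaranteed notice before the system gives up
    safe_fallback_s: float    # how long it can hold something safe after giving up

def classify(f: Feature) -> str:
    # If the car can either warn far enough ahead, or keep itself safe
    # long enough after realizing it is out of its envelope, the driver
    # can genuinely take their eyes off the road (L3-like behavior).
    if f.advance_warning_s >= TAKEOVER_MARGIN_S or f.safe_fallback_s >= TAKEOVER_MARGIN_S:
        return "L3"
    # Otherwise the driver must monitor continuously, however smooth
    # the system is: still L2.
    return "L2"

print(classify(Feature(advance_warning_s=0.0, safe_fallback_s=0.0)))   # -> L2
print(classify(Feature(advance_warning_s=12.0, safe_fallback_s=0.0)))  # -> L3
```

The point of the sketch is the "or": smoothness and confidence never appear in the rule, only the takeover guarantees do.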
 
I think the idea with L3 -- which is not a very realistic idea -- is that the car will alert you when you need to pay attention, and will alert you 10s before it can no longer handle the situation. So it needs to know 10s in advance that it's going to need your help, or else it needs to be able to do something safe for 10s after realizing it's out of its operating envelope. Neither of these is very realistic or practical. By the time you can do this, you have an L4 car.

Yeah, I get that now. I was thinking that the system would need to alert you right away, which of course would be problematic if you were not paying attention, but I see now that is not the case. It probably explains why companies like Waymo skipped L3 and just focused on delivering an L4 autonomous vehicle. An L3 autonomous vehicle is not that useful, whereas an L4 autonomous vehicle could be used for things like ride-sharing.

I think it also might explain a big reason why Tesla is taking so long with FSD. If it were just a matter of doing L3, it would be one thing. But to actually deliver FSD, Tesla needs to skip over L3 and deliver an L4 autonomous system, which is certainly a monumental task.

It also means that you can have a very, very good driver assistance system and it's still not L3 -- it doesn't matter how confident and smooth it is, it's still L2 if it ever requires you to take over immediately in any circumstance.

Perhaps that is why Tesla, and probably other automakers too, don't really emphasize the levels all that much. It's a good metric for engineers, but the public probably won't relate to it as much. Frankly, the public does not really care whether a car is L2 or L4. They want to know the car's driving or self-driving capabilities: can I drive hands-free, can the car park itself, can I summon it across the parking lot, can it drive itself downtown, and so on.
 
I've worked on many things that started as a lofty goal from upper management. The engineers come back to their desks and whine and groan and threaten to quit… and at the end of the day, mountains get moved and it comes together better than expected.

It's a really stupid way of "motivating" people to do work, but I feel like this is par for the course in Silicon Valley.
It's a bad company structure if the upper management sets the "move mountain" goals, and the workers just try their best to do it.

Every software company needs a few highly skilled and thus respected gurus working among them who can set these goals themselves, or at least believe the goals were their own idea. Something you helped decide yourself is highly motivating to finish, and equally sad to leave unfinished. That spirit affects the whole team.

I see a lot of software engineers, even those with central roles at big software companies, saying "No way, that's impossible," even about plausibly solvable problems. That attitude is never going to make a company better than the competition.
 
I think it also might explain a big reason why Tesla is taking so long with FSD. If it were just a matter of doing L3, it would be one thing. But to actually deliver FSD, Tesla needs to skip over L3 and deliver an L4 autonomous system, which is certainly a monumental task.

Well, the other option is to deliver a very advanced L2 system and call it "Full Self Driving", which I think is the direction they're headed. It may drive itself 99% of the time, but watching out for the other 1% is always going to be the driver's responsibility.
 
Well, the other option is to deliver a very advanced L2 system and call it "Full Self Driving", which I think is the direction they're headed. It may drive itself 99% of the time, but watching out for the other 1% is always going to be the driver's responsibility.

I agree that is a very strong possibility. If that happens, I am sure there will be some who might cry foul, claiming Tesla is cheating by calling an L2 system FSD. But I wonder how many "average folks" would really care that the system was technically still L2 if it allowed the car to self-drive 99% of the time.
 
Well, the other option is to deliver a very advanced L2 system and call it "Full Self Driving", which I think is the direction they're headed. It may drive itself 99% of the time, but watching out for the other 1% is always going to be the driver's responsibility.
I agree. This will be the best we get for the next 3+ years. And hopefully the nagging will reduce as the confidence increases.
 
Well, the other option is to deliver a very advanced L2 system and call it "Full Self Driving", which I think is the direction they're headed. It may drive itself 99% of the time, but watching out for the other 1% is always going to be the driver's responsibility.
It will probably be like that for a long time, possibly 3 years or more. Eventually, and hopefully (though it's impossible to know; only time will tell), it will stop requiring the driver to pay attention in certain geo-fenced areas and iterate from there. It really depends on the confidence built up from real-life testing.
 
It will probably be like that for a long time, possibly 3 years or more. Eventually, and hopefully (though it's impossible to know; only time will tell), it will stop requiring the driver to pay attention in certain geo-fenced areas and iterate from there. It really depends on the confidence built up from real-life testing.

My biggest fear is that they geofence the cr**p out of it and restrict it to a small area around California, and label it full self driving globally.
 
My biggest fear is that they geofence the cr**p out of it and restrict it to a small area around California, and label it full self driving globally.

I doubt that they could do that. There is no way they could call it FSD globally if it only worked in California. Also, Tesla has not really geofenced AP all that much, have they? So it's not really something they do.
 
I doubt that they could do that. There is no way they could call it FSD globally if it only worked in California. Also, Tesla has not really geofenced AP all that much, have they? So it's not really something they do.

In North America, at least, they geofence auto lane change based on the map's road designation and speed limit. So it's not geofenced in the sense of only working in certain cities, but you do need to be on a road that's designated as a limited-access highway.

Also, back when they were having issues with the GPS locking up, TACC didn't even work, so even TACC is geofenced in some way that still isn't super clear -- but basically, if the GPS thinks you're on something that isn't even a road, it won't engage TACC.
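As a rough illustration of that kind of map/GPS gating, here is a minimal sketch. The predicates, field names, and road classes are my own guesses for illustration, not Tesla's actual logic.

```python
# Illustrative sketch of map/GPS-based feature gating as described
# above: auto lane change requires a limited-access highway, and TACC
# refuses to engage if the GPS fix is stale or places the car off-road.
# All names and rules here are assumptions, not Tesla's implementation.

from dataclasses import dataclass

@dataclass
class MapContext:
    road_class: str  # e.g. "limited_access_highway", "city_street", "off_road"
    gps_valid: bool  # False when the fix is stale (the "lockup" bug)

def tacc_available(ctx: MapContext) -> bool:
    # A stale GPS fix can leave the car "parked" at its last known
    # location on the map, which blocks engagement entirely.
    return ctx.gps_valid and ctx.road_class != "off_road"

def auto_lane_change_available(ctx: MapContext) -> bool:
    # Stricter gate: TACC must work AND the map must say highway.
    return tacc_available(ctx) and ctx.road_class == "limited_access_highway"

print(auto_lane_change_available(MapContext("limited_access_highway", True)))  # -> True
print(auto_lane_change_available(MapContext("city_street", True)))             # -> False
print(tacc_available(MapContext("city_street", False)))                        # -> False
```

This also shows why a GPS bug can knock out cruise control: the gate depends on map position even though the control loop itself is local.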
 
In North America, at least, they geofence auto lane change based on the map's road designation and speed limit. So it's not geofenced in the sense of only working in certain cities, but you do need to be on a road that's designated as a limited-access highway.

Also, back when they were having issues with the GPS locking up, TACC didn't even work, so even TACC is geofenced in some way that still isn't super clear -- but basically, if the GPS thinks you're on something that isn't even a road, it won't engage TACC.

Thanks. I would not be surprised if FSD were geofenced a little too. I just doubt that Tesla would geofence it so much as to only allow it in a small part of CA, as @emmz0r fears.
 
In North America, at least, they geofence auto lane change based on the map's road designation and speed limit. So it's not geofenced in the sense of only working in certain cities, but you do need to be on a road that's designated as a limited-access highway.

Also, back when they were having issues with the GPS locking up, TACC didn't even work, so even TACC is geofenced in some way that still isn't super clear -- but basically, if the GPS thinks you're on something that isn't even a road, it won't engage TACC.

That's super weird. I know that Model S has a radar and ultrasonic sensors. Shouldn't that suffice, and be a local system?
 
That's super weird. I know that Model S has a radar and ultrasonic sensors. Shouldn't that suffice, and be a local system?

At a minimum, it gets the speed limit from the map database (on AP2; AP1 has speed limit sign recognition). When I was having the GPS lockup bug -- where the GPS showed the car wherever you parked it last time GPS worked, which was often my driveway or the parking lot at my office -- TACC would not be willing to engage. It is reasonable to refuse to engage off-road or in a garage. The issue there was the inaccurate GPS readings.

Well, that and my local service manager who insisted to me that cruise control and GPS are not "functionally related", as he put it. But he was easily proven wrong as fixing the GPS problem also fixed the cruise control problem.
 
In conclusion:

Level 2 - Hands off (ex: Supercruise)
Level 3 - Eyes off (ex: reading book, watching movie aka Audi Traffic Jam Pilot)
Level 4 - Mind off (ex: sleeping aka Waymo)
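The "hands off / eyes off / mind off" shorthand above can be written as a small lookup. This uses the wording from this post, not official SAE language, and the function name is just for illustration.

```python
# The hands-off / eyes-off / mind-off shorthand above as a lookup.
DRIVER_RESPONSIBILITY = {
    2: "hands off: driver still monitors the road at all times",
    3: "eyes off: driver may disengage but must respond to takeover requests",
    4: "mind off: no driver attention needed within the operating domain",
}

def may_read_a_book(level: int) -> bool:
    # Reading a book (or sleeping) is only compatible with L3 and above,
    # per the receptivity-vs-monitoring distinction quoted earlier.
    return level >= 3

print(may_read_a_book(2))  # -> False
print(may_read_a_book(3))  # -> True
```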

The important question is what level lawmakers will allow you to be driven home in without it being considered driving under the influence, at any blood alcohol level...
 
The important question is what level will the lawmakers allow you be driven home in and not be considered driving under the influence at any blood alcohol levels...

L4, eventually, assuming your route home is within its operating area, and also assuming that legislation allowing L4 is eventually passed. L4 means that the occupants of the car need not be the ones legally responsible for operating the car. L3 would not cut it because you may need to take over.

Until then, there's always Lyft to get you home!