Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Using driver as the crutch - Tesla, Pronto.ai, Comma.ai...

I thought it might be helpful to start a thread following what the startups and companies taking the driver's-aid approach are saying about car-responsible driving and SAE Levels, and how that narrative evolves.

In summary, the car responsible driving SAE Levels are Levels 3-5:

Level 1: Driver’s aid, car has limited control (e.g. speed or steering, not both), driver responsible
Level 2: Driver’s aid, car has full control but driver remains fully responsible
Level 3: “Eyes off” (read a book), car responsible for the drive in limited scenarios, driver gets a warning and has time to take control
Level 4: “Mind off” (sleep), car responsible for the full driving task in limited scenarios, driver can be absent
Level 5: “Steering wheel off/optional”, car responsible in all scenarios
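To make the responsibility split in the list above concrete, here is a rough sketch of the taxonomy as a small Python lookup. All names are my own shorthand, not from the SAE J3016 standard itself:

```python
from dataclasses import dataclass

# Hypothetical sketch of the SAE level summary above.
# Field names are illustrative, not taken from SAE J3016.
@dataclass(frozen=True)
class SaeLevel:
    number: int
    nickname: str
    car_responsible: bool    # is the car responsible while the system is engaged?
    limited_scenarios: bool  # does the system apply only in limited scenarios?

LEVELS = {
    1: SaeLevel(1, "driver's aid (speed or steering)", False, True),
    2: SaeLevel(2, "driver's aid (full control)", False, True),
    3: SaeLevel(3, "eyes off", True, True),
    4: SaeLevel(4, "mind off", True, True),
    5: SaeLevel(5, "steering wheel optional", True, False),
}

# The thread's dividing line: car-responsible driving starts at Level 3.
car_responsible_levels = [n for n, lvl in LEVELS.items() if lvl.car_responsible]
print(car_responsible_levels)  # [3, 4, 5]
```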

Here is Elon Musk stating, back in 2016, that Autopilot 2 is Level 5 capable hardware. Tesla has not revisited car-responsible SAE Levels since that introduction, though.


Elon Musk also famously took a different, contrarian view, if you will, of car-responsible driving from the liability perspective. Other premium manufacturers were and are saying they will take responsibility if their autonomous car technology crashes (car-responsible driving with the manufacturer responsible for the car), whereas Elon Musk was pointing this responsibility towards the car owner's insurance company in all but specific situations:

Elon Musk: Tesla will only pay for Autopilot accidents if the software makes a mistake

For example contrast this to Volvo:

Volvo CEO: We will accept all liability when our cars are in autonomous mode

George Hotz of Comma.ai was another early proponent of the idea that the Waymo-style way was not how you make an autonomous car. He too advocates the cameras, machine learning, fleet data, rinse-and-repeat approach. But, equally notably, this past summer he admitted that trying to remove the driver from the equation is not the way:

George Hotz is on a hacker crusade against the ‘scam’ of self-driving cars
The flaw with Waymo and Tesla and all the companies working on autonomous driving, he says, is their desire to remove humans from the equation. “Every self-driving car on the road today is worse than a human,” he tells me. “Everyone, Waymo included. So with a human we believe these systems are safer than a human alone. And they certainly can be more convenient. The real money is in advanced driver assist systems (ADAS)...”

The latest entrant to this school of thought seems to be Anthony Levandowski:

Pronto Means Ready – Pronto AI – Medium
Because for all the talk of going straight to level 4, we think it’s much more exciting and promising to first develop and scale a truly great level 2 system. Better to do several things—braking, steering, and throttle—super well on a wide variety of real roads and conditions rather than attempting to do everything else that driving entails in a very artificial manner.

Indeed, that is the difference between Level 2 and Level 3+: the latter must take into consideration all of that “everything else that driving entails”, and the approach of this wave of companies is to use the driver as the crutch for it. Interesting to follow how it goes.
 
Level 2: Driver’s aid, car has full control but driver remains fully responsible
Elon Musk also famously took a different, a contrarian view if you will to car responsible driving from the liability perpespective there too. [...] Musk was pointing this responsibility towards the car owner insurance companies in all but specific situations.
I get it now. Tesla’s aiming for Level 6, where driver has full control but insurance company is fully responsible
 
I think in spite of the whole SAE level things, you are always going to have some percent of trips that require human intervention because of some exceptional case. Like the car gets confused in a parking lot trying to reach the pickup area, or some unexpected obstacle, or poor weather causes confidence to drop too low, etc. You're not building a strong AI, you're building dumb driving automation.

I think anyone operating a robo-taxi service is going to have a 24/7 call center where stuck cars are remotely teleoperated back into service again, or sometimes requiring a physical visit in-person. So you'll have people working shifts who are responsible for all the cars in some geographical region. Then the exception is root caused and someone goes and sets a flag in the ADAS map data to blacklist that area or add some hint for cars in the future.
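The fallback loop described above (stuck car, remote operator, root cause, map hint) could be sketched roughly like this. Everything here is hypothetical, just to make the workflow concrete:

```python
# Hypothetical sketch of the robo-taxi fallback loop described above:
# a stuck car escalates to a remote operator, and the root cause is
# recorded as a hint/blacklist entry in the ADAS map data.
map_hints = {}  # region -> list of hints for future trips

def handle_stuck_car(car_id, region, cause, teleop_ok=True):
    """Resolve a stuck car, then leave a map hint so future cars avoid it."""
    if teleop_ok:
        resolution = f"teleoperated {car_id} back into service"
    else:
        resolution = f"dispatched technician to {car_id}"
    # Root-cause the exception and flag the area for future trips.
    map_hints.setdefault(region, []).append(f"avoid: {cause}")
    return resolution

print(handle_stuck_car("car-42", "lot-7", "confusing pickup area"))
print(map_hints["lot-7"])  # ['avoid: confusing pickup area']
```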
 
I get it now. Tesla’s aiming for Level 6, where driver has full control but insurance company is fully responsible

Just like everyone has today?

I think in spite of the whole SAE level things, you are always going to have some percent of trips that require human intervention because of some exceptional case. Like the car gets confused in a parking lot trying to reach the pickup area, or some unexpected obstacle, or poor weather causes confidence to drop too low, etc. You're not building a strong AI, you're building dumb driving automation.

I think anyone operating a robo-taxi service is going to have a 24/7 call center where stuck cars are remotely teleoperated back into service again, or sometimes requiring a physical visit in-person. So you'll have people working shifts who are responsible for all the cars in some geographical region. Then the exception is root caused and someone goes and sets a flag in the ADAS map data to blacklist that area or add some hint for cars in the future.

That makes sense. There will always be some kind of human intervention unless humans no longer exist. Those SAE levels are very crude classifications. I don't think any system can, or has to, fall exactly into one level or another.
 
@eli_ @CarlK
I appreciate the comments.

Though I do not see a call center monitoring the car as the same as “driver as crutch”. A Level 4 system can well have a support hotline for autonomous cars that face a scenario outside of their specification; that is completely within Level 4 as long as the car handles any immediate scenario safely.

For example, if too many of its sensors are at risk of being covered, parking on the side of the road and calling for help would be an acceptable Level 4 response. In fact, it seems to me a fairly good response too, because the car would sense that it needs someone to clean its sensors beyond what its own washers, wipers and heaters can manage.

The SAE levels, really, are very powerful yet simple once you grasp their true meaning. Level 3 is a system where the driver can relinquish control under select circumstances and has a reasonable amount of time to get back to driving; the car handles everything in between. In Level 4 the driver can leave the car or sleep in select circumstances, because the car is able to drive and also stop safely as needed.

Level 5 is easiest to think of as a Level 4 so advanced that the manufacturer saw fit to leave out the steering wheel (optionally they could of course still choose to leave it in, but it is not necessary). It too might still need help if it breaks down, etc., but this would be rare enough that it would be a useful product even without a steering wheel.

This, of course, is really why it is so hard to believe Tesla actually said Level 5 capable when they launched AP2. That is a tall order indeed. But Levels 3 and 4 give the manufacturer a lot of leeway to specify the scenarios where the car is responsible for the drive. Within those scenarios the demands for car-responsible driving are high, because there is so much more to driving than just lane keeping, speed maintenance and navigation. It is still not an easy task at Levels 3-4, but it is immensely more doable than Level 5 at this time.

A self-driving, car responsible driving car is basically two things:

SAE Level (3-5) + Manufacturer’s specifications for the scenarios where self-driving is supported = (Partially) autonomous car
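The equation above can be made concrete with a small sketch: an autonomous car is an SAE level (3-5) plus the manufacturer's specification of supported scenarios. The class and scenario strings below are hypothetical illustrations, not any manufacturer's actual spec:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of: SAE Level (3-5) + manufacturer's supported
# scenarios = (partially) autonomous car.
@dataclass
class AutonomousCarSpec:
    sae_level: int                             # 3, 4 or 5
    supported_scenarios: list = field(default_factory=list)

    def car_responsible(self, scenario):
        # Level 5 covers all scenarios; Levels 3-4 only those specified.
        return self.sae_level == 5 or scenario in self.supported_scenarios

# An illustrative geofenced Level 4 service:
waymo_like = AutonomousCarSpec(4, ["geofenced suburb, good weather"])
print(waymo_like.car_responsible("geofenced suburb, good weather"))  # True
print(waymo_like.car_responsible("snowstorm on rural road"))         # False
```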
 

Level 3: “Eyes off” (read book),

Level 4: “Mind off” (sleep),

I don't think we currently have an application at any such level. In fact, Consumer Reports rated Super Cruise higher than Autopilot because SC ensures your eyes are always on the road.

Heck, Waymo had to double the number of safety drivers in the car to mitigate driver fatigue. GM's Cruise utilized two safety drivers right from the beginning. Remember, the last time a so-called L4 system killed a pedestrian, it was because the safety driver operated "eyes off". The truth is, it will take a long time before we have any reasonable application of "eyes off" or "mind off" as you're describing it. All the current AV efforts have drivers in the stack who are either customers or safety drivers.
 
Level 3: “Eyes off” (read book),

Level 4: “Mind off” (sleep),

I don't think we currently have an application at any such level. In fact, Consumer Reports rated Super Cruise higher than Autopilot because SC ensures your eyes are always on the road.

Yes, Level 3 is eyes off, Level 4 is mind off, and some call Level 5 steering wheel off.

There are no Level 3-5 products available in the consumer market yet, that is true. Super Cruise is Level 2 and merely an ADAS. What is great about Super Cruise is similar to what is great about Level 3, though: the ability to keep your hands off the wheel. But with Super Cruise (Level 2) you still have to watch the road and be able to grab the wheel at a moment's notice, unlike with Level 3, where you don't have to react so immediately, nor watch the road.

Heck, Waymo had to double the number of safety drivers in the car to mitigate driver fatigue. GM's Cruise utilized two safety drivers right from the beginning. Remember, the last time a so-called L4 system killed a pedestrian, it was because the safety driver operated "eyes off". The truth is, it will take a long time before we have any reasonable application of "eyes off" or "mind off" as you're describing it. All the current AV efforts have drivers in the stack who are either customers or safety drivers.

I believe Waymo is the only (Western, at least) company so far that has operated an actual Level 4 vehicle in a somewhat production scenario, though the geographical scope has been very limited. They have given rides to passengers on public roads without a driver, so that is actual Level 4. Geofenced, for sure, but Levels 3-4 allow limiting the scenario. It is just that within that scenario the car has to be responsible for the full dynamic driving task.

Everyone else, Uber included, has been operating prototypes that basically amount to Level 2, because the autonomous features are not yet robust enough to function at an autonomous SAE level (3-5). It was not an autonomous car in the SAE Level sense that killed the pedestrian; at best it was a prototype of one, with critical systems turned off for testing, so not really relevant to the point at hand.
 
They [Waymo] have given rides to passengers on public roads without a driver.

Do you have a source for this?

Because AFAIK, even journalists were not allowed into such vehicles. It's just a minute tech demo with employees. They're progressing, but the tech is not there yet. As Krafcik said, it's going to take a long time to achieve that.

Everyone else, Uber included, has been operating prototypes that basically amount to Level 2 because the autonomous features are not yet robust enough to function at an autonomous SAE level (3-5).

Currently that's how all the training occurs, Waymo's included. No matter what level you call the systems, they presently all require an attentive human behind the wheel.
 
@Engr It does not really matter what Waymo has or has not done; it is beside the point. (I may look into it later just for fun, but it does not matter for this point.) The SAE Levels are very clear, and prototypes that are not yet capable of driverless driving are simply not SAE Level 3-5 cars yet. The Lidar-disabled Uber certainly was not a Level 4 car. It was a development prototype of one, a work in progress.

My guess is that Waymo is closest to a real Level 4 car and Audi is probably closest to a Level 3 car for consumers, but until we see more of that in production action it is certainly okay to say we're not there yet.

I was really only pointing out that the levels themselves are very clear in my view. There are the levels, and then there are the manufacturer specs within which those levels are achieved for each autonomous car. It really is not ambiguous, other than perhaps at this prototype stage, where it comes down to how one defines a prototype level-wise. But in production it is very clear cut.
 
@Engr Not that it matters, but here is one customer talking about his one driverless Waymo ride in the Early Rider program:

I rode in Waymo’s new self-driving taxi service

He’s also interested in riding in the fully driverless vehicles without trained drivers behind the wheel, which he has done only once before. Without that safety net to fall back on, he said he feels more present during the trip.

“Right now, riding with a safety driver, you kind of don’t pay attention much, you just kind of do your own thing,” he said. “I saw myself without a driver not really paying attention to my phone at all, being fully present, and kind of paying attention to what the car was doing, watching the pedals move on their own and the steering wheel on its own. So that caught me a little bit off guard, and then just really seeing people’s reactions around, that was kind of cool.”

Vera added, “It’s like riding in a ride at Disneyland.”
 