How is your insurance handling "autonomy"?

If you're going to have to get your car, FSD SW, and insurance all from the same company, just lease it all. Who would buy a car outright, then do a monthly sub on FSD/insurance? That company could just decide to stop selling you FSD and make your whole original purchase worthless (or worth much less). I would never buy a car from a company where I could only buy their tires, and I'll never buy a car where the company can legally devalue it after I own it. I'd lease that all day though if the lease agreement specified the functions guaranteed under the lease.


This would be its own interesting discussion on this forum. The idea that Tesla will never do L3, and will go from "driver fully responsible" to "we can get you basically anywhere without any attention" all in one step, sure isn't what I have been assuming. It seems so much easier (and more useful) to hit L3 on the highway with a driver as a delayed backup than to handle all the surface-street issues.
It's not the technical part of L3 that's the problem; it's the legal part. The corporate lawyers are not going to be happy taking legal responsibility for L3 operation, a mode where, by definition, the system can't reasonably guarantee it will fail safely if the driver does not respond in time. I think the way Tesla originally developed AP, the mode was very similar to L3, but they obviously did not go that direction (they went right back to fully L2, especially after the major accidents and NHTSA investigations).
 
It's not the technical part of L3 that's the problem; it's the legal part. The corporate lawyers are not going to be happy taking legal responsibility for L3 operation, a mode where, by definition, the system can't reasonably guarantee it will fail safely if the driver does not respond in time. I think the way Tesla originally developed AP, the mode was very similar to L3, but they obviously did not go that direction (they went right back to fully L2, especially after the major accidents and NHTSA investigations).
Getting to a minimal risk condition if the driver doesn't take over in 5 seconds or so seems like a much easier problem than the system controlling the vehicle during those 5 seconds. First the system has to recognize that there is a situation it can't handle, which already seems fiendishly difficult, and then it has to handle that situation for 5 seconds while the driver regains situational awareness!
I agree that companies don't want the liability but I bet the liability while the system is active is a much bigger issue.
Tesla has never been close to L3 because they've never had a system even close to human reliability. (I know technically the SAE does not specify reliability requirements but practically speaking a system has to be better than a human).
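For reference, here's a rough sketch of the handoff logic being argued about. It's purely illustrative, not Tesla's (or anyone's) actual implementation, and the 10-second grace period is just a placeholder since the takeover window itself is debated. The defining L3 obligation is the last transition: if the driver never responds, the system still has to carry the car to a minimal risk condition on its own.

```python
# Toy state machine for the L3 fallback logic under discussion. Illustrative
# only; the states and the grace period are placeholders, not any real system.
from enum import Enum, auto


class Mode(Enum):
    ACTIVE = auto()                 # system is driving; under L3 the driver may look away
    TAKEOVER_REQUESTED = auto()     # system hit its limits and is asking the driver to take over
    MINIMAL_RISK_MANEUVER = auto()  # driver never responded; system must reach a safe stop itself
    DRIVER_IN_CONTROL = auto()


TAKEOVER_GRACE_S = 10.0  # placeholder; the real value (5 s? longer?) is exactly what gets debated


def step(mode: Mode, can_handle_situation: bool, driver_responded: bool,
         seconds_since_request: float) -> Mode:
    """Advance the toy state machine by one decision step."""
    if mode is Mode.ACTIVE and not can_handle_situation:
        return Mode.TAKEOVER_REQUESTED
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_responded:
            return Mode.DRIVER_IN_CONTROL
        if seconds_since_request > TAKEOVER_GRACE_S:
            # The L3-defining obligation: keep driving to a minimal risk
            # condition even though the driver never came back.
            return Mode.MINIMAL_RISK_MANEUVER
    return mode


mode = step(Mode.ACTIVE, can_handle_situation=False, driver_responded=False, seconds_since_request=0.0)
print(mode)                                                  # Mode.TAKEOVER_REQUESTED
print(step(mode, False, False, seconds_since_request=12.0))  # Mode.MINIMAL_RISK_MANEUVER
```

The hard part the posts above point at is everything hidden inside `can_handle_situation` and inside the minimal risk maneuver itself; the surrounding bookkeeping is trivial.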
 
The corporate lawyers are not going to be happy taking legal responsibility for L3 operation, a mode where, by definition, the system can't reasonably guarantee it will fail safely if the driver does not respond in time.
Tesla already uses a 10 second timer to consider if an accident was on AP or not in their stats.

Given their aggressiveness towards being AP leaders, and the way they advertise current L2 systems, and the very iffy driver presence they have now, I don't think their legal team is stopping them at all. They already say "you must always pay attention" even though that gets them into some hard legal situations now. Saying "you must respond within 10 seconds of our alert" doesn't seem much different, especially if they have good driver presence monitoring and datalogging.

As I say, an interesting path I had never considered, that Tesla will never attempt L3. I don't think the regular posters here generally agree that's their likely path, but it's a very interesting possibility.
 
Getting to a minimal risk condition if the driver doesn't take over in 5 seconds or so seems like a much easier problem than the system controlling the vehicle during those 5 seconds. First the system has to recognize that there is a situation it can't handle, which already seems fiendishly difficult, and then it has to handle that situation for 5 seconds while the driver regains situational awareness!
I agree that companies don't want the liability but I bet the liability while the system is active is a much bigger issue.
Tesla has never been close to L3 because they've never had a system even close to human reliability. (I know technically the SAE does not specify reliability requirements but practically speaking a system has to be better than a human).
As you mention, SAE does not specify any reliability requirements. You can have a fairly unreliable L3 system and it's still an L3 system by definition. The major point is that you take legal responsibility for all aspects of that, and given enough volume (which Tesla definitely has by far) there will be points of contention about when the takeover happens (5 seconds is probably too short; there have been arguments about this). So in the end, the sticking point still remains the legal part, not the technical part. If the corporate lawyers and accountants sign off on taking legal responsibility, this wouldn't be an issue (as we've seen in the Honda and Audi cases).
 
Yeah, that's completely insane.
It's even more annoying because the Technoking doesn't even understand system design and statistics, or worse, is purposefully misleading the public. The current AP is an L2 system, so it relies on humans as an integral part of the system. That part of the system is as integral as a brake caliper is to the brakes or electricity is to an EV. You cannot remove the driver and maintain that level of performance as he insinuates, just like you can't say your EV can go 300 miles on a charge and then remove the charger. You literally have zero idea how the system will behave with that element missing unless you do other research, which Tesla conveniently never publishes.
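To put rough numbers on that system-design point (all rates below are invented purely to show the shape of the argument, not measured data): if the published L2 record only counts a crash when both the supervising driver and Autopilot miss a hazard, then the headline number is a joint human-plus-system figure and says very little about the system on its own.

```python
# Toy reliability arithmetic. Every rate here is made up for illustration;
# the point is only that the measured L2 figure is a joint human+system one.
ap_miss_rate = 0.01      # hypothetical: fraction of hazards Autopilot alone would miss
driver_miss_rate = 0.05  # hypothetical: fraction of hazards a supervising driver would miss

# If (and only if) the two failures were independent, the combined L2 system
# crashes only when both miss the same hazard:
combined_miss_rate = ap_miss_rate * driver_miss_rate

print(f"Combined (driver + AP) miss rate: {combined_miss_rate:.4f}")  # 0.0005
print(f"AP-alone miss rate:               {ap_miss_rate:.4f}")        # 0.0100 -- 20x worse here

# Quoting the combined 0.0005 figure as evidence that the system alone is
# safe is exactly the inference objected to above.
```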

Maybe Tesla has no lawyers.
They worked in the row over from the worthless PR department.
 
Tesla already uses a 10 second timer to consider if an accident was on AP or not in their stats.
That's only in order to be "conservative" in the statistics (since a lot of naysayers claim AP often disables itself just seconds before the accident happens); it has nothing to do with actually giving the driver a true 10-second buffer.
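For anyone unfamiliar with that counting rule, here's a sketch of what the attribution window means for the statistics. This is my own illustration built around the 10-second figure quoted in this thread, not Tesla's actual methodology or code.

```python
# Sketch of the counting rule being discussed: a crash is tallied in the
# "on Autopilot" bucket if AP was active at impact or had disengaged within
# the window beforehand. Window length and function name are illustrative.
from typing import Optional

ATTRIBUTION_WINDOW_S = 10.0


def counts_as_ap_crash(ap_active_at_impact: bool,
                       seconds_since_ap_disengaged: Optional[float]) -> bool:
    """Return True if the crash counts toward the Autopilot statistics."""
    if ap_active_at_impact:
        return True
    if seconds_since_ap_disengaged is not None:
        return seconds_since_ap_disengaged <= ATTRIBUTION_WINDOW_S
    return False


print(counts_as_ap_crash(False, 3.0))   # True: AP dropped out 3 s before impact, still counted
print(counts_as_ap_crash(False, None))  # False: AP was never engaged on that drive
```

As the reply above notes, this only decides which bucket a crash lands in; it doesn't give the driver 10 seconds of anything.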
Given their aggressiveness towards being AP leaders, and the way they advertise current L2 systems, and the very iffy driver presence they have now, I don't think their legal team is stopping them at all. They already say "you must always pay attention" even though that gets them into some hard legal situations now. Saying "you must respond within 10 seconds of our alert" doesn't seem much different, especially if they have good driver presence monitoring and datalogging.
The "you must always pay attention" is the handiwork of their legal team! With L3 none of that exists (for the period it is activated) and that is a whole other legal ball game, somewhere Tesla's legal team have not ventured into yet. Even for FSD Beta they are still having the same requirement of paying attention.
As I say, an interesting path I had never considered, that Tesla will never attempt L3. I don't think the regular posters here generally agree that's their likely path, but it's a very interesting possibility.
 
As you mention, SAE does not specify any reliability requirements. You can have a fairly unreliable L3 system and it's still an L3 system by definition. The major point is that you take legal responsibility for all aspects of that, and given enough volume (which Tesla definitely has by far) there will be points of contention about when the takeover happens (5 seconds is probably too short; there have been arguments about this). So in the end, the sticking point still remains the legal part, not the technical part. If the corporate lawyers and accountants sign off on taking legal responsibility, this wouldn't be an issue (as we've seen in the Honda and Audi cases).
I'm saying that practically speaking there is no difference between 5 seconds and 5 minutes. The chance of a failure resulting in injury or death in the window between 5 seconds and 5 minutes after the handoff request seems small relative to a failure while the system is active, or in the 5 seconds after it decides to hand over control.

There is no way any government would allow an L3+ system to be deployed for very long if it was less safe than a human, even if the company was willing to pay out on all the lawsuits (and of course releasing a system less safe than a human would probably expose them to high punitive damages too). I don't understand how you're so sure that Honda or Audi actually has a system that is more reliable than a human? It's an incredibly difficult technical problem. Human drivers have a fatal collision every 100 million vehicle miles. How many miles of testing have Honda and Audi done?
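A quick back-of-the-envelope on that last question, using the roughly 1 fatal crash per 100 million miles figure above. The fleet numbers below are hypothetical, just to show the scale of the problem.

```python
# Back-of-the-envelope: miles of fatality-free testing needed before you can
# claim, at ~95% confidence, that a system's fatal-crash rate is below the
# human baseline of ~1 per 100 million miles. Uses the "rule of three":
# with zero observed events in N trials, the 95% upper confidence bound on
# the event rate is roughly 3 / N.
HUMAN_FATAL_RATE = 1 / 100_000_000  # fatal crashes per mile

miles_needed = 3 / HUMAN_FATAL_RATE
print(f"Fatality-free miles needed: ~{miles_needed:,.0f}")  # ~300,000,000

# Hypothetical fleet: 100 cars driving 10,000 miles per year each.
fleet_miles_per_year = 100 * 10_000
print(f"Years for that fleet: ~{miles_needed / fleet_miles_per_year:,.0f}")  # ~300 years
```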
 
I'm saying that practically speaking there is no difference between 5 seconds and 5 minutes. The chance of a failure resulting in injury or death in the window between 5 seconds and 5 minutes after the handoff request seems small relative to a failure while the system is active, or in the 5 seconds after it decides to hand over control.

There is no way any government would allow an L3+ system to be deployed for very long if it was less safe than a human, even if the company was willing to pay out on all the lawsuits (and of course releasing a system less safe than a human would probably expose them to high punitive damages too). I don't understand how you're so sure that Honda or Audi actually has a system that is more reliable than a human? It's an incredibly difficult technical problem. Human drivers have a fatal collision every 100 million vehicle miles. How many miles of testing have Honda and Audi done?
Maybe we are talking past each other, but when did I say Honda and Audi had a system more reliable than a human? That is absolutely not a requirement to have a L3 system. All I'm saying is Honda's legal team determined having a limited fleet of 100 lease-only L3 vehicles in Japan was a risk they are willing to take (does not necessarily mean the system has to be or is more reliable than a human, just that Honda is willing to legally cover for whatever reliability the system is at). Audi's legal team determined they were not willing to cover the costs of A8 vehicles using their L3 system (in a likely larger fleet covering more markets). This is aside from the technical capabilities of the cars (Audi seemed very confident in that during their demos).
 
Maybe we are talking past each other, but when did I say Honda and Audi had a system more reliable than a human? That is absolutely not a requirement to have a L3 system. All I'm saying is Honda's legal team determined having a limited fleet of 100 lease-only L3 vehicles in Japan was a risk they are willing to take (does not mean the system has to be or is more reliable than a human, just that Honda is willing to legally cover for whatever reliability the system is at). Audi's legal team determined they were not willing to cover the costs of A8 vehicles using their L3 system (in a likely larger fleet covering more markets). This is aside from the technical capabilities of the cars (Audi seemed very confident in that during their demos).
I mean, you could argue that the only reason Tesla hasn't released FSD as an L5 system is legal, because using it unsupervised would likely result in criminal prosecutions and jail time. I just assumed that any company releasing an L3+ system would have strong statistical evidence that it's safer than a human. Every AV company is very happy with their demos! I assume that after Honda gets enough real-world miles on the 100 leased vehicles they'll expand to more vehicles. I think Audi claimed that they didn't release because there wasn't a legal framework. Now a bunch of countries do have a legal framework, so it will be interesting to see what systems actually get released.
I'm also saying I doubt the legal departments are worried much about what happens if the driver doesn't respond to a handoff request, that seems relatively easy and low risk to me.
 
I mean, you could argue that the only reason Tesla hasn't released FSD as an L5 system is legal, because using it unsupervised would likely result in criminal prosecutions and jail time. I just assumed that any company releasing an L3+ system would have strong statistical evidence that it's safer than a human. Every AV company is very happy with their demos! I assume that after Honda gets enough real-world miles on the 100 leased vehicles they'll expand to more vehicles. I think Audi claimed that they didn't release because there wasn't a legal framework. Now a bunch of countries do have a legal framework, so it will be interesting to see what systems actually get released.
I'm also saying I doubt the legal departments are worried much about what happens if the driver doesn't respond to a handoff request, that seems relatively easy and low risk to me.
Yes the "official" reason Audi told the press was about legal frameworks not being there, but Automotive News dug deeper and found this:
"Sources close to Audi told Automotive News Europe that corporate lawyers in particular have been critical of any Level 3 system, warning Audi executives that there are no guarantees customers would properly service the vehicle. Should an accident then occur while the car is piloting itself, Audi would be liable even if the system was still 99.9 percent safe at the time it was delivered to the customer."
Audi quits bid to give A8 Level 3 autonomy
 
Yes the "official" reason Audi told the press was about legal frameworks not being there, but Automotive News dug deeper and found this:
"Sources close to Audi told Automotive News Europe that corporate lawyers in particular have been critical of any Level 3 system, warning Audi executives that there are no guarantees customers would properly service the vehicle. Should an accident then occur while the car is piloting itself, Audi would be liable even if the system was still 99.9 percent safe at the time it was delivered to the customer."
Audi quits bid to give A8 Level 3 autonomy
Yeah, it seems like the biggest issue is that for L3+ you have to make a car that can pilot itself without human supervision. It would be interesting to know how well they thought it would actually work. 99.9% sounds horrifically bad, but I have no idea what that means or whether it was something made up by the reporter.
 
SAE levels have no legal standing whatsoever unless they're specifically cited in said laws (which some states DO do, but many others do not)

The only reason I brought them up was as a frame of reference. I'm sure most state, and hopefully federal-level, laws will use them as a reference.

Not that it's all that necessary as time goes on, when L3 is kicked to the curb and L5 is given up on. :p
 
I agree on L5 but it seems like a bunch of manufacturers are trying to release L3 systems for limited-access roads (freeways). That seems plausible to me.
It does seem like a subscription model with per mile billing would make the most sense to cover liability.

I haven't seen anything other than low speed limited-access traffic assist L3 functionality with a lot of restrictions.
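Riffing on the per-mile billing idea a couple of posts up: every number below is a made-up placeholder, just to show how such a subscription would have to fold the expected liability into the price.

```python
# Toy per-mile pricing for an L3 subscription that bundles liability.
# All figures are invented placeholders, not anyone's actual rates.
claims_per_mile = 1 / 2_000_000  # hypothetical at-fault claim frequency while L3 is active
avg_claim_cost = 40_000.0        # hypothetical average payout per claim, USD
loading_factor = 1.5             # hypothetical overhead + margin multiplier

expected_liability_per_mile = claims_per_mile * avg_claim_cost
price_per_mile = expected_liability_per_mile * loading_factor

print(f"Expected liability cost per L3 mile: ${expected_liability_per_mile:.3f}")  # $0.020
print(f"Billed price per L3 mile:            ${price_per_mile:.3f}")               # $0.030
```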
 
I haven't seen anything other than low speed limited-access traffic assist L3 functionality with a lot of restrictions.
Yeah, I agree that L3 city driving makes no sense at all. It seems conceivable that speeds of limited-access road systems could be increased over time as more real world safety data is gathered. I think a lot of people would pay for a L3 highway system.
 
I think a lot of people would pay for a L3 highway system.
This is my biggest surprise if Tesla never goes for any kind of L3.
L3 is worth something and useful even if limited to more rural routes. There are places it could work for an hour uninterrupted. People will pay for it and use it, and Tesla can advertise they actually have an attention-free system. They can learn much from this in so many regulatory, legal, human factors, and other spaces.

City L2 is just a gimmick where you will have to be on top of it constantly, and mode confusion or over-reliance are going to be real and hurt people, just like current AP does, but with many more immediate threats. Limited city L3 is also a mess, where it's unlikely to find a route that works for more than a few minutes, and unexpected, unhandled events are much more common than the highway.

It seems that if Tesla really has all the data, learning, and safety record they claim to, highway L3 isn't that far away. The idea that the next thing they are going to do is leave highway untouched and just grind away at L2 in the city for the next few years is very surprising to me, as it is unlikely to deliver customer and Tesla brand value as quickly as other paths.
 
This is my biggest surprise if Tesla never goes for any kind of L3.
L3 is worth something and useful even if limited to more rural routes. There are places it could work for an hour uninterrupted. People will pay for it and use it, and Tesla can advertise they actually have an attention-free system. They can learn much from this in so many regulatory, legal, human factors, and other spaces.

City L2 is just a gimmick where you will have to be on top of it constantly, and mode confusion or over-reliance are going to be real and hurt people, just like current AP does, but with many more immediate threats. Limited city L3 is also a mess, where it's unlikely to find a route that works for more than a few minutes, and unexpected, unhandled events are much more common than the highway.

It seems that if Tesla really has all the data, learning, and safety record they claim to, highway L3 isn't that far away. The idea that the next thing they are going to do is leave highway untouched and just grind away at L2 in the city for the next few years is very surprising to me, as it is unlikely to deliver customer and Tesla brand value as quickly as other paths.
In April 2019 Elon said that they were six months away from L3 NoA. I also wonder why they dropped that project to pursue FSD. ;)
The only logical explanation is that he thought HW3 would bring about "the singularity." It seems plausible that the AI they created is intelligent enough but it doesn't actually want to drive us around. So they had to go back to the more conventional approach which has been a huge setback. Maybe they're just buying time while they try to convince it to change its mind.
 
Maybe they're just buying time while they try to convince it to change its mind.
Based on the other recent Tesla news, it sounds like maybe it decided it wasn't going to do what dad did, and wants to be a chef.
I could go for an AI cooked burger and fries.

In April 2019 Elon said that they were six months away from L3 NoA.....
The only logical explanation is that he thought HW3 would bring about "the singularity."
Elon said self-driving was a solved problem in 2015, said in 2016 that they would drive cross country in 2017, and said in 2017 that the first FSD features would exist within 6 months. All before HW3 existed....
You'll get nowhere trying to suss out any logic from his tweets. They are aspirational, not driven by any real data or breakthroughs.