What SAE Level 3-5 system will be released to consumers in 2020?

I would be very happy if Tesla has Level 3 working on major expressways after entrance and before exit. I think it will give them a lot of breathing room to continue work and avoid further black eyes.

I've been thinking of starting a new thread on this. Tesla seems to be going for a jack of all trades approach instead of a master of one approach. Tesla is focused on getting AP to work a little bit everywhere, highways and city streets, then working up to self-driving, instead of focusing on getting to highway self-driving first and then working on self-driving on city streets later. Personally, I am kinda leaning towards the master of one approach. I think it would really help Tesla if they got to L4 highway first before trying to do city self-driving. It would be something meaningful that they could market, and it would lend more credibility to their FSD claims. And as the saying goes, "jack of all trades, master of none". The current approach kinda leaves people thinking AP is not really a master of anything, even though AP is a good driver assist.
 
There are multiple other possibilities. There is no compelling reason why these two are the only possibilities.
Yeah, once another date passes the same will continue. The apologists will keep apologizing for Tesla: "Oh, perhaps it is only a little late; let's wait to see what the next major upgrade brings. They've got a big upgrade waiting in the wings." Hope springs eternal.
 
I think a related question is: what will the liability clause be for L3-and-up capabilities? If the car allows one to sleep, then the driver can't be held responsible for any accident the car causes while it's operating within its stated parameters. I think this question needs to be answered before any manufacturer will willingly release a true L3-or-above car.
 
I think a related question is: what will the liability clause be for L3-and-up capabilities? …

Anything L3 or above means the car is autonomous and therefore capable of doing 100% of the driving when it is active. So I believe the auto maker would be liable for anything L3 or above when the system is active. The driver would not be liable.
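
To make that level split concrete, here is a toy Python sketch of this reading of SAE J3016. (To be clear: the standard itself defines who performs the driving task, not legal liability; the liability mapping and the function below are my illustration of the argument, not anything published by SAE.)

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels."""
    L0 = 0  # no automation
    L1 = 1  # driver assistance (steering OR speed)
    L2 = 2  # partial automation (steering AND speed, driver supervises)
    L3 = 3  # conditional automation (system drives in its ODD, driver takes over on request)
    L4 = 4  # high automation (no takeover needed inside the ODD)
    L5 = 5  # full automation (anywhere a human could drive)

def responsible_party(level: SAELevel, system_engaged: bool) -> str:
    """Who is performing the driving task. Per the reading argued above,
    whoever is driving carries the liability while the system is engaged."""
    if not system_engaged or level <= SAELevel.L2:
        return "driver"        # L0-L2: the driver supervises and remains liable
    return "manufacturer"      # L3+: the system is the driver when engaged

assert responsible_party(SAELevel.L2, system_engaged=True) == "driver"
assert responsible_party(SAELevel.L4, system_engaged=True) == "manufacturer"
```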
 
I've been thinking of starting a new thread on this. Tesla seems to be going for a jack of all trades approach instead of a master of one approach. …

it seems normal to me, I guess. get the algorithms that create an internal world-view working and get it working well enough on all supported 'worlds'. another team gets the driving 'policy' refined over time. they merge at various times and integration-test and qa-test. at least that's how I would expect it.

ie, not vertical, like you are suggesting.
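
a rough sketch of what I mean by horizontal layers (all names below are made up for illustration): the perception team owns the world-view, the policy team consumes it, and the seam between them is what gets integration-tested.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """output of the perception stack: one internal 'world-view',
    whether the scene is a highway or a city street."""
    lanes: list = field(default_factory=list)
    actors: list = field(default_factory=list)   # other cars, pedestrians, ...
    signals: list = field(default_factory=list)  # traffic lights, stop signs

class DrivingPolicy:
    """consumes the world model and emits controls; refined by its own team."""
    def act(self, world: WorldModel) -> dict:
        # placeholder policy: stop for any signal, otherwise cruise
        if world.signals:
            return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}
        return {"steer": 0.0, "throttle": 0.3, "brake": 0.0}

# the integration test: the two halves only ever meet through WorldModel
policy = DrivingPolicy()
assert policy.act(WorldModel())["throttle"] > 0
assert policy.act(WorldModel(signals=["red_light"]))["brake"] == 1.0
```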
 
Anything L3 or above means the car is autonomous and therefore capable of doing 100% of the driving when it is active. …

Ideally, that's how I think it should work too. However, which car manufacturer is willing to take that risk? I've got to think the financial risk is huge!
 
These autonomous features are in perma-beta and will be for as long as I own my Tesla. Auto-park, auto-wipers, and auto-high beams don't work, and those are much more constrained problems, so L3+ is very, very far away. If they were close, they would be upgrading the AP 2.0 and 2.5 hardware. I am guessing that AP 3.0 doesn't even have the capability to work because of the lack of redundancy.

There are some who have had AP 2.0 cars for over three years without many of the promised autonomous features being delivered. I am guessing the class-action lawyers will be what brings Tesla down on this. How long does one have to wait for non-beta functionality that actually works? I think three-plus years is pushing it, since most don't keep their cars much longer than that. At what point does Tesla have to offer refunds?
 
Ideally, that's how I think it should work too. …

The SAE is clear though. If a system is autonomous, the driver is not responsible. So I don't think auto makers can really fudge that, although I guess lawyers would always try to.

I think auto makers will be shy about labeling their systems L3 or above precisely to avoid any liability. So if an auto maker like Audi announces an L4 system, it must mean that they are confident it is safe enough that they are comfortable accepting the risk.
 
The SAE is clear though. If a system is autonomous, the driver is not responsible. …

Definitely agree about the SAE definition. I am just wondering who will be the first company brave enough to go there and take responsibility if anything happens. They could be the hero of the world of automotive autonomy, but they could also bankrupt themselves if they miss a few corner cases that end up with tragic results.
 
Definitely agree about the SAE definition. …
Companies are already liable when their employees get into at-fault accidents while on the clock. I suppose juries might award bigger settlements when a self-driving car does it...
 
Definitely agree about the SAE definition. …

This could be a good thing though, because it means that companies developing autonomous driving will have to really dot every i and cross every t before they deploy autonomous technology to the consumer. They have to be 100% confident that they have not missed any edge cases. I think this is precisely why companies developing autonomous driving have been so cautious and conservative in their deployment. They will make sure the tech is safe when it is on, and they will also narrow the ODD so that any edge cases the system can't handle are outside the ODD and therefore outside the liability. So it will take longer to get the tech to the consumer, but when it does reach us, I think we can be more confident that it will be safer.

This is another reason why, if Tesla wanted to deploy autonomous driving to the fleet as soon as possible, they would be better off making AP a L3 or maybe L4 system restricted to very narrow conditions. The narrower the conditions the car has to operate in, the fewer driving cases the autonomous driving system has to solve, and therefore the easier it is to get to a safe system. By aiming for L5 autonomy, Tesla has to solve every single driving case, which is a much, much bigger task. So chances are that Tesla will require driver supervision for a very long time.
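
To illustrate how narrowing the ODD shrinks the problem, here is a toy gate in Python (every threshold below is invented, not from any real system): autonomy is only offered while all the ODD conditions hold, so edge cases outside those conditions are out of scope by construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalDesignDomain:
    """A deliberately narrow ODD, e.g. divided highways in clear daytime weather.
    All thresholds here are invented for illustration."""
    road_types: frozenset = frozenset({"divided_highway"})
    max_speed_mph: float = 65.0
    min_visibility_m: float = 200.0
    allow_night: bool = False

def autonomy_available(odd: OperationalDesignDomain, road_type: str,
                       speed_mph: float, visibility_m: float, is_night: bool) -> bool:
    """Offer L3/L4 mode only while every ODD condition holds;
    anything outside the ODD stays the driver's responsibility."""
    return (road_type in odd.road_types
            and speed_mph <= odd.max_speed_mph
            and visibility_m >= odd.min_visibility_m
            and (odd.allow_night or not is_night))

odd = OperationalDesignDomain()
print(autonomy_available(odd, "divided_highway", 60, 500, is_night=False))  # True: engage
print(autonomy_available(odd, "city_street", 30, 500, is_night=False))      # False: driver drives
```

A tight geofence works the same way: it is just one more conjunct in that check.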
 
Wouldn't that mean Tesla can never fulfill Elon's promise of robotaxis or sleeping while the car is driving?

No, it does not mean never. But I think it will take Tesla longer than Elon thinks.

Tesla has to solve every single driving case perfectly for the ODD that they want to operate in before they can reach the step of deploying robotaxis or letting people sleep in their cars. Now, Tesla could do what Waymo is doing and deploy a limited number of Model 3s as robotaxis in a tightly geofenced area to greatly reduce the number of driving cases the system has to handle. But if Tesla is serious about L5 autonomy in the continental US, then they have to solve every single driving case in the entire US. That's a much more difficult task. There could be hundreds of thousands, maybe even millions, of driving cases to solve.

Elon is predicting "sleep in your Tesla" while driving by end of 2020 because he seems to think that once they deploy "feature complete" to the fleet, it will be as straightforward as collecting disengagement data from the fleet, training the NN on whatever driving cases they missed, and voilà: all edge cases solved, no more disengagements, and hence FSD safe for robotaxis and sleeping in your car while driving. Personally, I don't think it will be that straightforward. Yes, Tesla can collect disengagement data from the fleet and solve a lot of edge cases. But I think Elon is underestimating the number of edge cases. So I think it will take longer than 2020 to get there.

Put differently, Tesla needs to reach a disengagement rate thousands of times better than what they have now (rough arithmetic sketched below). Will they never get there? No, I am sure Tesla will get there eventually. But I don't think they will get there by 2020.

That's not to say that Tesla can't still get to really good self-driving. Tesla can still solve enough of the driving cases to get to really good self-driving with driver supervision even if they are not at "sleep in your car" self-driving yet.
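
Some back-of-the-envelope arithmetic on that gap (every number below is an assumption for illustration, not measured Tesla or NHTSA data):

```python
import math

# All figures are illustrative assumptions, not measured data.
current_miles_per_disengagement = 30        # assumed: a supervised, AP-style system
human_miles_per_reported_crash = 500_000    # rough US ballpark, order of magnitude only

required_improvement = human_miles_per_reported_crash / current_miles_per_disengagement
print(f"~{required_improvement:,.0f}x better")   # ~16,667x -- "thousands of times better"

# Even a (very optimistic) sustained 10x improvement per year implies several years:
years_at_10x = math.log10(required_improvement)
print(f"~{years_at_10x:.1f} years at 10x per year")  # ~4.2 years
```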
 
the legal issue of 'who is at fault on a self driving car' is yet to be resolved.

my view: this is actually a harder (legal, business) problem to solve than the TECH issues!

I also think that we'll never see a real buyable L4 car until companies feel 'ok' with the legal liability.

to me, that means they'll push for it being OUR responsibility.

I don't see any other way forward, sadly. companies simply will NOT accept bankruptable business models, and there's no way (at all) to be 100% safe in a mixed human/machine populated highway or city.
 
This could be a good thing though because it means that companies developing autonomous driving will have to really dot every i and cross every t before they deploy autonomous technology to the consumer.

lives are at stake. I'm not convinced that we'll ever get beyond 'driver ASSIST' tech, simply due to legal and liability issues.

I'm also not sure there is enough redundancy in the current teslas to ensure ASIL D compliance (Automotive Safety Integrity Level - Wikipedia); see the sketch at the end of this post.

100% impossible for companies to dot every i and cross every t. not in today's 'time to market' software and hardware development model! ;(

hell, even intel with all its power and might, still makes chips that are buggy as hell, and they've been at it for decades.

bug-free hw and sw is a unicorn. it simply does not exist.
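
on the redundancy point above, here's a toy two-out-of-two style agreement check of the kind ASIL-D-oriented designs lean on (the values and tolerance are invented). the point is that a single compute path has no second opinion to catch its own fault:

```python
def dual_channel_output(channel_a: float, channel_b: float,
                        tolerance: float = 0.05):
    """toy 2oo2-style check: two independent channels must agree within
    a tolerance, otherwise fail safe (hand control back / stop).
    values and tolerance are invented for illustration."""
    if abs(channel_a - channel_b) <= tolerance:
        return (channel_a + channel_b) / 2   # channels agree: trust the value
    return None                              # disagreement: fault detected, fail safe

print(dual_channel_output(0.50, 0.52))  # 0.51 -- agree
print(dual_channel_output(0.50, 0.90))  # None -- fail safe / demand handover
```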
 
the legal issue of 'who is at fault on a self driving car' is yet to be resolved. …
Companies are already responsible for at-fault accidents by employees. It's not a complicated issue at all. It will be exactly the same as an employee getting into an accident while on the clock.
 
Companies are already responsible for at-fault accidents by employees. It's not a complicated issue at all. It will be exactly the same as an employee getting into an accident while on the clock.

'exactly'? no, not quite.

a company does background checks on the employee, interviews them, knows them (works with them every day). companies can fire employees (maybe that employee is a risk to the company, etc.).

the same model does not, AT ALL, apply to the company/consumer model. can tesla 'disown me' or 'fire me' as a customer? not really. they don't have that kind of say or relationship. they can't pick and choose who they have as customers, and that means it's entirely out of their control. how can you force a company to insure CUSTOMERS? is there any precedent for that?

finally, there's the sheer scale of it all. your employee count is a tiny fraction of your CUSTOMER count. and it just takes a few customers with terrible accidents to bankrupt you.
 
'exactly'? no, not quite. …
The company writes and controls the software. Why would they need to fire a customer? The customer is not the one driving; the company's software/hardware is driving the car. That's what makes it an autonomous vehicle! The company is insuring its own software/hardware, not its customers.
UPS has 70,000+ drivers and they've managed to stay in business. Obviously, an autonomous vehicle will probably be required to have maintenance and inspections done by the company to allow it to be used in autonomous mode, but that could easily be part of the purchase contract.