Automatic driving on city streets coming later this year!

Much more likely is the blurring of the Levels. Tesla will call it Level 4 on the expressway and Level 2+ in the city. We’ve seen this before with cellphone companies: 3G, 4G, 5G, LTE. Companies define terms to match their technology.

Tesla can't do that. The SAE levels are distinct levels. It is not something that Tesla can fudge. There would be huge liability issues if they tried.

For example, L4 means that a human driver is not required at all. You can play a game, or even sleep in the back seat if you want, in an L4 car. So if Tesla ever said that AP on the highway was L4, they would be saying that it is OK for the "driver" not to pay attention, to play games, watch a movie or go to sleep while the car is driving. I think you can see how big a problem that would be if Tesla was fudging it and the car was not really L4.
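To make the distinction concrete, here is a rough paraphrase of the SAE J3016 responsibility split as a small sketch (my own summary in Python, not the official SAE wording):

```python
# Rough paraphrase of the SAE J3016 levels (my own summary, not the official text).
SAE_LEVELS = {
    0: "No automation: the human does all of the driving.",
    1: "Driver assistance: the car helps with steering OR speed; the human drives.",
    2: "Partial automation: the car steers and controls speed; the human must supervise at all times.",
    3: "Conditional automation: the car drives itself within its ODD; the human must take over when asked.",
    4: "High automation: the car drives itself within its ODD and handles its own fallback; no attention needed.",
    5: "Full automation: the car drives itself everywhere, in all conditions; no human driver needed.",
}

for level, meaning in SAE_LEVELS.items():
    print(f"L{level}: {meaning}")
```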
 
Maybe in an ideal world. I haven’t seen one yet.

Much more likely is the blurring of the Levels. Tesla will call it Level 4 on the expressway and Level 2+ in the city. We’ve seen this before with cellphone companies: 3G, 4G, 5G, LTE. Companies define terms to match their technology.

Let’s see city driving with NOA real soon now. Then we can discuss what it’s called... in between drinks...
The SAE levels are defined in an SAE document and referenced in autonomous vehicle regulations (in states that have them). They can't really be blurred. Either the car is responsible for driving or it isn't.
 
Was that AP1, AP2, which version? We can have a good laugh on Elon if that happens on AP3.
Model 3, 9 months ago, so probably HW2.5. Elon said HW2.5 would be capable of FSD; I'm not sure why you think he's a definitive source on the subject.
Autopilot doesn't detect a truck partially in lane :( : teslamotors
 
Why do you incorrectly think that?
Your opinion runs contrary to Elon, who says otherwise for a specific case: stop-and-go traffic. Hopefully his comment is based on data that he has available.
Sorry, I should have said he's not a rigorous source. He often throws out things like "better than a human" when he has some sort of specific technical aspect of driving in mind. Sort of like how one could say that dumb cruise control is "better than a human" at highway driving because it can maintain speed much more precisely.
 
Sorry, I should have said he's not a rigorous source. He often throws out things like "better than a human" when he has some sort of specific technical aspect of driving in mind. Sort of like how one could say that dumb cruise control is "better than a human" at highway driving because it can maintain speed much more precisely.
I do think he is right in this case. I see 90% of the cars following too closely. Doesn't seem hard for AP3 to do better. Even if AP3 is better, no company would release it as Level 3. They would get sued out of existence for the rarer cases where it wasn't better.
 
I do think he is right in this case. I see 90% of the cars following too closely. Doesn't seem hard for AP3 to do better. Even if AP3 is better, no company would release it as Level 3. They would get sued out of existence for the rarer cases where it wasn't better.
If it's better, then the liability insurance would cost less than for a human driving. You're talking about some specific technical aspect of driving, not everything that can happen while in stop-and-go traffic.
 
If it's better, then the liability insurance would cost less than for a human driving. You're talking about some specific technical aspect of driving, not everything that can happen while in stop-and-go traffic.
Yes, talking about easiest scenario for autopilot.
People treat companies, robots and other people differently. If there is an accident and someone dies, we say that is horrible. If a robot kills your loved one, then we want that company to pay billions in punitive damages.

Because of this, even if AP were to become 10x better than a human, it wouldn't make financial sense to release it as Level 3. That is why I'm suggesting something less than Level 3 will be released.
 
There are two sides to that coin. The system works very well, so maybe not that much. But since you are liable, you can take as much risk as you dare or take no risk and be constantly vigilant.

That line of reasoning would make me uncomfortable. Some people are not very risk averse at all (e.g. me at 21, thank goodness I didn't have a Tesla Model 3 then with its amazing acceleration and sort-of-autopilot, or I probably wouldn't be alive today) but they are driving a lethal weapon and they are putting others at risk as well who would probably not accept that risk if they knew about it.
 
It does not work that way. The SAE levels are not sequential from less self-driving to more self-driving. L3, L4 and L5 are all self-driving where the car has to be able to do 100% of the driving when the system is active. So even to be L3, Tesla would have to have true self-driving with no nags at all. Zero.
Do you think our existing cars will ever get to L3 with HW3 computer and existing cameras/radar/ultrasonic sensors? I hope so, but don't believe existing cars will ever see L3 - unless there is a sensor/camera upgrade.
 
That line of reasoning would make me uncomfortable. Some people are not very risk averse at all (e.g. me at 21, thank goodness I didn't have a Tesla Model 3 then with its amazing acceleration and sort-of-autopilot, or I probably wouldn't be alive today) but they are driving a lethal weapon and they are putting others at risk as well who would probably not accept that risk if they knew about it.
The assumption for "works very well" is that it works better than a human.
 
Yes, talking about easiest scenario for autopilot.
People treat companies, robots and other people differently. If there is an accident and someone dies, we say that is horrible. If a robot kills your loved one, then we want that company to pay billions in punitive damages.

Because of this, even if AP were to become 10x better than a human, it wouldn't make financial sense to release it as Level 3. That is why I'm suggesting something less than Level 3 will be released.
It seems very speculative to assume that juries will give larger settlements to accident victims of self driving cars.
The fatality rate in the US is about 1 per 100 million miles of driving. The lawsuit payouts would have to be astronomical to make self driving cars uneconomical from a liability standpoint (assuming they drive better than humans!). Take for example the GM ignition switch case. GM paid out $600 million to victims (399 total, 127 deaths). What you're theorizing is that self driving car accidents will result in payouts far more than any other sort of automotive defect that causes car accidents.
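To put an order of magnitude on that, here is a quick back-of-the-envelope sketch; the $10 million per death figure is just an assumption for illustration, well above the roughly $4.7M per death implied by the GM settlement above:

```python
# Back-of-the-envelope liability cost per mile, assuming a self-driving car is
# at least as safe as the human average of ~1 fatality per 100 million miles.
fatalities_per_mile = 1 / 100_000_000   # rough US average cited above
payout_per_death = 10_000_000           # assumed; GM averaged ~$600M / 127 deaths, about $4.7M

liability_per_mile = fatalities_per_mile * payout_per_death
print(f"Expected fatality liability: ${liability_per_mile:.2f} per mile")  # ~$0.10 per mile
```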
 
It seems very speculative to assume that juries will give larger settlements to accident victims of self driving cars.
The fatality rate in the US is about 1 per 100 million miles of driving. The lawsuit payouts would have to be astronomical to make self driving cars uneconomical from a liability standpoint (assuming they drive better than humans!). Take for example the GM ignition switch case. GM paid out $600 million to victims (399 total, 127 deaths). What you're theorizing is that self driving car accidents will result in payouts far more than any other sort of automotive defect that causes car accidents.
GM paid $900 million to the U.S. government for faulty ignition. 10 of the Largest Car Company Settlements in History
Perhaps these cases are relevant:
  1. Toyota pays $1.2 billion for unintended acceleration. Sudden unintended acceleration - Wikipedia
  2. In Aug. 1999, GM faced a personal injury and product liability lawsuit claiming a faulty gas tank on its 1979 Chevrolet Malibu caused gas tank explosions that killed six individuals. The plaintiffs sued for $4.9 billion in punitive damages. $4.9 Billion Jury Verdict In G.M. Fuel Tank Case
It will be different when there is video of people dying. That will make people's blood boil.

If we do the math, let's say $10 million average payout for each death, and Tesla is charging $7K for FSD; that means roughly 1,429 sales just to break even on one death. That doesn't include the 10 times as many serious injury claims, legal costs to defend against frivolous lawsuits, lawsuits for car and property damage, and unknowns like being sued for hitting endangered species.
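For reference, the break-even arithmetic in that example is just the payout divided by the FSD price (both figures are the assumptions stated above):

```python
# Break-even FSD sales for a single fatality payout, using the figures above.
payout_per_death = 10_000_000   # assumed average payout per death
fsd_price = 7_000               # FSD price used in the example

sales_to_break_even = payout_per_death / fsd_price
print(f"FSD sales needed to cover one payout: {sales_to_break_even:.0f}")  # ~1429
```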
 
It will be different when there is video of people dying. That will make people's blood boil.
Uber has already killed someone while testing their self driving car (with video!) and they're still doing it. We don't know how much they settled for, but they were testing with a complete disregard for safety. The lawsuits you cite include criminal and punitive damages, and even those payouts wouldn't be large enough to make self driving unviable. Also, those numbers that make headlines are usually reduced on appeal. For example, the judge dropped the GM Malibu fuel tank payout to $1.2 billion, then GM said they would appeal, and then the family settled for an undisclosed amount. I think you're vastly overestimating how much payouts will be and vastly underestimating the economic value of self driving cars.
 
Do you think our existing cars will ever get to L3 with HW3 computer and existing cameras/radar/ultrasonic sensors? I hope so, but don't believe existing cars will ever see L3 - unless there is a sensor/camera upgrade.

Depending on the ODD, yes. If you confined the ODD to stay inside the limits of the current sensors and HW3, then L3 would be possible. For example, for something like driving on a limited-access divided highway outside the city, with no construction zones, clear lane lines, clear weather, and light traffic, I could see the current sensors and HW3 definitely being able to do L3. Then, as soon as the system detects that those conditions are not going to be met anymore, it would prompt the driver to hold the wheel again.
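As a rough sketch of what that ODD gate could look like, here is an illustrative check; the condition names are mine, not anything Tesla has published:

```python
# Illustrative ODD gate for a hypothetical highway-only L3 mode.
# Condition names and values are made up for the example.
def odd_satisfied(scene: dict) -> bool:
    return (scene["road_type"] == "divided_highway"
            and not scene["construction_zone"]
            and scene["lane_lines"] == "clear"
            and scene["weather"] == "clear"
            and scene["traffic"] == "light")

scene = {"road_type": "divided_highway", "construction_zone": False,
         "lane_lines": "clear", "weather": "clear", "traffic": "light"}
if not odd_satisfied(scene):
    print("ODD exit predicted: prompt the driver to take the wheel")
```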

Obviously, a more complicated ODD like city driving, no, I don't think the current sensors and HW3 are sufficient for that. I am on record as saying that I wish Tesla had more sensor coverage that included lidar. I think to get to reliable and safe autonomous driving, you absolutely need more sensors than what current Teslas have. You need redundant coverage in 360 degrees around the car, meaning you need at least 2 different sensor types covering every angle around the car. That way you ensure that if one sensor misses something or is temporarily obstructed, the other sensors will catch it. That's critical for true autonomous driving because it ensures that your car can operate in a sustained way no matter the conditions like bad weather or an edge case. Also, you can't afford to have sensors miss something because that could cause an unacceptable crash, like what we've seen tragically with Tesla accidents.

The HW3 computer sounds pretty good. The issue with Tesla is that while vision alone is probably good enough to do self-driving (humans only use 2 eyes after all), without sensor redundancy, the system will fail if vision is impeded. So I think the current sensors and HW3 computer may achieve autonomous driving in fair weather and a simple ODD but probably won't be able to sustain the autonomous driving in every condition. So L3 in a limited ODD, yes, maybe even L4 in a limited ODD. But I think L5 is out of the question on the current sensors.

If you check out my thread on Lucid Air's FSD, you will see an example of the kind of FSD hardware that I think is necessary to get to true autonomy. Lucid Air autonomous driving features?

The Lucid Air will have a total of 8 cameras (3 front and 5 for the sides and rear), 2 long range radars for the front and rear and 4 short range radars for the corners of the car, 2 long range lidar for the front and rear and 2 short range lidar for the sides and a driver facing camera to monitor driver attention. In terms of computer power, the Lucid Air will have 2 Mobileye EyeQ4 chips. That's what I am talking about! It has multiple sensor types covering all angles.
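As a toy illustration of the "at least two sensor types covering every angle" requirement described above, here is a sketch loosely based on the Lucid Air suite just listed; the sector assignments are my own guesses, not Lucid's actual coverage specs:

```python
# Toy redundancy check: does every sector around the car see >= 2 sensor types?
# Sector assignments are illustrative guesses, not actual Lucid Air coverage.
coverage = {
    "front":         {"camera", "long_range_radar", "lidar"},
    "front_corners": {"camera", "short_range_radar"},
    "sides":         {"camera", "lidar"},
    "rear_corners":  {"camera", "short_range_radar"},
    "rear":          {"camera", "long_range_radar", "lidar"},
}

for sector, sensor_types in coverage.items():
    status = "redundant" if len(sensor_types) >= 2 else "SINGLE POINT OF FAILURE"
    print(f"{sector}: {sorted(sensor_types)} -> {status}")
```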
 
And the safety driver had no fault?
For sure she was at fault too and should have faced manslaughter charges (not clear if she ultimately did). Uber should have done a better job of monitoring their safety drivers and should have had some sort of attention monitoring system in their vehicles. Testing self driving vehicles is dangerous. Maybe the Uber safety driver mistakenly thought that the vehicle was "better than a human" when driving down a suburban street at night.
 
Tesla can't do that. The SAE levels are distinct levels. It is not something that Tesla can fudge. There would be huge liability issues if they tried.

For example, L4 means that a human driver is not required at all. You can play a game, or even sleep in the back seat if you want, in an L4 car. So if Tesla ever said that AP on the highway was L4, they would be saying that it is OK for the "driver" not to pay attention, to play games, watch a movie or go to sleep while the car is driving. I think you can see how big a problem that would be if Tesla was fudging it and the car was not really L4.

You are right, “Tesla can’t do that”. However... they don’t have to.

The International Telecommunication Union defined the mobile communications standards: LTE, 4G, 5G, etc. It didn’t work out as planned. To the best of my knowledge, Tesla has never “defined” their Autopilot to be any specific level. They talk about “features”, but don’t get SAE specific. I bet that is a planned strategy. Tesla likely will not define their FSD level until it “passes” regulation in Europe, the USA and China. I expect them to be “feature complete” with a “driver” in the seat... I will probably be happy with that.

The development of robo-taxis and the Tesla Network might take years; “feature complete” could be 2020. SAE certification will be applied after regulations are “passed” and accepted, likely years from now.
 
The words "any more" at the end of the sentence imply that at one time in the past, Tesla was talking about L3. I'm saying that is incorrect. Tesla never talked about L3. From the beginning, Tesla always promised L5. Even now, they are promising a first step of "feature complete" but they are promising L5 after that, when the software is fully validated.

Come on, you know what I meant! Yes, I expressed myself poorly. But what I was saying was clear: Tesla was saying that if I paid for FSD my 2018 car would be capable of driverless operation. Now they're not even promising L3 operation to people who pay for FSD today. They're just saying that FSD is their ultimate goal. All they're promising to people who pay for FSD today is City NoA at Level 2. Not even L3. They're not even promising that AP on the highway will become L3, which would be far less than the L5 everywhere they were promising on the cars being sold in 2018.

Tesla has backed way off from the promises it was making, which is a clear tacit admission that they cannot do what they were promising in a time frame that would be of any use for the cars it was being promised on. And at some point they will have to compensate the people who paid for driverless operation, and who, realistically, are not going to get that on their present car.
 
The International Telecommunication Union defined the mobile communications standards: LTE, 4G, 5G, etc. It didn’t work out as planned. To the best of my knowledge, Tesla has never “defined” their Autopilot to be any specific level. They talk about “features”, but don’t get SAE specific. I bet that is a planned strategy. Tesla likely will not define their FSD level until it “passes” regulation in Europe, the USA and China. I expect them to be “feature complete” with a “driver” in the seat... I will probably be happy with that.

The development of robo-taxis and the Tesla Network might take years; “feature complete” could be 2020. SAE certification will be applied after regulations are “passed” and accepted, likely years from now.

Even without Tesla giving us levels, we can figure out the levels on our own since we know the SAE definitions. For example, we know that Autopilot is currently L2 even without Tesla telling us. Similarly with FSD: Tesla does not need to tell us the level for us to know what it is. When Tesla removes the nags and removes the requirement for the driver to pay attention, that will be a pretty big clue that AP has become an autonomous system, L3 or above. And, at that point, it will be easy to figure out whether it is L3, L4 or L5 too. If the ODD is limited, we will know that it is L3 or L4, depending on whether the car can handle its own fallback; if the ODD is not limited, we will know that it is L5, because that is what the SAE says.
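Put another way, the deduction described above can be written as a simple decision rule; this is my own sketch of the J3016 boundaries, not anything Tesla publishes:

```python
# Sketch of inferring the SAE level from observable behavior (my summary of J3016).
def infer_sae_level(driver_must_supervise: bool,
                    limited_odd: bool,
                    handles_own_fallback: bool) -> int:
    if driver_must_supervise:
        return 2          # nags / attention requirement means driver assistance at best
    if not limited_odd:
        return 5          # self-driving with no ODD limits
    return 4 if handles_own_fallback else 3

# Current Autopilot: driver must supervise, so it comes out as Level 2.
print(infer_sae_level(driver_must_supervise=True, limited_odd=True, handles_own_fallback=False))
```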