
Autonomous Car Progress

Missy Cummings in IEEE:

WHAT SELF-DRIVING CARS TELL US ABOUT AI RISKS

5 conclusions from an automation expert fresh off a stint with the U.S. highway safety agency


A couple critiques:

1) I do think there is a good amount of AI fearmongering in the article. She makes it sound like self-driving cars are death traps because the AI is unreliable, does not understand what it is doing and will kill you when it makes a mistake. For example, the line "The difference is that while a language model may give you nonsense, a self-driving car can kill you" is an exaggeration. Certainly, self-driving cars are safety-critical systems, so a mistake can potentially kill you. But the reality is that not every AI mistake will cause a fatal crash. And robotaxis like Waymo and Cruise have driven over 2M driverless miles and have not caused any fatalities. So we have real-world cases of self-driving cars driving safely and not killing anyone.

2) She seems to describe all self-driving cars as being the FSD equivalent of ChatGPT. She is constantly making comparisons between ChatGPT and self-driving cars. I think her comparisons might be appropriate if the self-driving car used one big ChatGPT-like end-to-end NN to drive directly from vision input. But we know that the self-driving cars on the road today do not use end-to-end. Robotaxis like Waymo and Cruise certainly use NNs in their stack, but they use a modular approach with distinct NNs instead of one big NN. This modular approach mitigates a lot of the concerns she raises. The cars are not outputting driving decisions directly from vision, the way ChatGPT might output a response directly from a prompt. There are many steps and redundancies before the driving output is made. There is redundancy of sensors to improve reliability. There is also redundancy in the software: for example, different NNs that perform the same perception task, so that if one NN fails, the other can catch it. There are also separate NNs for perception and planning, so mistakes in perception can be mitigated by the driving policy. A lot of her comparisons won't apply IMO.
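To make the modular/redundancy point concrete, here is a rough, hypothetical Python sketch. The names and toy logic are my own illustration, not any company's actual stack: two independent perception routines cross-check each other, and a separate driving policy falls back to caution when they disagree, rather than one end-to-end NN mapping pixels straight to controls.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle"
    confidence: float  # 0.0 .. 1.0

def perception_a(frame: dict) -> list[Detection]:
    # Stand-in for one perception NN (say, a camera-based detector).
    return [Detection(obj, 0.9) for obj in frame.get("camera_objects", [])]

def perception_b(frame: dict) -> list[Detection]:
    # Stand-in for an independent, redundant pipeline (a different NN or sensor).
    return [Detection(obj, 0.8) for obj in frame.get("lidar_objects", [])]

def fuse(a: list[Detection], b: list[Detection]) -> tuple[set, bool]:
    # Union of what either pipeline saw, plus a flag if the pipelines disagreed.
    labels_a = {d.label for d in a}
    labels_b = {d.label for d in b}
    return labels_a | labels_b, labels_a != labels_b

def plan(objects: set, disagreement: bool) -> str:
    # Separate driving policy: it can mitigate perception uncertainty,
    # instead of a single NN emitting controls straight from pixels.
    if "pedestrian" in objects:
        return "stop"
    if disagreement:
        return "slow_down"  # conservative fallback when redundant NNs disagree
    return "proceed"

if __name__ == "__main__":
    frame = {"camera_objects": ["vehicle"], "lidar_objects": ["vehicle", "pedestrian"]}
    objects, disagreement = fuse(perception_a(frame), perception_b(frame))
    print(plan(objects, disagreement))  # one pipeline missed the pedestrian, the other caught it -> "stop"
```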
 
Throughout my life, I tried not to be too cynical or contrarian about name-dropping or accreditation-dropping, though I never liked pretension.

But I have to say that in the past few years, I've developed an increasingly negative reaction to narratives that call on the authority of "experts". Even worse lately is the nearly empty reference to "researcher".

These things may help convince me to spend time listening to one source over another, but have little impact on my evaluation of the argument.

Tell me what you're trying to say and back it up with well-reasoned logic and data. That will win me over a lot sooner than assurances of someone's expert background and present or past associations.

OTOH, it is helpful and reassuring to cheerfully disclose, up-front, any possible conflicts of interest - whether financial, institutional or reputational. Glossing over these things, or deleting social media posts and accounts, is the very opposite of helpful and reassuring.

And don't even try to denigrate another's argument by citing his lack of institutional accreditation or supposed lack of peer acceptance. Negative points assigned for that.
 
L4/5 means Tesla is the driver. The driver is the insured entity. There is no way they are changing laws to get out of that, period. They'll have to accept responsibility for driving because by definition the car is driving itself.
It seems the more people are sure about what "the law" is, the more wrong they tend to be. "The law" you are talking about is torts - it's civil liability that in most cases is based on precedent, facts, and equity. It's not like criminal law, where everything is set forth in the criminal code. For example, Tesla could easily bear some liability today for an accident while operating with AutoPilot/Autosteer on City Streets engaged, even if they do advertise it as an L2 Advanced Driver Assist feature. Further, even in a future case dealing with a truly autonomous system, I would argue that if the owner of the vehicle was behind the wheel and afforded an opportunity to intervene and avoid the accident, that the owner ("driver") still bore some responsibility regardless of the operating mode of the autonomous system. In each of these cases, it would be up to the litigants to argue, and a jury to apportion (depending on state law), how much liability belonged to Tesla and how much liability belonged to the owner/operator of the vehicle, as well as whether the liability was joint or several.
 
A couple critiques:

1) I do think there is a good amount of AI fearmongering in the article. She makes it sound like self-driving cars are death traps because the AI is unreliable, does not understand what it is doing and will kill you when it makes a mistake.

2) She seems to describe all self-driving cars as being the FSD equivalent of ChatGPT
Even though the analogy with LLMs is "dumbed down", it still holds true that computer vision (alone), as of today, simply isn't anywhere near performing at the level of human eyes+brain.

I think it's essential for the debate that we have critics. As a safety-focused researcher, that is her role. There is so much propaganda and marketing from OEMs and robotaxi companies on safety. Take Cruise's recent advert as an example. Or Tesla's L2 "self-driving" promises for the last 7-8 years. As someone who constantly debates Tesla shareholders and/or camera-only autonomy dreamers, I am sure you can relate.

While some companies have more responsible deployment and/or engineering practices than others, meeting internal sales goals, deadlines or growth targets still seems to trump safety at most companies, perhaps with the sole exception of Waymo.

Many areas are still active (unsolved) research problems, such as validation, computer vision in general, and even the NNs themselves and their fundamental building blocks. Today's approaches are crude and brute-force, with many drawbacks. If you listen to academics or practitioners, they mostly agree that the current path of adding more training data won't likely get us to where we want to be, in particular for safety-critical applications.
 
Even though the analogy with LLMs is "dumbed down", it still holds true that computer vision (alone), as of today, simply isn't anywhere near performing at the level of human eyes+brain.

I think it's essential for the debate that we have critics. As a safety-focused researcher, that is her role. There is so much propaganda and marketing from OEMs and robotaxi companies on safety. Take Cruise's recent advert as an example. Or Tesla's L2 "self-driving" promises for the last 7-8 years. As someone who constantly debates Tesla shareholders and/or camera-only autonomy dreamers, I am sure you can relate.

While some companies have more responsible deployment and/or engineering practices than others, meeting internal sales goals, deadlines or growth targets still seems to trump safety at most companies, perhaps with the sole exception of Waymo.

Many areas are still active (unsolved) research problems, such as validation, computer vision in general, and even the NNs themselves and their fundamental building blocks. Today's approaches are crude and brute-force, with many drawbacks. If you listen to academics or practitioners, they mostly agree that the current path of adding more training data won't likely get us to where we want to be, in particular for safety-critical applications.

You make some good points. Certainly, I think we should be clear-eyed about the shortcomings of vision-only and the areas of machine learning that are as yet unsolved. But sometimes how an argument is presented can undermine the argument itself, and I fear that is the case with Missy's article. Dumbing down a concept to the point that it becomes misleading or flat-out misinformation does not help. And fearmongering can turn people off. I also don't think we should lump all self-driving approaches together. The fact is that Tesla, Waymo, Cruise, Mobileye, etc. have very different approaches, not just in the hardware or software but also in the safety and validation methodologies that they use. It is not fair to lump them all together in the same "AI is dangerous" basket, which Missy seems to do. As you rightly point out, some AV companies have more responsible deployment or engineering practices. It is unfair to them to lump them in with companies with less responsible practices.
 
Further, even in a future case dealing with a truly autonomous system, I would argue that if the owner of the vehicle was behind the wheel and afforded an opportunity to intervene and avoid the accident, that the owner ("driver") still bore some responsibility regardless of the operating mode of the autonomous system.
Surely the owner bears responsibility at all times. They are the one responsible for putting the vehicle on the road. If the vehicle has a manufacturing defect, then the owner can turn around and sue whoever built it. In the case of injury or death, I fear that no individual would be sent to jail or even have their professional career impacted, only corporations fined.

So if I own a Tesla robotaxi for personal use, I'm responsible if it does anything wrong. If I'm riding in an Uber-owned Tesla robotaxi, they're responsible if it does anything wrong. But each of us will go after Tesla for a flawed product.
 
If someone (legally) borrows your car and runs someone over- are you as the owner who chose to let another driver drive it criminally responsible for the death, or is the actual driver responsible?
There will be different responsibilities based on the severity. For a death event:
-Damage (vehicle and medical): the vehicle owner and insurance company up to the maximums, and then personal liability
-Criminal: the driver, if law enforcement takes legal action
-Civil: there can also be paths against both the driver and the owner

As an Uber vehicle owner and driver, there is one level of responsibility.
But as a rented-out robotaxi, it's much more involved, complex and sticky.

It will be amazing to see how Tesla Insurance handles insuring robotaxis
 
If someone (legally) borrows your car and runs someone over- are you as the owner who chose to let another driver drive it criminally responsible for the death, or is the actual driver responsible?
If you mean that I own a Level 5 robotaxi and I let someone travel in it, and the car strikes and kills someone, then I am responsible. Whether that means that I should be sanctioned in any way is another question.

I'm not a lawyer, so I'm more or less speaking of ethics now. The law will probably work differently.

As I see it, the question is whether there is any criminal intent or even negligence on my part by allowing that vehicle on the road. Was the car badly maintained according to the manufacturer's guidelines? If so, then I could be guilty of a criminal act. Were the circumstances such that there was no way for anyone or anything to prevent the event? If so, then it's simply a tragic accident. Is the vehicle's autonomy system flawed? In that case, it is still my responsibility, but there is no criminal intent (assuming I didn't know about the flaws). I'm certainly going to go after the manufacturer either way, because they're the one who built the flawed product. At that point, it probably descends into disclaimers, agreements and so on, as to whether any money can be extracted from the manufacturer. Certainly the odds are near zero that anyone at that company is going to jail.

The reason that I assign responsibility to the owner is that the owner has the final say on whether the car is put on the road. The illustrating case is that I buy a car where I know that the autonomy system is a piece of crap, but I send off my uncle in it anyway. He dies. If I'm not responsible for the car then I get my uncle's inheritance, the manufacturer gets the blame (more money for my Uncle's estate), and by the time anybody figures out what happened, I'll be sitting on a beach in Jamaica earning 20%.
 
If you mean that I own a Level 5 robotaxi and I let someone travel in it, and the car strikes and kills someone, then I am responsible. Whether that means that I should be sanctioned in any way is another question.

I'm not a lawyer, so I'm more or less speaking of ethics now. The law will probably work differently.

It most certainly does- and that's what was actually under discussion.



The reason that I assign responsibility to the owner is that the owner has the final say on whether the car is put on the road.


But you didn't answer my question.

Which was:
If someone (legally) borrows your car and runs someone over- are you as the owner who chose to let another driver drive it criminally responsible for the death, or is the actual driver responsible?

You instead gave me a reply about L5 systems. Which I didn't ask.


Now, obviously, your answer to the first SHOULD be the same answer for L5 systems- but then your answer about L5 makes no sense in that context.

If I lend my car to my cousin, and to my knowledge they are a competent and licensed driver, why would I as the OWNER be responsible for an accident their driving causes?

Now replace "cousin" as the driver with anyone else as the driver- even an L5 system- why would the answer change?


The illustrating case is that I buy a car where I know that the autonomy system is a piece of crap, but I send off my uncle in it anyway. He dies. If I'm not responsible for the car then I get my uncle's inheritance, the manufacturer gets the blame (more money for my Uncle's estate), and by the time anybody figures out what happened, I'll be sitting on a beach in Jamaica earning 20%.


But now you're moving goalposts-- the equivalent there would be you let your cousin drive the car knowing he's been in 27 accidents and drives drunk routinely.
 
It's not a mess at all legally. The driver is insured. That's why rental companies can rent without getting sued to pieces. It doesn't matter if the car is "borrowed" or "rented" or a robotaxi. A licensed driver is insured. Period. They are responsible. Period. The robotaxi software/AI will be insured and will be the driver. It's not complicated. It is already very well understood.

If Waymo cars get in an accident and kill someone, Waymo is liable; it matters very little that they own the car. What matters is that the Waymo AI is driving the car.

Of course, someone can still sue Hertz or Enterprise, and they try all the time, but they almost always fail.

Let's get back to the actual discussion of driving.
 
Ok

I’m addicted but not loving 11.3.6

FSD 11.3.6:
-Went through a light that was turning red at a red-light-camera intersection; waiting for the ticket now
-Stopped at an empty roundabout when it should only have yielded/slowed before progressing
-Did not see an oncoming car in a slight blind spot to the right while FSD was making a left turn; crazy, could have been T-boned
-Weird exits from the highway with late entry into the exit lane, but it did get one right out of four
-Coming off the highway back onto the main road, it cut across from the entry ramp over the middle and left lanes in one move, which I remember from driver's ed is a no-no; it must be a stepped move
-Occasional turn signal on/off, with no turn and no reason
****Otherwise, amazing ;)
Feels like 11.3.6 is 65-70% there
Much higher expectation for 11.4.X

11.4.6 looks amazing
 
If someone (legally) borrows your car and runs someone over- are you as the owner who chose to let another driver drive it criminally responsible for the death, or is the actual driver responsible?
Actually yes you can be held criminally liable, the most common cases being knowingly letting a drunk driver drive.
Of course, civilly, anyone even tangentially involved would typically be sued and thus is also potentially liable.
I brought up this point before: for L3 cars, where there is still an operator, it's a question whether the manufacturer can fully indemnify the driver.
Anyway, these are all legal issues that vary by state or country, and in the future the laws can also change.
 
Actually yes you can be held criminally liable, the most common cases being knowingly letting a drunk driver drive.
Of course, civilly, anyone even tangentially involved would typically be sued and thus is also potentially liable.
I brought up this point before: for L3 cars, where there is still an operator, it's a question whether the manufacturer can fully indemnify the driver.
Anyway, these are all legal issues that vary by state or country, and in the future the laws can also change.
OK, yes, but then that is the same liability as for a bar or a rental company. The point here is that the robotaxi software is going to be the driver, not Uber.
 
Ok

I’m addicted but not loving 11.3.6

FSD 11.3.6:
-Went through a light that was turning red at a red-light-camera intersection; waiting for the ticket now
-Stopped at an empty roundabout when it should only have yielded/slowed before progressing
-Did not see an oncoming car in a slight blind spot to the right while FSD was making a left turn; crazy, could have been T-boned
-Weird exits from the highway with late entry into the exit lane, but it did get one right out of four
-Coming off the highway back onto the main road, it cut across from the entry ramp over the middle and left lanes in one move, which I remember from driver's ed is a no-no; it must be a stepped move
-Occasional turn signal on/off, with no turn and no reason
****Otherwise, amazing ;)
Feels like 11.3.6 is 65-70% there
Much higher expectation for 11.4.X

11.4.6 looks amazing
Not sure why you are posting this here, did you go to the wrong thread?

This is the thread for FSD Beta 11:
 

Some interesting bits.

On generalization:
Both Cruise and Waymo have found that their technology adapts well across cities, without having to retrain it from the ground up. After adjusting for some city-specific features — like the shape of traffic lights or the nature of traffic circles — they can start driving through new cities fairly quickly.

“Our initial testing in Austin, that piece took a few weeks,” said Aman Nalavade, a Waymo product manager. On Thursday, the company announced it would begin initial operations in Austin by this fall and roll out its ride hailing service a few months later.

On highway driving:
Waymo is also testing on freeways in the San Francisco area, taking on autonomous driving’s next frontier. Currently, neither Waymo nor Cruise offer ride hailing customers the option to ride on freeways. But it shouldn’t be that far away. “On 101, 280, 380, you'll see our cars at all times of day driving with other cars, at speed, making lane changes, etc,” Nalavade said. “Hopefully in the coming months, there'll be some announcements about our freeways.”
 
Actually yes you can be held criminally liable, the most common cases being knowingly letting a drunk driver drive.

Actually no, because I specifically said you let them borrow it legally.

Lending it to a drunk driver would be illegal, which is why I specifically said that.

And I called that out again, specifically, in the second post, when JB seemed unclear on this, over an hour before your reply above.