Autonomous Car Progress

I can't see how you were able to draw that conclusion based on the very little info that has been published.

Well, you see: the self-driving car hit a physical object and nearly killed it. Lidar is supposed to see objects and make sure self-driving cars don't hit them. Even if a human driver had been driving instead, the lidar still should have warned the driver not to hit the cyclist. Maybe the cyclist saw the self-driving car coming and hurled themselves at the vehicle, looking for a payday. I guess we will find out soon enough.
 
From what I can see:
it's a clear-sky, night-time accident (immediate thoughts of reduced survival rates due to alcohol? definitely limited visibility of the pedestrian and/or bike - perhaps unfairly so)
there is a bike with meaningful damage, and some debris
there is a pedestrian fatality.

Uber has cameras and lidar but is assumed to run a radar-centric sensor system - not a direct equivalent to either a camera-centric or a lidar-centric system. Generally not a good outcome for Tesla, a bad outcome for Uber, and a terrible outcome for the victim & family. Uber was assumed to be closest to Waymo in sensor suite.

Arizona is far easier for self-driving vehicles than many, many populated places on Earth.


Here is an old Nissan 'future' video.

One of the aspects that stands out: two-way communication between the autonomous car and the pedestrian.
 
Perhaps the cyclist and the pedestrian are one and the same person.

I tell my kids to dismount and walk the bike across the road if there is danger.

I now assume this was a cyclist who had dismounted and was walking her bike across the road. So, in general, the victim was a safety-conscious member of society...
 
Please clarify what you mean by "generally not a good outcome for Tesla". Unless you mean that any self-driving accident is negative news for Tesla, since self-driving is an important part of their future. If so, would you say the same for Waymo? Just curious.
 

OK: Tesla is far more dependent on the expedient arrival of self-driving cars than Google is.

And the more bad stuff that happens in self-driving cars, the greater the caution required, and the more difficult the laws become, etc.

Some day (and it will happen), every self-driving car manufacturer will have its own fatality and will stand alone in front of a normal jury. Tesla got through the fire accidents a few years back but had to retrofit the fleet; it's likely that something similar will happen with driverless vehicles.

Is it bad news for Waymo? Sure, but it's close to insignificant to Google.
 
Lidar is supposed to see objects and make sure self-driving cars don't hit them. ... Maybe the cyclist saw the self-driving car coming and hurled themselves at the vehicle, looking for a payday.
The point is that there is no data to support such a conclusion. For all we know, it could be a software glitch. And I'm in the pro-vision camp, too.

Those same arguments could be used by unknowing people as reasons not to allow self-driving cars at all. Someone probably will, too; and although they won't get far with that, it still has the potential to delay FSD in general (Tesla included).
 
the more bad stuff that happens in self-driving cars, the greater the caution required, and the more difficult the laws become, etc.

Surely, apart from people's emotional response, the important thing is that the machine is found not to be at fault each time? Because then progress can continue to be made. If the machine is at fault, then I can well imagine a STOP being placed on progress (by the legislature, if not anyone else). It's important that car companies are not trigger-happy in releasing updates, as failures will hurt not only their brand but also the whole industry.
 

I think whether the machine is at "fault" is the wrong question, at least in the long run. Autonomous vehicles should reduce the number of serious accidents, injuries, and deaths, regardless of "fault." Little kids run into the streets all the time. People jaywalk. Drivers run stop signs, forget to signal, speed, drift out of their lane and do all sorts of other things that could cause them to be at "fault" in an accident. Human drivers learn to avoid them most of the time.

Autonomous vehicles' serious accident rates should be as good or better than human drivers', regardless of "fault," and I think they will be eventually.
 
Little kids run into the streets all the time.

If a kid runs out straight in front of a car then the car cannot stop, machine or not. In that instance the machine is not at fault, any more than a human would have been (assuming it's not speeding, etc.), so I don't see that such an accident should count against the machine. It doesn't make the machine less good than a human ... and in other situations the machine will perform far better.

There will always be "impossible to avoid" accidents like that. Say you are following a vehicle at a safe distance, something falls off the back of the lead vehicle, and there is no place to swerve to. The machine will probably react far faster than a human and will have braked optimally, so it will have lost more speed than a human would have ... but, let's say, it's still going to hit the obstacle.

The machine can still perform better than the best driver - the rest of the time.
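
To put rough numbers on that braking scenario, here is a minimal Python sketch. The speed, reaction times, braking force and distance below are all assumed for illustration, not taken from any real incident:

```python
# Stopping arithmetic: distance covered during reaction = v * t_react,
# then braking at a constant deceleration (v^2 = v0^2 - 2*a*d).
def impact_speed(v0, t_react, decel, obstacle_dist):
    """Speed (m/s) on reaching the obstacle; 0.0 means stopped in time."""
    remaining = obstacle_dist - v0 * t_react   # distance left once braking starts
    if remaining <= 0:
        return v0                              # reached the obstacle before braking began
    v_squared = v0**2 - 2 * decel * remaining
    return max(v_squared, 0.0) ** 0.5

v0 = 20.0        # ~45 mph, in m/s (assumed)
obstacle = 25.0  # metres to the fallen object (assumed)
brake = 8.0      # m/s^2, hard braking on dry tarmac (assumed)

print(impact_speed(v0, 1.5, brake, obstacle))  # human, ~1.5 s reaction: hits at 20.0 m/s
print(impact_speed(v0, 0.2, brake, obstacle))  # machine, ~0.2 s reaction: hits at 8.0 m/s
```

In this made-up scenario both cars hit the obstacle, but the machine has shed far more speed by the time it does.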
 

I hear what you are saying but my point is that in the grand scheme of things, assigning fault for any particular accident is not the best way to look at the issue.

Let me give you an example. For many years I commuted into San Francisco. When crossing San Francisco, bicycles routinely cut directly in front of me with no warning, pedestrians (including drugged-out homeless people) frequently popped out from behind parked cars in front of me, Uber drivers would cut across 3 or 4 lanes of traffic to make a turn on the opposite side of the road or pick up a passenger, tourists weaved across lanes in go-kart-like contraptions that probably should not be allowed on the road, pedestrians frequently crossed against red lights/do-not-walk signs, etc. I probably could have maimed or killed multiple people per month without it ever being my "fault."

That's obviously unacceptable. If an autonomous vehicle fleet has a higher rate of serious or fatal accidents than the average human driver, it is less safe. Full stop. If safety is the concern, boring statistics are what matter most.

That is not to say the circumstances of a particular accident are irrelevant. As you mention, there are some accidents that are unavoidable by even the most careful driver, and if these are the only accidents an autonomous vehicle fleet gets into, that should show up in its accident statistics. (You could also make the case that a temporarily higher rate of accidents is tolerable in the short run because the technology will save so many lives in the future, but I don't hear people making that argument.)

Bottom line: I don't think it is a valid excuse for an autonomous vehicle fleet with a high rate of serious-injury or fatal accidents that most of the accidents were not its "fault." Autonomous vehicles need to be able to co-exist with human drivers, pedestrians, cyclists, scooters and skateboarders, with all of their human foibles and imperfections.
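
To make the "boring statistics" point concrete, here's a minimal Python sketch. Every number is assumed for illustration: the human baseline is only a rough ballpark and the fleet figures are hypothetical:

```python
# Compare a fleet's naive fatality rate against a human baseline,
# both expressed per 100 million miles.
HUMAN_RATE = 1.2            # fatalities per 100M miles, rough ballpark (assumed)

fleet_miles = 3_000_000     # hypothetical autonomous fleet mileage
fleet_fatalities = 1        # hypothetical

fleet_rate = fleet_fatalities / fleet_miles * 100_000_000
print(f"fleet: {fleet_rate:.1f} vs human: {HUMAN_RATE} per 100M miles")
# => fleet: 33.3 vs human: 1.2 -- but with so few miles, a single event
#    dominates the estimate, so the comparison is still very noisy.
```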
 
If an autonomous vehicle fleet has a higher rate of serious or fatal accidents than the average human driver, it is less safe

I agree.

On reflection, I now realise that my thinking is (and is what I should have articulated earlier): we have so little data - only a small percentage of cars have AP, and there are few AP accidents - that for this low-volume data we should discount anything unavoidable, but treat any other accident as extremely serious. If there continue to be no, or very, very few, "other accidents", then that suggests that the Machine is doing well :)
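
As a rough illustration of just how little a small number of miles can prove either way, here is a short Python sketch; the human fatality rate used is an assumed ballpark, not a measured figure:

```python
import math

HUMAN_RATE = 1.2 / 100_000_000   # fatalities per mile, rough ballpark (assumed)

# If the Machine were merely human-level, how likely is a clean record anyway?
for miles in (1e6, 1e7, 1e8, 1e9):
    p_zero = math.exp(-HUMAN_RATE * miles)   # Poisson P(zero fatalities)
    print(f"{miles/1e6:>6.0f}M miles: P(zero fatalities at human-level rate) = {p_zero:.2f}")
# Even ~100M clean miles are only weak evidence of better-than-human safety.
```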

There are other, new types of accident to consider, however ... e.g. people running into the back of Teslas that brake for no apparent (to the following driver) reason, because the Machine perceived a potential risk.
 

And I agree that, without more data, a single accident that was difficult or impossible to avoid is less concerning, although with any fatality there will obviously be a significant amount of attention/scrutiny/investigation, as there should be.
 