Self-driving Uber car hits, kills pedestrian in Arizona
(This is a bad one)
I guess Lidar isn't a panacea for self-driving cars. This is why vision is critical.

I can't see how you were able to draw that conclusion based on the very little info published?
Please clarify what you mean by "Generally not a good outcome for Tesla". Unless you mean that any self-driving accident is negative news for Tesla, since self-driving is an important part of their future. If so, would you say the same for Waymo? Just curious.

From what I can see:
it's a clear-sky, night-time accident (immediate thoughts of reduced survival rates due to alcohol? definitely limited visibility of pedestrians and/or bikes; perhaps unfair)
there is a bike with meaningful damage, and some debris
there is a pedestrian fatality.
Uber has cameras and lidar, but its system is assumed to be radar-centric, so it is not a direct equivalent to either a camera-centric or a lidar-centric system. Generally not a good outcome for Tesla, a bad outcome for Uber, and a terrible outcome for the victim and family. Uber was assumed to be closest to Waymo in sensor suite.
Arizona is far easier for self-driving vehicles than many, many populated places on Earth.
Here is an old Nissan 'future' video. One of the aspects that stands out: two-way communication between the autonomous car and pedestrians.
Well, you see: the self-driving car hit a physical object and nearly killed it. Lidar is supposed to see objects and make sure self-driving cars don't hit them. Let's say a human driver was driving instead; the lidar still should have warned the driver not to hit the cyclist. Maybe the cyclist saw the self-driving car coming and hurled themselves at the vehicle looking for a payday. I guess we will find out soon enough.

Point is that there is no data to make such a conclusion. For all we know, it could be a software glitch. And I'm pro-vision too.
And the more bad things that happen with self-driving cars, the greater the caution required, and the more difficult the laws become, etc.
Surely, apart from people's emotional response, the important thing is that the machine is found not to be at fault, each time? Because then progress can continue to be made. If the machine is at fault, then I can well imagine a STOP being placed on progress (by the legislature, if not anyone else). It is important that car companies are not trigger-happy in releasing updates, as failures will hurt not only their brand but also the whole industry.
Little kids run into the streets all the time.
If a kid runs out straight in front of a car, then the car cannot stop, machine or not. In that instance the machine is not at fault, any more than a human would have been (assuming no speeding, etc.), so I don't see that such an accident should count against the machine. It doesn't make the machine less good than a human, and in other situations the machine will perform far better.
There will always be "impossible to avoid" accidents like that. Say you are following a vehicle at a safe distance, something falls off the back of the lead vehicle, and the following vehicle has no place to swerve to. The machine will probably react far faster than a human and brake optimally, so it will have shed more speed than a human would, but, let's say, it is still going to hit the obstacle.
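The reaction-time argument above can be put into numbers. Here is a minimal sketch assuming constant deceleration once braking begins; the speed, distance, and reaction-time values are purely illustrative assumptions, not figures from the thread or from any real vehicle:

```python
import math

def impact_speed(v0, distance, reaction_time, decel=8.0):
    """Speed (m/s) at which a car hits an obstacle `distance` metres ahead,
    given `reaction_time` seconds before braking starts and constant
    deceleration `decel` (m/s^2). Returns 0.0 if it stops in time.
    All default numbers are illustrative assumptions."""
    gap = distance - v0 * reaction_time  # distance left once braking begins
    if gap <= 0:
        return v0  # obstacle reached at full speed before braking even starts
    v_sq = v0**2 - 2 * decel * gap       # v^2 = v0^2 - 2*a*d
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

v0 = 17.0  # ~61 km/h initial speed (assumed)
d = 20.0   # obstacle appears 20 m ahead (assumed)
human = impact_speed(v0, d, reaction_time=1.5)    # typical human reaction (assumed)
machine = impact_speed(v0, d, reaction_time=0.2)  # assumed sensor-to-brake latency
print(f"human impact speed:   {human:.1f} m/s")   # prints 17.0
print(f"machine impact speed: {machine:.1f} m/s") # prints 4.8
```

With these assumed numbers, the human has not even begun braking when the obstacle is reached, while the earlier-braking machine hits at well under a third of the speed. The point is the shape of the comparison, not the specific figures.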
The machine can still perform better than the best driver - the rest of the time.
If an autonomous vehicle fleet has a higher rate of serious or fatal accidents than the average human driver, it is less safe.
I agree.
On reflection I now realise that my thinking is (and what I should have articulated earlier): we have so little data (a small percentage of cars on AP, few AP accidents) that I have been thinking "for this low-volume data, discount anything unavoidable, but treat any other accident as extremely serious". If there continue to be no, or very, very few, "other accidents", then that suggests the machine is doing well. <snip>
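The "so little data" point can be made concrete with the statistical rule of three: if zero events are observed over n trials, an approximate 95% upper confidence bound on the event rate is 3/n. A minimal sketch; the human baseline is only a rough US-average ballpark, and the fleet mileage is an invented illustration, not an actual fleet statistic:

```python
def rule_of_three_upper(n_miles):
    """Approximate 95% upper confidence bound on the fatality rate per mile
    when ZERO fatalities have been observed over n_miles (the statistical
    'rule of three': upper bound ~ 3/n for zero observed events)."""
    return 3.0 / n_miles

# Illustrative, assumed numbers -- not real statistics.
human_rate = 1.2e-8       # ~1.2 fatalities per 100 million miles (rough US ballpark)
fleet_miles = 10_000_000  # assumed autonomous fleet mileage

upper = rule_of_three_upper(fleet_miles)
print(f"95% upper bound on fleet rate: {upper:.2e} per mile")  # prints 3.00e-07
print(f"human baseline:                {human_rate:.2e} per mile")
```

Even a spotless record over ten million miles leaves an upper bound roughly 25x the assumed human baseline, so that much data cannot yet demonstrate the fleet is safer than human drivers; that is exactly why a handful of accidents, avoidable or not, dominates the picture at low volumes.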