Autonomous Car Progress

OK, Tesla is far more dependent on the timely arrival of self-driving cars than Google is.

And the more bad things that happen in self-driving cars, the greater the caution required, the harder the regulations become, and so on.

Some day (and it will happen), every self-driving car manufacturer will have its own fatality and will stand alone in front of a jury. Tesla got through the fire incidents a few years back but had to retrofit the fleet; it's likely that something similar will happen with driverless vehicles.

Is it bad news for Waymo? Sure, but it's close to insignificant to Google.

No, Tesla would be fine without self-driving cars. It can miss and still sell Autopilot, while the more expensive systems are only viable if they achieve full autonomy.
 
Reply from the other thread, but it fits here:
I just noticed in the median that there's a spot that looks like a large/wide sidewalk/path but there are signs indicating "no pedestrians" and "Use crosswalk". Seems like really poor design to have an area that appears to be a sidewalk going down the center of a large median. Isn't this just across the street from where the pedestrian was hit?
[Attachment 287892: photo of the median]
(I'm from an RHD country, so my interpretation of this may be off.)

This is confusing infrastructure, but EinSV made a very relevant comment about assigning fault.

Legally, it seems the car was obeying the law (whether autonomous or human-driven).
It's a night-time accident (pedestrians can be very difficult to see).

Could an alert driver have avoided this? Yes, in the daytime, but probably not at 10 pm at night.

A question that will come to be asked later: what standard do autonomous cars need to meet?
better than a drunk?
better than a sleepy person?
better than a competent human driver?
better than an alert competent human driver?
better than an alert competent human driver with full "robot" collision avoidance?

I suspect it will fairly soon be the case that leading autonomous cars are safer than the humans who cause accidents. But perhaps the standard should be: is an autonomous car safer than the humans who don't cause accidents? (Particularly alert humans with full robot collision avoidance.)

With regard to Tesla, Waymo, Uber, Nissan, Baidu, etc.:

I would expect that a privately owned/operated autonomous car requires significantly less 'safety quality' than an autonomous car that serves the public. So Tesla owners could go autonomous long before the Tesla Network can exist. I really don't think Tesla has a sensor suite appropriate for the Tesla Network, but it may be sufficient for private autonomous Teslas.
 
I think whether the machine is at "fault" is the wrong question, at least in the long run. Autonomous vehicles should reduce the number of serious accidents, injuries, deaths, regardless of "fault." Little kids run into the streets all the time. People jaywalk. Drivers run stop signs, forget to signal, speed, drift out of their lane and do all sorts of other things that could cause them to be at "fault" in an accident. Human drivers learn to avoid them most of the time.

Autonomous vehicles' serious accident rates should be as good or better than human drivers', regardless of "fault," and I think they will be eventually.

Here is a complication I see with this approach to judging whether the autonomous car is "as good or better" than human drivers. I'm not saying I know the right answer or have a better approach. To make that kind of assessment, you need a lot of real-world data collected from autonomous cars running autonomously.

Here is a thought experiment: Waymo releases 100 autonomous cars into the world on day 1. On day 5, one of the cars gets into an accident and causes a fatality; it doesn't matter whose fault it is. Your data set is so small that you can't really compare the 100 Waymo cars' 5 days of driving to the total US rate of deaths from car accidents. So if the data set is too small, do you let the Waymo cars keep driving around autonomously? And if so, what happens if there is another fatality on day 6? Maybe they just got unlucky on days 5 and 6 and wouldn't have another accident for 100 straight days.

A second question related to data-set size concerns software changes to the driving algorithm based on lessons learned from prior accidents. Let's say that bad accident on day 5 was the result of a software bug that was easily located and fixed, so Waymo rolls out new software to the entire 100-car fleet. Do you then reset the data-collection statistics to zero to account for the fact that a previously known problem was fixed? If you do something like that, then your data set may never get statistically large enough to make valid comparisons. If you don't reset the counter, then you are penalizing Waymo going forward for a problem they fixed that shouldn't happen again.
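
To make the sample-size problem concrete, here is a rough Python sketch. The fleet size, daily mileage, and baseline fatality rate are illustrative assumptions for the thought experiment, not real figures; it just asks how surprising one fatality would be if the fleet were exactly as safe as an average human driver.

```python
# Illustrative numbers only: fleet size, daily mileage, and the human
# baseline rate are assumptions for the thought experiment, not real data.
import math

fleet_size = 100             # cars in the hypothetical fleet
days = 5
miles_per_car_per_day = 200  # assumed average daily mileage

fleet_miles = fleet_size * days * miles_per_car_per_day   # 100,000 miles

# Roughly one fatality per 100 million vehicle miles is the oft-quoted US figure.
human_fatality_rate = 1.2 / 100_000_000

expected_fatalities = fleet_miles * human_fatality_rate   # ~0.001

# Probability of seeing at least one fatality in that mileage purely by bad
# luck, if the fleet were exactly as safe as the human baseline (Poisson model).
p_at_least_one = 1 - math.exp(-expected_fatalities)

print(f"Fleet miles so far: {fleet_miles:,}")
print(f"Expected fatalities at the human rate: {expected_fatalities:.4f}")
print(f"P(at least one fatality by chance): {p_at_least_one:.4%}")
```

With only ~100,000 miles on the clock, one event looks like either damning evidence or pure bad luck depending on your assumptions, which is exactly the comparison problem described above.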

RT
 
The point is that there is no data to support such a conclusion. For all we know, it could be a software glitch. And I'm pro-vision too.

The same arguments could be used by uninformed people as reasons not to allow self-driving cars at all. Someone probably will, and although they won't get far with that, it still has the potential to delay FSD in general (Tesla included).

My bad... apparently the person basically threw themselves in front of a moving car while crossing in an unsafe place. But still, I would hope that any autonomous car would be a bit more superhuman at seeing the risk and anticipating the action. I will eat my serving of crow and hope that Tesla self-driving cars are superhuman in anticipating crazy people.
 
Here is a complication I see with this approach to judging whether the autonomous car is "as good or better" than human drivers. I'm not saying I know the right answer or have a better approach. To make that kind of assessment, you need a lot of real-world data collected from autonomous cars running autonomously.

Here is a thought experiment: Waymo releases 100 autonomous cars into the world on day 1. On day 5, one of the cars gets into an accident and causes a fatality; it doesn't matter whose fault it is. Your data set is so small that you can't really compare the 100 Waymo cars' 5 days of driving to the total US rate of deaths from car accidents. So if the data set is too small, do you let the Waymo cars keep driving around autonomously? And if so, what happens if there is another fatality on day 6? Maybe they just got unlucky on days 5 and 6 and wouldn't have another accident for 100 straight days.

A second question related to data-set size concerns software changes to the driving algorithm based on lessons learned from prior accidents. Let's say that bad accident on day 5 was the result of a software bug that was easily located and fixed, so Waymo rolls out new software to the entire 100-car fleet. Do you then reset the data-collection statistics to zero to account for the fact that a previously known problem was fixed? If you do something like that, then your data set may never get statistically large enough to make valid comparisons. If you don't reset the counter, then you are penalizing Waymo going forward for a problem they fixed that shouldn't happen again.

RT

You raise good points, but I think it would be in the best interests of any company experimenting with autonomy to be able to report meaningful safety statistics, if only to ward off the impulse of regulators and the public to overreact to a serious accident or fatality.

Fatalities are (thankfully) too rare to be a useful metric. Accidents per mile are relevant, but for a test program where operations are at low speed and mostly result in fender benders, it is probably not the best benchmark. I thought Tesla's use of airbag-deployment accidents per mile was as good a metric as any, because it is relatively objective and a decent stand-in for serious accidents.

Given the promise of self-driving to save lives, I think overregulation creates bigger risks than underregulation. But having solid data that a test program is being conducted safely can be a useful tool to assure regulators and the public that unnecessary risks are not being taken, as when NHTSA found that activating AP1 resulted in a 40% reduction in airbag-triggering accidents after the Josh Brown accident in Florida. It seems to me that companies that can't back up the safety of their programs with data are at risk of getting shut down when bad things happen, which they inevitably do eventually with cars, since even safe drivers (human or autonomous) are not perfect.
 
My bad... apparently the person basically threw themselves in front of a moving car while crossing in an unsafe place. But still, I would hope that any autonomous car would be a bit more superhuman at seeing the risk and anticipating the action. I will eat my serving of crow and hope that Tesla self-driving cars are superhuman in anticipating crazy people.


Where are you getting this info from?
All evidence points to the opposite.
The SDC should have seen her and stopped or slowed down.

[attached images]
 
I began to hate Uber after watching how they handled things in Austin a few years ago; they don't seem to care about their drivers and are legally and corporately aggressive. Reminds me of the folks at Yelp.

Blader, for all the hating you do on Elon, my sense is that the Tesla Autopilot death was taken seriously and was devastating to the people who work at Tesla. The people working in Uber's autonomous car program are likely devastated by this too, but they are attached to a corporation that has such a negative halo. Which sucks.
 
Where are you getting this info from?
All evidence points to the opposite.
The SDC should have seen her and stopped or slowed down.

[attached images]

Then I reverse my crow eating. You are right. Lidar sucks again.

BTW, the police have stated that there was nothing anyone could have done. I didn't realize that the great bleederscab had more info than the police. My source is every news station today reporting that the first indication that something was going to be hit was the sound of the collision. I agree that lidar should have seen the object and the system should have identified the collision course. At the very least it should have warned of a collision before it happened.
 
Where are you getting this info from?
All evidence points to the opposite.
The SDC should have seen her and stopped or slowed down.

[attached images]

From the article:
From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the Chronicle. Police don’t have plans to release the videos while the investigation’s ongoing.
 
Then I reverse my crow eating. You are right. Lidar sucks again.

BTW, the police have stated that there was nothing anyone could have done. I didn't realize that the great bleederscab had more info than the police. My source is every news station today reporting that the first indication that something was going to be hit was the sound of the collision. I agree that lidar should have seen the object and the system should have identified the collision course. At the very least it should have warned of a collision before it happened.

Oh look, it's the appeal to authority!

Because a police officer who knows nothing about self-driving cars and their technology, and who doesn't realize that the human eye sees far better dynamic range at night than that blurry video they watched, is the supreme authority.

Coming from a Tesla fan, how typical.
 
What would Tempe police think about this then?

As I understand it, in the USA the oncoming traffic (lorry) would have to stop when the (school) bus was dropping off - the school bus would have its hazard lights on - for exactly this reason: someone crossing "blind" behind the bus cannot see the oncoming traffic (if they even bother to look!)

I think that's very sensible, and we could do with that here ...

... except we don't have yellow school buses; we just have knackered buses that farmers and the like keep to make a bit of extra money doing school runs, and regular buses carrying kids as well as normal travellers.
 
Oh look, it's the appeal to authority!

Because a police officer who knows nothing about self-driving cars and their technology, and who doesn't realize that the human eye sees far better dynamic range at night than that blurry video they watched, is the supreme authority.

Coming from a Tesla fan, how typical.

Does knowing the person is there help?

If the car was going 35 MPH, that is 51 ft per second. A standard interstate vehicle lane is 12 feet and urban lanes are more like 10 feet, so say it is 11 feet. The Volvo XC90 is 79 inches wide (6 ft 7 in). So the most gap there could be between a car still in its lane and the curb is ~4.5 feet; if the car is centered, the gap is about 2.25 ft. A 5'4" woman has an average stride of 2.2 ft, so one step puts her in the lane and two steps put her in front of the car even if it is hugging the far edge. A person can go from standing to walking pace in one stride, and walking pace is 3.1 MPH or ~4.5 fps. So in slightly over a second a person can go from off the road to in the vehicle's path.

The only way to avoid a collision is to stay far enough away from the curb that, if the person decides to cross, the car can stop in time. An XC90 has a 70-0 stopping distance of 180 ft; based on a graph I found, that puts braking from 35 MPH at about a third of that, or 60 ft. Since the person can cross into the lane in that time/distance, the only options are to go slower than 35 and hope they are not faster than average, or to be a lane over.
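
For anyone who wants to play with those numbers, here is a small Python sketch of the same arithmetic. The figures (lane width, XC90 width, walking pace, the 180 ft 70-0 braking distance) are taken from the post above, and the simple speed-squared scaling of braking distance is my own simplifying assumption.

```python
# Back-of-the-envelope check of the curb-step scenario described above.
# All figures are approximations from the post, not measured data.

MPH_TO_FPS = 5280 / 3600   # feet per second per mph

car_speed_fps = 35 * MPH_TO_FPS             # ~51 ft/s

lane_width_ft = 11.0
car_width_ft = 79 / 12                      # Volvo XC90, ~6.6 ft wide
max_gap_ft = lane_width_ft - car_width_ft   # car hugging the far edge, ~4.4 ft

walk_speed_fps = 3.1 * MPH_TO_FPS                     # ~4.5 ft/s walking pace
time_to_enter_path_s = max_gap_ft / walk_speed_fps    # ~1 s from curb to car's path

# Assumption: braking distance scales with speed squared from the 70-0 figure.
braking_70_ft = 180.0
braking_35_ft = braking_70_ft * (35 / 70) ** 2        # ~45 ft, pure braking only

distance_covered_ft = car_speed_fps * time_to_enter_path_s

print(f"Pedestrian reaches the car's path in ~{time_to_enter_path_s:.1f} s")
print(f"In that time the car covers ~{distance_covered_ft:.0f} ft")
print(f"Pure braking distance from 35 mph: ~{braking_35_ft:.0f} ft (no reaction time)")
```

Even with zero reaction time, the car covers roughly its full braking distance in the second it takes the pedestrian to step into its path, which is the point being made above.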
 
  • Informative
Reactions: RubberToe
Does the Machine know it has hit something?

The thought occurred to me that (most? many? on average?) human drivers, on hearing the sickening thud or seeing debris etc. on the bonnet / over the windscreen, would stop and investigate and potentially then be in a position to carefully back up / move forward to release anything trapped underneath.

If a Machine were unaware that the collision had occurred, it would compound the injuries by carrying on, and I am having difficulty imagining (with no steering wheel / controls) how one would release something trapped underneath. A "Summon"-type app, perhaps?
 
As I understand it, in the USA the oncoming traffic (lorry) would have to stop when the (school) bus was dropping off - the school bus would have its hazard lights on - for exactly this reason: someone crossing "blind" behind the bus cannot see the oncoming traffic (if they even bother to look!)

I think that's very sensible, and we could do with that here ...

... except we don't have yellow school buses; we just have knackered buses that farmers and the like keep to make a bit of extra money doing school runs, and regular buses carrying kids as well as normal travellers.

We also have bus stops every 100 m. Traffic in both directions would be stopping and starting every 10 s, rather than just the traffic following the bus.
 
Well, I believe something must have been wrong with the car, and the safety driver must have been out of it, if the victim came from the left. If the victim came from the right, the police statement can make sense, but only if the car was in the rightmost lane.
 
From the police statement given to the SF Chronicle and from all the news stories reporting on this accident, she came from the center median on the left. I have not seen a retraction or corrected report anywhere. While I at first thought she must have traveled from the right side due to the damage on the car, the police said they did an initial review of the dashcam footage (front and back cameras, from what I understand); in light of that, I have to believe they did indeed see her move in front of the car from left to right.

For the car to be so smashed in on the right side of the hood like that, was she walking on the right side of the bike (body closest to the car) or on the left side of the bike (bike closest to the car)? I don't think I've seen anything mentioned about this. In other words, was her body fully visible from the car or partially hidden? I'm guessing her body was on the car side, since otherwise more damage to the car from the bike might have been observed, but that's just my guess. There was no apparent windshield damage. No mention of the car not having its headlights on either... as silly as that is to say given it was night, I understand from reading news-article comments from people living in the Tempe/Phoenix area that this is not that uncommon there.

In any event, not seeing something crossing a car's path that spans several feet wide and at least 5 feet high is pretty scary to think about. I can only imagine how traumatic it was for the safety driver, as it would have been for passengers in the Uber if there had been any.

I always feel a bit bad discussing events like this, especially when someone is seriously injured or has died as a result, but I do think it's important to understand what transpired. I think it's important for Uber to halt road testing of the fleet until more is learned from Elaine's death. Her friends, according to a Guardian article, would like to see Uber shut down, but I seriously don't think that will happen. I'm glad this is being investigated by the NTSB, who I'm sure have a more thorough understanding of self-driving car technology than the local police.

I have to say all this has me wondering how well the self-driving systems out there can handle motorcycle riders, who in California can split lanes. I find it scary as a regular driver to have motorcycles suddenly pass in limited space when traffic is slow or stopped. They have a much smaller footprint traveling with traffic than a car, and probably about the same overall footprint as a bicycle when crossing a car's path (albeit at a much faster rate of speed).
 
Does knowing the person is there help?

If the car was going 35 MPH, that is 51 ft per second. A standard interstate vehicle lane is 12 feet and urban lanes are more like 10 feet, so say it is 11 feet. The Volvo XC90 is 79 inches wide (6 ft 7 in). So the most gap there could be between a car still in its lane and the curb is ~4.5 feet; if the car is centered, the gap is about 2.25 ft. A 5'4" woman has an average stride of 2.2 ft, so one step puts her in the lane and two steps put her in front of the car even if it is hugging the far edge. A person can go from standing to walking pace in one stride, and walking pace is 3.1 MPH or ~4.5 fps. So in slightly over a second a person can go from off the road to in the vehicle's path.

The only way to avoid a collision is to stay far enough away from the curb that, if the person decides to cross, the car can stop in time. An XC90 has a 70-0 stopping distance of 180 ft; based on a graph I found, that puts braking from 35 MPH at about a third of that, or 60 ft. Since the person can cross into the lane in that time/distance, the only options are to go slower than 35 and hope they are not faster than average, or to be a lane over.

Wrong as usual. Plus, the driver wasn't paying attention.

Front View Video of Accident

Twitter
 
Wrong as usual. Plus, the driver wasn't paying attention.

Front View Video of Accident

Twitter

Yep, the hypothesis I proposed (a person stepping off the curb) does not apply to this case. Does that invalidate the logic/physics around it? It seems reasonable to discuss not just specific events, but also the general systemic issues of driving with uncontrollable external actors.

In this case, the car was overdriving its headlights (although the camera's dynamic range likely does not match human vision). At 40 MPH with about 2 seconds from the first glimmer of the shoe until impact, that is about 117 feet, a little under double the stopping distance at full braking. So reaction time would need to be less than 1 second (slightly more if the car swerves). The vision system could have detected the person and stopped the car in time; it's strange the LIDAR didn't.
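
A quick sanity check on that 40 MPH / 2 second estimate, using the same assumptions as the earlier sketch (the XC90's quoted 180 ft 70-0 braking distance and simple speed-squared scaling), so treat it as a rough illustration rather than a reconstruction of the actual event:

```python
# Rough check of the 40 mph / ~2 second visibility window described above.
MPH_TO_FPS = 5280 / 3600

speed_fps = 40 * MPH_TO_FPS                         # ~58.7 ft/s
time_visible_s = 2.0                                # first glimpse to impact (approx.)
distance_available_ft = speed_fps * time_visible_s  # ~117 ft

# Assumption: braking distance scales with speed squared from the 70-0 figure.
braking_40_ft = 180.0 * (40 / 70) ** 2              # ~59 ft of pure braking

reaction_budget_s = (distance_available_ft - braking_40_ft) / speed_fps

print(f"Distance available: ~{distance_available_ft:.0f} ft")
print(f"Braking distance from 40 mph: ~{braking_40_ft:.0f} ft")
print(f"Reaction-time budget to stop short: ~{reaction_budget_s:.2f} s")
```

That works out to just under a second of reaction budget, consistent with the estimate above, and within the kind of response time one would expect from a sensor suite that isn't limited by headlights.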