
Autonomous Driving Accident Scenarios

Forgive me if a thread like this has already been started. I'm still kind of new to the forum.

This is probably one of the areas of the car that will change the most in the next decade. It's difficult to predict all the hardware that will be needed, and a car's hardware can become outdated quickly. In the end the computer will be safer than a human because of its reaction time and the plethora of sensors of various types all over the vehicle. Still, at this point, good drivers can pinpoint bad drivers and hopefully use intuition to avoid a situation, or at least reduce their speed before they're in it.
Do you think Tesla is picking up human intuition habits while in shadow mode? Today, I could see a Tesla Model S owner exiting autonomous driving mode because they felt a situation was dangerous. Eventually, though, if a situation were dangerous, maybe that's exactly when you'd engage autonomous mode, because your insurance company would give you a reduced deductible for an accident that happened while the car was driving itself. Would it be silly to have a "red alert" button?

Scenario 1: You are traveling on a two-lane road at 55 mph, and a car going the other direction is drifting out of its lane because the driver is either on their cell phone or drunk. A human in control might stay wide before reaching the car in case it crosses the yellow line. What would a Tesla vehicle do?

Scenario 2: A ball rolls onto the street, and a second later a child chasing the ball runs into the road. Would the car's sensors pick up the motion of the ball and flag it as a potential high-risk situation? Does the car veer away from the child or attempt a straight-line stop?

What other scenarios might today's Tesla have trouble with as it takes on the more difficult road situations?
 
Neither scenario can be answered without knowing the collision-avoidance logic behind the programming. In Scenario 1, it makes sense that the car would be programmed to cross the lines on the road if necessary to avoid an imminent collision, so long as it doesn't veer into oncoming traffic or hit pedestrians to do so. In Scenario 2, if the ball is big enough to be detected as a road hazard, the car may stop or swerve to avoid it, and thus avoid the child too. If the ball is too small to register as a hazard, the car will likely drive over it and then swerve to avoid the child once they appear.
 
Be careful using the word "programming" with regard to autonomous driving. These systems are a group of deep neural networks; they aren't programmed per se. Instead they are taught, using data gathered from real drivers, video, and image databases. These cars will never be "programmed" to know what a child or a car looks like. Humans are bad at describing things, and hand-written descriptions actually lead to less accurate networks. Instead the machine is shown millions of images and, for lack of a better phrase, "learns" what things are and what to do. See: machine learning.
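To make the "taught, not programmed" point concrete, here's a toy sketch. Real driving networks are deep convolutional nets trained on millions of images; this is just a perceptron on made-up two-number "images", but it shows the key idea: nobody writes a rule for what a pedestrian looks like, and the decision weights fall out of the labeled examples.

```python
# Illustrative only: the features, labels, and data below are invented.
# The point is that the weights are learned from examples, not hand-coded.

def train_perceptron(examples, epochs=50, lr=0.1):
    """Learn weights from labeled (features, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred          # 0 when the prediction is right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy "images" reduced to two features; label 1 = pedestrian-like, 0 = not.
data = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.2, 0.9), 0), ((0.3, 0.8), 0)]
w, b = train_perceptron(data)

def classify(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

The learned `w` and `b` are whatever the data implies; change the examples and the rule changes, with no code edits at all.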

For scenario 1, on a two-lane road the car would in all likelihood jerk to the right and/or brake, but only at the last moment. Swerving early doesn't help, because the drunk driver isn't necessarily going to swerve into your path. It would also depend on how far ahead the sensors are calibrated to see and react. As a human you stay wide because you're trying to prepare yourself, but a computer is always prepared, and on a two-lane road with no shoulder there's not much room to stay wide anyway. The car has very little trouble distinguishing road from non-road in Mobileye demos.
An accident might be unavoidable, but if it is avoidable, it makes very little difference whether you stayed wide the entire time or the car detected the hazard and swerved safely before you even had time to react.

For scenario 2, the ball would register as non-road. The new trifocal cameras can see wide-angle as well, but the car shouldn't engage AEB for every small human running toward the road; that wouldn't make sense unless the kid actually enters the roadway. Assuming you're going at neighborhood speeds, the car *should* have enough time to brake. If the kid runs out from behind a car, 100% obscured the entire time, and emerges right in front of you, there's nothing the car, or even a human, could do about it. If they aren't fully obscured, neural networks usually recognize objects like that.
I hear the ball scenario constantly. Mobileye can sense an object in the road a few centimeters in size quite easily, but if the ball comes and goes quickly, who knows; humans miss these things all the time too. Rest assured, though, that the car can react faster than a human, so if a child runs out while you're going at normal neighborhood speeds there should be plenty of room to brake, assuming it's not an unavoidable surprise. If a ball crosses a highway and a kid chases it, you've got other problems...
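The "plenty of room to brake at neighborhood speeds" claim can be sanity-checked with back-of-envelope stopping distances. The figures below are assumptions, roughly 1.5 s for typical human perception-reaction time, 0.2 s for an automated system, and 7 m/s² of braking deceleration on dry pavement, not measured values for any real car.

```python
# Stopping distance = reaction distance + braking distance:
#   d = v * t_react + v^2 / (2 * a)
# All inputs here (reaction times, deceleration) are assumed, not measured.

def stopping_distance_m(speed_mph, reaction_s, decel=7.0):
    v = speed_mph * 0.44704            # mph -> m/s
    return reaction_s * v + v * v / (2 * decel)

human = stopping_distance_m(25, reaction_s=1.5)      # roughly 26 m
computer = stopping_distance_m(25, reaction_s=0.2)   # roughly 11 m
```

At 25 mph the shorter reaction time alone cuts the total stopping distance by more than half, which is the whole advantage being claimed above.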

Parents should still teach their kids to look both ways and not blindly chase balls into the street, even with autonomous cars.

The training data is already being collected from all Mobileye clients, Tesla being one of them, along with nearly all the major auto manufacturers. Tesla, of course, has some of its own data as well. There are tens of millions of miles of training data available to train the networks, including human reactions to situations. They've been collecting data for well over a decade.
 
Any machine learning algorithms in the software are part of the programming and should be referred to as such. The adaptations and optimizations that neural networks provide are limited by design constraints, which are also programming. You can collect all the data sets you want, but they won't make sense after the fact if they lack sufficient context. That context, i.e. improved sensors, is an ongoing development, making 10-year-old data practically useless.
 
You can say that someone designed, trained, or even tweaked a network using code, but you can't say someone programmed the final network weights using code; that would defeat the purpose of machine learning.
As for the data being stale, it's not. Last time I checked, people still look like people and road still looks like road. If context changed that much, humans wouldn't be able to adapt either. Some of the advances in sensor tech are simply camera sensors with better dynamic range and sensitivity.
We've come a very long way.
 
Most of the stuff I've seen on TMC is a little outrageous, like: what would it do if an elephant were balancing on a ball in the middle of the road, while it's raining, during a solar flare? Of course I'm exaggerating, but Mobileye has been working on edge cases for a while now.
 
Thanks for the input, everyone. I honestly don't know much about the current hardware in the new Tesla models and haven't found a thread that explains it well. The trifocal camera sounds cool. Is the trifocal meant to help with position in space (like our eyes do)? I've heard that some sensors can bounce signals off the pavement and still see a vehicle that's blocked by another vehicle. I wonder if they could do the same thing to pick up the movement of a child? If we assume a child is smart enough not to run into the road after a ball, the same scenario would hold for a dog. For that matter, what does the car do when a dog tries to eat the tires of a Tesla? Some dogs make chasing a car a real game.
 
There is the fundamental piece of programming, code, prime directive, billion-dollar question, whatever you want to call it, that needs to be addressed:

How does Autopilot assign a "cost" in any given scenario? I.e., what weight/cost does it put on human life over property? Whose human life? The Tesla occupants' lives? A pedestrian's? Someone's in another vehicle?

When it has to make the toughest decision, what does it choose? We know from over 100 years of human driving that we (humans, collectively) get it "wrong" thousands of times a day.

When do we decide that AP or some other "brand" of autonomous driving is better/safer than "us"?

Personally, I'd love to see interconnected driving AI, where all cars talk to each other, meaning fewer scenarios where that jerk who saw the "Right Lane Ends - 1 mile" sign waits until .9999999999 miles to try to squeeze in ahead of you...
 
We know from over 100 years of human driving that we (humans, collectively) get it "wrong" thousands of times a day.

I've personally never had to make such a tough decision. I'd imagine that autonomous driving will actually reduce such incidents from occurring in the first place, thanks to safer driving in general.

Statistically it really doesn't matter what it chooses: if as a whole it makes better decisions than humans and reduces overall incidents, then it is still clearly better than a human. Granted, for the foreseeable future humans can always take back control.

If a machine got it wrong hundreds or even tens of times a day instead of thousands, is that not better for the world? Does that not save human lives, even if every now and then an accident isn't preventable?

Car networks would be cool if implemented in a unified way across all manufacturers.
 
Personally, I'd love to see interconnected driving AI, where all cars talk to each other, meaning fewer scenarios where that jerk who saw the "Right Lane Ends - 1 mile" sign waits until .9999999999 miles to try to squeeze in ahead of you...

The zipper merge is actually the best way to reduce traffic. People who don't wait until the last second are just extending the backup in the remaining lane. Simple as that. Sorry if I'm getting off topic.

It's definitely an interesting topic. Google has been working on those corner cases for almost a decade and says there is still lots to figure out, which makes me wonder how other manufacturers can claim they are much farther ahead.


Let's not forget that redundancy is also key for autonomous driving. I think the true revolution is indeed the learning neural network. I think companies are keeping all of this very close to the chest so as to be first to market. You hear about Uber buying 100,000 autonomous Mercedes, Lyft doing the same with GM, Volvo coming out with fully autonomous cars for 100 people and paying for any accidents, Apple putting $1B into China's version of Uber, and of course Tesla with Mobileye and the amount of data they gather. It's going to be an interesting couple of years as this pans out.
 

Car networks would be cool if implemented in a unified way across all manufacturers.


the problem for a portion of this is going to be the mix of cars on the road. there will still be plenty of unpredictable human-driven cars for a while.
 
When it has to make the toughest decision, what does it choose? We know from over 100 years of human driving that we (humans, collectively) get it "wrong" thousands of times a day.

When do we decide that AP or some other "brand" of autonomous driving is better/safer than "us"?


When they get it wrong fewer than "thousands of times a day."

I am sure they are already at that point.

How does Autopilot assign a "cost" in any given scenario? I.e., what weight/cost does it put on human life over property? Whose human life? The Tesla occupants' lives? A pedestrian's? Someone's in another vehicle?

I would hope that it would choose the option with the lowest overall impact on human life. For instance, if a single-occupant car is charging toward a family of two in a crosswalk, I hope it would swerve the car off the road even if it means crashing into a tree.
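One way to phrase that hope is as expected-harm minimization over the available maneuvers. This is a purely hypothetical sketch: nobody outside these companies knows how (or whether) Autopilot weighs outcomes like this, and every option, probability, and severity number below is invented for illustration.

```python
# Hypothetical decision sketch: pick the maneuver with the lowest expected
# harm. All figures here are made up; they are not Tesla's actual logic.

def expected_harm(option):
    """Expected cost = sum of probability * severity over possible outcomes."""
    return sum(p * severity for p, severity in option["outcomes"])

options = [
    # brake in a straight line: 40% chance of still striking the pedestrians
    {"name": "brake straight", "outcomes": [(0.6, 0), (0.4, 100)]},
    # leave the road: pedestrians spared, but likely occupant injury
    {"name": "swerve into tree", "outcomes": [(1.0, 30)]},
]
choice = min(options, key=expected_harm)
```

With these invented numbers the swerve wins (expected harm 30 vs 40), which matches the intuition in the post; change the severities and the "right" answer flips, which is exactly why the weighting question matters.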
 
You can say that someone designed, trained, or even tweaked a network using code, but you can't say someone programmed the final network weights using code; that would defeat the purpose of machine learning.
As for the data being stale, it's not. Last time I checked, people still look like people and road still looks like road. If context changed that much, humans wouldn't be able to adapt either. Some of the advances in sensor tech are simply camera sensors with better dynamic range and sensitivity.
We've come a very long way.
As I understand it, the neural net is primarily for understanding the environment, while the actions the car takes are programmed based on the neural net's output.
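That split, a learned perception layer feeding a hand-written planning layer, can be sketched as follows. Everything here is invented for illustration: the perception function is a trivial stub standing in for a neural network, and the planner's 2-second rule is an arbitrary example, not anyone's real policy.

```python
# Sketch of the perception/planning split. Names and thresholds are invented.

def perceive(scene):
    """Stand-in for the neural net: returns detected objects with distances (m)."""
    return scene["objects"]          # e.g. [("pedestrian", 12.0), ("car", 40.0)]

def plan(detections, speed_mps):
    """Hand-written logic layered on top of the perception output."""
    for kind, dist in detections:
        # Arbitrary example rule: brake if a pedestrian is under ~2 s away.
        if kind == "pedestrian" and dist < speed_mps * 2.0:
            return "emergency_brake"
    return "continue"

action = plan(perceive({"objects": [("pedestrian", 12.0)]}), speed_mps=10.0)
```

The appeal of the split is that the hard-to-specify part (what a pedestrian looks like) is learned, while the easy-to-audit part (what to do about one) stays as inspectable code.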