
Google 2016 California Autonomous results: 5,000 mi. between incidents


RubberToe

Here is the link to the keynote where the results were discussed:

Alphabet Says its Autonomous Cars Needed Less Human Help in 2016

Driver required intervention dropped from 0.8 per 1,000 miles driven in 2015 down to 0.2 per 1,000 miles driven in 2016. When California releases the annual report in about a week, we will see how the other manufacturers are doing with respect to that.
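For the arithmetic behind those figures, a quick sketch (just the unit conversion, nothing more):

```python
# Convert "disengagements per 1,000 miles" into "miles per disengagement".
def miles_between_disengagements(rate_per_1000_mi):
    return 1000 / rate_per_1000_mi

print(miles_between_disengagements(0.8))  # 2015: one disengagement every ~1,250 miles
print(miles_between_disengagements(0.2))  # 2016: one disengagement every ~5,000 miles
```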

That's setting the bar pretty high (actually, low would be the better word). :)

RT
 
I was looking through the 2015 year-end report and came across this interesting graph. It shows the car's improvement over time, ending in Q4 2015 with one disengagement about every 5,000 miles...

[Graph from the 2015 report: disengagements per 1,000 autonomous miles by quarter]

Going back to my first post, they said the rate dropped from one disengagement every 1,250 miles in 2015 to one every 5,000 miles in 2016.

But the graph shows that in late 2015 they were already at basically one in 5,000 (0.2 per 1,000 miles), so the rate looks to have remained steady during 2016, without the kind of overall improvement the graph shows during 2015.

I wonder if this is simply a matter of them having addressed all the corner cases seen during 2015 and being able to handle them in a generic manner: basically, unexpected things that happen on a quasi-repeatable basis. Then maybe the reason 2016 was flat is that they are now encountering random events that have no correlation to prior ones, and which the car is not able to handle.

Taking this a step further, maybe they are at the point of diminishing returns (i.e., the disengagement rate may never fall much below 0.2 per 1,000 miles). So maybe the cars are now "running into" (pun intended) corner cases that have never been seen and coded for previously, and may never repeat. And there is likely an endless supply of those.
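To make that conjecture a bit more concrete, here's a toy model; the 0.6/0.2 split between "repeatable" and "one-off" disengagements is purely my own illustrative assumption, not anything Google has published:

```python
# Toy model: total disengagement rate = repeatable corner cases (fixable once
# seen) + one-off events that never recur (an irreducible floor).
# The 0.6 / 0.2 split is an illustrative guess, not a published figure.
REPEATABLE_PER_1000_MI = 0.6
ONE_OFF_PER_1000_MI = 0.2

for fraction_fixed in (0.0, 0.5, 0.9, 1.0):
    remaining = REPEATABLE_PER_1000_MI * (1 - fraction_fixed)
    total = remaining + ONE_OFF_PER_1000_MI
    print(f"{fraction_fixed:.0%} of repeatable cases fixed -> "
          f"{total:.2f} per 1,000 mi (one every {1000 / total:,.0f} mi)")
```

Under those assumptions the rate falls quickly while the repeatable pool is being worked through (the 2015 curve) and then flattens at the one-off floor (the 2016 plateau).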

RT
 
"Driver required intervention" equals "not autonomous" whether it is once every mile or every 5000 miles.

Herein lies the problem: Google have been at this for years, at huge cost, with esoteric equipment and in very controlled circumstances, and are still a long way from perfection.

The general public is not yet ready to accept the argument that autonomous or even assisted driving is safer; one accident, as Tesla have found, will result in endless hyperbole.

When it gets to one in a billion miles then the case might become overwhelming.
Until then, much work to do.
 
"Driver required intervention" equals "not autonomous" whether it is once every mile or every 5000 miles.

I think that is why there is a shift by automakers toward using data gathered by regular cars, not only by a limited test fleet. You can't gather data fast enough, or eliminate enough corner cases, with only a test fleet.
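As a back-of-the-envelope illustration of the data-rate gap; every number below (fleet sizes, per-car mileage) is a made-up assumption chosen for scale, not a reported figure:

```python
# Rough scale comparison: dedicated test fleet vs. consumer fleet logging data.
# All numbers are illustrative assumptions, not reported figures.
test_cars, test_mi_per_car_per_day = 60, 100
consumer_cars, consumer_mi_per_car_per_day = 100_000, 30

test_daily = test_cars * test_mi_per_car_per_day              # 6,000 mi/day
consumer_daily = consumer_cars * consumer_mi_per_car_per_day  # 3,000,000 mi/day

print(f"Test fleet:     {test_daily:,} miles/day")
print(f"Consumer fleet: {consumer_daily:,} miles/day")
print(f"Exposure ratio: {consumer_daily / test_daily:.0f}x")
```

The exact numbers don't matter; the point is the orders-of-magnitude gap in exposure to rare events.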
 
"Driver required intervention" equals "not autonomous" whether it is once every mile or every 5000 miles.


I don't see the problem?
Advancement is good.

And as the work you mention is done, further improvements will be made. That is the nature of things.
 
The point I am making is that one driver intervention on average every 5000 miles means that Google's cars are not Autonomous, and really should not be described as such.

I have every respect for, and applaud, the work Google is doing, but the fact that with all the effort they are putting in they can still only achieve 1:5,000 interventions:miles just goes to show how damn hard full autonomy is.

I also feel that Tesla are likely to run into difficulties with their claims of fully autonomous driving, to the point that I doubt it will be achieved other than on the interstate, and even then in a limited way. It is worth observing, however, that not for the first time Tesla have been a bit cute with words: "hardware capable of fully autonomous driving" is clearly not the same as full autonomy implemented everywhere.

imho autonomous cannot happen until systems can anticipate the driving scene in the same way a human does. Kids playing on the sidewalk with a ball, dog chasing a cat, partygoers spilling out of a nightclub, an incident the other side of the median ... all that sort of stuff.

Sure, you can have autonomy without handling some of these edge cases, but then you get 1:5,000, and that just isn't going to be acceptable when an autonomous car hits a young kid who ran into the road without warning after their ball. When a human driver does this there is a legal process to establish the events, and society accepts that it happens, however unfortunate for the individuals concerned; society imho is a long way from accepting that an autonomous car killed or injured a kid in the same circumstances.

On top of which, the instant a car becomes autonomous the manufacturer becomes responsible, as the driver by definition is no longer in control and is relying entirely on the vehicle's systems. That manufacturer is going to need one heck of an insurance policy.

The only route to manufacturers absolving themselves of responsibility is standards-body accreditation and testing.
This I see as the route forward, but yet again attitudes have to come a long way.

All good fun though, let's enjoy the ride :)
 
imho autonomous cannot happen until systems can anticipate the driving scene in the same way a human does. Kids playing on the sidewalk with a ball, dog chasing a cat, partygoers spilling out of a nightclub, an incident the other side of the median ... all that sort of stuff.
A human's anticipation does little by itself; at best it puts the human on the lookout. It certainly isn't guaranteed to speed up the human's reaction time.

The computer is always on the lookout and always alert. It can react faster than a human and track multiple targets at once, instead of being limited to a typical human's 5-6 degrees of sharp central vision. If the computer can't track a child running into the street in time, then there's no hope for a human in the same situation.
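To put rough numbers on the reaction-time point, here's a minimal sketch; the ~1.5 s human reaction time, ~0.1 s machine reaction time, and 0.7 g braking are textbook-style assumptions, not measured figures for any particular system:

```python
# Stopping distance = reaction distance (v * t_react) + braking distance (v^2 / 2a)
def stopping_distance_m(speed_mph, reaction_s, decel_g=0.7):
    v = speed_mph * 0.44704   # mph -> m/s
    a = decel_g * 9.81        # deceleration in m/s^2
    return v * reaction_s + v**2 / (2 * a)

for label, t_react in (("human (~1.5 s)", 1.5), ("computer (~0.1 s)", 0.1)):
    print(f"{label}: {stopping_distance_m(35, t_react):.1f} m to stop from 35 mph")
```

At 35 mph the shorter reaction time alone saves roughly 20 m of travel before the brakes even bite.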

Keep in mind 1:5,000 is an improvement over 2015, helped along by more miles driven, but guess who racks up more miles than that in a single day?

Here's a note from Tesla:

Pursuant to the requirements of 13 CCR § 227.46 (Reporting Disengagement of Autonomous Mode), Tesla Motors Inc. is hereby reporting that there were zero (0) autonomous mode disengagements during the period from the date of issuance of our testing permit.

But this is for 2015, so let's wait for the 2016 reports to become public.
Autonomous Vehicle Disengagement Reports

It could mean they didn't do any testing at all... or it could mean they are just that good. :cool:
 
The point I am making is that one driver intervention on average every 5000 miles means that Google's cars are not Autonomous, and really should not be described as such.


Autonomous vehicles don't have to be perfect; they just have to be measurably and consistently better than humans. FUD will drive that criterion to 'significantly' better, but nevertheless... not perfect.

Waymo's cars are making fewer than three errors a year if you normalize to average annual [American] mileage... on surface streets, where many corner cases will occur. Not to mention their slow speed makes them pretty annoying for the rest of us; people sometimes get quite aggressive toward them, to the point where some actively @#$% with them. Those are some pretty good odds already compared to humans, in pretty rough situations, and we haven't seen anything yet when it comes to autonomous data collection.
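For reference, the "fewer than three errors a year" figure works out roughly like this; the ~13,500 miles/year is an assumption based on typical US annual mileage estimates:

```python
# Normalize the 2016 disengagement rate to a typical driver's annual mileage.
annual_miles = 13_500            # assumed typical US annual mileage
miles_per_disengagement = 5_000  # 0.2 disengagements per 1,000 mi (2016)

print(annual_miles / miles_per_disengagement)  # ~2.7 interventions per year
```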

imho autonomous cannot happen until systems can anticipate the driving scene in the same way a human does. Kids playing on the sidewalk with a ball, dog chasing a cat, partygoers spilling out of a nightclub, an incident the other side of the median ... all that sort of stuff.

There's a strong presumption that these kinds of situations aren't among the special cases already being investigated. Waymo cars drive around the surface streets of Mountain View, Palo Alto, and surrounding areas, through residential neighborhoods and commercial districts. That kid or that dog is WAY safer around a Waymo car today, let alone an autonomous car of the future.

Not to mention that higher-end cars have had 'dumb' automatic emergency braking systems for years that will slam on the brakes when a basketball or a drunk appears from behind a parked car.
 
The difficult edge cases are going to be the ones where the car has to break the rules:

The blocked lane with a person directing traffic to use the opposite side of the street against the painted lines / signs.
The emergency vehicle approaching stopped traffic at an intersection from behind and in the same direction of travel, where the correct response is to drive forward and enter the intersection against a red light in order to clear a path for the emergency vehicle to pass.
 
That kind of corner case isn't that difficult to overcome. In the farther future it will be OBE (overcome by events) by networked driving: when cars are all communicating with each other, you don't need someone directing traffic.

In the nearer term, that's a situation where the car simply stops and requires the driver to take over.

Human interaction does not necessarily mean unsafe interaction.
 
....Google's cars are not Autonomous, and really should not be described as such...

"Autonomous" is specialized terminology, not a generic, literal description.

It's the same way with AutoMobile: it doesn't mean a vehicle that can "auto"matically drive itself.

Or Autopilot: it doesn't mean there's an android pilot on board eliminating the need for a human pilot.

My impression prior to Autopilot's introduction was that anything autonomous must be perfect, causing zero accidents and zero deaths.

I later came to understand, as bxr140 said, that to qualify for the autonomous label a car does not have to be perfect.

In the case of Tesla, Full Self-Driving Capability is defined as "a probability of safety at least twice as good as the average human driver."
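As a rough illustration of what "twice as good" means in miles terms; the ~94 million miles per US traffic fatality is the approximate figure Tesla has cited, so treat it as a ballpark:

```python
# "At least twice as good as the average human driver", in miles-per-fatality terms.
human_miles_per_fatality = 94_000_000  # approximate, commonly cited US figure

required = 2 * human_miles_per_fatality
print(f"Needs at most one fatality per {required:,} miles")  # 188,000,000
```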

... standards body accreditation/testing.

I agree: lots of claims, but where is the independent testing from Consumer Reports or other testers to prove what an autonomous car can do?

Can they handle high-speed Left Turn Across Path (LTAP) detection, as in the case of the Florida accident?

Can they avoid being T-boned while going through a green light, by spotting a cross-traffic red-light runner?

Can they safely maneuver around a damaging pothole, or worse, an uncovered manhole?

and so on...
 
The point I am making is that one driver intervention on average every 5000 miles means that Google's cars are not Autonomous, and really should not be described as such.


Though I generally agree with you, I believe you may be underestimating how quickly corner cases could be resolved with the 1 million+ car fleet Tesla will have by the end of 2018.
 
The emergency vehicle approaching stopped traffic at an intersection from behind and in the same direction of travel, where the correct response is to drive forward and enter the intersection against a red light in order to clear a path for the emergency vehicle to pass.

I am a former paramedic and that is *not* the correct response. It is illegal, at least in California and Texas, and dangerous everywhere. When approached by an emergency vehicle, put your right turn signal on, move as far to the right as possible, and slow and stop if safe. Running red lights is a good way to get killed.
 