Welcome to Tesla Motors Club

Mobileye and the Finger Of Blame


J1mbo
Active Member · Aug 20, 2013 · UK
A new paper by Shai Shalev-Shwartz, Shaked Shammah & Amnon Shashua of Mobileye has been posted on arXiv: "On a Formal Model of Safe and Scalable Self-driving Cars". It attempts to provide a standardised "responsibility-sensitive safety" system (aka "The Finger Of Blame") for self-driving cars using hard-core maths. There's an 8-page overview to go with it.

Why you would need a formularised blame-determining *cough* safety *cough* system, when there is already a hundred years of legal precedent for assigning blame in human-caused accidents, is not clear. Maybe this is why the "Economic Scalability" piece is to follow later... once Mobileye are offering a "responsibility-sensitive safety" certification service to official licensing bodies, perhaps? Something like this?

[attached image: BLAME.JPG]


As they have partnered with BMW on this, "Cautious Commands" specifically exclude: indicating, braking before exits, and using less than 100% throttle when pulling away. Mobileye can't decide on a single default emergency policy, so they have gone for a quantum state instead: braking or steering (but never both).

Probably the most dull piece of PR you'll read this year, but for those who like a bit of humour with their maths, there's a useful summary over at The Register.
 
Excellent find @J1mbo!

Interesting opinion from the authors:
The “Winter of AI” is commonly known as the decades long period of inactivity following the collapse of Artificial
Intelligence research that over-reached its goals and hyped its promise until the inevitable fall during the early 80s.
We believe that the development of Autonomous Vehicles (AV) is dangerously moving along a similar path that might
end in great disappointment after which further progress will come to a halt for many years to come.

This is my fear as well. People are getting ahead of themselves. People have seen the success of automatic lane following, and assume that it's a stone's throw to full self driving.

Automatic lane following is new and impressive, but it's basically a straightforward application of deep CNNs to image processing. Yes, lane following and full self-driving both require neural networks, but that's where the similarity ends. The Wright Flyer and a 777 both have wings, and both fly, but it took a long time to get from one to the other.
 
So the paper proposes a set of simple rules such that, so long as the AV adheres to them, the AV's manufacturer can disclaim any 'fault' for incidents in which the AV is involved. You can see why an AV manufacturer's legal department might be in favor of this. I kind of doubt that human drivers will find it amusing, however.
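For the curious, the centrepiece of those rules is a closed-form "safe longitudinal distance": keep at least this gap to the car in front, and you are deemed blameless if it suddenly brakes. The worst case assumed is that the front car brakes as hard as physically possible while you accelerate throughout your response time and then brake at only your guaranteed minimum. A minimal Python sketch of the formula as I read it (function and parameter names are mine, not the paper's):

```python
def safe_longitudinal_distance(v_rear, v_front, rho, a_max_accel,
                               b_min_brake, b_max_brake):
    """Minimum safe gap (metres) behind a leading car.

    Worst case: the front car (speed v_front m/s) brakes at b_max_brake,
    while the rear car (speed v_rear m/s) accelerates at a_max_accel for
    the response time rho seconds, then brakes at only b_min_brake.
    Accelerations in m/s^2.
    """
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2                # distance covered during response time
         + v_rear_after_rho ** 2 / (2 * b_min_brake)   # rear car's stopping distance after rho
         - v_front ** 2 / (2 * b_max_brake))           # front car's stopping distance
    return max(0.0, d)                                 # clamped at zero

# Two cars at 30 m/s (~67 mph), 1 s response time,
# guaranteed braking 4 m/s^2, front car's max braking 8 m/s^2:
print(safe_longitudinal_distance(30.0, 30.0, rho=1.0, a_max_accel=3.0,
                                 b_min_brake=4.0, b_max_brake=8.0))
# → 111.375 (metres)
```

Note how conservative that is at motorway speed: if everyone actually held a 111 m gap, the humans around the AV would simply fill it, which is rather the point of the posts below.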

Currently, a major suspected source of incidents with AVs is behavior that differs from the majority of human-driven vehicles, which leads the surrounding humans to mis-predict the AV's probable actions, resulting in a collision. The human is almost always at fault, but that's little consolation if the presence of the AV is leading to an increase in accidents.

Grandpa can provide himself with plenty of 'safety margin' if he drives on the freeway at 45 MPH, waits 5 seconds after the light turns green to proceed, and refuses to enter a 4 way stop until everyone else has gone ahead. But the other drivers all hate grandpa. (This was my grandpa, by the way). And grandpa might not get the blame, but he's causing plenty of problems and more than his share of accidents.

I think one of the reasons that Google's car hasn't proceeded to real world use is that the gap between 'blameless' driving and sharing the road well has been found to be unexpectedly large. Being a good citizen of the roads often means that you get less margin than you might otherwise like because the drivers around you place a high priority on smooth and rapid traffic flow. Ignore this and you save yourself at the cost of others, and become an impediment to efficient use of the roads we all share.

This proposal smells like an attempt to reverse the idea that AVs should be good citizens as well as safe drivers. Even if it somehow makes it into our legal framework and helps AVs avoid 'blame', the result could be that AVs become despised by other drivers. That saves the industry from one ugly problem but creates another, perhaps even uglier one.
 
that was dry reading

But the intent is important: as a developer, who would want to be criminally liable, through no fault of their own (e.g. the car was boxed in), for, say, a fatality?

Somehow I suspect that the application of autopilot sans driver responsibility will first scale out in a non-USA jurisdiction. But where? (My guess would be islands like Singapore, Jeju or Japan.)

I also think driving 'policy' is extraordinarily compute-light, and will be solved long before situational awareness is demonstrated with sub-1-minute-per-year downtime through rain/hail/shine.
 
(FWIW) I had a colleague who had to 'defend himself' in a courtroom overseas, because our submission at the pre-tender 'ideas' stage was, with hindsight, significantly better than what the customer eventually built (they went with a competitor).

Inconceivable, but it happens.