Deer and autopilot

Tesla has a way of implying/promising/marketing a feature and then quietly making it disappear if it cannot be done with software upgrades. Some examples include back-seat HVAC control via the app, WiFi hotspot capability, traffic light detection, blind spot warning, etc.
At least two of those are currently available or in development, while the others you mention are from 2010 or 2011, before the Model S was launched.
 
With the right sensors, the car should be able to see deer from farther away and possibly anticipate their reactions. AP2 doesn't have the sensors to do that, IMHO.
Anticipate sounds like a high expectation. I'm just hoping for an alert from the car when it 'sees' a deer off to the side of the road. Even better would be for an object alert to be painted onto my HUD or screen at night. I'd like the same alerts for pedestrians and bicyclists.
 
Anticipate sounds like a high expectation. I'm just hoping for an alert from the car when it 'sees' a deer off to the side of the road. Even better would be for an object alert to be painted onto my HUD or screen at night. I'd like the same alerts for pedestrians and bicyclists.

That's a great start. If they ever plan on FSDC, I'd like to see the car react to a deer based on its location, movement, and typical habits, anticipating the deer and helping it avoid discovering fundamental physical principles about two objects and one location.
 
Are you claiming that you can describe a moral dilemma and its resolution in a way that I can't model with a Turing machine? I would like to see that.

Thank you kindly.

Turing machines are not useful for automotive technology anymore except for simple switches.

Engine and transmission controls have been learning machines since at least 2000. I cannot think of a decent engine control after 1996 that was not a learning machine.

Bring it on today. The car gets a brake impulse every time a bicycle object appears. A Turing machine does nothing unless it has been told what a bicycle is and how to react. Next year it will be a GizBang instead, one that calls for an acceleration response. GizBangs were never programmed in.

The best possible AP system would be a learning system derived either from very experienced drivers or from a vast pool of drivers, with all data from drivers who crash discarded. I prefer the first method.

Turing Machine:

Is the object a human? If so, use a table to weight its value.
If collision is possible, assign risk value to all vector options including human.
If driver risk exceeds pedestrian risk, protect driver using that vector.

Learning Machine:

Is object human? Look at stored values for outcomes.
If collision is possible, find best outcome from history.
Store outcome, and recalculate outcomes.
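
To make the contrast concrete, here's a rough Python sketch of the two styles. The object classes, weights, and outcome scores are placeholders I made up for illustration, not anything from a real AP stack.

Code:
import random

# Hand-coded risk table: the 'Turing machine' only knows what it was told.
RISK_WEIGHTS = {"human": 1.0, "bicycle": 0.8, "deer": 0.3}

def rule_based_action(obj, collision_possible, driver_risk, object_risk):
    """Fixed, pre-programmed rules."""
    if obj not in RISK_WEIGHTS:
        return "do_nothing"  # a GizBang was never programmed in
    if collision_possible and driver_risk > object_risk * RISK_WEIGHTS[obj]:
        return "protect_driver"
    return "brake"

class LearningPolicy:
    """Keep a history of outcomes per (object, action) and prefer what has worked."""
    def __init__(self):
        self.history = {}  # (object, action) -> list of outcome scores

    def choose(self, obj, actions):
        def avg(action):
            scores = self.history.get((obj, action), [])
            return sum(scores) / len(scores) if scores else 0.0
        if any((obj, a) in self.history for a in actions):
            return max(actions, key=avg)
        return random.choice(actions)  # no experience yet: explore

    def record(self, obj, action, outcome_score):
        # Store the outcome so future choices are recalculated from it.
        self.history.setdefault((obj, action), []).append(outcome_score)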
 
Which problem? Modeling moral decisions or making them? And what is your proposed correspondence with a problem which is known to be NP-hard?

Thank you kindly.
Describing and modelling should be easy; it's resolving all possible outcomes in a "morally satisfactory" way that might be a problem. For starters, there's no concrete mathematical definition of morality, and I believe proving that the system will always be correct, make a decision in time, and halt is similar to the halting problem itself.

It's easy to get stuck in a loop, especially when considering time and/or the inherent worth of a human life and the worth of that life to humanity as a whole.
Trolley problem: three people vs. one person might seem easy, but what if the life of that one person turns out to be very important to humanity as a whole? Who are we to judge that life when we cannot know the future? How could a machine judge without having all the facts past, present, and future and still make a correct decision? There'd be a ripple effect through time from that one decision. Keep in mind there's a very small, finite time to make this decision before the inevitable happens.

The problems are easy to describe, yet it's impossible to prove that for every input moral problem the system will produce a correct output and not get stuck.
Of course, this all hinges on having a mathematical definition of morality in the first place for any solution to be provable.

I prefer this solution to all trolley problems :p it seems the most fair:

 
For starters, there's no concrete mathematical definition of morality, and I believe proving that the system will always be correct, make a decision in time, and halt is similar to the halting problem itself.

Not at all similar to the halting problem.

If you don't have a resolution for a given problem, you can't blame the car for not having it either. This is why I specified the question as given a problem and a resolution. I do NOT want to get into the question of having an AI making different moral determinations than humans. That way seems fraught with existential peril.

Thank you kindly.
 
This is why I specified the question as given a problem and a resolution.
That gets into people's arguments that it must be pre-programmed with all possible scenarios and their solutions.

I like a quote from Andrew Ng on Twitter:
Why do people think the trolley problem is critical for self-driving cars? The trolley problem wasn't critical even for trolleys.
 
That gets into people's arguments that it must be pre-programmed with all possible scenarios and their solutions.

Perhaps people who are unaware of how mathematics works. Is EVERY possible scenario pre-programmed into spaceships, or just the rules of astrodynamics, with new solutions computed on the fly? Every possible addition problem? Every possible YouTube video?

Thank you kindly.
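
To put the same point in toy code terms (purely illustrative, nothing to do with any real autopilot software): a lookup table can only answer the cases someone pre-programmed, while a rule computes any case on the fly.

Code:
# Pre-programmed scenarios vs. a general rule.
ADDITION_TABLE = {(1, 1): 2, (2, 2): 4, (2, 3): 5}

def add_by_table(a, b):
    return ADDITION_TABLE[(a, b)]  # KeyError for any case nobody anticipated

def add_by_rule(a, b):
    return a + b  # the rule handles every case, computed on the fly

print(add_by_rule(123, 456))  # 579, never pre-programmed anywhere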
 
Perhaps people who are unaware of how mathematics works. Is EVERY possible scenario pre-programmed into spaceships, or just the rules of astrodynamics, with new solutions computed on the fly? Every possible addition problem? Every possible YouTube video?

Thank you kindly.
Exactly, those rules can be described mathematically with known outcomes, whereas moral dilemmas cannot be described in the same way. You can't weigh the outcomes unless you've assigned a weight/priority to each. The best bet is simply to avoid such scenarios in the first place.

We could always program in some basic priorities and have everything else be random:

1) prioritize driver and passengers
2) pedestrians/bicyclists
3) domestic animals
etc
where at each layer you avoid the object unless doing so would negatively impact the outcome at the higher layers. It'd be in the spirit of Asimov's laws of robotics.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The randomness comes in when the only possible choices violate the same layer and have equally good outcomes at the layers above; then pick one at random. There's no good solution.
In this way we could have concrete rules/guidelines and you can apply logic and mathematics. We just need that human assignment of object weights/priority.
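
Here's roughly what that layered scheme could look like in code. The layer ordering, object classes, and harm numbers below are my own placeholder assumptions, just to show the mechanics (including the random tie-break).

Code:
import random

# Higher number = higher-priority layer; the assignments are placeholder assumptions.
PRIORITY = {"occupant": 3, "pedestrian": 2, "cyclist": 2, "domestic_animal": 1, "other": 0}
LAYERS = sorted(set(PRIORITY.values()), reverse=True)

def layer_vector(harms):
    """Total estimated harm per priority layer, highest-priority layer first."""
    totals = {}
    for obj, harm in harms.items():
        layer = PRIORITY.get(obj, 0)
        totals[layer] = totals.get(layer, 0.0) + harm
    return tuple(totals.get(layer, 0.0) for layer in LAYERS)

def choose_maneuver(options):
    """options: list of (name, {object_class: estimated_harm}) pairs.
    Compare layer by layer, highest layer first; if several options tie at
    every layer, pick one at random (the 'no good solution' case)."""
    best = min(layer_vector(harms) for _, harms in options)
    tied = [name for name, harms in options if layer_vector(harms) == best]
    return random.choice(tied)

# Example: swerving risks the cyclist, hard braking risks the occupants slightly.
print(choose_maneuver([
    ("swerve", {"cyclist": 0.4}),
    ("brake_hard", {"occupant": 0.1}),
]))  # prints "swerve", because occupants sit in the highest-priority layer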
 
Why not?
[note: think about gambling, where there are no known outcomes but the best strategy is often known. And computers win ALL of those games.]
Thank you kindly.
With gambling, humans have assigned the weights, as described above. A good example is counting cards. Computers gamble based on statistics, which only works once you've adequately represented the scenarios (and outcomes) mathematically; the objective outcomes have weights assigned to them. Moral dilemmas are objective and can easily be described mathematically, but their outcomes are subjective at best. Humans would have to concretely agree on weights to make them objective.
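
For instance, once humans have agreed on concrete weights, the machine's part reduces to plain expected-value arithmetic. A hypothetical sketch (the probabilities and costs below are invented for illustration):

Code:
def expected_cost(outcomes):
    """outcomes: list of (probability, human_assigned_cost) pairs."""
    return sum(p * cost for p, cost in outcomes)

# Invented numbers: 30% chance of a costly collision if we stay in lane,
# vs. a mild guaranteed cost and a small chance of something worse if we swerve.
stay_in_lane = [(0.7, 0.0), (0.3, 10.0)]
swerve = [(0.9, 1.0), (0.1, 5.0)]

options = {"stay_in_lane": stay_in_lane, "swerve": swerve}
best = min(options, key=lambda name: expected_cost(options[name]))
print(best, expected_cost(options[best]))  # swerve 1.4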

Those certainly don't match our current traffic laws.
There are no laws that I know of which explicitly say you cannot hit a dog to save a child's life. That doesn't mean it's illegal.

If there's a freak accident where you were following all the laws and had to choose between your family and a pedestrian (let's say hit the pedestrian or drive off a cliff), they aren't going to put you in jail for hitting the pedestrian. An accident is an accident.
It's different if there were an alternative, but in this scenario there isn't.
 
Those certainly don't match our current traffic laws.

Thank you kindly.

There is no point-scoring system for kills while driving.

In fact, in the US you can legally kill pedestrians, bicyclists, motorcyclists, animals, and other drivers if you meet the right conditions.

We have the concept of "accidents" that covers killing people if you are simply a bad driver but broke no traffic law. Not all countries are as forgiving; they require you to do everything reasonable to avoid a collision, or you can go to jail if someone dies.