Autopilot lane keeping still not available over 6 months after delivery

One thing that I think is difficult is this rare case - what if the human driver is choosing to get into a minor accident in order to avoid something more serious? Or, looked at another way, should the autonomous system prioritize the occupants of the vehicle, or the most number of lives/automobiles saved or unaffected? This is where things will get complicated.
Exactly. This is a problem that current technology cannot possibly handle, and that we are likely decades away from solving. It has been wrestled with for over 70 years. From Wikipedia, quote:
------------------------
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws, also known as Asimov's Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
-------------------
Source: Three Laws of Robotics - Wikipedia, the free encyclopedia
 
Pretty sure the Three Laws need a little tweaking. You can't fool me. I saw I, Robot. :cool:
 
Also, kind of off-topic, but stop signs aren't as big of a deal as people are making them out to be. The car reads a stop sign (like Tesla says it does) and slows to a stop. You then figure out when it's your turn, and press the pedal to continue your trip. Not rocket science. The next step is just having cars that can communicate with each other to know who gets to go when. Adding a sensor to broadcast the status of the light at an intersection doesn't sound that hard either.
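To illustrate that phased idea, here's a rough sketch of the stop-sign flow as a little state machine. Every name and threshold in it is invented for this post, not anything Tesla has published:

```python
from enum import Enum, auto

class StopState(Enum):
    APPROACHING = auto()   # sign detected, car is braking
    STOPPED = auto()       # car holds at the line, waits for the human
    PROCEEDING = auto()    # human pressed the pedal, car continues

def step(state, sign_detected, speed_mps, driver_confirms):
    """One tick of the phased stop-sign flow: the car handles the stop,
    the human decides when it's their turn (all hypothetical)."""
    if state == StopState.APPROACHING and sign_detected and speed_mps < 0.1:
        return StopState.STOPPED
    if state == StopState.STOPPED and driver_confirms:
        return StopState.PROCEEDING
    return state

# Car rolls up to the sign, halts, then the driver taps the pedal:
s = StopState.APPROACHING
s = step(s, sign_detected=True, speed_mps=0.0, driver_confirms=False)  # -> STOPPED
s = step(s, sign_detected=True, speed_mps=0.0, driver_confirms=True)   # -> PROCEEDING
print(s)  # StopState.PROCEEDING
```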
That is not autonomous driving though (which is what is being discussed). Autonomous driving requires no driver input. The Google car being proposed has no driver controls (no steering wheel or pedals) other than a start and a stop button. A smartphone app summons the car to pick you up, with the destination set in advance.

It is supposed to be able to replace a taxi driver, and something that requires passenger interaction would not serve that purpose.
 
It doesn't sound like you read everything I wrote. Tesla is doing it in phases, with less and less driver intervention required. Currently, Google cars have no problem navigating a 4-way stop. It's not that hard to wait 10 seconds and check the status of the intersection.
 
You're really oversimplifying the problem though. You can't have autonomous cars wait 10 seconds at an intersection for it to be clear. For starters, the human drivers behind them will quickly get upset. There will be road rage accidents. Second, if multiple autonomous cars come to an intersection, using this method they'd sit there forever: none would move. There would be traffic jams, etc.

The Google vehicles will slow considerably when they encounter an unexpected situation. That might be OK for keeping a prototype safe, but it doesn't really work in the real world. It leads to congestion and all of the other issues I mentioned.
 
The article linked below goes into a lot of detail on the limitations of the Google car. Currently, for 4-way stops, it "cheats" by going extremely slow. There are also plenty of other situations described there that it can't handle at all (for example, making a left turn across continual oncoming traffic).
http://www.technologyreview.com/news/530276/hidden-obstacles-for-googles-self-driving-cars/
 
If you can't stay in your own lane while driving then I have something to tell you...


Literally the most useless feature a car could have. Would never want it or need it.


Sounds like something that would assist a drunk person getting home... but then a drunk person shouldn't be driving in the first place, so what's the purpose of such a feature?


I sure as heck don't want a car driving me. I do the driving, not the car, simple stuff. Everything is perfect the way it is. No such feature is needed. :smile:
 
I respect your totally incorrect assumptions. :) You're wrong though. My Model S rumbles the wheel when I drive over a line without using my turn signals. I consider myself an experienced driver, and a cautious and attentive one at that, and I'm not embarrassed to say I feel that rumble often. With lane keeping, instead of me turning the wheel to stay in the lane, the car would do it for me. It's as simple as that.

As you're not a Tesla owner, you don't yet realize how often you likely drive on or over the lines because you don't have the future space car alerting you. It happens. It's not safe. And with lane keeping, I'll be a safer driver.
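And the basic control idea behind lane keeping isn't exotic. Here's a toy proportional-steering sketch, with gains and names invented for illustration (this is not Tesla's implementation):

```python
def lane_keep_steering(lateral_offset_m, heading_error_rad,
                       kp_offset=0.15, kp_heading=0.8, max_cmd=0.3):
    """Steer back toward the lane center in proportion to how far off
    we are and how much we're angled away (illustrative gains only)."""
    cmd = -(kp_offset * lateral_offset_m + kp_heading * heading_error_rad)
    return max(-max_cmd, min(max_cmd, cmd))  # clamp to a safe steering range

# Drifting 0.4 m right of center and angled slightly further right:
print(lane_keep_steering(0.4, 0.05))  # about -0.1: a gentle corrective steer left
```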
 
I think Mr X was being totally sarcastic, at least that's how I read it :biggrin:
 
You're really oversimplifying the problem though. You can't have autonomous cars wait 10 seconds at an intersection for it to be clear. For starters, the human drivers behind them will quickly get upset. There will be road rage accidents. Second, if multiple autonomous cars come to an intersection, using this method they'd sit there forever: none would move. There would be traffic jams, etc.

Two autonomous cars would realize they were both there and communicate with each other to work out who goes first.
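Something like this handshake would already be deadlock-free. All the message fields and names here are made up for illustration; no real V2V standard is implied:

```python
def negotiate_order(broadcasts):
    """Each car broadcasts (arrival_time, vehicle_id); everyone sorts the
    same shared list, so all cars compute the same order. Ties on arrival
    time break on ID, so two cars can never wait on each other forever."""
    return sorted(broadcasts, key=lambda b: (b["arrival_time"], b["vehicle_id"]))

arrivals = [
    {"vehicle_id": "car_B", "arrival_time": 12.0},
    {"vehicle_id": "car_A", "arrival_time": 12.0},  # exact tie with car_B
    {"vehicle_id": "car_C", "arrival_time": 13.5},
]
print([b["vehicle_id"] for b in negotiate_order(arrivals)])
# ['car_A', 'car_B', 'car_C']: car_A wins the tie, nobody deadlocks
```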

We could argue for days about possible solutions to problems, and we would be no closer to convincing each other. I am just listening and looking at the big players, and they are all incredibly optimistic. Uber wants autonomous fleets as early as 2020.

If we have exponential progress, then it will probably be here sooner than that. Humans are horrible at thinking exponentially. Time will tell. Hopefully we are all around to remind each other who was right :tongue:
 
If you can't stay in your own lane while driving then I have something to tell you...
Literally the most useless feature a car could have. Would never want it or need it.
I sure as heck don't want a car driving me. I do the driving, not the car, simple stuff. Everything is perfect the way it is. No such feature is needed. :smile:

I totally agree with the bolded sentence.

I won't ever have autopilot (unless... TSLA does REALLY well :)) so this is academic for me. But after a recent trip to Reno, I can't agree that lane keeping is a useless feature. And it has nothing to do with me being unable to stay in the lane. When you're driving for 2 or 3 hours at a stretch on gently curving interstates, it would help a lot with fatigue to be able to let go of the wheel, stretch your arms, and just relax for a bit while the car safely motors on. There were several times when my left wrist (which does the lion's share of holding/adjusting) felt a little strained from the repetition. Lane keeping would significantly reduce that.

And I'm not saying do that in traffic or around Oregon's mountain curves or anything like that.
 
I think we need to keep any moral decisions away from the cars. The car should simply try to avoid a collision while remaining on the roadway or other designated "free space" (including medians or unoccupied grass/fields beside the road). In other words, it should never intentionally veer off the road into a crowd, but there should be no logic prioritizing occupants either. The priority in an emergency should be to stop the vehicle quickly and safely (to prevent rear collisions). If it has to make an evasive maneuver beyond braking, it should choose the clearest "free space" path. If a collision is unavoidable, the car should choose to hit the object that is furthest away and/or traveling in the same direction; specifically, it should use steering and braking to shed as much kinetic energy as possible before impact. I know I'm oversimplifying, but I think letting physics be that "priority system" will do much better than human drivers.

I think this is a really interesting issue. I'm not sure how to keep all moral decisions away.

Take, for example, a situation where the car either has to run down a pedestrian in the roadway, very probably killing them or at least causing severe injury, or has to veer off the road into a solid object like a cement divider or a post. The second option would probably mean a great deal of damage to the car and some injury to its occupants, but in all likelihood, thanks to the crumple zones and airbags, nothing life-threatening. What decision does the car's computer make?

You could easily make an argument for either option. The option to save the life is easy to argue for. But on the other side you could argue that your own car should never choose to injure you, no matter the cost to someone else, and that its number one priority should be to protect its occupants.

I think this is very interesting stuff to think about.
 
Can you tell me how to make a left turn when there is continual oncoming traffic? Because I don't know how to do that either, and I'd certainly like to know...
You have to make a judgment about whether the traffic gap is large enough for you to make it. Examples of roads where this can happen:
1) turning left from a center turn lane into a smaller road
2) merging from a smaller road (at a Y junction) onto the main road, where you have to yield to the through traffic

I have been on a road like the latter example, and it took a good 10-15 minutes before a gap large enough to make the turn appeared. My guess is that the frame rate of the lidar is not high enough to detect the gap reliably. And it's a situation where defaulting to going extremely slow will not solve the problem.
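The judgment itself is easy to state, even if sensing it reliably is the hard part. A minimal gap-acceptance sketch, with every threshold invented for illustration:

```python
def gap_is_acceptable(gap_distance_m, oncoming_speed_mps,
                      turn_time_s=4.0, margin_s=2.0):
    """Accept the gap only if the oncoming car takes longer to arrive
    than the turn takes plus a safety margin (illustrative values)."""
    time_to_arrival_s = gap_distance_m / max(oncoming_speed_mps, 0.1)
    return time_to_arrival_s > turn_time_s + margin_s

# Traffic at 20 m/s (~45 mph): a 120 m gap gives exactly 6 s, not enough
# against a 4 s turn + 2 s margin; 150 m gives 7.5 s and is accepted.
print(gap_is_acceptable(120.0, 20.0))  # False: keep waiting
print(gap_is_acceptable(150.0, 20.0))  # True
```

The hard part, as noted above, is estimating the gap distance and the oncoming speed accurately at range, which is where the sensor frame rate comes in.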
 
I think this is a really interesting issue.

For me, it is the most interesting issue in the debate. Autonomous driving is coming, like it or not. I'll argue that it's the right direction, but I'm sure I'll have wise, well-informed opponents. The details of how it deals with 4-way stops and left turns are negligible in the long term. Difficult now, for sure, but that's a hurdle that will be cleared. What's less simple are the ethical questions of how the car should behave when it's smart enough to see what we don't.
 
Take, for example, a situation where the car either has to run down a pedestrian in the roadway or has to veer off the road into a solid object... What decision does the car's computer make?
And that is the type of situation that Isaac Asimov was struggling with over 70 years ago when he formulated his now famous Three Laws of Robotics that I quoted up thread. I agree that this sort of issue is the most interesting part of the autonomous driving technological challenge. Clearly the Google autonomous cars currently do not attempt to evaluate those kinds of complexities when faced with a potential accident situation. But at some point autonomous driving software could evolve to deal with such complexities.
 
In your specific scenario, I believe the fallback should be: damage to the car, while maintaining a low probability of injury to the occupants, is preferable to hitting an unprotected pedestrian/cyclist/motorcyclist. Other people can argue about the exact number, but I believe that probability of injury should be 1% or less. So it's still physics-based, with a priority on the occupants. I say this because we're assuming the car is following the rules, so it's on an active roadway and has right of way. Situations like the one you're describing would essentially be random or intentional "breaches" of the active roadway.

We can keep coming up with more and more complicated moral dilemmas, like "What if an adult jumps out from the left side of the road and a child jumps from the right side with nowhere to turn? Who does the car hit?" I propose it should select the pedestrian that would be hit with the least energy. Maybe they're three inches further away? With more sophisticated modeling, it could probably be designed to hit the unavoidable pedestrian in a way that's least likely to injure them.
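Purely to illustrate that least-energy rule (and why "three inches further away" can matter), here's a sketch with invented masses and braking numbers; nothing here comes from a real system:

```python
def impact_energy(mass_kg, speed_mps, distance_m, max_brake_mps2=9.0):
    """Kinetic energy left at impact after full braking over distance_m,
    from v_impact^2 = v^2 - 2*a*d; zero if the car can stop in time."""
    v_impact_sq = max(speed_mps ** 2 - 2.0 * max_brake_mps2 * distance_m, 0.0)
    return 0.5 * mass_kg * v_impact_sq

def least_harm(options, mass_kg=2100.0, speed_mps=20.0):
    """Among unavoidable impacts, pick the one reached with the least energy."""
    return min(options, key=lambda o: impact_energy(mass_kg, speed_mps, o["distance_m"]))

# 0.08 m (about three inches) of extra braking room measurably lowers the energy:
options = [{"name": "left", "distance_m": 12.0},
           {"name": "right", "distance_m": 12.08}]
print(least_harm(options)["name"])  # 'right'
```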

Most importantly, I guess what I'm getting at is that we need to stop somewhere well before complicated ethics and morals. And, for me, that has to be somewhere before the car starts accessing a database of a pedestrian's net worth or education level or age or "value to society" and comparing it to the occupants of the vehicle before deciding who to injure or kill. Imagine the horrors of some kind of real-time point value based on a pedestrian's age or clothes or ethnicity. Or special "Avoidance Insurance" you can purchase to be added to a database, so cars give you and your family priority against being hit (using facial recognition, a bracelet, or something transmitted from your cell phone)?

I'd be very interested to know how many of today's serious accidents would be classified as completely unavoidable given a perfect, distraction-free reaction time with expert defensive driving techniques. If I had to guess, these ethical decisions would be incredibly rare, akin to being hit by lightning.

For now, if we can just get started with Autopilot steering... :smile:
 