
Autopilot lane keeping still not available over 6 months after delivery

You will not find this on any production car, available for legal use, in the next 5 or so years--maybe 10.

Nissan has a stated goal of autonomous driving by 2020. I have a friend who works in that group. He's smart enough not to divulge any details, but I don't think they're far off that five-year timeline.

Google has just started their autonomous city cars, not "production" yet but functional.

In summary, I believe you're wrong above.
 
My degree is in bioengineering, so I'm like half an engineer. I remember reading that it took 8 years for 1% of the human genome to be sequenced. Everyone thought it would take decades to finish. It took around 3 more years, if I remember right. Exponential growth is not intuitive, and it makes fools of us all.
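
The arithmetic behind that intuition gap is simple. Here's a generic illustration of doubling (not the actual genome-project figures):

```python
import math

# If capability doubles every period, going from 1% complete to 100%
# complete takes only log2(100) more doublings -- not another hundred
# times the effort already spent.
doublings_needed = math.log2(100 / 1)
print(f"{doublings_needed:.1f} more doublings")  # ~6.6
```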
 
That's funny. Are you the CEO of an auto company (serious question)? I ask because at the shareholder meeting Elon said it would be 1-3 years before you could basically sleep in the car (technology-wise, not legally). I would think if anyone had a good idea about the timeline it would be him, but I don't actually know you. Google, Uber, etc. all think driverless cars will be here by 2020.

I fully realize I will remain responsible for what my vehicle does until the laws catch up. I'm just looking for a car that drives as well as I do on the freeway.

I'm not asking to sleep in the back. I am just asking for less attention than I already put in. Maybe I need to look up every 15 seconds or so. Does a pilot constantly look around and check everything as if he were hand-flying the plane when it is on autopilot? Probably not. They relax and let it do its thing, knowing that if things change the system will let them know and they will take over ASAP. FYI, this is only slightly less engagement than the average "texting while driving" teenage girl gives even without autopilot.

I am imagining a system that, on the freeway, would require you to be present but not really all that alert (assuming good weather, not a lot of merging into your lane, etc.). Seems like this should easily be possible with the current hardware.

"Less fatigue" by having to turn the steering wheel 75% less doesn't even sound worth it to me. In fact, it sounds irresponsible to release something like that to the public. I already have trouble paying attention in stop and go traffic with the current TACC. If I have to steer significantly less, but still pay attention, that sounds worse than just being engaged with the road.

FWIW, I say this having been in a 65 mph accident in a Tesla and having hopped out without so much as whiplash. So we aren't talking about a super high bar here. If a deer jumps in front of the car, I don't expect not to crash, just like I probably would crash even if I were in full control.

Does a plane have to worry about dogs jumping out in front of the plane at altitude? What you are asking for is not what Elon had talked about.

Paying attention means being able to intercede if needed but you can lean back and look around and enjoy the scenery.
 
Does a plane have to worry about dogs jumping out in front of the plane at altitude? What you are asking for is not what Elon had talked about.

Paying attention means being able to intercede if needed but you can lean back and look around and enjoy the scenery.

*This* is what's possible.

I've been very very negative about "driverless cars" because the pattern-matching problem is damn near impossible. When I drive, if I see deer standing 5 feet off the side of the road, I slow way down because they are quite likely to jump out in front of my car. You *cannot* currently build an automatic system to do that, and I don't think you will be able to in my lifetime. Can the car handle "Please navigate around these randomly placed cones through dirt to bypass the emergency construction crew?" Can the car handle "bridge out, no warning"? Can the car handle whiteout road conditions? How about children at the side of the road, much like deer, likely to dart out into the street in front of you? How will the car do at driving on grass, and at identifying which parts are actually road and which are swamp? Etc. etc. etc. Fully automated cars are a pipe dream.

What is technologically possible is for the car to take over the *routine* stuff. Now, will this cause a problem with distracted drivers who, having nothing to do most of the time, stop paying attention at all? Maybe it will. But this is what is technologically possible.

- - - Updated - - -

There is no way it's going to be able to autopilot through stop signs, etc. For example, if it comes to a four-way stop it would have to determine whether it should go next.
Good point. That's another thing which is well beyond the existing tech level.

Actually, there's no way it can tell an all-way stop from a not-all-way stop. That's hard for *humans* -- you have to look for the *back sides* of the other stop signs! You can't rely on a database, because this could change quite quickly.

I doubt the current tech can even reliably read traffic lights, which can be mounted in weird places, vary in size, don't necessarily have red on top (really, some are side-to-side), etc., and which are often switched from "red/yellow/green" mode to "blinking yellow/blinking red" mode at night. While this is a solvable problem... how about trying to convince the car to handle a *power-out* traffic light automatically? (Legally, that's an all-way stop!)

Fully automated cars are impossible because *road conditions have a ridiculously high number of corner cases*. This is why Elon is talking about something much more restricted.
 
Nissan has a stated goal of autonomous driving by 2020. I have a friend who works in that group. He's smart enough not to divulge any details, but I don't think they're far off that five-year timeline.

Google has just started their autonomous city cars, not "production" yet but functional.

In summary, I believe you're wrong above.

Optimistic goals are nice to have. They drive you forward. Like I said, I didn't say it's technically impossible to have such technology ready by 2020--Google basically has it now. But I'm willing to bet that it won't be legal to use in a production car. We're talking about legalizing the release of the driver's liability for being "pilot in command" of their vehicle. It's one thing to be working in an R&D group on a prototype vehicle. It's an entirely different animal to make it production ready, tested, and road-legal. If Nissan thinks they'll hit that 2020 deadline, well great. Awesome. I tend to lean toward a realistic view.

Remember when Elon said that autonomous cars would have to demonstrate that they're something like 10 times safer than a human driver before they'd be considered road legal? OK, that's just one car company CEO's opinion... but let's say that's true. That study could easily take years. Congress and state legislatures aren't going to just legalize autonomous driving after a two-week study performed by a car manufacturer. It's going to take multiple independent studies, probably several years and several million miles of testing. And that's just to gather the initial data, assuming there are no hiccups along the way.

We can't even get Congress to legalize the use of side cameras instead of mirrors, and Tesla's been pushing that for years now...

But if cars are driving around with sleeping drivers in 4.5 years, great. I'm not that optimistic, and there's no evidence that 4.5 years is a realistic timeframe either.
 
*This* is what's possible.

I've been very very negative about "driverless cars" because the pattern-matching problem is damn near impossible. When I drive, if I see deer standing 5 feet off the side of the road, I slow way down because they are quite likely to jump out in front of my car. You *cannot* currently build an automatic system to do that, and I don't think you will be able to in my lifetime. [...]

Fully automated cars are impossible because *road conditions have a ridiculously high number of corner cases*. This is why Elon is talking about something much more restricted.


I'm not talking about full autonomy right now either. I'm talking about being able to watch a video or text while my car drives itself down I-5 in Seattle. There are no deer, and if there's construction I would know beforehand and be ready for it. If someone falls from the sky and starts running across the road, or an accident happens, then all I would expect is some sort of warning sound to get me to pay attention. I'm pretty sure this level will be available with the current hardware.

Also, kind of off-topic, but stop signs aren't as big of a deal as people are making them out to be. The car reads a stop sign (like Tesla says it does) and slows to a stop. You then figure out when it's your turn, and press the pedal to continue your trip. Not rocket science. The next step is just having cars that can communicate with each other to know who gets to go when. Adding a sensor to broadcast the status of the light in an intersection doesn't sound that hard either.
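
If cars (or the intersection itself) really did broadcast that kind of status, the arbitration logic is the easy part. Here's a toy sketch of first-come, first-served at a four-way stop -- the message format and names are made up for illustration, not any real V2V/V2I protocol or Tesla API:

```python
from dataclasses import dataclass

@dataclass
class StopLineBroadcast:
    car_id: str
    arrived_at: float   # timestamp when the car stopped at its stop line

def next_to_go(waiting: list[StopLineBroadcast]) -> str:
    # First-come, first-served; ties would need a deterministic rule
    # (e.g. yield to the vehicle on the right), omitted here.
    return min(waiting, key=lambda b: b.arrived_at).car_id

waiting = [
    StopLineBroadcast("car_A", arrived_at=100.2),
    StopLineBroadcast("car_B", arrived_at=99.8),
]
print(next_to_go(waiting))  # -> car_B
```

The hard part, as this thread keeps pointing out, is perception and all the intersections that don't broadcast anything.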

These are solutions that will start in big cities. That's where driverless cars matter most, because they ease traffic and parking and make our lives way better. It will also start at lower speeds (up to 30 mph). That's what Uber and Google are going after.
 
Watch this TED talk about driverless cars. Seems like it's already done: http://ted.com/talks/chris_urmson_how_a_driverless_car_sees_the_road

If it were already done, it wouldn't still be under active development. At the very end of the talk he speaks optimistically about having it ready in 4.5 years.

Unfortunately, Google has no control over legislation to make this technology legal.

It also doesn't have expertise in manufacturing cars at high volume--QA on these types of systems has to be especially rigorous at those volumes--or in building those cars at realistic costs that let a car company stay afloat.

Google's goal is to have it ready in about 5 years. Based on their progress, that to me is believable--at least for it to be close. But that's only the first (and granted, hardest) piece of the puzzle.
 
Google is posting monthly reports on accidents.

"In the six years of our project, we’ve been involved in 12 minor accidents during more than 1.8 millionmiles of autonomous and manual driving combined. Not once was the self-driving car the cause of theaccident." - 3rd page, 2 paragraph. http://static.googleusercontent.com.../selfdrivingcar/files/reports/report-0515.pdf

Isn't it >5 times safer than an average driver in the US? I just can't understand why this cannot be a backup system for now, with NO control over the car unless an accident is imminent.
 
Isn't it >5 times safer than an average driver in the US? I just can't understand why this cannot be a backup system for now, with NO control over the car unless an accident is imminent.

One thing that I think is difficult is this rare case: what if the human driver is choosing to get into a minor accident in order to avoid something more serious? Or, looked at another way, should the autonomous system prioritize the occupants of the vehicle, or the greatest number of lives/automobiles saved or unaffected? This is where things will get complicated.
 
Fair enough. I was talking about the engineering part of it, not the legal, regulatory and political part. Sorry if I misread your statement.
I think the engineering obstacles are non-trivial too. The Google cars are heavily map-based. They can't handle traffic lights that aren't on their map, nor four-way stop signs, construction zones, or potholes. What they do there is fall back to a slow-speed mode. They also can't tell whether an obstacle can be safely driven over, so they'll drive around it no matter what (even if it's harmless, like crumpled paper).
http://www.technologyreview.com/news/530276/hidden-obstacles-for-googles-self-driving-cars/

There are other issues too, like reducing the cost of the Lidar sensor (which is $70k for the Google Car), making it smaller, and not having to mount it so high.
 
One thing that I think is difficult is this rare case: what if the human driver is choosing to get into a minor accident in order to avoid something more serious? Or, looked at another way, should the autonomous system prioritize the occupants of the vehicle, or the greatest number of lives/automobiles saved or unaffected? This is where things will get complicated.

I think we need to keep any moral decisions away from the cars. It should simply try to avoid a collision while remaining on the roadway or otherwise designated "free space" (including medians or unoccupied grass/fields to the sides of the roads). In other words, it should never intentionally veer off the road into a crowd, but there should be no logic prioritizing occupants. The priority in an emergency should be to stop the vehicle quickly and safely (to prevent rear collisions). If it has to do an evasive maneuver beyond braking, it should choose the clearest "free space" path. If a collision is unavoidable, the car should choose to hit the object that is furthest away and/or traveling in the same direction, using steering and braking to shed as much kinetic energy as possible before impact. I know I'm oversimplifying, but I think allowing physics to be that "priority system" will do much better than human drivers.
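
To make that "let physics be the priority system" idea concrete, here's a toy sketch (my own illustration, not anything Tesla has described): rule out any path that leaves the drivable free space, prefer a collision-free path among what's left, and otherwise pick the option with the least kinetic energy at impact:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    stays_in_free_space: bool   # road, median, unoccupied grass, etc.
    collision: bool             # does this path still end in an impact?
    closing_speed_mph: float    # relative speed at impact (0 if no collision)

def pick_maneuver(options: list[Option]) -> Option:
    # Never consider paths that leave the designated "free space".
    allowed = [o for o in options if o.stays_in_free_space]
    # Prefer collision-free paths; otherwise minimize kinetic energy at
    # impact, which scales with the square of the closing speed.
    return min(allowed, key=lambda o: (o.collision, o.closing_speed_mph ** 2))

options = [
    Option("brake straight",     True,  True,  closing_speed_mph=25.0),
    Option("swerve to shoulder", True,  False, closing_speed_mph=0.0),
    Option("veer into crowd",    False, False, closing_speed_mph=0.0),  # never allowed
]
print(pick_maneuver(options).name)  # -> swerve to shoulder
```

No occupant-vs-pedestrian weighting anywhere -- just geometry and speed.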
 
The speculation of Model S 2.0 with different sensors is cool. I'm fine with upgrades coming out that make what I have obsolete, without being upset about it. Heck, if it's actually better and actually does something besides lugging around useless hardware, I may even upgrade. However, if they release Model S 2.0 with new sensors before they make good on existing promises for autopilot with the existing hardware... we're going to have some serious problems.
 
I am hoping for a system that can let me dink around on my phone, or browse this forum while cruising along the freeway at 60 mph. If visibility becomes limited, weather becomes bad, or construction is sensed, I would hope that it alerts me so I can step in.
What could possibly have given you the impression the autopilot would allow that? Reducing fatigue for the reasons cited above (long, open stretches, stop-and-go traffic) is what lane keeping and TACC are designed to do. The autopilot is driver assistance, not autonomous driving. You are going to be disappointed, but it's due to your own unrealistic expectations.
 
Elon keeps using the aircraft autopilot comparison, which assumes everyone is a pilot with AP familiarity. Those who use APs frequently will know that they will not stop you from hitting something (CFIT, other aircraft, dirt in general, B1RDs, etc.). Tesla will need to do some education in addition to a software rollout if they are going to prevent the misconceptions from turning into accidents.
 
There is no way it's going to be able to autopilot through stop signs, etc. For example, if it comes to a four-way stop it would have to determine whether it should go next. At a regular intersection it would have to watch for cars coming from the side. Trying to merge into traffic? Nope, not going to happen.


Every self-driving car being tested in California is managing four-way stops every day. We will likely have low-speed Google cars as an urban Uber-like service by the end of this decade. I don't believe we will have self-driving Teslas for many years, however. Higher speeds off the interstate carry a lot of risks.

- - - Updated - - -

The speculation of Model S 2.0 with different sensors is cool.

My recollection is that a fully self-driving Tesla will require redundant, not new, sensors. Presumably more processing power too. But if they can't get lane keeping released, how close can we be to full self-driving?