
Why AP 2.0 Won't Be Here Soon, and It Won't Be What You Think It Is

The argument in the opening message is that, because Level 4 autonomy won't be here for years, people should stop waiting for Autopilot 2.0 hardware and buy the car now. I see too many problems with this argument.
 
Or how about a pedestrian or bike running across the street to catch the last seconds of the walk sign?

I'm all for autonomous driving on the highway where things are somewhat more uniform and repeatable.

But AD for local, city driving that has thousands more unpredictable events/situations? Never going to happen. The Google examples are extremely controlled, slow-moving experiments over the same small set of roads. I would never call that "AD".

Also, all this virtual mapping of highways? That's all great for understanding the static or slowly changing nature of the highways... but it does NOTHING for the other 95% -- the other traffic on the highway when I'm on it. They're really separate problems. Tesla can collect billions of miles of data on these highways to get a really good idea of the landscape. It doesn't do me a damn bit of good when that pickup truck cuts me off because he didn't see me in his rear-view mirror.

At this point, I think all that is being proposed is the car stop at traffic lights and stop signs, then wait for the driver to tell it to go.

Obviously the current cars don't begin to have the sensor package to clear stop signs and traffic lights, but I think they should be able to recognize and react to them. The current package will also never be able to make 90-degree turns.

Tesla is already getting to the limits of its sensors for dealing with traffic. From some videos I've seen, it does have defenses in place for cars merging into you; it was the stationary objects the system wasn't already handling, which is why they are mapping to handle them.
 
Just curious: what are people imagining that autopilot is going to do at traffic lights? Let's hand-wave away the reliable red-light detection problem and focus on where, exactly, to stop: some lights are immediately above the entrance to the intersection, some hanging over the middle, some at the far edge. Sense both the light and the "limit line" on the road? What if the limit line is worn away, or obscured by water or snow? One can, again, wave the fleet learning magic wand at this, but what are cars supposed to do as they approach (thus far) "unknown" red lights? Perhaps a new AP behavior is required that allows it to convey "I detect something you, the driver, need to react to but I don't have enough information (yet) to handle it myself." In order to be useful it would need to be more specific than a generic "take over immediately -- I'm giving up" like it does today.

Yeah, this will likely be: geotagging each red light/stop sign, a camera capable of clearly seeing the road markings, inferring the location based on the location of the light/sign, and eventually Car-to-X (Infrastructure), where the lights communicate their current status, such as when they're going to turn yellow and how long before they turn red.
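Concretely, I'd picture each light getting a record something like this, plus an eventual status broadcast -- every field name here is invented for illustration, not any real Tesla or Car-to-X message format:

from dataclasses import dataclass

@dataclass
class GeotaggedSignal:
    signal_id: str
    lat: float                   # geotagged position of the signal head
    lon: float
    stop_line_lat: float         # where the car should actually stop
    stop_line_lon: float

@dataclass
class SignalStatus:              # hypothetical Car-to-X broadcast
    signal_id: str
    phase: str                   # "green" | "yellow" | "red"
    seconds_to_next_phase: float

def will_be_red_on_arrival(status: SignalStatus, eta_s: float) -> bool:
    """Crude check a car could make once lights broadcast their timing."""
    if status.phase == "red":
        return True
    if status.phase == "yellow":
        return eta_s > status.seconds_to_next_phase
    return False                 # green: ignoring the later (unknown) phases here

print(will_be_red_on_arrival(SignalStatus("light-042", "yellow", 4.0), eta_s=6.0))  # True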

I think this stresses the importance of fleet learning. When you have multiple cars detecting a red light and stopping at a specific point, future cars can look for the light in a specific location based on the GPS data and work more reliably. Personally, I'd expect some form of camera or radar based triangulation to help refine the GPS location data. Not unlike navigating with stars, but using landmarks like a building, store sign, or street lamp to pinpoint the car's location at each red light.
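As a toy sketch of the aggregation step (the 10% rule and the numbers are made up, not anything Tesla has described), pooling fleet stop positions for one geotagged light might look like:

def estimate_stop_line(stop_offsets_m):
    """stop_offsets_m: how far short of the geotagged signal each fleet car
    stopped, in meters. Cars stuck behind a queue stop much further back, so
    averaging would drift the estimate backwards; taking the forward-most
    ~10% of stops biases it toward the real limit line instead."""
    ordered = sorted(stop_offsets_m)
    return ordered[max(0, int(0.1 * len(ordered)))]

# A handful of stops at one light: two cars right at the line, the rest queued.
print(estimate_stop_line([6.1, 6.4, 8.0, 9.5, 12.0, 18.0, 22.0, 30.0]))  # -> 6.1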
 
At this point, I think all that is being proposed is the car stop at traffic lights and stop signs, then wait for the driver to tell it to go.

OK, another scenario I thought about... think about secondary roads with a 45/50 or 55 mph speed limit and stop lights every few miles. In some states I know of (like New Jersey), these intersections have massively long yellow lights. What is the car to do if you're traveling along at 50 mph and the light turns yellow? Most people who travel these roads know that the long yellow lights allow you plenty of time to breeze right through long before the red. But the car doesn't know that. If the car is close enough to detect a light changing from green to yellow, will it be too close to safely stop, when the driver behind you fully expects you to run the yellow light? At some point the car is going to have to decide that it's actually OK to run the yellow light to prevent being rear-ended or stopping so abruptly that it's unsafe for the passengers.
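Just to put rough numbers on that call -- with an assumed comfortable deceleration, reaction time, and intersection width (none of these are Tesla's actual parameters), the stop-or-go decision looks something like this:

def stop_or_go(speed_mps, dist_to_line_m, yellow_s,
               decel_mps2=3.0, reaction_s=1.0, intersection_m=20.0):
    # Distance needed to stop comfortably: react, then brake at decel_mps2.
    stopping_dist = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
    if dist_to_line_m >= stopping_dist:
        return "stop"
    # Otherwise, can we clear the far side of the intersection before the red?
    if dist_to_line_m + intersection_m <= speed_mps * yellow_s:
        return "go"
    return "dilemma zone"       # neither option is clearly safe

v = 50 * 0.44704                # 50 mph is about 22.4 m/s
print(stop_or_go(v, 150, yellow_s=6.0))  # far enough back -> "stop"
print(stop_or_go(v, 90,  yellow_s=6.0))  # long NJ yellow -> "go"
print(stop_or_go(v, 90,  yellow_s=3.0))  # short yellow -> "dilemma zone"

At 50 mph those assumptions put the comfortable stopping distance around 100 m, which is exactly why a long yellow makes "keep going" the sane answer anywhere inside that.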

And the idea of Car-to-light infrastructure to communicate light status to the car is a pipe dream. The municipalities that have to pay for, install and maintain this equipment aren't going to spring for more complicated hardware so your car can know what your eyes already see. There's nothing in it for them to support that.
 
When you have multiple cars detecting a red light and stopping at a specific point ... [emphasis added]
I might buy that if the Teslas stopping at the light were always the first in line, but they're not. So there will actually be a blurry region where they stop, (hopefully) no closer to the limit line than the front of a Tesla, but extending back arbitrarily far, depending on traffic patterns.

You didn't address my basic doubt about the fleet learning approach: what is the default behavior going to be for lights it hasn't yet learned about?
 
Huh? Come again?

Sorry -- how close does the car have to be to detect the status of the light? What if the light is already yellow when the car is close enough to detect it (meaning the car doesn't know how long the light has been yellow)? If you're traveling pretty fast (say 45 mph) and you get close enough to detect a yellow, what does the car do? Slam on the brakes, possibly causing a rear-end collision? Does it decide there isn't enough room to stop safely at the stop line, so it runs the yellow light, which could at any moment turn red and create another possible accident (or generate a ticket at a red-light camera)? "I'm sorry, judge, but it wasn't me that ran the red light, it was the car driving!"
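To make the "how close is close enough" part concrete, the same back-of-the-envelope math (again with assumed deceleration and reaction-time values) gives the distance at which the camera would already need a reliable read on the light:

for mph in (25, 35, 45, 55):
    v = mph * 0.44704                       # convert to m/s
    needed = v * 1.0 + v**2 / (2 * 3.0)     # reaction distance + braking distance
    print(f"{mph} mph: needs to classify the light from roughly {needed:.0f} m out")

If the system can't reliably read a signal head from ~90 m at 45 mph, the slam-the-brakes-or-run-it dilemma shows up on every approach.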

What I'm saying, again, is that even on straight roads with one traffic light and no turns, there are still plenty of ambiguous situations where it will be hard for a computer to decide what to do.
 
The argument "I'm not smart enough to think of a solution to this outlier problem" does not mean that the problem is not easily solvable. And the emphasis on perfection is misplaced -- the system has to be only ~10x better than humans to be viable, a rather low bar. Remind me how many people will die in traffic accidents and how many more will be injured even with a 10x better system. This is not going to be all that difficult (technologically). What's going to be disturbing to people is that its failure modes are going to be different, strange, and unfamiliar; so we'll ask how the system could be so stupid rather than being impressed that it is so much safer overall.
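For scale, using ballpark US figures of roughly 35,000 traffic deaths and about 2.3 million injuries per year:

deaths, injuries = 35_000, 2_300_000   # rough US annual figures
factor = 10                            # "only ~10x better than humans"
print(f"~{deaths // factor:,} deaths and ~{injuries // factor:,} injuries per year remain")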
 
Just curious: what are people imagining that autopilot is going to do at traffic lights? Let's hand-wave away the reliable red-light detection problem and focus on where, exactly, to stop: some lights are immediately above the entrance to the intersection, some hanging over the middle, some at the far edge. Sense both the light and the "limit line" on the road? What if the limit line is worn away, or obscured by water or snow? One can, again, wave the fleet learning magic wand at this, but what are cars supposed to do as they approach (thus far) "unknown" red lights? Perhaps a new AP behavior is required that allows it to convey "I detect something you, the driver, need to react to but I don't have enough information (yet) to handle it myself." In order to be useful it would need to be more specific than a generic "take over immediately -- I'm giving up" like it does today.
Roger, you seem to be implying this is a difficult problem, when it's already been solved by Google and others.

First, I don't see this being accomplished with AP 1.0 hardware. This is a Level 3 task which will require AP 2.0 hardware, which means multiple full color cameras and multiple radars.

Identify:

- Intersection
- Stoplight
- Stoplight status
- Crosswalk or stopping line
- Extents of intersection

When identifying a red light, stop at the crosswalk or stopping line. If the line is not visible or available, stop at the intersection extents derived from sidewalks, traffic, etc. If all else fails, warn the user and log the event.
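In rough Python, just to make that cascade explicit (every field name is a placeholder -- the hard part is the perception that fills these in reliably, not the if/else):

from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]                   # (x, y) in the car's frame, meters

@dataclass
class Scene:
    signal_state: str                         # "red" | "yellow" | "green"
    stop_line: Optional[Point] = None         # painted limit line / crosswalk edge
    intersection_edge: Optional[Point] = None # inferred from sidewalks, curbs, traffic

def choose_stop_point(scene: Scene) -> Optional[Point]:
    if scene.signal_state != "red":
        return None
    if scene.stop_line is not None:
        return scene.stop_line
    if scene.intersection_edge is not None:
        return scene.intersection_edge
    print("WARN: red light detected but no usable stop point -- driver input needed")
    return None                               # a real system would also log/upload this

print(choose_stop_point(Scene("red", stop_line=(0.0, 42.5))))  # -> (0.0, 42.5)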

Tell me what I'm missing?
 
The argument "I'm not smart enough to think of a solution to this outlier problem" does not mean that the problem is not easily solvable. And the emphasis on perfection is misplaced -- the system has to be only ~10x better than humans to be viable, a rather low bar. Remind me how many people will die in traffic accidents and how many more will be injured even with a 10x better system. This is not going to be all that difficult (technologically). What's going to be disturbing to people is that its failure modes are going to be different, strange, and unfamiliar; so we'll ask how the system could be so stupid rather than being impressed that it is so much safer overall.

Yeah, also getting a little tired of the implication that all of the autonomy engineers at google, Uber, CMU, Tesla, and everywhere else haven't thought of these "problems" that require about 15 seconds to come up with and post on a forum, and haven't been working on solutions for years. Hell, Uber is about to deploy an in-city system.

No doubt, some of these things are really hard problems. Guess what, smart people have solved a lot of really hard problems.
 
it's already been solved by Google and others
I'm pretty sure any responsible person at Google, or others, would agree with me that this is, at best, a gross overstatement.
Worked on? Sure. Impressive progress made? Absolutely. Solved? Not even close.

When identifying a red light, stop at the crosswalk or stopping line. If the line is not visible or available, stop at the intersection extents derived from sidewalks, traffic, etc. If all else fails, warn the user and log the event.

Ok, so here you almost answer the question I posed (note: I was asking a question, not making any assertion): in the (likely, imo) "all else fails" case, you warn the user and ... keep on going? Stop? Either answer requires, at a minimum, a major departure from the use model of the current AP, since now the driver has to be ready to "pitch in" -- but not simply "take over" -- at completely unpredictable times, and potentially frequently.

Don't get me wrong here; I would love stopping at traffic lights to work. But I'm an engineer and I know that there's no correlation between desirability and feasibility, as much as many people believe/wish there was.

The point of my original question wasn't so much to ask "how would their engineers solve the tough technical problems" as to ask "what do you folks, as Tesla owners/drivers, want your car to do in these situations".
 
I'm pretty sure any responsible person at Google, or others, would agree with me that this is, at best, a gross overstatement.
Worked on? Sure. Impressive progress made? Absolutely. Solved? Not even close.
Well, you must consider Elon an irresponsible physicist because he's called autonomous driving an essentially solved problem. And Google has already been driving for millions of miles under city conditions, with its car stopping at both stop signs and stop lights. So I'm not sure why you're unnecessarily skeptical of what's being done today. BTW, I'm also an engineer. The situation you outlined is not what I'd personally consider one of the more difficult problems. It's essentially a static image/object recognition problem. I think the more difficult ones will be dynamic, such as how to deal with road crews and/or traffic diversions, or someone waving a traffic baton.
 
I think the more difficult ones will be dynamic, such as how to deal with road crews and/or traffic diversions, or someone waving a traffic baton.
And I think the truly difficult problem is dealing with human predators. How do you stop nasties from waving a "traffic baton" at potential victims and having autonomous vehicles deliver them up on a platter? Yes, there are various mitigation techniques, but I imagine people are pretty clever when it comes to outwitting autonomous systems. This will scare people, no matter how much cheaper and generally safer the systems might be.
 
The obsession with full autonomy baffles me. Having said that, Tesla for now has the edge on semi-autonomous features. I prefer to drive my cars, but I do not mind if extra safety features keep me out of trouble.

Full autonomy has the potential to completely transform society and cities and save tens of thousands of lives every year in the US alone, so I can see why people are obsessed with it.
 
They've also been promising us "AI" since the '80s, and where is that today? Nowhere, really, unless you're talking about playing chess or playing Jeopardy. AP is really not much different from AI.
See, I think you either hit the nail on the head or just missed it.

I think AP, real AP, is AI, and I think based on what Musk has said about AI, he knows something we don't, or we may think we know it but have the order of magnitude off by, well, orders of magnitude.

If the cloud is the brain and all of our cars' sensors are providing input, then my car in Southern California benefits from what a car in Michigan experiences. My car will know about driving on ice without me ever driving on it. It won't have to process the millions of miles of ice driving; it will just know what ice is. It will be the same for millions of things seen by hundreds of thousands, and then millions, of Teslas. It will just know, in a neural-network way. It will know that piece of tire retread is a non-issue and that 87% of other drivers will think the same. But it will take into account that the driver ahead of you has been texting and may overreact to seeing the retread, and it will create some distance from that driver.
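A toy way to picture that cloud-brain loop -- labeled events from one region ending up in a model every car downloads. The event names are invented, and real fleet learning is obviously far more involved:

from collections import Counter

fleet_reports = [
    {"region": "Michigan",   "surface": "ice", "outcome": "traction_loss"},
    {"region": "Michigan",   "surface": "ice", "outcome": "traction_loss"},
    {"region": "California", "surface": "dry", "outcome": "normal"},
    {"region": "Arizona",    "surface": "dry", "outcome": "normal"},
]

# "Training" here is just counting outcomes per surface type, done in the cloud.
model = {}
for report in fleet_reports:
    model.setdefault(report["surface"], Counter())[report["outcome"]] += 1

def expected_outcome(surface):
    """What the shared, fleet-trained model expects on a given surface."""
    return model[surface].most_common(1)[0][0] if surface in model else "unknown"

# A Southern California car that has never touched ice still gets Michigan's lesson:
print(expected_outcome("ice"))   # -> "traction_loss"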

I may be way off, but what if Musk knows that AI is effectively here? Now the car is driving and it has the aggregate of millions of miles of training to draw from. I can't wait for it to stop because there really is a kangaroo in the road in California, and it knows what one looks like because of Australian drivers.
 