
Why AP 2.0 Won't Be Here Soon, and It Won't Be What You Think It Is

I'm pretty sure you're reading that wrong, and that both the initial mapping and the automatic braking phases happen with vehicles on 8.0.

I think once you have 8.0, the car will automatically reach for whitelist tiles for the area you're driving in. If it finds detailed data for your route, it'll brake automatically as appropriate. If your route has no data, then it'll do the initial mapping for your route but not brake without camera input.
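To make that concrete, here's a minimal sketch of that fetch-or-map behavior, assuming a simple grid-tile keying scheme. Every name here is hypothetical; Tesla hasn't published how the whitelist is structured.

```python
# Toy sketch of the whitelist-tile idea above -- tile scheme and names are
# my own assumptions, not Tesla's actual design.

def tile_id(lat, lon, cell_deg=0.01):
    """Quantize a GPS position into a coarse grid cell (roughly 1 km)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def upload_mapping_data(lat, lon):
    """Stub: the real system would geotag radar returns here for fleet learning."""
    pass

def on_stationary_radar_return(lat, lon, whitelist, camera_confirms):
    """Brake on radar alone only where detailed tile data already exists."""
    if tile_id(lat, lon) in whitelist:
        return "radar_braking"            # detailed data: radar acts as primary sensor
    upload_mapping_data(lat, lon)         # no data yet: map the route, don't brake
    return "brake" if camera_confirms else "no_action"
```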

Guessing of version numbers aside, it reads like the actual control won't happen until there's significant fleet learning with geotagging potential false braking events. I'm speculating this might not happen automatically with 8.0, but only after Tesla verifies it's working correctly.
 
I'd bet they have some development version of Autopilot with LIDAR and/or FLIR running, if only to compare results.

There is a picture of a Model S with a LIDAR bump strapped on the roof somewhere around here.

There are just as many times that someone makes an irrational decision, which turns out to be the correct decision in the end.

By definition, there must therefore be many MORE times when the rational decision would turn out to be the correct one. The rational decision is ALWAYS the one that maximizes the probable outcome. That is what rational means. If humans are deciding irrationally, the sooner that they are out of the loop, the better.

Thank you kindly.
 
There's some discussion here of the timing of the development of radar as a primary braking input. I found this in part 7 of the Electrek transcript:


Jordan Golson – The Verge
And then one last question: How long has this sort of radar primary thing been in development? Is that something you have been doing in the past couple months since the fatal accident, or has this been the primary thing all along?
Elon Musk – Tesla CEO
It’s something that I wanted to do for a while. Probably since late last year, but I was always told that it wasn’t possible, you can’t do it, it’s not gonna work, nobody else has made it work, software is too hard, sensor is not good enough, but I really pushed hard on questioning all those assumptions last 3 or 4 months. Like there got to be a way to make this work and now we believe that there is.


So Elon says they've gone from questioning prior negative assumptions on radar to (presumably soon) shipping a product in 3 or 4 months. That's pretty quick for software performing such a critical task.
 
Yes, exactly. There are just as many times that someone makes an irrational decision, which turns out to be the correct decision in the end. Computers won't be able to do that. Given the exact same set of fixed inputs, a computer will make the same decision every single time. It's just not possible to provide 100% of the inputs that humans have when driving, so humans will always have more information to draw from to make these decisions, and yes, sometimes those will be irrational ones. There are plenty of videos showing that someone didn't react to an impending crash and survived, whereas if they had swerved or tried to react (as a computer would have, given those same inputs), they would likely have died.



It will be for everyone. But in 10 years, I don't think there will be the level of autonomous driving everyone is expecting there will be. (And some people are expecting it next week). Unless, as I've said before, ALL humans are removed from the roadways, and there are "AD" only lanes. Until then (within the next 10 years), it will remain a novelty that's really not much better than what we have today. And on top of that, the regulations to even allow it on a large scale will be a decade behind that.

As I noted earlier in this thread, I truly believe we will not get to legal, government-sanctioned and regulated fully-autonomous driving without requisite infrastructure changes. Trusting the vehicle to optically see a red light is not something most people are going to be comfortable with, and just one or two failures will cause such an uproar that the government will shut it down very quickly.

What will be needed is something like a transponder at intersections that actually communicates with the car, cutting out the human-necessary visual cues altogether. Why limit the car to visual cues when it can know the state of that intersection before it's even within eyesight? How will the car know where to stop at the intersection? How does my automated vacuum cleaner know not to go through a certain doorway? It's not from visual cues; it's because I've placed a device in the doorway that sends out a beam that tells it "Stop, go back."

What we're talking about in this thread is what is possible and probable as we negotiate the dangerous state of semi-autonomous vehicles mixing with human-operated vehicles, but the move to fully-autonomous vehicles legally operating on the roadways will have to involve more than just changes to the vehicles alone if the "magnitudes of improvement in safety" are to be achieved. For me, the really interesting question is: how much autonomy will be allowed in vehicles before a complete paradigm shift is required in the infrastructure of the roadways?
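For what it's worth, here's a rough sketch of what that transponder handshake might look like. The message fields are entirely made up; real vehicle-to-infrastructure work (e.g. SAE J2735 SPaT messages) defines much richer formats.

```python
import json
import time

def broadcast_signal_state(phase, seconds_to_change):
    """What a roadside transponder might transmit, long before the light is visible."""
    return json.dumps({
        "intersection_id": "4th-and-main",      # hypothetical identifier
        "phase": phase,                         # "red" | "yellow" | "green"
        "seconds_to_change": seconds_to_change,
        "stop_line_gps": [37.7749, -122.4194],  # tells the car where to stop, no vision needed
        "timestamp": time.time(),
    }).encode()

def plan_approach(message, distance_m, speed_mps):
    """Car-side logic: decide before the intersection is even in sight."""
    state = json.loads(message)
    eta = distance_m / max(speed_mps, 0.1)
    if state["phase"] == "red" and eta < state["seconds_to_change"]:
        return "brake to stop_line_gps"         # would arrive while still red
    return "proceed"
```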
 
By definition, there must therefore be many MORE times when the rational decision would turn out to be the correct one.

But it's not binary... there are degrees of "irrational" away from "rational", so you can't really say "by definition" without allowing for some level of irrationality. And is *any* decision a human makes ever 100% rational? I don't think so, so "by definition" humans make many more irrational decisions than rational ones (pretty much all of them). I don't think computers (and AD in particular) would be capable of making an irrational decision that could actually avoid an accident or injury like a human could.
 
Guessing of version numbers aside, it reads like the actual control won't happen until there's significant fleet learning with geotagging potential false braking events. I'm speculating this might not happen automatically with 8.0, but only after Tesla verifies it's working correctly.

Lots of guessing here for all of us. :)

My guess (for whatever it is worth) is that you'll already have the radar only braking active on some routes on day one (using data from the beta testers) and it'll add new routes automatically as soon as it has a few passes across them - so every daily commute will be covered in the first week, and most routes within a month or so.
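As a sketch of that rollout guess, enabling a route segment after a few fleet passes could be as simple as a pass counter per map tile. The threshold and structure below are pure assumption; Tesla has only said the system learns from fleet data.

```python
from collections import defaultdict

PASSES_REQUIRED = 3                 # assumed; Tesla hasn't published a number

pass_counts = defaultdict(int)      # tile id -> fleet traversals logged
known_objects = defaultdict(set)    # tile id -> geotagged stationary returns (signs, bridges)

def record_pass(tile, stationary_radar_returns):
    """One traversal: note stationary objects so they won't trigger false braking later."""
    known_objects[tile].update(stationary_radar_returns)
    pass_counts[tile] += 1

def radar_braking_enabled(tile):
    """Radar-only braking turns on once the tile has been mapped enough times."""
    return pass_counts[tile] >= PASSES_REQUIRED
```

On that model, a daily commute hits the threshold within a few days, which lines up with the "first week" guess above.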
 
There's some discussion here of the timing of the development of radar as a primary braking input. I found this in part 7 of the Electrek transcript:


Jordan Golson – The Verge
And then one last question: How long has this sort of radar primary thing been in development? Is that something you have been doing in the past couple months since the fatal accident, or has this been the primary thing all along?
Elon Musk – Tesla CEO
It’s something that I wanted to do for a while. Probably since late last year, but I was always told that it wasn’t possible, you can’t do it, it’s not gonna work, nobody else has made it work, software is too hard, sensor is not good enough, but I really pushed hard on questioning all those assumptions last 3 or 4 months. Like there got to be a way to make this work and now we believe that there is.


So Elon says they've gone from questioning prior negative assumptions on radar to (presumably soon) shipping a product in 3 or 4 months. That's pretty quick for software performing such a critical task.

Thanks. That directly answers the question and also matches his Twitter and blog comments. So, unless we think he's flat out lying repeatedly:

(1) The radar use being discussed for 8.0 has only been in development for the past few months

(2) There is no way they're shipping a new sensor suite without testing a real world version of this new system.
 
But it's not binary... there are degrees of "irrational" away from "rational", so you can't really say "by definition" without allowing for some level of irrationality. And is *any* decision a human makes ever 100% rational? I don't think so, so "by definition" humans make many more irrational decisions than rational ones (pretty much all of them). I don't think computers (and AD in particular) would be capable of making an irrational decision that could actually avoid an accident or injury like a human could.

Nope. There is a single rational decision, the one that maximizes probable outcomes; every other decision is irrational. Some are obviously worse than others, but once you leave the BEST solution, you are decreasing the desirability of the probable outcome. Humans make the rational decision all the time; in fact, given a few decisions per second while driving, there are millions of rational decisions for every irrational one. If humans made many more irrational decisions than rational ones, we would ALL be dead.

Why couldn't an irrational decision by a computer actually avoid an accident in the same way that a human's could? Essentially you are saying that a human got lucky, and made a decision which decreased their probable outcome, but which nevertheless happened to come through. They drew to an inside straight, and got it. Are computers incapable of 'getting lucky'? What would that even look like? Would the straight never come through if a computer was playing? How do the cards know?

Watch this, it might help:

Thank you kindly.
 
Why couldn't an irrational decision by a computer actually avoid an accident in the same way that a human's could?

Because a computer would never make an irrational decision. It can make a wrong decision, but every decision by a computer, "by definition", is a rational decision, since given the exact same inputs, it will always make the exact same decision. There is never any irrationality introduced in systems like that... that would just make them less reliable... kinda like human beings.

Take this admittedly made-up example. Say a 55 gallon drum rolls off the truck in front of you and is literally "barrelling" down towards you. The computer is going to see this as an obstacle closing in on you at a very high speed, and in order to decrease the force of the impact, the rational decision is to apply the brakes in an emergency braking scenario. But a human at the wheel might notice the bouncing frequency of the barrel, and decide that if he/she actually speeds up, increasing the velocity towards the obstacle, they might be able to time it just enough to drive under the barrel during a bounce and avoid an accident altogether (but maybe not so much for the driver behind them). Sure, you could program a computer to handle this specific example and predict the trajectory of the object and make a similar decision, but that's a very specific edge case. How many nearly infinite and specific edge cases can we program into the system? And even if it's aware of this edge case, it's in direct conflict with the emergency braking system, which wants to stop. It's going to have to make a "judgement call", which computers are not very good at. In the end, it's just a weighted average, and it chooses the decision path with the highest weighted average. But a human? Makes what the computer thinks is an irrational decision (to speed up) and avoids the accident altogether. That's not getting lucky, it's intuition and taking into account many more factors than the computer has access to.
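That "weighted average" is essentially expected-value maximization, and a toy version shows exactly the trade-off being described. All probabilities and outcome scores below are invented purely to illustrate the barrel scenario.

```python
# Each action maps to (probability, outcome_score) pairs; higher score is better.
actions = {
    "emergency_brake":   [(0.7, -20), (0.3, -60)],   # usually a reduced-speed impact
    "speed_up_under_it": [(0.2, 0), (0.8, -100)],    # rarely clears the barrel, usually worse
}

def expected_value(outcomes):
    return sum(p * score for p, score in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # "emergency_brake" (-32 vs -80): the computer always picks the
             # higher expected value, even though speeding up sometimes wins outright
```

The human who speeds up and clears the barrel is living in that 0.2 branch; the computer, by design, never chooses it.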

This is just one example, but driving has endless examples of unpredictable scenarios that it just won't be possible to code into an algorithm so that a computer always makes the correct decision as well as, or better than, a human. At some point, good old human intuition takes over and will always be better than the computer in drastic, emergency-maneuver-type situations. That's where having the ability to make what seems to be an irrational decision, based on the inputs, is actually the better course of action.
 
Got any data to back up your claim that these split-second decisions don't turn out tragically wrong nearly as often as they are right?

Now if all you're really arguing is that there's a "long tail" of increasingly rare corner cases, each requiring increasingly large amounts of human skill (or, I suspect, more likely: luck) to pull off, then sure. But now you need to look at the bigger picture and ask: is it better for the vast majority of drivers to be saved from their own heroic but ultimately disastrous bad judgment most of the time, even if it means that now and then a course will be chosen which, in hindsight, is clearly inferior to what the best, luckiest human could have done on a really good day? Reasonable people can disagree on the answer to that, but agreeing that that's a (if not the) question is a good start.
 
Lots of guessing here for all of us. :)

My guess (for whatever it is worth) is that you'll already have the radar only braking active on some routes on day one (using data from the beta testers) and it'll add new routes automatically as soon as it has a few passes across them - so every daily commute will be covered in the first week, and most routes within a month or so.

That sounds reasonable. I saw it as a way to make the announcement now about software coming in "1 to 2 weeks" but still spend a couple months doing more robust testing before this new system actively applies the brakes. <shrug> We'll know in about a week. :)

On a side note, I'm curious if there will be any feedback when your vehicle contributes to a whitelist location. Probably not, but I feel like an optional chime or some visual on the dash would be cool. The current UI does a great job at building confidence about what the car sees. They could probably build the map faster if they integrated something like Waze and encouraged owners to drive on more roads.

And a follow up to that: What's the over/under on seeing two cars ahead in 8.0?
 
Got any data to back up your claim that these split-second decisions don't turn out tragically wrong nearly as often as they are right?

I don't think I'm claiming anything one way or the other. There obviously isn't any data to support either side of that claim, if there even was one.

What I'm saying is that this follows the old 80/20 rule. AP 1.0 has gotten us to maybe 60% to 70% of the way there. AP 2.whatever will get us to 80% in 20% of the time. Which will be "pretty good" for "driver assist". I don't think we'll ever get the last 20% to "full autonomous driving" anytime in the foreseeable future, despite what Elon Musk says.
 
On a side note, I'm curious if there will be any feedback when your vehicle contributes to a whitelist location. Probably not, but I feel like an optional chime or some visual on the dash would be cool. The current UI does a great job at building confidence about what the car sees.

I'm thinking/hoping that Tesla will give us some kind of indication that radar only braking from the whitelist is active at any given time - presumably, you're contributing to the list any time that isn't active.

(But yes, it'd be a nice bit of social engineering if they actually told you at the time you uploaded to the list with a chime or message flash. It'd make us feel more useful. :) )
 
I am a recent Model S driver. Ordered 6/24, delivered Aug. 18th.
I briefly flirted with waiting for AP 2.0 before ordering, but realized it could be this year, next year, or even longer. And even with 2.0 it might not be usable for autonomy for many years. Tesla continues to improve, and I would be waiting forever for that "next big thing".
My research, reading materials, Elon quotes, etc. tell me we are at least 5 years away and probably longer. I don't think it will be 10 years, though, as the OP suggests. Once this train gets rolling, full autonomy will be here faster than most believe. Using this analogy, I think that train is slowly leaving the station at this moment in time.
 
I am a recent Model S driver. Ordered 6/24, delivered Aug. 18th.
I briefly flirted with waiting for AP 2.0 before ordering, but realized it could be this year, next year, or even longer. And even with 2.0 it might not be usable for autonomy for many years. Tesla continues to improve, and I would be waiting forever for that "next big thing".
My research, reading materials, Elon quotes, etc. tell me we are at least 5 years away and probably longer. I don't think it will be 10 years, though, as the OP suggests. Once this train gets rolling, full autonomy will be here faster than most believe. Using this analogy, I think that train is slowly leaving the station at this moment in time.
But all of that is just so many dry leaves blowing in the breeze next to the real question: now that you have it, do you love it? Can you imagine a scenario in which waiting for something better/different would have been the right choice?
 
Because a computer would never make an irrational decision. It can make a wrong decision, but every decision by a computer, "by definition" is a rational decision, since given the exact same inputs, it will always make the exact same decision.

That is not what 'rationality' means. You seem to think that good chess programs are just as rational as bad chess programs. That is just false. Nor does a computer always make the same decision based on the same input. That is exactly what 'fleet learning' is FOR. The fleet learns from the mistake (or even the success) and performs better the next time.
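A minimal sketch of that point, with a made-up threshold mechanic: once the policy itself is updated from fleet data, the same input no longer yields the same output.

```python
class BrakingPolicy:
    def __init__(self, threshold=0.90):
        self.threshold = threshold          # radar confidence required to brake

    def decide(self, radar_confidence):
        return "brake" if radar_confidence >= self.threshold else "coast"

    def learn_from_fleet(self, was_false_positive):
        """Fleet feedback shifts the policy after real-world outcomes."""
        self.threshold += 0.05 if was_false_positive else -0.05

policy = BrakingPolicy()
print(policy.decide(0.88))       # "coast" -- below the initial threshold
policy.learn_from_fleet(False)   # fleet confirms similar returns were real obstacles
print(policy.decide(0.88))       # "brake" -- same input, different decision
```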

People have been making this argument, "sure, computers can do that, but they will never do this other thing." for as long as there have been computers. They have always been wrong. The really tough things for computers to do are always things people think are easy.

I will leave it as an exercise for any students to prove that humans don't give the same decision for the same inputs (absent any learning).

Thank you kindly.
 
That is exactly what 'fleet learning' is FOR. The fleet learns from the mistake (or even the success) and performs better the next time.

Exactly... so the inputs *are* different.

But again, the fleet learning is there to learn slowly changing or static geographic attributes of each road... not for the pickup truck that has things falling off the back, or the SUV driver checking their Snapchat who just swerved into my lane.


Nor does a computer always make the same decision based on the same input.

Then one or more of the inputs is random. Otherwise, computers wouldn't be good at what they are designed and built to do.
 
My guess is that a version of this has been in the works for some time on the sidelines at Tesla, with a significant amount being done at Bosch. Can it be misdirection or marketing spin? Sure. But between Elon's tweet and the blog post this month ("After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition.") it doesn't sound like this was the original direction.

Also, remember that 8.0 is just for whitelisting. "Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar." The actual control system, beginning with "mild braking", won't be ready until 8.1, which could be many months away.

Elon has said that the AP software is 100% Tesla. And I can't find any reference for the full new AEB not working until 8.1. It will start working after several AP-equipped Teslas have driven a route using the V8 firmware.