Welcome to Tesla Motors Club

Why AP 2.0 Won't Be Here Soon, and It Won't Be What You Think It Is

They've learned enough that radar is now the primary sensor, versus the camera.

After reading the transcripts of the AP discussion for the v8 software update:

Just as we use our senses of sight, hearing and touch to drive, Tesla will use all its senses too!

Now, it's all about radar, or at least radar fusion with camera.

The transcripts make clear radar as primary doesn't replace camera as primary. I understand some pedantic types will have trouble with the notion of multiple primary systems but that is what EM actually said. :D

Radar as primary simply means that TACC can stop based on radar (point cloud) input without confirmation from the camera.
The transcripts are also pretty clear that radar point cloud is why you shouldn't be waiting for LIDAR on your next Tesla.
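To make the "multiple primaries" idea concrete, here's a toy sketch (entirely invented, not Tesla's actual logic, with made-up confidence values) of a braking decision where either sensor can act as a primary on its own, with fusion as a third path:

```python
# Hypothetical sketch of "radar as primary" for TACC: radar alone may
# trigger braking, with the camera acting as an independent primary
# rather than a gatekeeper that must confirm the radar.

def should_brake(radar_confidence, camera_confidence, threshold=0.9):
    """Brake if either primary sensor is confident on its own,
    or if both are moderately confident (sensor fusion)."""
    if radar_confidence >= threshold:    # radar as primary: no camera needed
        return True
    if camera_confidence >= threshold:   # camera as primary still holds
        return True
    # fusion path: two moderately confident sensors agreeing is also enough
    return (radar_confidence + camera_confidence) / 2 >= threshold * 0.8

print(should_brake(0.95, 0.0))   # radar alone suffices -> True
print(should_brake(0.0, 0.95))   # camera alone suffices -> True
print(should_brake(0.5, 0.5))    # neither confident enough -> False
```

The point of the sketch is just that "radar as primary" adds a path to braking; it doesn't remove the camera's path.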

This is completely consistent with earlier statements from EM that the (camera, radar, sonar) suite would be used in combination and would tolerate a failure in any one of the three without catastrophic consequences. So the AP features in v8.0 are an excellent evolution in the radar dimension, which is critical to achieving sensor subsystem redundancy.

So yes, there is likely fine tuning still going on with desired sensor balance, but this isn't nearly as disruptive as some have suggested up thread.

Not that anyone cares, but I also agree that 'fixing' this for existing AP cars via OTA is critical to maintaining momentum.
So adding this AP capability into the v8.0 release was a 'no brainer'.
 

Well, first of all, even in your quote of my comment, I note "radar fusion with camera."

As I've stated, this feels like a shift because there was the fatal accident (which the camera didn't handle well), followed by dropping Mobileye, followed by Elon's tweet on July 17th about a "promising call" and it "looks like significant improvements possible" with radar. Those don't appear to be the actions of a person who has already locked in a sensor suite that uses radar equally. That sounds like someone working hard to change course and make the car safer.

If this was on the roadmap, it wouldn't "look like" improvements are possible. He would already know they are possible. The tweet would've been more like "Have been working on significant radar enhancements with Bosch for several months." A July call wouldn't have been "promising."

The only other way to interpret it is that he was working on it for AP 2.0 and then scrambled to find a way for it to work with AP 1.0. It's possible, but I find it unlikely because he seemed pleasantly surprised the current radar could do point clouds. Had he already been working on point clouds with Bosch and designed the next-generation sensor suite, you'd think he would have known this was a possibility well before July. When selecting the AP 2.0 sensors, nobody ever discussed the capabilities of the current radar they're using?
 

There is no way on God's green earth that Tesla or any other software organization went from "this looks promising, let's pursue it" in July to tested implementation of 3D point cloud based on advanced radar processing ready for deployment in mid-September. No matter what tweets or other things Elon has said, this has been in the works for a long time. Now, maybe it was for a different radar before, but that's the only other option. This stuff is hard.
 
The only other way to interpret it is that he was working on it for AP 2.0 and then scrambled to find a way for it to work with AP 1.0. It's possible, but I find it unlikely because he seemed pleasantly surprised the current radar could do point clouds

This is close to what I've been thinking and I find this likely. EM has been saying long before July that LIDAR was unnecessary.
That is only reasonable if they were working on plans to use RADAR in a similar fashion, which had been hinted at before (though without explicitly mentioning point clouds).

As a systems software developer, and given the computing resources of AP 1.0, a July call about an approach to shoehorn this into existing hardware (i.e., with a Bosch commitment to attempt updated drivers for fuller access to the AP1 radar) could very well have been "promising."

BTW, I included the 'fusion' portion of your comment on the complementary nature of the sensors as a compliment. :)
(It looked like you were taking the long way to the obvious integrated approach.)
Others I quoted were taking an even longer path presuming camera wasn't a primary sensor.
 
I would like to add just two words on why autonomous vehicles are still at least 5 years away: Government Regulation.

Is the current terrible auto-pilot illegal? Why would a better one be? Driving without someone in the driver's seat might be illegal, but producing a car that allows that would seem not to be. Producing a car that ONLY allows that is illegal (in some jurisdictions). Google needed special permission; Tesla hasn't.

Agree with the OP, level 4 is much further off than Elon and co imagine.

Every time I think this, I remember that Elon and Co. are the experts, and I am not. Once I adjust for Elon's well known optimism, I have to accept his far more knowledgeable opinion.

Just curious: what are people imagining that autopilot is going to do at traffic lights? Let's hand-wave away the reliable red-light detection problem and focus on where, exactly, to stop: some lights are immediately above the entrance to the intersection, some hanging over the middle, some at the far edge. Sense both the light and the "limit line" on the road?

How do you do that? What is stopping auto-pilot from doing the same thing? Or, it could do something you CAN'T, like sense the induction loop in the road that the traffic signal uses to detect cars.

What is the car to do...

Whenever you think this, answer with 'precisely what you would do'. Unless you can think of information the system is not getting, or some processing that you know it is incapable of, then there is no barrier to it doing exactly what you would do in that situation.
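The stop-point question upthread is a good example of "precisely what you would do." A toy sketch (all names and numbers are invented for illustration) of the "stop at the limit line, or well short of the intersection if you can't see one" heuristic:

```python
# Illustrative "do what a human would do" heuristic for picking a
# stopping point at a red light. Distances are meters ahead of the car.

def choose_stop_point(limit_line_m, intersection_entry_m, margin_m=3.0):
    """Return the distance ahead at which to stop.
    Prefer the painted limit line; if it isn't visible (None, e.g.
    snow cover), stop well short of the intersection entrance."""
    if limit_line_m is not None:
        return limit_line_m
    return intersection_entry_m - margin_m

print(choose_stop_point(25.0, 30.0))   # line visible: stop at the line
print(choose_stop_point(None, 30.0))   # line hidden: stop 3 m short of entry
```

Note the light's physical position (over the entrance, the middle, or the far edge) never enters into it; like a human, the sketch keys off the limit line and the intersection geometry, not the lamp.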

As to fleet-learning dead spots: I live in Maine, not exactly high population. There are 4 (possibly 5) Teslas in the area: 1 green, 2 silver, a red, and (I think I saw) a white last night. I personally know 2 of the owners and can guess the routes they have taken. Every route I take daily has already been seen by a Tesla, and 70% of the routes I ever take have been seen (not even factoring in the other 3).

Thank you kindly.
 

So you're saying computers and AI are 100% capable of emulating the entirety of human brain processing, and they can do "whatever I would do"?

I don't think so. Not in two years, and not even in 20 years.

Even Elon Musk isn't that good.
 

It doesn't have to emulate human brain processing. For the most part the advances in AI in recent years have been from *not* trying to do that. Trying to emulate a human driving is a waste of time. Trying to build a better driver than a human is not.

Saying the system should do "whatever you would do" (or perhaps pick something better) does not require that it think like a human, only that the result is similar.
 

Still completely disagree. There is an almost infinite amount of human judgment and nuance that goes into "driving" that computers will never be able to emulate; for instance, my example above of a road hazard one or two lanes over that the sensors can't detect, where a human could easily detect it and predict what other drivers will do to avoid it. Even if the sensors could detect it, there are 1000 different possible outcomes and the computer has to use logic to pick one, and most likely that would not be the maneuver a human would choose.

Also, the computer is never going to decide that accelerating towards a yellow light is the right thing to do to make it through a changing light ("squeezing the lemons").
 

Well, agree to disagree then. I think you are forgetting senses that autonomous driving systems have that humans do not and how much faster computers are at enumerating potential solutions. I think it will be a surprising decade for you.
 
So you're saying computers and AI are 100% capable of emulating the entirety of human brain processing, and they can do "whatever I would do"?

No, of course not. I am saying that the answer to 'how is a car going to do this?' is probably 'the same way that you do.' If you recognize where to stop at a stop light by seeing the line on the road, that is what the car will do as well. We know that the camera is capable of detecting lines. If you can't see the line due to snow, what do you do? I stop well short of the intersection, and expect that a car would do the same.

The things that I envision being hard for an AP system are exactly the things that I don't know how to do reliably: determining that some driver is about to swerve into my lane, for example; knowing where the lane is on a road completely covered with snow; figuring out which of the eight stop lights I can see applies to me and what I want to do.

Thank you kindly.
 
Also, the computer is never going to decide that accelerating towards a yellow light is the right thing to do to make it through a changing light ("squeezing the lemons").
Care to explain your reasoning here? Are you saying that no rational, quantitative analysis could ever conclude this was the correct course of action? That's tantamount to claiming that humans make this choice irrationally.
 

Indeed, the short heuristic is even easier for a computer, because it can know nearly exactly the stopping distance for a given speed (factoring in conditions like temperature and rain, which it also knows). Once it knows that, it knows whether it can safely stop or must continue through. After deciding to continue through, it can more closely monitor cross traffic, etc.
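To put rough numbers on that heuristic, here's a minimal sketch using the standard braking-distance formula v²/(2µg). The friction and reaction-time values are illustrative assumptions, not any real vehicle's parameters:

```python
# Yellow-light heuristic: can the car brake to a halt before the
# intersection? Braking distance is v^2 / (2 * mu * g), plus the
# distance covered during a (computer-scale) reaction time.

def can_stop(speed_mps, distance_m, friction=0.7, reaction_s=0.2, g=9.81):
    """True if the car can stop within distance_m at the given speed."""
    braking_m = speed_mps ** 2 / (2 * friction * g)
    return reaction_s * speed_mps + braking_m <= distance_m

# At 20 m/s (~45 mph) on dry pavement, braking alone takes ~29 m:
print(can_stop(20.0, 40.0))               # room to spare -> stop
print(can_stop(20.0, 20.0))               # too close -> continue through
print(can_stop(20.0, 40.0, friction=0.3)) # wet road: braking alone ~68 m -> continue
```

The same computation also answers the "squeezing the lemons" case: if `can_stop` is false, continuing through (possibly accelerating to clear the intersection sooner) is the rational output, not an irrational one.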
 
Also, the computer is never going to decide that accelerating towards a yellow light is the right thing to do to make it through a changing light ("squeezing the lemons").

Why not? Are you saying that you don't know how to make that decision? I could write an algorithm right now that would do exactly what I currently do in that situation. And the car doesn't even need to glance at the rear-view mirror to make its decision.

Quite frankly, it is many orders of magnitude more difficult to NOTICE a yellow light than to figure out what to do once one is seen.

Thank you kindly.
 
There is no way on God's green earth that Tesla or any other software organization went from "this looks promising, let's pursue it" in July to tested implementation of 3D point cloud based on advanced radar processing ready for deployment in mid-September. No matter what tweets or other things Elon has said, this has been in the works for a long time. Now, maybe it was for a different radar before, but that's the only other option. This stuff is hard.

Yes, I'm sure they're doing a ton of internal development/testing on a wide variety of sensors. I'd bet they have some development version of Autopilot with LIDAR and/or FLIR running, if only to compare results.

My guess is that a version of this has been in the works for some time on the sidelines at Tesla, with a significant amount being done at Bosch. Can it be misdirection or marketing spin? Sure. But between Elon's tweet and the blog post this month ("After careful consideration, we now believe it can be used as a primary control sensor without requiring the camera to confirm visual image recognition.") it doesn't sound like this was the original direction.

I mean, they literally say: "The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was only meant to be a supplementary sensor to the primary camera and image processing system."

Ultimately, what would the motive be to act surprised with "we now believe", "looks like" we can upgrade, and "meant to be a supplementary sensor to the primary camera"? Why not simply say "We've been working on a variety of radar enhancements for years now" if that's the case? If the truth is that they've been developing this since 2015, then they'd look better by saying that.

Also, remember that 8.0 is just for whitelisting. "Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar." The actual control system, beginning with "mild braking," won't be ready until 8.1, which could be many months away.
 
That's tantamount to claiming that humans make this choice irrationally.

Yes, exactly. There are just as many times when someone makes an irrational decision that turns out to be the correct decision in the end. Computers won't be able to do that: given the exact same set of fixed inputs, a computer will make the same decision every single time. It's just not possible to provide 100% of the inputs that humans have when driving, so humans will always have more information to draw from when making these decisions, and yes, sometimes those will be irrational ones. There are plenty of videos showing that someone didn't react to an impending crash and survived, whereas if they had swerved or tried to react (as a computer would have, given the same inputs), they would likely have died.

I think it will be a surprising decade for you.

It will be for everyone. But in 10 years, I don't think there will be the level of autonomous driving everyone is expecting (and some people are expecting it next week). Unless, as I've said before, ALL humans are removed from the roadways and there are "AD"-only lanes, then for the next 10 years it will remain a novelty that's really not much better than what we have today. And on top of that, the regulations to even allow it on a large scale will be a decade behind that.
 

I'm pretty sure you're reading that wrong, and that both the initial mapping and the automatic-braking phases happen on 8.0.

I think once you have 8.0, the car will automatically fetch whitelist tiles for the area you're driving in. If it finds detailed data for your route, it'll brake automatically as appropriate. If your route has no data, it'll do the initial mapping for your route but not brake without camera input.
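If that reading is right, the gating logic might look something like this purely speculative sketch (the tile keys, object names, and data layout are all invented for illustration; nothing here is from Tesla):

```python
# Speculative sketch of per-tile whitelist gating: radar-only braking is
# enabled only where fleet data for the area already exists; otherwise
# the car just records what it saw ("mapping the world according to radar").

whitelist_tiles = {
    (37, -122): {"overpass_at_km_3": "stationary, never a threat"},
}

def on_radar_return(tile, radar_object, fleet_map):
    if tile in fleet_map:
        # Detailed data exists here: known stationary objects (signs,
        # bridges) are whitelisted, so anything else is a real obstacle.
        if radar_object in fleet_map[tile]:
            return "ignore (whitelisted)"
        return "brake"
    # No data for this tile yet: note the observation, take no action.
    fleet_map.setdefault(tile, {})[radar_object] = "observed"
    return "map only"

print(on_radar_return((37, -122), "overpass_at_km_3", whitelist_tiles))
print(on_radar_return((37, -122), "stopped_truck", whitelist_tiles))
print(on_radar_return((40, -74), "road_sign", whitelist_tiles))
```

On this reading, 8.0 and 8.1 aren't separate systems; the same release just behaves differently depending on whether the fleet has already mapped your route.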