
Model X AP1 is more stable than AP2?


I guess the whole "BETA" label and the other disclaimers during the sale aren't enough to dissuade the class action attorneys... great example of a perverted legal system.
There was a reason (or several) that two Apple engineers who had worked on that company's autonomous driving project were hired by Tesla at the beginning of this year... Look for AP3 hardware, etc., hopefully by year end, across all models.

Thank you very much

FURY

Not sure of the logic here. What would be the benefit of AP3? They are still developing out the AP2.x hardware and it's operating at a fraction of its capacity - if anything, those engineers were sourced to deliver advanced functionality on the existing platform. Tesla is on record stating that AP2 will deliver full autonomous driving, or they will provide free upgrades to fix any deficiencies.

The issues would be compounded by adopting a 3rd hardware platform.
 
I think they will go very far to try to make AP2 deliver on the promises, or figure out less intrusive hardware to add to the AP2 suite. Launching AP3 would basically be admitting that the AP2 suite was too limited for the features they promised, and thus having to pay back everyone who bought FSD.

Also, introducing AP3 means they would have to maintain three branches of AP code, which basically adds another workload for their engineers.

I'm in the camp of thinking that it's really more of a software problem than a hardware problem at this point. Electrek.co has mentioned a few times that the sensors (cameras, radar, sonar, GPS) only need to be marginally better than the human eye to drive a car as safely or more safely, and I agree with that sentiment. Not that better cameras or different sensors won't help a bit, but the monumental leap we're waiting for, basically full self-driving, is much more a software problem.

Also, unless they go in a fundamentally different direction with the hardware, I don't think AP3 hardware would mean any major code differences. Again, since the problem right now is the "brain" and not the "eyes", feeding it higher quality data shouldn't really affect the decision-making the AP code does.

I'd really like to know how the current AP2 software is operating under the hood. It seems like full self-driving on a production vehicle is really only feasible with machine-learning/AI software and neural networks. Tesla has talked a bit about fleet learning, radar tagging, HD mapping, etc., but the current AP1 and AP2 software has never really felt like it's "learning", and it certainly hasn't had the explosive growth in reliability or functionality that you'd expect from a system like that. I have a feeling that having to bring AP2 to parity with AP1 without MobilEye was a pretty big (unexpected) diversion from focusing on full self-driving, and what we have at the moment may be a somewhat patchworked version of Autopilot that is just meant to cover the basics like lane centering and lane changing. My thought is that the full self-driving codebase is fundamentally different from what we're currently using.

I might be completely wrong, but I tend to think that the current software, which swerves me onto every exit ramp it passes, is very different from the software that is supposed to get a car across the country by itself within the next 5-6 months.
 


FWIW, what makes you believe it's at a fraction of its operating capacity? The only measure we've seen is @verygreen noting that it's already at virtually 100% CPU usage on all 6 cores while sitting idle processing the front camera feeds. We don't have good information on to what extent the GPU is saturated, but if more hardware helps, I don't think Tesla will sit still. They will be designing next generations of hardware.

And at my day job I am expected to support 6 concurrent generations of hardware (and 10 or so SKUs each) without abandoning them, and develop the next 3 years' pipeline too. It's not like it's impossible to do.
 

Software problems are kind of hardware problems too, though. The front radar is a good example. Estimating the distance to the car in front is a problem that can be solved with cameras; after all, you can do it with your two eyes. Heck, you can do a half-decent job with one eye closed. But Tesla's radar on AP1 and AP2 is basically a high-refresh-rate measuring stick to the car in front, because that's a far, far easier way of solving the problem. Rain sensing is another example. I absolutely believe it's possible to use CV to detect rain, and Tesla has been on the hook for delivering that for almost a year now. But you can also duct tape on a HELLA rain sensor, connect it to LIN, and with maybe a day of coding you have a working rain sensor.
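Just to illustrate why the camera approach is harder than it sounds, here's a toy sketch of the stereo math (the focal length, baseline, and disparity numbers below are made up for illustration; Tesla's actual camera layout isn't a classic stereo pair):

```python
# Toy stereo depth estimate: depth = focal_length * baseline / disparity.
# Every number here is an illustrative assumption, not a Tesla spec.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance (in metres) to a feature seen by two horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two images")
    return focal_px * baseline_m / disparity_px

# ~1000 px focal length, cameras 12 cm apart, feature shifts 4 px between the
# two views -> roughly 30 m away. A 1 px matching error at that range moves
# the estimate by several metres, which is why a radar "measuring stick" is a
# far easier way to get a reliable distance than solving vision really well.
print(depth_from_disparity(focal_px=1000.0, baseline_m=0.12, disparity_px=4.0))
```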

So yeah, in one sense, it's the software not being there yet. But on the other hand, it's also a trade-off between the software getting there in 2-3 years vs. specialized hardware helping you get there faster.

If the priority is to fulfill marketing promises FIRST, then you would basically do what conventional automakers do: have a lot of domain-specific sensors that operate in parallel with a camera-based vision system, and then, as future generations of cars come out after software improvements, take away sensors one by one as you no longer need them. But if you want to improve your bottom line first... well... :)
 
Both AP1 and AP2 have "tried to kill me", but in very different ways. AP1 is more predictable in the ways it tries: it slowly veers towards a truck as a normal expected action, or ignores a merging car. AP2 is more... TURN LEFT NOW! JK, RIGHT!

Even that statement makes it sound worse than it is, though. AP1 vs AP2 in terms of Autopilot functionality is pretty much equal to me at this point in time. With that said, I would still prefer an AP1 car at this point, given there isn't much distinguishing the two and AP1 can be had at a lower price point. Oh, and also, give me my frigging rain-sensing wipers.
 

Completely agreed. AP1 is confident even when it "tries to kill you". And yeah, it either does it by smoothly ignoring/departing your lane towards a car right beside you, or does so by smoothly ignoring a new obstacle (stopped car, partially offset car in your lane, concrete construction barrier / lane closure, etc etc etc).

AP2's control inputs are far more erratic. If anything, I've seen AP2 brake for more things it suddenly recognizes compared to AP1. For example, using TACC on a street with a lot of parked cars. If you don't angle your steering just right, AP2 will happily pulse the brakes every 1-2 seconds for every parked sideways car it recognizes briefly. AP1 just plain doesn't see any of them. You can point your car at a sideways car and I'm pretty sure it'll happily T-bone it.

Smooth/confident != safe. If anything, erratic control inputs result in more "safety" in an ADAS because they scare the crap out of the driver and force him to pay attention.


EDIT: In terms of functionality gap, though, I'd say right now the BIGGEST AP1 vs AP2 gap is city driving. AP2 is extremely erratic when going through intersections where lane lines momentarily disappear. AP1 is capable of handling this situation correctly almost all of the time, even when there's no lead car, especially when there is a lead car. AP2 on the other hand, even with a lead car, can jerk the wheel to one side or the other at any moment in an intersection when it starts imagining a diagonal lane line.
 

We don't have good information on the current roadmap and release of features either, except for the end state they are seeking (and what some people based their purchase on). I guess I would ask whether you have examined the code to determine what the processor is managing. They could very likely be collecting information for release management; after all, the product is still BETA... even AP1 is still BETA.

The design was such that sensors and capacity were increased to accommodate the increased awareness needed for full autonomous driving. According to the outside-in analysis of the 2.5 release, that seems to be largely increased GPU processing power, which would validate your point about processing reaching its limit, rather than a fundamental change to the code or sensor array. "AP3" would suggest a new platform altogether, which doesn't seem likely unless their two years of AP1 BETA testing led them down the wrong path.

As far as your directives in your day job, it's admirable of your company to support multiple generations of hardware and the necessary code. That said, there is likely a strong business case for doing so rather than simply customer service, etc. There may be SLAs or other market pressures creating such a complex workspace for you. I agree it's manageable; however, it doesn't seem optimized.
 

With regard to your first point, you'll have to ask someone with root access; I unfortunately don't have any visibility under the hood, just into what others with root access have observed. I believe verygreen specifically said that the process names on the APE box (the AP2 ECU) are very obvious as to their function, and that the CPU-intensive processes are definitely vision-related and not metamanagement.

But as far as my day job, it's not at all SLA related. There's indeed a strong business case for maintaining existing and developing future hardware. Customers don't exactly want to buy your product again if the last one they bought was abandoned a year later in favor of new hardware. And as far as future work, Moore's Law in all of its forms is absolutely ruthless about the implications of locking yourself to a hardware platform. Hardware becomes exponentially more capable over time. If you are stuck working on older hardware without a path forward, you will fall behind compared to competitors that rebased onto a newer platform. That's just the truth of the matter. If Tesla wants to stay ahead of the competition, they've got to be working on the next generation of hardware, both because it's more capable and because it allows them to learn from their mistakes. But that doesn't mean the previous generation has to be abandonware either…
 
It was not 100% CPU usage, more like 50%.
AP2.0 Cameras: Capabilities and Limitations?
Keep in mind 2 of those 6 CPUs are low-power, low-speed.
 
The current software is very sensitive to hardware changes, I believe. Even the same camera with slightly different alignment caused a lot of trouble in the beginning. Even if the hardware were almost identical, it would still create uncertainty during troubleshooting and quality assurance (e.g. this happened with camera A but never with camera B; is the problem in the camera, the code, or the combination?).

I also believe that the current EAP is a temporary bridge between MobilEye and FSD. In the end I would think they run both EAP and FSD on the new codebase; anything else would not make sense. The only difference is that EAP is limited to a subset of the functionality.

FSD is the kind of software where you spend a long time on the first 20%, which is image recognition. Either it works or it doesn't, and when it doesn't it can take a long time to get it working, and not even the simplest EAP features work without it. That explains why they wanted to leave MobilEye in place until FSD was ready; they would rather not have developed the AP2 EAP branch at all before launching FSD, if that had been possible. But then, when the MobilEye partnership broke, they couldn't leave the cars without any AP for two years either and had to improvise.

After the first 20% of FSD it's pretty easy for a while, until the last 2%: rare random incidents, construction sites, etc. That last 2% is essential to get working before the car can drive without a driver, and it can be really, really hard.
 
If I were an engineer at Tesla working on coding FSD, I would use the Tesla fleet as "guinea pigs" for image recognition and decision-making. Even if the software does not take any actions yet, I would let it parse the pictures and report how the software's predicted actions compare to the actual actions taken by the driver. That way you can do a lot of stability and performance testing without actually releasing anything, build confidence in the system, and discover potential problems before they occur on release.

That would explain 100% CPU even if EAP atm only used 10% or so.
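A crude sketch of what that kind of shadow-mode comparison could look like (every name, threshold, and field below is invented to illustrate the idea; it's not based on anything known about Tesla's code):

```python
# Hypothetical "shadow mode": run a planner on live frames, compare its
# proposed controls with what the human driver actually did, and only log
# the cases where they disagree sharply. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Controls:
    steering_deg: float   # steering wheel angle
    accel_mps2: float     # positive = throttle, negative = braking

STEER_THRESHOLD_DEG = 15.0    # assumed disagreement thresholds
ACCEL_THRESHOLD_MPS2 = 2.0

def disagrees(predicted: Controls, actual: Controls) -> bool:
    """True when the shadow planner and the human driver diverge sharply."""
    return (abs(predicted.steering_deg - actual.steering_deg) > STEER_THRESHOLD_DEG
            or abs(predicted.accel_mps2 - actual.accel_mps2) > ACCEL_THRESHOLD_MPS2)

def shadow_step(frame, driver: Controls, planner, event_log: list) -> None:
    """Run the planner on one frame; record a compact event if it disagrees."""
    predicted = planner(frame)   # never actuated, only compared
    if disagrees(predicted, driver):
        event_log.append({"predicted": predicted, "actual": driver})
```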
 

I think this is situational. Apple and its iPhone are a great example of hardware updates being ingrained into the long-term buying cycle of the customer - their customers can't wait to abandon the latest hardware in favor of a barely better form factor. Cars to some degree have this same GTM; change a number cover or convert to LED lighting and the market sways towards the 'latest and greatest' (even if it isn't). Hardware and Moore's Law do have a strong relationship; however, consensus suggests that Moore's Law is in its twilight years anyway - efficient use of the design seems to be the weak link at the moment; case in point, AP2.

AP2, being all of one year old, is not suffering from the same obsolescence that AP1 is, simply due to what they learned through AP1 development. The software seems to have a road to travel before we obtain the full benefit of AP2. Given that, while there will likely be an "AP3", there doesn't seem to be a logical benefit to abandoning AP2 as it sits today - in fact, it appears that @verygreen's analysis of the CPU usage was quite different from what you recall (see post #30).
 
An issue I see with that is the data volume needed to send all that video back for analysis. The connection is pretty slow.
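Some very rough numbers, just to put that in perspective (camera count, bitrate, and driving time below are assumptions for illustration, not known figures):

```python
# Back-of-the-envelope: continuous video upload vs a cellular link.
# All figures are illustrative assumptions, not known Tesla numbers.
cameras = 8                    # assumed camera count
bitrate_mbps_per_cam = 1.0     # assumed compressed stream per camera
drive_hours_per_day = 1.5      # assumed daily driving time

total_mbps = cameras * bitrate_mbps_per_cam
gb_per_day = total_mbps / 8 * 3600 * drive_hours_per_day / 1000

print(f"{total_mbps:.0f} Mbit/s sustained, ~{gb_per_day:.1f} GB per driving day")
# -> 8 Mbit/s and roughly 5.4 GB per car per day: not something you push over
#    a shared LTE link for a whole fleet, which is why summary statistics plus
#    selected clips make more sense than raw video.
```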

BTW, AFAIK, Mobileye developed AP1, not Tesla, so I'm doubtful Tesla got much from it.
 
If you were to remove both EAP and FSD altogether, valued at $7,000, which is almost 10% of the car's price for many, one could argue that buyers wouldn't have chosen the car if they had known the features would never come, and that it's a significant enough loss of function that people should be able to return their cars.

If I had known then what I know now, I absolutely would not have purchased an AP2 car. I would have found an AP1 car equipped how I wanted, saved some money, and been driving my new car immediately. I ordered an AP2 car based on the features described to me in the showroom and what Tesla's own website said, i.e. full AP1 functionality PLUS automatic lane changes, automatic highway merging, and EVEN BETTER capability due to more cameras. I did spend the extra money for "pie in the sky" FSD, but I knew that going into it. Never in my wildest dreams did I think my car would, 10 months later, STILL not have the abilities of the AP1 cars I test drove.

I would have purchased an AP1 car and traded it for an AP2/AP3 car in a year or two, once the features and functionality actually were BETTER than what AP1 currently was / is.

Oh well, hindsight's better than foresight by a damnsight! :D

Still love my car; keeping my fingers crossed that the update that just downloaded is an improvement!
 
I drive one of the last AP1 Model Xs, I believe - built in September 2016. Seeing the AP2 issues, I feel OK about that, after initially being ticked that I'd missed the newer hardware by a month. AP1 is good, but I road-tripped to Austin and back from Dallas last weekend and definitely still had some harrowing AP moments. Here are two:

1 - Entered a construction zone with concrete barricades right up against the lane markings. Pretty clearly, my car was relying on the painted lines and got uncomfortably close to the concrete barricades - I had to pull the wheel to the right and disengage AP.
2 - On a straight stretch of road, there was a bend ahead that put the cars in the right lane in line with the cars in the left lane, and AP slammed on the brakes, even though I was in the left lane and the slower car ahead was in the right lane. Not sure how to fix issues like that, as we want AP to be looking down the road and proactively avoiding dangers.

Bottom line for me is, in a construction zone (probably 25% of that 200-mile drive) I will be at the wheel, though I'm fine with AP being in charge of the throttle. All in all, a very good experience, but far from perfect, and the mistakes it made are different from the mistakes a human might make, which makes them seem bizarre and disconcerting.
 

Ever since I saw the video of a Model S slamming into a construction barrier, I've been cautious in construction zones. There are varying road levels, badly painted lines, and sometimes several sets of lines that could confuse the system...
 
I don't see an issue with that.

You just need statistics to gain confidence, not the actual video feeds. And you just need detailed data from the events where the driver actions deviate vastly from FSD predicted actions. This data can be sent when there is free bandwidth to do so and the server can ignore the data that you don't need yet.

Use the statistics to figure out where most of the issues are, then use the mothership to request that the cars store and upload small video feeds and action charts of similar events the next time they happen. Fix the issue and test it by re-running that situation at the office, then move on to the statistics to identify the next issue. When a reasonable number of issues are presumably resolved, release a new firmware and start collecting new statistics. All this testing happens without users noticing.

When FSD at some point starts becoming more confident, the mothership can tell the cars to upload any potential issue, and then work can start on the details.

Would surprise me a lot if Tesla doesn't work on FSD this way. And yes, I'm a software engineer :).
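To make that workflow concrete, here's a rough sketch of the server-side loop I'm picturing (every name and field is invented for illustration; nothing here reflects how Tesla actually does it):

```python
# Illustrative fleet-learning loop from the server ("mothership") side:
# aggregate the cheap statistics first, then ask the fleet for short clips
# of only the most common problem scenarios. Everything here is invented.
from collections import Counter

def triage(event_summaries):
    """Count disagreement events by the scenario tag the cars sent up."""
    return Counter(e["scenario"] for e in event_summaries)

def plan_campaigns(scenario_counts, top_n=3):
    """Request short clips for only the most frequent problem scenarios."""
    return [{"scenario": scenario, "request": "upload_10s_clip_on_next_match"}
            for scenario, _count in scenario_counts.most_common(top_n)]

# Example: aggregated stats in, targeted clip requests out.
stats = [{"scenario": "exit_ramp_pull"}, {"scenario": "exit_ramp_pull"},
         {"scenario": "construction_barrier"}]
for campaign in plan_campaigns(triage(stats)):
    print(campaign)   # these would be pushed to cars that have free bandwidth
```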
 
Thankfully, this just happened: Self-driving car legislation paves way for Tesla Autopilot development