Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

HW2.5 capabilities

Well 2.5 does have a new ECU ... Note that I'm not saying 2.0 owners won't get an ECU retrofit if that's what it takes. I'm just entertaining the possibility that ECU 2.0 might not be fast/stable enough for the EAP features like Smart Summon or Autosteer+. My guess is as good as yours at this point
Hopefully it is fast and stable enough considering Elon promised a summon that could navigate your property and the ability to get on and off ramps from AP1 that hasn’t been delivered yet, and that was promised in 8.1 and supposed to be released 10 months ago. Not only are they not awesome at hitting timelines, I would not be surprised if they “oversold” AP2 capability and the real issue is getting software to get close to the original promise with current hardware.
 
Hopefully it is fast and stable enough considering Elon promised a summon that could navigate your property and the ability to get on and off ramps from AP1 that hasn’t been delivered yet, and that was promised in 8.1 and supposed to be released 10 months ago. Not only are they not awesome at hitting timelines, I would not be surprised if they “oversold” AP2 capability and the real issue is getting software to get close to the original promise with current hardware.

We're yet to witness AP1 "meeting you on the curb" and reading traffic lights either...
 
Well 2.5 does have a new ECU ... Note that I'm not saying 2.0 owners won't get an ECU retrofit if that's what it takes. I'm just entertaining the possibility that ECU 2.0 might not be fast/stable enough for the EAP features like Smart Summon or Autosteer+. My guess is as good as yours at this point

If it adds even one new feature (e.g. hidden pedestrian detection), then the 2.5 radar is probably more important for EAP than the redundancy in the 2.5 ECU.
 
To be honest, given the incredibly poor graphics and UI performance on the current Tesla ECU - by this I mean input lag, scrolling lag, redraw lag, animation lag - I'm not surprised they've updated it finally. Presumably the Model 3 has a much faster GPU and processor to actually put pixels onto that high res display.

Isn't the ECU in all current Model S/X controlled by effectively an outdated phone processor? An old Tegra 3 or something?

Whilst that may be ok to run basic google maps (poorly), a laggy sketchpad, a browser that doesn't really work or is too slow to do anything on anyway, a simple media streamer with slimmed down UI etc... there's really no way that it could be used for anything remotely more power intensive, such as... oh I don't know... a visualisation of enhanced autopilot, or apps, 3D maps, or any of the things that you'd actually want to use the giant tablet for while you're being chauffeured around?
 
the incredibly poor graphics and UI performance on the current Tesla ECU
I think you're misunderstanding. The ECU we're talking about is the discrete Autopilot ECU that resides above the glove box compartment. This is new in all S/X since Oct '16.
Perhaps you're referring to the touch screen and instrument cluster graphics cards? There are hints of something going on with new part numbers, but no verified improvements seen yet.
 
To be honest, given the incredibly poor graphics and UI performance on the current Tesla ECU - by this I mean input lag, scrolling lag, redraw lag, animation lag - I'm not surprised they've updated it finally. Presumably the Model 3 has a much faster GPU and processor to actually put pixels onto that high res display.
Yes, they use x86 Gordon Peak (hardly a performance powerhouse but still a big improvement). But it has nothing to do with hw2.5 or autopilot at all.

Isn't the ECU in all current Model S/X controlled by effectively an outdated phone processor? An old Tegra 3 or something?
Yes, pretty much. Two of them, actually: one for the instrument cluster and one for the big screen (or it's possible the IC is actually a Tegra 2; too lazy to check).

Whilst that may be ok to run basic google maps (poorly), a laggy sketchpad, a browser that doesn't really work or is too slow to do anything on anyway, a simple media streamer with slimmed down UI etc... there's really no way that it could be used for anything remotely more power intensive, such as... oh I don't know... a visualisation of enhanced autopilot, or apps, 3D maps, or any of the things that you'd actually want to use the giant tablet for while you're being chauffeured around?
Personally once real mind-free driving is there I'd use a laptop or other such universal device since it would allow me much better control over whatever it is I want to do with display angles that are convenient for me and adjustable ;)
 
Yeah I'm expecting Level 5 on AP2. They keep selling the promise even today on the design studio. If they don't think it's possible why continue to show FSD on the website? Why would they screw themselves over so badly on purpose? Does anyone think Elon really wants to screw himself over? Because I don't see the fun in that.

Otherwise you better believe there will be major major lawsuits. There is so much pessimism, which is going to look stupid once more updates come out.

This is completely naive. Tesla releasing an update doesn't negate everything I have been saying since Dec 1, 2016: that they have been deceptive about AP2 and its timeline since day one.


On the pessimism discussion, I will reiterate: the reason AP2 won't achieve L5, let alone any kind of L4 (including highway-only L4), is not even about GPU compute power.

It's because AP2 and AP2.5 are camera-only systems.

The 12 ultrasonic sensors are useless here because they are only good for precision parking, and even at that they are unreliable because they don't see narrow and small objects. This is the reason for many auto-parking/Summon-related accidents. Besides, they are only good out to about 26 ft.

This leaves 8 cameras and 1 radar, which has about a 25-degree FOV if I remember correctly. This makes the radar useless for 95% of all driving tasks: for example, ramps, intersections, turns, busy urban/city roads. It's so narrow that you can avoid being detected up to 25 ft away and still be in the trajectory of the car.

How do you make turns or watch for oncoming cars from the left and right at an intersection, for example, with one camera and one narrow radar?

You have to be delusional to think that one radar amounts to anything other than forward driving in one lane on a restricted highway.

For this not to be a camera-only system it would need surround radars. At night on freeways/roads with no lighting, for example, the side- and rear-facing cameras are useless. That's 5 cameras rendered useless. @verygreen can confirm this.

Ever taken a road trip at midnight on interstate roads with absolutely no lights?

This is a camera-only system without a complementary system.

This is why it will never be L4.

For a car to be L4 and higher it will need two main vision systems.

The reason radar isn't a main system (besides the fact that it would need surround radars to even be a system) is that, unlike radar, a lidar system can differentiate and classify objects. It can classify a deer, a pedestrian, a car, a cone, lanes, trees, barriers, road signs, cyclists, curbs, grass, road edges, etc. Radar, on the other hand, will just return that it sees a big object and can't tell you whether that object is a big rock or a deer.

Lidar also works in heavy rain, snow and dust.

The reason you need two main systems is that when one fails, the other takes over.

People misunderstand when I say the camera system fails. I'm not talking about the individual cameras, or the entire set of cameras. Most people naively count redundancy based on how many cameras are facing a particular FOV.

But that's not where failure happens. Failure happens predominantly in the SOFTWARE: object recognition and classification failure.
For example, the car not recognizing the dog in Tesla's FSD video. It would have killed that dog; that's the camera system failure I'm talking about. And there is no backup.

We already know that even the camera hardware fails in adverse conditions (bright sunlight, night, etc.), which also degrades the object recognition and classification capability, which then means you are literally driving blind.
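The "two main systems" argument above can be sketched as a simple cross-check: each perception channel independently reports detections, and a region counts as occupied if either channel says so, so a classifier miss in one channel doesn't erase the object. This is purely illustrative, not Tesla's or anyone's actual architecture:

```python
# Illustrative only: two independent perception channels, cross-checked.
# A miss by the camera classifier (the software failure described above)
# is still caught as long as the second channel reports the object.

def fuse_occupancy(camera_hits: set, lidar_hits: set) -> set:
    """A grid cell is treated as occupied if EITHER channel saw it."""
    return camera_hits | lidar_hits

# Hypothetical scene: the camera software misses the dog in cell (3, 1),
# but the second (geometric) channel still returns it.
camera = {(0, 0), (2, 5)}           # classifier output: dog missed
lidar = {(0, 0), (2, 5), (3, 1)}    # geometric returns: dog present
assert (3, 1) in fuse_occupancy(camera, lidar)
```

With a single camera channel there is no second detection set to union with, which is the "no backup" point being made here.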


It's a big deal that the 2018 Audi A8 is the first car with lidar, and more will follow in 2018.

Anyone who compares radar to lidar, or even suggests radar can replace the functionality of lidar (which can see a football helmet from two football fields away and classify it), should be banned for crimes against common sense.

 
Yes, they use x86 Gordon Peak (hardly a performance powerhouse but still a big improvement). But it has nothing to do with hw2.5 or autopilot at all.


Yes, pretty much. Two of them, actually: one for the instrument cluster and one for the big screen (or it's possible the IC is actually a Tegra 2; too lazy to check).


Personally once real mind-free driving is there I'd use a laptop or other such universal device since it would allow me much better control over whatever it is I want to do with display angles that are convenient for me and adjustable ;)

Wait a sec, are we saying that AP2.5 hardware may also include an upgraded computer (the MCU?) that controls the 17-inch screen, and can we thereby hope for improved response and browser performance?
 
Wait a sec, are we saying that AP2.5 hardware may also include an upgraded computer (the MCU?) that controls the 17-inch screen, and can we thereby hope for improved response and browser performance?
No, that's not what I am saying. It's just what the Model 3 has. I suspect the interior refresh of the S/X will bring updated computers, but whether that would coincide with the HW2.5 update or not, I don't really know. There are certainly many upsides to combining the two, which I discussed here: MCU fails for the second time

We'll find out in due time.
 
No, that's not what I am saying. It's just what the Model 3 has. I suspect the interior refresh of the S/X will bring updated computers, but whether that would coincide with the HW2.5 update or not, I don't really know. There are certainly many upsides to combining the two, which I discussed here: MCU fails for the second time

We'll find out in due time.
I guess we really need that HW2.5 teardown. But do we know whether the new MCU has moved away from eMMC to removable/replaceable storage (is it possible to tell this from firmware alone)? Or it may still have the same exact problems (although maybe reduced, depending on how large it is and how much more durable a new version would be).

Edit: I hadn't looked up the thread; it seems you discussed this already. It looks like it's still mostly keeping the eMMC, and the Model 3 doesn't show signs of a separate SD card for maps. Hopefully the larger size helps, and hopefully the type chosen is more robust in terms of write cycles. It sucks to have to replace the whole board when only the storage has worn out.
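For a rough feel of why write cycles and part size both matter, eMMC life can be estimated back-of-envelope as capacity × P/E cycles ÷ effective daily write volume. Every number below is an assumption for illustration; the actual part, its endurance rating, and the car's write patterns aren't known from this thread:

```python
# Back-of-envelope eMMC wear-out estimate. All figures are assumptions.
CAPACITY_GB = 8          # assumed eMMC capacity
PE_CYCLES = 3000         # assumed program/erase endurance (MLC-class flash)
DAILY_WRITES_GB = 2      # assumed log/map write volume per day
WRITE_AMPLIFICATION = 3  # assumed controller write amplification factor

# Total bytes the flash can absorb before cells wear out, spread evenly.
total_writable_gb = CAPACITY_GB * PE_CYCLES
# What the host's daily writes actually cost the flash.
effective_daily_gb = DAILY_WRITES_GB * WRITE_AMPLIFICATION

years = total_writable_gb / effective_daily_gb / 365
print(f"~{years:.0f} years before wear-out")  # ~11 years with these numbers
```

The formula also shows why a larger eMMC helps even with an unchanged write load: the same writes are spread across more cells, so `total_writable_gb` grows linearly with capacity.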
 
This is completely naive. Tesla releasing an update doesn't negate everything I have been saying since Dec 1, 2016: that they have been deceptive about AP2 and its timeline since day one.

The point is. Why in the hell advertise the option to buy FSD if the car is not going to be capable of it in the future as promised? Why do that?

It's been about a year since the option was first introduced in the design studio, since the EAP/FSD hardware was announced. If they didn't believe it would be possible with the current hardware, they could easily have said something by now or removed the option to purchase FSD entirely.

Instead almost a year later, the option is STILL in the design studio.

The Model 3 has pretty much the same 8-camera hardware as the S/X, and they are about to ramp up production of the car in large quantities. Why would they ship the car in such high quantities with hardware that isn't capable? Can you imagine the absolute disaster if it turns out that neither the Model 3 nor the other cars can do FSD with the current hardware? They would be shooting themselves in the foot.

So I'm not being naive to expect to get what was promised to me when I bought my car. If not, then I expect a completely free retrofit of whatever additional hardware FSD requires. Or my name will go on the lawsuit that will be headed their way over this matter. So I'm optimistic and excited for the future and the new improvements that will come to the car. I'm looking forward to seeing what happens in 9.0 this month or next.
 
People misunderstand when I say the camera system fails. I'm not talking about the individual cameras, or the entire set of cameras. Most people naively count redundancy based on how many cameras are facing a particular FOV.

But that's not where failure happens. Failure happens predominantly in the SOFTWARE: object recognition and classification failure.
For example, the car not recognizing the dog in Tesla's FSD video. It would have killed that dog; that's the camera system failure I'm talking about. And there is no backup.
But as you say yourself, the problem in this scenario is software; it's not that the object is invisible to the camera. In that case it's fixable by improving the software, and thus it is NOT a camera system failure. Even lidar needs software to actually identify what those shapes are, even if it has much higher resolution at much further distances.

As for pitch-black camera failure: it is never pitch black in the real world, otherwise nocturnal animals could not see; only bats need sonar to see inside a cave. Just what are the low-light capabilities of the cameras?
 
The point is. Why in the hell advertise the option to buy FSD if the car is not going to be capable of it in the future as promised? Why do that?

I guess there is the chance that they've staked the entire company on the FSD claim and can't really walk back now. Stopping FSD sales now would be a disaster in its own right, probably, too...

I mean, IMO, they definitely should stop the sales if they think they can't do it. But just speculating on motivations - they might not stop even if they have doubts...
 
This leaves 8 cameras and 1 radar, which has about a 25-degree FOV if I remember correctly. This makes the radar useless for 95% of all driving tasks: for example, ramps, intersections, turns, busy urban/city roads. It's so narrow that you can avoid being detected up to 25 ft away and still be in the trajectory of the car.
Didn't we discuss this a while back? The 160 m range of Tesla's AP2 Bosch radar matches the MRR, which has the following specs (double the numbers if talking about total FOV as we use it colloquially):

Field of view (horizontal):
Main antenna: ±6° (160 m), ±9° (100 m), ±10° (60 m)
Elevation antenna: ±25° (36 m), ±42° (12 m)

Mid-range radar sensor (MRR)
Frustrated with FSD timeline

A max of 84 degrees FOV at 12m is definitely enough for pedestrian detection. In fact, that's what it's advertised for:
"Thanks to the elevation antenna, the system achieves an opening angle of ± 42 degrees at close range – so a pedestrian stepping out into the road from behind a parked car, for example, is detected at an early stage."

We don't have the specs on the new Continental radar in HW2.5, but the PR suggests it's even more capable:
HW2.5 capabilities

Edit: there are apparently only two types of Continental LRR (they don't have an MRR, and their SRRs are too short-range for general application), and here are the specs, also more than capable of pedestrian detection at these angles and ranges:
ARS441: Field of View: ±9° 250 m / ±45° 70 m / ±75° 20 m
ARS510: Field of View: ±4° 200 m (220 m typ.) / ±9° 120 m / ±45° 40…70 m
Continental Automotive
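Those half-angle/range pairs translate directly into lateral coverage: at range r, a beam with half-angle θ sweeps a strip 2·r·tan(θ) wide. A quick sketch using the Bosch MRR figures quoted above (the spec numbers are from the post; the rest is plain trigonometry):

```python
import math

# (label, half-angle in degrees, rated range in meters) from the MRR spec
mrr_zones = [
    ("main ±6°",   6, 160),
    ("main ±9°",   9, 100),
    ("main ±10°", 10,  60),
    ("elev ±25°", 25,  36),
    ("elev ±42°", 42,  12),
]

def coverage_width_m(half_angle_deg: float, range_m: float) -> float:
    """Total lateral width swept by the beam at the given range."""
    return 2 * range_m * math.tan(math.radians(half_angle_deg))

for name, half_deg, rng in mrr_zones:
    print(f"{name}: {coverage_width_m(half_deg, rng):.1f} m wide at {rng} m")
```

This also puts numbers on the narrow-beam complaint earlier in the thread: at roughly 25 ft (7.6 m), the ±6° main lobe covers only about 1.6 m of width, so it is the wide elevation antenna, not the main lobe, that catches a pedestrian stepping out at close range.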

The reason radar isn't a main system (besides it will need surround radars to even be A system) is that unlike it, lidar system can differentiate and classify objects. It can classify a deer or pedestrian, a car, a cone, lanes, trees, barriers, road sign,cyclist, pedestrians, curbs, grass, road edges etc. Radar on the other hand will just return that it sees a big object and cant tell you that the object is a big rock or a deer.
This is not true as far as I can tell. Automotive radar can tell the difference between a moving subject (like a pedestrian or a deer) and a small static object (like a rock) by using information from the doppler frequency image. This has been the subject of a lot of automotive radar research, and it is why there are systems out there that can do pedestrian recognition using only radar.

@JeffK may want to chime in as he has some more experience in looking at this area.
https://www.adv-radio-sci.net/10/45/2012/ars-10-45-2012.pdf
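The doppler argument is easy to quantify: a target moving radially at speed v shifts the return by f_d = 2v/λ, so even a slow pedestrian stands apart from static clutter. A rough sketch, assuming a 77 GHz carrier (the common automotive band; the actual operating details of Tesla's unit aren't given in this thread):

```python
# Doppler shift of a radially moving target: f_d = 2 * v / wavelength.
C = 3.0e8           # speed of light, m/s
F_CARRIER = 77e9    # assumed automotive radar carrier frequency, Hz
WAVELENGTH = C / F_CARRIER  # ~3.9 mm

def doppler_shift_hz(radial_speed_mps: float) -> float:
    """Doppler shift produced by a target closing at the given speed."""
    return 2 * radial_speed_mps / WAVELENGTH

# A walking pedestrian (~1.5 m/s) vs a static rock (0 m/s):
print(f"pedestrian: {doppler_shift_hz(1.5):.0f} Hz")
print(f"rock:       {doppler_shift_hz(0.0):.0f} Hz")
```

On top of the bulk shift, swinging limbs produce a characteristic micro-doppler spread, which is the signature the pedestrian-recognition literature linked above exploits.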
 