
Tesla autopilot HW3

Good podcast here with some info on that.
On the Road to Full Autonomy With Elon Musk — FYI Podcast

Basically, Elon thought it could be possible, but it takes a lot of extra software engineering to get there. On the current hardware they are doing cropping and all sorts of other tricks to keep from overwhelming the chip. On their new in-house hardware they are running full frames, and there is plenty of room on the processor.
 
Not that it matters for Tesla, but do you think the PX2 is capable of FSD, or just capable of more/better Level 2 / maybe Level 3?
I wonder if they wish they had never done AP2.5 and just kept everything on HW2? It would've saved the time and expense of designing and implementing HW2.5, while potentially bringing HW3 to market more quickly and not leaving (what will hopefully seem like) a small subset of cars on HW2.5.
 

They needed the interior camera for Tesla Network monitoring. HW2.5 also removed the cooling fans.
 
I do not believe it's remotely capable of FSD. Just NoA with auto lane change, and perhaps the smart summon feature since it's slow moving.

Musk gave his perspective on the latest ARK Invest podcast he did. Paraphrasing: they have to use lots of tricks to get good performance out of the Nvidia hardware, like selectively cropping the video image, among others that were not mentioned. He said they could probably get FSD to work on the Nvidia hardware, but it would take a ton of focus on performance optimization. The new HW3 gives them a lot more headroom and lets them spend their effort on adding new features instead of optimizing software performance.
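To make the cropping trick concrete, here's a minimal sketch of the idea. This is purely illustrative; the crop band and `run_network` are made-up placeholders, not Tesla's actual pipeline.

```python
# Illustrative: cropping a frame to a region of interest to save compute.
# `run_network` is a hypothetical stand-in for whatever on-board vision model runs.
import numpy as np

def infer_on_crop(frame: np.ndarray, run_network):
    h, w = frame.shape[:2]
    # Keep only the middle band of rows where the road usually is;
    # halving the pixel count roughly halves the per-frame compute.
    roi = frame[h // 4 : 3 * h // 4, :]
    return run_network(roi)

def infer_full_frame(frame: np.ndarray, run_network):
    # What HW3's extra headroom supposedly allows: no cropping, the whole frame.
    return run_network(frame)
```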

My take is that once HW3 is shipping in quantity and makes up a large % of the fleet, HW2/2.5 will be relegated to maintenance updates like HW1 is today.
 
So that means the 70% of Tesla buyers who did not pre-purchase FSD may be stuck with maintenance releases.
Correct. What's likely, though, is that anyone purchasing FSD from now on will also get HW3, with a corresponding price hike for FSD compared to us (extremely hopeful, possibly delusional) people who have already bought it, since they will not be trying to get FSD working on HW2* even though the FSD option is still available for purchase.
 
I expect they won't have it working until HW4 is out, which includes at least one solid state lidar. They learned their lesson and stopped selling FSD because it just means more free upgrades that are likely costing them a lot more than the ticket price, especially if they have to fit an extra sensor.
 

Doubtful. LIDAR was a shortcut, intended to get self-driving cars on the roads sooner. It simplifies the effort required to determine the distance of objects, dramatically reducing the CPU power required to determine what visual data is and is not relevant.

However, there's no reason whatsoever that you can't get the same depth data from stereo cameras (either with the same angle of view or not) or even from consecutive frames from the same camera (either by literally generating a point cloud from multiple images or by training a neural network using multiple inputs simultaneously so that it learns to understand depth implicitly). Doing it with one or more cameras just requires considerably more computational resources. But computers are getting faster every day, which is to say that LIDAR has always been a temporary shortcut until such time as computational power is adequate to do the job without it.
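For a concrete sense of the stereo route, here's a minimal sketch using OpenCV's classical block matcher. The file names, focal length, and baseline are assumed values for illustration, not anything from a real car.

```python
# Stereo depth in miniature: disparity from a rectified image pair, then
# depth = focal_length * baseline / disparity.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

FOCAL_PX = 700.0    # assumed focal length in pixels
BASELINE_M = 0.12   # assumed distance between the two cameras, in meters
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```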

There is really no question about whether we will eventually reach that point. The only plausible question is how long that will take.
 

No. Lidar is much more than that. It vastly simplifies object recognition and tracking. Instead of needing an AI to do image recognition, you can use well tested and long established algorithms with lidar. It's far more than simply a reduction in CPU load.
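As a rough sketch of what those long-established algorithms look like: drop the ground returns, then group what's left into candidate objects with plain Euclidean clustering, no neural network involved. The thresholds and names here are mine, purely for illustration.

```python
# Classical lidar object detection in miniature: ground removal + clustering.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_sweep(points: np.ndarray, ground_z: float = -1.5) -> list:
    """points: (N, 3) x/y/z returns from one sweep; ground_z is an assumed road height."""
    above = points[points[:, 2] > ground_z + 0.2]   # discard road-surface returns
    if above.shape[0] == 0:
        return []
    # DBSCAN groups returns closer than eps into candidate objects.
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(above)
    # Each non-noise label is one object to track from frame to frame.
    return [above[labels == k] for k in range(labels.max() + 1)]
```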

Tesla vastly underestimated how difficult it would be to get AI to do image recognition. They were clearly hoping to not have to build a 3D model of the world and instead rely on just identifying obstacles (cars, barriers, pedestrians etc.) and the road boundaries. As we have seen from Waymo's released data, there are many cases where that is inadequate.

Tesla may eventually get there, but not with HW3 or anything available today. Recognizing a child is one thing; recognizing that they look like they might step into the road is quite another. Presumably drivers don't want the random braking that plagues current AP to get even worse as it starts to consider pedestrians and the like.
 
LIDAR is absolutely awesome, until it stops being awesome and becomes completely unusable. And that's basically whenever there's fog, rain, snow, reflective surfaces, or similar. There are some pretty neat solid-state LIDAR sensors available now, but they too are very much subject to heavy processing and filtering to clean up the return signal. LIDAR is a lot like RADAR in that regard.

But the biggest issue is this: while you can filter out rain and snow to some degree, raindrops on the sensor's cover window wreak havoc with the resulting data. Droplets are basically tiny lenses that divert the beam depending on the angle of incidence.

The failure modes of LIDAR include detecting objects where there are none, not detecting existing objects, detecting existing objects at the wrong position and with the wrong movement vector, as well as detecting multiple copies of a single existing object.

Is Tesla right about their approach? I don't know. But I do know that LIDAR is far from as perfect as its proponents claim it to be. Even as an augmentation sensor it can be extremely tricky to rely on.

At this point honestly I think the easiest path towards autonomy is to also augment the road with radio and visual markers specifically designed to aid autonomous navigation, much like we have optical road signs and road markings that aid human drivers.
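Fiducial markers of the kind robotics already uses would be one way to do that. Here's a sketch with ArUco markers (needs OpenCV 4.7+, where the aruco module is built in); the idea of mapping marker ids to sign semantics is my invention for illustration.

```python
# Sketch: reading machine-readable roadside markers with ArUco.
import cv2

frame = cv2.imread("roadside.jpg")   # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    # In a real deployment each id would map to sign semantics
    # ("lane closed ahead", "speed limit 50", ...).
    print("marker ids in view:", ids.ravel())
```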
 
No. Lidar is much more than that. It vastly simplifies object recognition and tracking. Instead of needing an AI to do image recognition, you can use well tested and long established algorithms with lidar. It's far more than simply a reduction in CPU load.

No, lidar only simplifies object detection; object recognition requires vision. You don't get out of classifying objects just because you made it easier to not run into them. (One of the questions below is sketched in code after the list.)

Is that a bush or a pedestrian?
Is that a bicycle or a motorcycle?
Is that a light red, yellow, or green light? Is it for this lane? Is there a no turn on red sign?
Does the temporary sign say stop, road closed, road closed to through traffic, or detour?
Did the speed limit change?
Is the person in the road a random jaywalker or a traffic officer?
Is that two vehicles traveling close together, or one vehicle with a trailer?
Is that a trolley, or a vehicle turning left?
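To make the traffic-light question concrete: color never shows up in a point cloud, so even a trivial check like this needs camera pixels. This is a crude hue heuristic with invented thresholds, not production logic.

```python
# Camera-only traffic-light color check; lidar has no answer here.
import cv2
import numpy as np

def light_color(bgr_patch: np.ndarray) -> str:
    """bgr_patch: a cropped image of the lit lamp."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    bright = hsv[..., 2] > 128                 # only consider lit pixels
    if not bright.any():
        return "unknown"
    h = float(np.median(hsv[..., 0][bright]))  # OpenCV hue range is 0..179
    if h < 15 or h > 165:
        return "red"
    if h < 40:
        return "yellow"
    return "green"
```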
 
LIDAR is absolutely awesome, until it stops being awesome and becomes completely unusable. And that's basically whenever there's fog, rain, snow, reflective surfaces or similar.

Lidar can cope just fine with rain and snow. It's just extra noise to be filtered. Fog is harder but not impossible, and to the extent that it impairs lidar it will be as bad or worse for cameras.
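One common flavor of that filtering, sketched with assumed thresholds: drop returns whose neighborhood is suspiciously sparse, which is what isolated rain and snow hits tend to look like.

```python
# Statistical outlier removal: rain/snow returns tend to be isolated points.
import numpy as np
from scipy.spatial import cKDTree

def drop_sparse_returns(points: np.ndarray, k: int = 8, std_ratio: float = 2.0):
    """points: (N, 3) lidar returns; keep points with dense local neighborhoods."""
    dists, _ = cKDTree(points).query(points, k=k + 1)  # neighbor 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```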
 
No, lidar only simplifies object detection, object recognition requires vision.

Incorrect. Lidar simplifies recognition by providing 3D data about the shape of the object.

Try googling "lidar object recognition"; there are numerous papers discussing how it works. Note also that it is very resilient to poor weather conditions.
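For a flavor of shape-based recognition from 3D data alone, here's a toy heuristic over one clustered object. The thresholds are invented for illustration; real systems use far richer features (and often neural nets on the point cloud too).

```python
# Toy shape-based classifier: bounding-box extents of one lidar cluster.
import numpy as np

def guess_class(cluster: np.ndarray) -> str:
    """cluster: (N, 3) points belonging to a single detected object."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    height = extent[2]
    length, width = sorted(extent[:2], reverse=True)
    if height > 1.2 and length < 1.0:
        return "pedestrian-like"    # tall, narrow footprint
    if length > 3.0 and width > 1.4:
        return "vehicle-like"       # long, wide footprint
    return "unknown"
```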