Welcome to Tesla Motors Club

There will be NO HW4 upgrade for HW3 owners

The processor. There are only so many distinct scenarios that a given set of processing hardware can handle. Compute, memory, bandwidth, etc.
NNs and generalized AI don't really work that way. Plus, this would require knowing that the current HW3 is already up against its compute limits.

It'll have to realize something, but that something may be as simple as "A pedestrian is standing in the road and won't move".
At this point, you've recognized the road, road markings, road control signs, a human, and the lack of the "human intent" to move. But the thing you're going to be compute constrained on is interpreting hand motions, despite the fact that if you're 100% sure of all the other things, you could now devote all compute to those hand signals?

The issue with all of this is 100% software, which Tesla is so far away from right now that it's impossible to tell where they will be HW constrained.

Drivers would then develop a sense of when the car is likely to deal with a given situation successfully.
Oh, in other words, an L2 system like they have now, where the driver is responsible.
 
Who said anything about additional cameras? The proposal was to replace the cluster of three front-facing cameras (wide, normal, narrow) with two front-facing cameras pointed like X-Y microphones, where the one on the right points left and the one on the left points right. It would actually involve fewer camera inputs.

Assuming the new cameras are of similar resolution, going from three down to two is trivial.
That would be something different from HW4 and would require significant engineering work to retrain the FSDb software. Certainly not a trivial upgrade by any means, and little more than fantasy.
 
The processor. There are only so many distinct scenarios that a given set of processing hardware can handle. Compute, memory, bandwidth, etc.
That's not how that works. It's not a list of distinct scenarios stored in memory and processed as they occur, which could only hold or process so much. It's a bunch of neural networks, each trained to accomplish one or more tasks: one network could be tracking all vehicles and moving objects, while another tracks skeletal movements (which in turn can be used to predict gestures), finds drivable space and lanes, finds and detects traffic lights, etc.

Currently FSDb sees humans as cuboids; it does not track human pose. That's not to say it can't do pose estimation because of processor limitations; it just does not have the software capability yet.
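To make the multi-network point concrete, here's a toy sketch (not Tesla's code; every name and number is invented) of how several task "heads" can share one backbone's features, so each additional task reuses most of the compute already spent on the camera frame:

```python
# Toy sketch of multiple task networks sharing one backbone pass per frame.
# The "backbone" and "heads" here are trivial stand-ins for real NNs.
from typing import Callable, Dict, List

def backbone(frame: List[float]) -> List[float]:
    # Stand-in for a shared vision network: one pass over the frame.
    return [x * 0.5 for x in frame]

# Each head is a small function over the shared features; real heads
# (moving objects, drivable space, traffic lights) would be NNs.
heads: Dict[str, Callable[[List[float]], float]] = {
    "moving_objects": max,
    "drivable_space": lambda f: sum(f) / len(f),
    "traffic_lights": min,
}

def run_frame(frame: List[float]) -> Dict[str, float]:
    features = backbone(frame)  # computed once per frame
    return {name: head(features) for name, head in heads.items()}
```

Adding a pose-estimation head to a setup like this is mostly a software problem, which is the point being made above: the backbone pass is already paid for.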
NNs and generalized AI don't really work that way. Plus, this would require knowing that the current HW3 is already up against its compute limits.


At this point, you've recognized the road, road markings, road control signs, a human, and the lack of the "human intent" to move. But the thing you're going to be compute constrained on is interpreting hand motions, despite the fact that if you're 100% sure of all the other things, you could now devote all compute to those hand signals?

The issue with all of this is 100% software, which Tesla is so far away from right now that it's impossible to tell where they will be HW constrained.


Oh, in other words, an L2 system like they have now, where the driver is responsible.
While you are right that generalized AI doesn't work the way @JB47394 suggests, those tasks all run within a processing cycle or frame, some concurrently, and there is only so much you can fit within a frame. We do know they are hardware constrained, with no redundancy in compute on HW3.

We know they use both compute nodes in HW3 to run the different networks required for FSDb to function. That's not to say there isn't room for optimization in the future.

[Screenshot: FSD networks split across SoC-A and SoC-B]


SoC-B was supposed to be a backup used for redundancy, meaning it would run the same things as SoC-A and could take over if SoC-A failed. But as we can see, it is instead used in parallel with SoC-A to run other networks, because both are compute constrained.

SoC-A runs
  • Occupancy network
  • Moving Objects network
  • Path Planning

SoC-B runs
  • Lanes network
  • Traffic controls & road signs network
  • Part of the occupancy network
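The split above can be sketched as a per-frame budget problem. All costs below are made-up numbers for illustration only (no per-network figures have been published); the point is that each SoC's workload must fit in one frame, while the combined load would not fit on a single node:

```python
# Hedged sketch: each network gets a hypothetical per-frame cost, and a
# node's assigned networks must finish within its frame budget.
FRAME_BUDGET = 1.0  # one frame of compute per SoC, normalized

NETWORK_COST = {  # invented numbers, for illustration only
    "occupancy": 0.5,
    "moving_objects": 0.3,
    "path_planning": 0.15,
    "lanes": 0.4,
    "traffic_controls_signs": 0.3,
    "occupancy_partial": 0.2,
}

ASSIGNMENT = {
    "soc_a": ["occupancy", "moving_objects", "path_planning"],
    "soc_b": ["lanes", "traffic_controls_signs", "occupancy_partial"],
}

def fits_in_frame(node: str) -> bool:
    # A node's networks must complete within its per-frame budget.
    return sum(NETWORK_COST[n] for n in ASSIGNMENT[node]) <= FRAME_BUDGET
```

With these numbers, each node fits (0.95 and 0.90 of a frame), but the total load (1.85) exceeds one node's budget, which is exactly why SoC-B can't sit idle as a pure backup.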
 
Currently FSDb sees humans as cuboids; it does not track human pose. That's not to say it can't do pose estimation because of processor limitations; it just does not have the software capability yet.
Do you believe that doing pose estimation will involve additional processing? That is, one or more new networks for classification, plus working that information into the overall control system?
 
Can't say for sure what they need, but it is safe to assume HW3 is not going to be enough, especially because there is no redundancy in compute.
L4 does not require mirror redundancy, only safety redundancy. As long as the system can run a bare-minimum fallback when either chip fails (pulling to a stop in its own lane, or optionally pulling to the side of the road), that is all that is required. So using both chips in normal operation (without them being exact mirrors) does not necessarily fail the redundancy test.
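The safety-redundancy idea above can be sketched as a tiny mode switch. This is an illustration of the concept only, not any real vehicle's failover logic; the mode and task names are hypothetical:

```python
# Sketch: a full mirror is not required for L4, only enough surviving
# compute after a single failure to execute a minimal-risk maneuver.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()    # both SoCs healthy, full workload split across them
    FALLBACK = auto()  # one SoC lost: bare-minimum stop maneuver only

def select_mode(soc_a_ok: bool, soc_b_ok: bool) -> Mode:
    # Any single-chip failure drops the system into the fallback mode.
    return Mode.NORMAL if (soc_a_ok and soc_b_ok) else Mode.FALLBACK

def tasks_for(mode: Mode) -> list:
    if mode is Mode.NORMAL:
        return ["occupancy", "moving_objects", "lanes", "path_planning"]
    # Only what is needed to stop in lane (or pull to the side of the road).
    return ["stop_in_lane"]
```

The key design point is that `tasks_for(Mode.FALLBACK)` must fit on one chip; nothing says the NORMAL workload has to.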
 
L4 does not require mirror redundancy, only safety redundancy. As long as the system can run a bare-minimum fallback when either chip fails (pulling to a stop in its own lane, or optionally pulling to the side of the road), that is all that is required.
And in the same way, they don't have to do all operations all the time. Humans focus on things as needed. Nothing says the car can't come to a stop when there is a human directing traffic, and change the NN to process human hand gestures instead of processing lane lines or stop signs.
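That "focus as needed" idea can be sketched as a simple network scheduler: once the car is stopped for a person directing traffic, the frame budget spent on lane lines and stop signs could be reallocated to a gesture network. All names here are hypothetical, for illustration only:

```python
# Sketch: swap which networks run based on the driving situation,
# rather than running every network on every frame.
def active_networks(stopped_for_director: bool) -> list:
    always_on = ["occupancy", "moving_objects"]  # never safe to drop
    if stopped_for_director:
        # At 0 mph, lane and sign tracking can pause; spend that frame
        # budget on interpreting the director's hand gestures instead.
        return always_on + ["hand_gestures"]
    return always_on + ["lanes", "stop_signs"]
```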

But anyone trusting anything Tesla says needs to remember that they said HW2 was enough, then HW2.5 (with double the compute), then they couldn't even do stop lights on HW2.5, and now we have HW3, which is compute constrained while only doing L2. But sure, don't worry, HW4 will finally crack that L4 nut.
 
And in the same way, they don't have to do all operations all the time. Humans focus on things as needed. Nothing says the car can't come to a stop when there is a human directing traffic, and change the NN to process human hand gestures instead of processing lane lines or stop signs.

But anyone trusting anything Tesla says needs to remember that they said HW2 was enough, then HW2.5 (with double the compute), then they couldn't even do stop lights on HW2.5, and now we have HW3, which is compute constrained while only doing L2. But sure, don't worry, HW4 will finally crack that L4 nut.
The reality is they won't know the minimum required until they reach that point. And when they do, it's not out of the question to optimize the computing to minimize resource usage. But while in the development stage, and especially for L2, using the hardware to its fullest makes sense.
 
L4 does not require mirror redundancy, only safety redundancy. As long as the system can run a bare-minimum fallback when either chip fails (pulling to a stop in its own lane, or optionally pulling to the side of the road), that is all that is required. So using both chips in normal operation (without them being exact mirrors) does not necessarily fail the redundancy test.
You are right that all you need is to bring the system to a safe stop in case of a failure, but Tesla's approach was a dual redundant compute system whereby, if one node fails, the car keeps going as if nothing happened. For a while they ran on a single node, until they ran out of compute on that node and started pushing things onto the other one.
[Screenshot: FSD compute node usage]
 
You are right that all you need is to bring the system to a safe stop in case of a failure, but Tesla's approach was a dual redundant compute system whereby, if one node fails, the car keeps going as if nothing happened. For a while they ran on a single node, until they ran out of compute on that node and started pushing things onto the other one.
[Screenshot: FSD compute node usage]
Yes, I'm aware that Tesla's original design was a full mirror; I just wanted to point out that a mirror is not necessary for L4. It's a subtle distinction that is easy to miss. I've seen even experts miss it and claim that pushing things onto the other node means L4 is impossible on HW3, when mirroring is not actually part of the requirement, even though it is nice to have.
 