I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.

This is the most profound statement ever from Elon. It's the breakthrough we've all been waiting for. We've jumped the chasm from being a car company to truly being an AI company. FSD, Dojo as a service and Bot will be unleashed with this.
 
When pigs can fly, indeed.
 
You guys are ignoring all the bread crumbs Elon has been leaving. He's telling his followers it's time to load up ... that FSD is solved. He has no reason to try and pump the stock right now ... even if he did that kind of thing.
After six years of falling for Elon’s bs my wipers still don’t work and last week my radio crashed.
 
That's the wrong way to evaluate what's happening. These are progress reports, not projections of future progress. And it's way more than one offhand comment; he's repeatedly reinforcing the narrative that they have the solution to FSD.

Uh huh. I’ve been waiting since the Fall of 2016 and he’s been “repeatedly reinforcing the narrative” every few weeks since then. It’s baloney and he’s a charlatan.
 
Also, it should've been clear from the Douma/Green cites alone, but it's specifically the NNs that are having this issue, which, as a couple of folks pointed out, run on the NPUs, not the CPUs. So getting rid of C code that was hitting the CPUs not only won't help this problem; the fact that they're moving that work to NNs means the out-of-compute-for-NNs single-node issue will get worse with that transition. (It might well improve the system's capabilities, of course, but at the cost of being even less able to ever fit back in a single node.)
Addressing the underlined section:
Removing C code can shrink the NN, since it no longer needs to be partitioned into discrete chunks with quantized outputs for the C code to read.
If eliminating these intermediate states (and future reshaping) removes more weights than the added functionality requires, the NN can be shrunk.
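
Roughly what I mean, as a toy PyTorch sketch (the layer sizes and the "interface head" here are my own made-up illustration, not anything from Tesla's actual stack): once nothing downstream needs a quantized output to parse, that whole head, and the information it throws away, can be dropped.

import torch
import torch.nn as nn

# Shared feature extractor (stand-in for the vision stack).
features = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
x = torch.randn(1, 256)

# Old style: the NN must emit a discrete, quantized output for C++ heuristics to read.
interface_head = nn.Linear(128, 32)                    # e.g. 32 hand-defined bins/categories
planner_input_old = torch.argmax(interface_head(features(x)), dim=-1)

# End-to-end style: the planning network consumes the raw 128-d features directly,
# so the 128x32 interface weights (and the information lost to argmax) go away.
planner_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))
controls = planner_net(features(x))                    # e.g. steer / accel / brake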
 
Fair point, and I'm willing to accept that if they were only replacing existing functionality they could potentially shrink the NNs. But there's still a fair bit of functionality they need to add (all the missing bits of a complete OEDR), so I remain highly dubious this swap would offset that between here and >L2.
 
The system already seems to lack the ability to retain training data as it is, and there are countless scenarios and edge cases it still needs to cram in, which runs counter to a transition to a smaller NN. It would be interesting to know more HW4 NN details, but I recall they opted for an increase to 10-bit weights, which can only help with generalization.

I wouldn't be surprised if the NN architecture allows a select number of inputs, weights, and outputs to be used. That way, simpler processes like steering, acceleration, and braking could use more optimally sized (smaller) NNs.
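
Something like this toy sketch, in the sense that one network template gets instantiated at very different sizes per task (all the sizes here are invented for illustration, not anything Tesla has published):

import torch.nn as nn

def make_head(n_inputs, n_hidden, n_outputs):
    # Same template, different capacity per task.
    return nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU(),
                         nn.Linear(n_hidden, n_outputs))

steering_head = make_head(64, 32, 1)           # tiny net for a simple control output
accel_brake_head = make_head(64, 32, 2)        # tiny net
lane_geometry_head = make_head(512, 256, 128)  # much larger net for a harder task

for name, m in [("steering", steering_head), ("lanes", lane_geometry_head)]:
    print(name, sum(p.numel() for p in m.parameters()), "weights")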
 
And yet, you bought a 2023 Model Y? You sure showed him! You should probably sell it and go with one of the fine alternatives from Ford or GM.
That’s the intent, but not from Ford or GM.

The car I want isn’t arriving in the US until next summer so between whatever resale value the FSD transfer adds by then (probably equal to EAP), the tax credit, and the fresh carpool stickers I should be able to drive this new car with a warranty for a year and turn a small profit. The 3 was starting to need expensive repairs and had turned into a rattle trap.

If I wouldn’t lose my parking space at work for buying a PHEV I would be in an S60 Recharge right now.
 
Here's a topic for discussion:

Do we believe V12 will be able to run on the HW3 NPUs (neural processing units)? And if not, do you think that warrants some sort of HW3+ retrofit?

The context is that each FSD Chip on the HW3 board has 2 NPUs with a total combined peak performance of 73.73 TOPS, which we've learned is insufficient to run the current neural networks at a working frame-rate. In order to compensate for this, Tesla has extended the compute onto the second (originally meant to be redundant) FSD Chip on the HW3 board, giving them a total of 147.46 TOPS to work with. They evidently had to make this change sometime around 2021.

Meanwhile, each HW4 FSD Chip has 3 NPUs with a total combined peak performance of 121.65 TOPS. If they extend compute onto the redundant HW4 FSD Chip, that's 243.3 TOPS total for neural processing.

Since V12 will be offloading some of the C++ code run on the CPU to NN inference run on the NPUs, it could tax HW3 beyond the total feasible 147 TOPS across both chips.

I know Elon has said that a HW4 retrofit is not feasible, likely due to different camera connectors and harnesses, but I could foresee some sort of HW3+ board, with all the old harnesses and connectors, but upgraded NPUs.

Thoughts?
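
Putting the peak numbers above in one place (peak TOPS only; sustained throughput, memory bandwidth, and scheduling overhead will all eat into this):

hw3_per_chip = 73.73                 # 2 NPUs per HW3 FSD chip
hw3_both_chips = 2 * hw3_per_chip    # 147.46 TOPS once redundancy is given up

hw4_per_chip = 121.65                # 3 NPUs per HW4 FSD chip
hw4_both_chips = 2 * hw4_per_chip    # 243.3 TOPS

print(f"HW3: {hw3_both_chips:.2f} TOPS, HW4: {hw4_both_chips:.1f} TOPS")
print(f"HW4 headroom over HW3: {hw4_both_chips / hw3_both_chips:.2f}x")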
 
"I know Elon has said that a HW4 retrofit is not feasible, likely due to different camera connectors and harnesses, but I could foresee some sort of HW3+ board, with all the old harnesses and connectors, but upgraded NPUs."

I think it's a possibility. Nothing prevents Tesla from building a new CPU/NPU/GPU board that fits the HW3 wiring and has the capability of the current HW4 (or even higher capability). Hardware component size shrinks with time.

I also think the 5 MP cameras for HW4 are still not high enough resolution. Can they go to 16 MP?
 
insufficient to run the current neural networks at a working frame-rate
The qualifier of frame-rate is getting at throughput, but there's also a consideration of latency, which can also affect throughput. From AI Day 2022, there's a scheduling/utilization visualization showing the Compute lines on each of TRIPs 0-3 relatively full, with gaps probably reflecting waiting on the current frame before processing the next frame of data:

[Attachment: AI Day 2022 FSD Networks in Car.png]


Maybe the simplest thing to spot is the yellow blocks for Path planning on TRIP 0, so use that as a reference point for the repeating pattern of processing each camera frame. This seems to show SoC-A processing roughly 2.3 frames in this window, whereas SoC-B seems to get through about 1.6 frames in the same window.

One potential tradeoff is that adding a new network increases latency and decreases the frame-rate, but that could still result in better driving thanks to smarter decisions that don't need constant control adjustments. On the flip side, longer latency spent thinking about the current frame could mean additional milliseconds before changing control when there actually is something new to act on.

Presumably the new Controls network will take all the existing networks as its inputs to make a decision, so there's the additional latency of transferring data between SoCs. The visualization already shows the Occupancy network being handled primarily on SoC-A but feeding inputs to TRIPs 2 and 3 when processing Lanes and Traffic controls.
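
To put rough numbers on that tradeoff (these timings are entirely made up for illustration; I don't know the real per-network latencies):

existing_nets_ms = 27.0   # hypothetical per-frame time for today's networks
controls_net_ms = 8.0     # hypothetical added time for a new Controls network
soc_transfer_ms = 2.0     # hypothetical SoC-A -> SoC-B transfer cost

old_fps = 1000.0 / existing_nets_ms
new_fps = 1000.0 / (existing_nets_ms + controls_net_ms + soc_transfer_ms)
print(f"old: {old_fps:.1f} fps, new: {new_fps:.1f} fps")

Whether dropping from ~37 to ~27 fps (in this made-up example) is a net win depends entirely on how much smoother the resulting decisions are.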
 
I know Elon has said that a HW4 retrofit is not feasible, likely due to different camera connectors and harnesses, but I could foresee some sort of HW3+ board, with all the old harnesses and connectors, but upgraded NPUs.
Seems unlikely Tesla would shell out billions of dollars to not sell new cars. If we're lucky there will be an FSD transfer when they obsolete HW3.
 
Seems unlikely Tesla would shell out billions of dollars to not sell new cars. If we're lucky there will be an FSD transfer when they obsolete HW3.

By that logic, they didn't need to upgrade HW2.5 buyers to HW3 either, but they did at no additional cost for those that already bought FSD.

They're selling the HW3 retrofit for $1,000 retail right now, including labor. So how much can it cost them to swap a board? $200 in parts and $200 in staff time, max?

If V12 is required for the levels of safety Elon said was achievable with HW3, then I think they could make a retrofit happen.
 
By that logic, they didn't need to upgrade HW2.5 buyers to HW3 either, but they did at no additional cost for those that already bought FSD.
That was peanuts for Tesla. They made all that money back and then some by selling FSD to customers expecting to future-proof the cars. That didn't happen (see the Musk quote that you referenced from the earnings call).
They're selling the HW3 retrofit for $1,000 retail right now, including labor. So how much can it cost them to swap a board? $200 in parts and $200 in staff time, max?

If V12 is required for the levels of safety Elon said was achievable with HW3, then I think they could make a retrofit happen.
Unfortunately, there is zero upside for Tesla to upgrade 1M old cars.
 
That was peanuts for Tesla. They made all that money back and then some by selling FSD to customers expecting to future-proof the cars. That didn't happen (see the Musk quote that you referenced from the earnings call).

Unfortunately, there is zero upside for Tesla to upgrade 1M old cars.
I wasn't aware the uptake on FSD Capabilities was that high. I thought only about 400K cars or so bought into it.
 
That was peanuts for Tesla. They made all that money back and then some by selling FSD to customers expecting to future-proof the cars. That didn't happen (see the Musk quote that you referenced from the earnings call).

I don't follow your logic. It was peanuts to upgrade FSD purchasers from HW2.5 to HW3, but it will be prohibitively expensive to upgrade the compute on roughly the same group of vehicles? Unless you're saying tons more people are buying FSD at its current $15k price.
 