minus the dandruff.
Huh?
Ok, never mind, Head and Shoulders...
Elon said: Our next-gen AI model after this has a lot of promise: ~5X increase in parameter count, which is very difficult to achieve without upgrading the vehicle inference computer.
I wonder if this means Tesla will need to upgrade the FSD computer in all vehicles. Could be expensive.
Do we know if HW3 is upgradable at a Service Center? They could just swap the chip if the cameras don't change... So theoretically it would be an opt-in thing: the customer takes the time and money to do it, and the car would be the same. Not a recall.
This takes me back to the days of dial-up modems and AOL IM chat rooms.
That's a strange capture. Reminds me of "Hand in my Pocket" by Alanis.
[Verse 1]
I'm broke, but I'm happy
I'm right, but I'm WRONG
I'm short, but I'm healthy, yeah
I'm high, but I'm grounded
I'm sane, but I'm overwhelmed
I'm lost, but I'm hopeful, baby
That's some detail there, thanks. I'm still trying to figure it out myself. (Maybe this next para helps dumb it down some.) This is actually important to avoid having the AI pull the wool over our eyes.
Elon said: Our next-gen AI model after this has a lot of promise: ~5X increase in parameter count, which is very difficult to achieve without upgrading the vehicle inference computer.
Feels like salesmanship from Elon: yes, it's hard to do, but we're up to the task.
They're likely doing quantization. One example of what could be done: 8 bits / 5 ≈ 1.6 bits, i.e. ternary quantization, also known as BitNet. See "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits".
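To make the "1.58 bits" idea concrete, here's a minimal sketch of absmean ternary quantization in the style of the BitNet b1.58 paper: scale weights by their mean absolute value, then round each one to {-1, 0, +1}. (This is an illustrative toy, not Tesla's or BitNet's actual implementation.)

```python
def ternary_quantize(weights):
    # BitNet b1.58-style "absmean" quantization:
    # scale by the mean absolute weight, then round each weight to {-1, 0, +1}.
    gamma = sum(abs(w) for w in weights) / len(weights)
    quant = [max(-1, min(1, round(w / gamma))) for w in weights]
    return quant, gamma  # dequantize a weight as q * gamma

q, gamma = ternary_quantize([0.4, -0.05, 1.2, -0.7])
# q holds only values from {-1, 0, +1}; storage per weight drops
# from 32 bits (FP32) to log2(3) ~= 1.58 bits.
```

Since each weight takes one of three values, three weights pack into under 5 bits, which is where the ~5x storage/bandwidth win over 8-bit comes from.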
Things that can be done to lower latency:
- A quantized model is usually faster to decode than FP32.
- Make the model wider rather than taller to keep latency down (this assumes the hardware can do plenty in parallel).
- Use two models, one fast and one not so fast: the fast model reacts to emergency situations while the not-so-fast model does most everything else.
- Architecture improvements like block transformers: "Block Transformer: Global-to-Local Language Modeling for Fast Inference"
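The two-model idea above can be sketched as a simple control loop: a tiny model runs every tick and can always preempt, while a larger model refreshes the plan at a lower rate. Everything here (function names, the tick scheme, the string actions) is hypothetical, just to show the dispatch pattern, not how Tesla actually does it.

```python
def fast_model(obs):
    # tiny network stand-in: runs every tick, low latency, emergencies only
    return "brake" if obs.get("obstacle_close") else None

def slow_model(obs):
    # large network stand-in: better planning, too slow to run every tick
    return "follow:" + obs.get("route", "cruise")

def control_step(obs, tick, cached_plan="follow:cruise", slow_every=5):
    # The emergency path always wins, so worst-case reaction latency is
    # bounded by the fast model; the big model only refreshes the plan
    # every `slow_every` ticks and its last output is reused in between.
    emergency = fast_model(obs)
    if emergency is not None:
        return emergency, cached_plan
    if tick % slow_every == 0:
        cached_plan = slow_model(obs)
    return cached_plan, cached_plan
```

The point of the pattern: end-to-end latency for safety-critical reactions is decoupled from the big model's inference time.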
Why would I pay extra money for a thing I was promised with a prior purchase for no additional cost? That seems extremely foolish advice...

Since I did that multiple times, I'll respond to that question only. I (not necessarily anyone else) bought that option each time it was available for my new Tesla, explicitly because I knew I was defraying cash needed for further development. It wasn't in the P&L, since Tesla did not recognize it as income, but cash flow counts for me...
Isn't it ironic?
Why would I pay extra money for a thing I was promised with a prior purchase for no additional cost? That seems extremely foolish advice.
Doubly so when we have no idea if HW4 will be enough either, and my existing HW3 car continues to perform flawlessly. What sense would spending tens of thousands of extra dollars make here?
Here we go again… I hope people can read between the lines here.
Robotaxi-level FSD will not operate on HW3
Notice that in order to get the purported 5x to 20x improvement in reliability (according to Musk), the model had to increase 5x in parameter count.
So no, Tesla could not simply train with more data for longer to get that large an improvement. They needed to increase model size, as I've speculated in the past.
Now the parameter count barely fits on HW3, and probably only because of efficiency moves like reduced compute precision.
But we are still, say, 50x away from robotaxi levels.
How much larger is the model going to need to get? Certainly at least another 10x in size.
No way in the world HW3 will handle that. And HW4 will also be challenged.
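Rough back-of-the-envelope arithmetic for why a 5x parameter jump squeezes fixed hardware: weight memory scales with parameter count times bits per weight, so a 5x bigger model only fits in the same budget if precision drops by roughly the same factor. The sizes below are made-up illustrative numbers, not known Tesla figures.

```python
def model_bytes(params, bits_per_weight):
    # memory footprint of the weights alone, in bytes
    return params * bits_per_weight / 8

# hypothetical sizes for illustration only -- not known Tesla figures
baseline  = model_bytes(1e9, 16)    # 1B params at FP16
scaled    = model_bytes(5e9, 16)    # 5x the params at FP16: 5x the memory
quantized = model_bytes(5e9, 1.58)  # 5x the params at ~1.58 bits/weight
```

At FP16 the 5x model needs 5x the memory; at ~1.58 bits/weight it actually fits below the original footprint, which is the kind of efficiency move that would let a much larger model "barely fit" on the same computer.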
Are you holding on to the old car just to see if Tesla will eventually make good on the HW3 robotaxi promise (or otherwise make you whole)?