I should have checked TMC before posting the question.

This post has the most interesting information: Do any HW4 Vehicles have FSD Beta activated ?

Also this one: Do any HW4 Vehicles have FSD Beta activated ?
Is the understanding that it's the sensor inputs (i.e. camera pixel counts, etc.) that are the issue with FSD running on HW4 vs. HW3?

I would have expected the actual hardware to be a superset of HW3: more/faster NN processing cores, more memory, higher on/off-die bandwidth, more/faster ARM CPU cores, etc. I'd also expect some new architectural additions (i.e. new CPU instructions, additional data types the NN cores can process, etc.). This would allow all the old code to run on the new hardware, just faster, whereas new code that took advantage of the new features wouldn't run on the older HW, in much the same way new x86 CPUs can run old software, but new software that uses features like SSE instructions won't work on old CPUs.
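To make the x86 analogy concrete, here's a rough Python sketch of runtime feature detection (Linux-only, purely illustrative, and nothing to do with Tesla's actual stack): old code that skips the check still runs on new CPUs, while new code lights up newer instructions only where they exist.

def cpu_flags():
    # Read the feature flags Linux reports for the CPU (x86 "flags" line).
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if "avx2" in cpu_flags():
    print("AVX2 present: take the newer vectorized code path")
else:
    print("No AVX2: fall back to the baseline code path")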

If so, the NNs should largely run unmodified except for things that are timing- or sensor-format-dependent... which is hopefully a small amount of modification...

Or is the thought that HW4 is a significant architectural departure from HW3?
 
It is also a pretty worthless one, as not everyone disengages for the same reasons. (Or for any reason at all.)
With millions of cars, this is not part of the equation. In the end, it's a good metric: the other reasons for disengagement have no relation to the FSD version running, so you can remove them from the equation.
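A toy model (every number here is invented) of why the unrelated disengagements wash out when comparing versions at fleet scale:

# Hypothetical disengagements per 1,000 miles. "baseline" covers
# disengagements unrelated to the software (habit, comfort, avoiding
# friction brakes) and is the same whichever FSD version is installed.
baseline = 4.0
v11_software_caused = 2.5
v12_software_caused = 1.0

v11_observed = v11_software_caused + baseline  # 6.5
v12_observed = v12_software_caused + baseline  # 5.0

# The between-version difference isolates the software change,
# because the constant baseline cancels out.
print(v11_observed - v12_observed)  # 1.5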
 
It could be that EM is thinking more about AGI than a human-centric, profitable FSD. We (Tesla investors) should be given some sort of progress metric at some point. Right now things look like a billion-dollar science experiment without any goal or method of assessing success.

It sort of seems the entire justification for Dojo (auto-labeling) is gone. Rejiggering the purpose-built Dojo for a new task is going to take time.

It would not be too cynical to observe that xAI is really the beneficiary of the V12 experiments.
Labelling is important. Labels allow the languages of the semantic world to be related to the objects of the physical world. So labelling ability will not be wasted.

"Optimus, please gently pick up the yellow chair in the right corner of this room and take it to my niece"
 
Interesting Elon response to a months-old post. He has said this many times, of course, but why repeat it here?

[attached screenshot of Elon's reply]
 
...

If the AI gets confused whilst driving in a blizzard, I don't see how this is different than a human not being able to drive through a snowstorm.

...

Exactly! Way too many critics come up with hypotheticals of "could the car handle THIS?" when the actual answer is "nobody should be driving in these conditions except in an emergency... and maybe the AI will be smart enough to stay home."

When a heavy snowstorm, heavy rain with risk of flash flooding, or other hazardous weather is predicted in my area, all of the weather reports and warnings say to "avoid travel". People know this but go out anyway... and some fraction will have an accident, get stuck in mud or snow, or end up in one of those 18-hour traffic jams that hit the news every couple of years because an entire area packed with cars became impassable. Emergency personnel would all be much happier, and safer themselves, if this could be avoided or at least reduced.

Tesla's cars already check the weather (wind, etc.) to provide better battery-use predictions... it's not much of a leap to also check the forecast for the next several hours or even days for weather-related hazards and actually follow official warnings/recommendations. If you simply "must" go out, the driving, and the liability, should be on the person making that choice.
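As a sketch of how small a leap it is, here's a minimal Python example (my own illustration, not anything Tesla ships; the api.weather.gov endpoint is real, but the hazard list and decision rule are assumptions) that checks the US National Weather Service's public alerts for a location before a trip:

import json
import urllib.request

def active_alerts(lat, lon):
    # Query active NWS alerts for a point; each GeoJSON feature carries
    # the alert type in properties.event.
    url = f"https://api.weather.gov/alerts/active?point={lat},{lon}"
    req = urllib.request.Request(url, headers={"User-Agent": "weather-gate-demo"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [f["properties"]["event"] for f in data.get("features", [])]

# Assumed list of trip-blocking hazards, purely for illustration.
HAZARDS = {"Blizzard Warning", "Winter Storm Warning", "Flash Flood Warning"}

alerts = active_alerts(44.97, -93.26)  # e.g. Minneapolis
if HAZARDS.intersection(alerts):
    print("Official warning active -- advise against the trip:", alerts)
else:
    print("No blocking weather alerts")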

It will be a whole extra level of FSD in a different type of vehicle, one intended to drive in (almost) all weather for emergency situations... although maybe it will be something like a dually Cybertruck with even more ground clearance, driving on big knobby tires.
 
True, but perhaps they could be categorized. I usually disengage before every stop so FSD won't use the friction brakes.
So now you want Tesla to employ a bunch of people to analyze every disengagement and guess why the driver actually disengaged? That sounds really expensive and would likely produce a lot of bad data. (I think people said the same thing about the snapshot button early FSDb testers had: just having a snapshot was enough, and Tesla could figure out why they hit the button based on the data. But since Tesla has since moved to having people record voice reasons for the disengagement, it seems the event, and data, alone were not enough. And no, the voice response isn't enough either, as they don't always ask for one, people don't always provide one, and sometimes the stated reason doesn't provide enough information.)
 
Is the understanding that it's the sensor inputs (i.e. camera pixel counts, etc.) that are the issue with FSD running on HW4 vs. HW3?

I would have expected the actual hardware to be a superset of HW3: more/faster NN processing cores, more memory, higher on/off-die bandwidth, more/faster ARM CPU cores, etc. I'd also expect some new architectural additions (i.e. new CPU instructions, additional data types the NN cores can process, etc.). This would allow all the old code to run on the new hardware, just faster, whereas new code that took advantage of the new features wouldn't run on the older HW, in much the same way new x86 CPUs can run old software, but new software that uses features like SSE instructions won't work on old CPUs.

If so, the NNs should largely run unmodified except for things that are timing- or sensor-format-dependent... which is hopefully a small amount of modification...

Or is the thought that HW4 is a significant architectural departure from HW3?
I don't know for sure, but I think the HW4 computer has significant architectural differences from HW3. It isn't just a process-node and core-count upgrade; that's how they can get 3-4x speed improvements over HW3, when otherwise it would be more like 40% at best (chart CPU speed improvements over the last few years to see what I mean). If I'm right, that would mean the server farm has to create a brand-new model for HW4, meaning that Tesla now has to iterate both HW3 and HW4 models in parallel. So they automatically need to double (or more, since maybe HW4 is more server-intensive) their server compute for this, on top of all the other server compute loads V12 is adding.
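Back-of-envelope on what that could mean for training compute (every number below is an assumption):

# Normalize one HW3 training cycle to 1 unit of compute.
hw3_training_cost = 1.0
hw4_relative_cost = 1.5   # guess: higher-res cameras -> costlier model

total = hw3_training_cost + hw3_training_cost * hw4_relative_cost
print(f"~{total:.1f}x the compute of a single-platform pipeline")  # ~2.5x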

Anyone who knows more details care to chime in?
 
I don't know for sure, but I think the HW4 computer has significant architectural differences from HW3. It isn't just a process-node and core-count upgrade; that's how they can get 3-4x speed improvements over HW3, when otherwise it would be more like 40% at best (chart CPU speed improvements over the last few years to see what I mean). If I'm right, that would mean the server farm has to create a brand-new model for HW4, meaning that Tesla now has to iterate both HW3 and HW4 models in parallel. So they automatically need to double (or more, since maybe HW4 is more server-intensive) their server compute for this, on top of all the other server compute loads V12 is adding.

Anyone who knows more details care to chime in?
If that's the case, I'd love another Jim Keller-style video talking about the architecture...
 
I don't know for sure, but I think the HW4 computer has significant architectural differences from HW3. It isn't just a process-node and core-count upgrade; that's how they can get 3-4x speed improvements over HW3, when otherwise it would be more like 40% at best (chart CPU speed improvements over the last few years to see what I mean). If I'm right, that would mean the server farm has to create a brand-new model for HW4, meaning that Tesla now has to iterate both HW3 and HW4 models in parallel. So they automatically need to double (or more, since maybe HW4 is more server-intensive) their server compute for this, on top of all the other server compute loads V12 is adding.

Anyone who knows more details care to chime in?
So invest in AI server companies? Is Nvidia the only play here? The H100 systems are built with Intel cores, from what I just read. Silent winner? You wouldn't know which processor is even used on this entire page (other than the mention of PCIe, which sounds like Intel's bus architecture). Almost like the Intel processors were white-labeled and Nvidia rebranded them as their own.
 
Why?

New technology doesn't work until it does. That's the nature of the beast.

We had a lot of failed airplanes before Kitty Hawk.

Because Tesla has a track record of failing and then changing their approach, with the CEO overstating capability each time. Remember the snake charger? That brief effort was the result of somebody embarrassing Musk at an event when he was talking about summoning the car across the country. That was _2015_.

Also, as far as a "ChatGPT moment" goes, Tesla's already there. ChatGPT makes tons of mistakes requiring human intervention and sometimes just makes stuff up that's completely wrong. ChatGPT has a lot of value despite the errors because it has a low cost of error. FSD has a high cost of error, so it has to be way better than ChatGPT to have significant value.
 
So invest in AI server companies? Is Nvidia the only play here? The H100 systems are built with Intel cores, from what I just read. Silent winner? You wouldn't know which processor is even used on this entire page (other than the mention of PCIe, which sounds like Intel's bus architecture). Almost like the Intel processors were white-labeled and Nvidia rebranded them as their own.
Nvidia has gone with more power-hungry, lower-performance Intel cores, while their last gen used AMD Epyc CPUs. The reason is that AMD is on the verge of releasing its MI300, an x86 CPU+GPU in a single package (think PS5), much like Nvidia's upcoming Grace product (Arm CPU + GPU). Based on AMD's guidance for Q4, no, Nvidia is not the only game in town. Hampered by software before, AMD's Stable Diffusion performance finally matched Nvidia's as of two weeks ago (Nvidia used to be 16x faster). They have also managed to port existing PyTorch code from Nvidia to AMD with no code changes, under MosaicML.
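For a sense of what "no code change" means in practice: PyTorch's ROCm builds expose AMD GPUs through the same torch.cuda interface, so device-agnostic code like this little sketch runs unmodified on either vendor:

import torch

# On Nvidia (CUDA) and AMD (ROCm) builds alike, torch.cuda.is_available()
# reports the GPU and the "cuda" device string maps to it -- ROCm
# deliberately reuses the CUDA-facing API so existing code just works.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)
y = model(x)  # executes on whichever vendor's GPU stack is installed
print(y.shape, device)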

This is all good news, because not only will Tesla have access to alternatives to Nvidia, but we may also see margins drop from Nvidia's price-gouging practices. Rumor has it Nvidia's H100 GPUs carry a markup of around 1000% over manufacturing cost.
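For scale, here's what a ~1000% markup would imply; the cost and price figures below are illustrative guesses, not confirmed numbers:

# Assumed figures purely for illustration.
unit_cost, sale_price = 3_000, 33_000   # dollars

markup = (sale_price - unit_cost) / unit_cost         # 10.0 -> 1000%
gross_margin = (sale_price - unit_cost) / sale_price  # ~0.91 -> 91%

print(f"markup: {markup:.0%}, gross margin: {gross_margin:.0%}")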
 
I think people should stop hoping that this time it's going to be different.

But that's how technological breakthroughs happen. You iterate, trying different approaches coupled with new knowledge; the attempts fail but bring new wisdom, and after enough failed attempts one finally works.

It only has to work ONCE to be a new disruptive technology. And Tesla's current version of FSD is getting extremely close, so if V12 is more capable, it will get even closer, and maybe it will be the one to finally solve real autonomy.