Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

Replacing the C++ code with a neural network will lessen the load on the CPU part of the autopilot computer and increase the load on the neural-network part. Only Tesla knows how this affects the available neural-network resources on the autopilot computer: while they are adding load on the neural-network part because of the added V12 functionality, they may also optimise (i.e. lessen) the load used by the V11 functionality.

One cannot understand the difference without understanding what a NN really is and how it does what it does.
The computational "intensity" of a NN is determined by its size, i.e. how many nodes it contains.
The number of nodes is generally an input parameter fixed before training starts.
If it is too small, the "learned knowledge" will be too coarse: it won't 'catch' all the little details needed to be successful, and it will ignore too much data.
If it is too large, it will run slower than needed to attain the desired results. It may also perform poorly because it pays too much attention to unimportant details (how much mud was on each car, were there stickers, did it have open windows, was the music loud, did the driver wear sunglasses? ...)

What is an adequate size for a NN, one that catches all the needed details and ignores all the unimportant stuff, is at the core of the machine-learning problem.
The process is thus incremental: one tries to guess an adequate size, trains the NN, studies the performance, tries a bigger NN, and tries pruning (reducing the number of nodes until performance starts to deteriorate).
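A minimal sketch of that grow-then-prune loop. Everything here is illustrative: `train_and_score` is a hypothetical stand-in for real training plus validation, with a made-up "adequate" size baked in so the example is self-contained.

```python
# Toy illustration of the incremental NN-sizing process described above.
# train_and_score is a stand-in for real training + validation; here it
# simply rewards hidden sizes near an (in practice unknown) adequate size.

def param_count(n_in, n_hidden, n_out):
    # Parameters of a one-hidden-layer dense net: weights + biases per layer.
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

def train_and_score(n_hidden, adequate=64):
    # Fake validation score: peaks at the adequate size, degrades when the
    # net is too small (underfit) or too large (overfit / wasted compute).
    return 1.0 - abs(n_hidden - adequate) / adequate

def grow_then_prune(n_in=8, n_out=2, start=8, tolerance=0.05):
    # Grow: double the hidden size until the score stops improving.
    size, best = start, train_and_score(start)
    while True:
        candidate = size * 2
        score = train_and_score(candidate)
        if score <= best:
            break
        size, best = candidate, score
    # Prune: shrink until performance deteriorates beyond the tolerance.
    while size > 1 and train_and_score(size // 2) >= best - tolerance:
        size //= 2
    return size, param_count(n_in, size, n_out)
```

In real machine-learning practice each `train_and_score` call is an expensive training run, which is exactly why this search is done incrementally rather than exhaustively.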

A NN does not get bigger (costlier) to run just because you feed it additional videos or add additional output signals.
When a human learns a new language or reads another book, he doesn't start speaking or walking slower.
Our brain size (number of neurons) limits how much detail we can hold in our heads. This limit is usually not reached because we also tend to forget unimportant stuff. Computer NNs don't know how to forget on the fly; they are static.
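The fixed-cost point can be made concrete with a rough multiply-accumulate count for a stack of dense layers (a simplification of real vision networks, but the principle carries over): inference cost is a function of the architecture alone.

```python
# Per-inference cost of a dense NN is set by its architecture, not by how
# much data it was trained on. Rough MAC (multiply-accumulate) count for
# a stack of fully connected layers, given as [input, hidden..., output]:

def inference_macs(layer_sizes):
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

before_training = inference_macs([1024, 256, 10])
after_more_data = inference_macs([1024, 256, 10])
# Training on 10x more video changes the weight *values*, not the number
# of multiplications performed per forward pass.
```

Only changing the architecture (more nodes, more layers, more outputs wired through new weights) changes this number, which is the sense in which a deployed NN's compute budget is static.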
 
Question: in PHEVs, is regenerative braking efficient enough to offset the fact that you carry around a battery's mass? Many people don't plug in but recharge the battery with the brakes alone.
This depends on how you drive them. If you put in some additional monitoring and drive to the numbers, then the answer is yes. Otherwise it's no.
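A back-of-envelope version of "drive to the numbers". All figures below are assumptions for illustration (pack mass, speed, round-trip regen efficiency), not measured data for any particular PHEV:

```python
# Back-of-envelope: regen only recovers part of the kinetic energy that
# the extra pack mass cost you to accelerate in the first place.
# All numbers are illustrative assumptions, not real vehicle data.

battery_mass_kg = 150      # assumed extra mass of pack + PHEV hardware
speed_ms = 15              # ~54 km/h, a typical urban cruise speed
regen_eff = 0.6            # assumed round-trip regen efficiency

ke_extra = 0.5 * battery_mass_kg * speed_ms ** 2   # J spent accelerating the pack
recovered = regen_eff * ke_extra                   # J returned per braking event
net_loss_per_stop = ke_extra - recovered           # J lost to carrying the mass
```

With these assumptions every stop-and-go cycle leaks a few kilojoules (before even counting the pack's extra rolling resistance), so regen alone never pays for the mass; it only softens the penalty, which is why driving style decides the answer.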
 
Looks like earth moving machines are finally arriving at Giga Mexico:



The video mentions drone footage by Omar Benavides - but I could not find his uploads anywhere. Perhaps your google-fu is better than mine?

Edit: The guy looks like he is surveying the land. But the digger could just as easily be parked there by the road authorities or someone.
 
George Hotz manages to run his 3 camera input end-to-end NN on a mobile phone CPU (Qualcomm Snapdragon 845 for crying out loud) and drive to the nearest Taco Bell. So I highly doubt these "experts" who say that Tesla FSDBeta cannot run on one node.

But then, he drove to Taco Bell. How do you explain that? That is the real problem.
 
Thank you @NicoV . That is helpful. So the bottom line is we don't yet know if eliminating the 300k lines of C++ will enable it to fit on one processor in HW3, with either the non-optimised or (future) optimised NN. As always, FSD is still a carrot out of reach.
As other folks have pointed out, the C++ stuff likely runs on the CPU cores (ARM based), not the NN processing units.

"Fitting" on one processor could have multiple meanings: The code base is too large to fit within the memory constraints, or it may not execute fast enough to meet the time constraints the system needs for processing video frames at XX FPS etc...

Typically you (or the compiler) can optimize code for speed or for size. For example, you can unroll loops to execute faster, but at the expense of a larger code size. Sometimes both are possible. Or you re-factor and change the approach/algorithm, which typically isn't strictly optimization.
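A quick sketch of the speed-vs-size tradeoff (in Python rather than the compiled C++ the post is about, so the effect here is only about loop-control overhead; in compiled code the compiler can additionally vectorise the widened body):

```python
# Rolled: compact code, one loop-control step per element.
def sum_rolled(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Unrolled by 4: roughly four times the code for this loop body, but only
# one loop-control check per four elements.
def sum_unrolled4(xs):
    total, i, n = 0, 0, len(xs)
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:                  # handle the leftover tail elements
        total += xs[i]
        i += 1
    return total
```

Both functions compute the same result; the unrolled one trades code size (and readability) for fewer iterations of loop overhead, which is exactly the currency a compiler's speed/size optimization flags deal in.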

My guess is that the "doesn't fit" thing is execution-speed related.
 
We owned a Volt and now a RAV4 Prime, and we have many family members and friends with similar plug in hybrids, and we ALWAYS plug them in. I think it is a fallacy to think that most and even many do not plug in.
I believe much of the 'doesn't plug in' data came from business fleets. Employees with hybrid company cars rarely plugged in; conscientious private hybrid owners would.
 
We owned a Volt and now a RAV4 Prime, and we have many family members and friends with similar plug in hybrids, and we ALWAYS plug them in. I think it is a fallacy to think that most and even many do not plug in.
My experience with my only PHEV was that it got 107 mpg if charged regularly vs about 45ish if neglected. As a result it was charged incessantly. The problem was that it retained all the ICE negatives related to complex mechanical devices: oil, belts, emissions, smells, radiators and transmissions.
 
Looking back at the last CPI release on July 12, we had a similar positive pre-market reaction to a cool CPI print. The day's trading was fairly muted, but it did kick off a rally from the local low to the recent high just under $300. Hopefully we've put in a local low and can get back to breaking $300 and the resistance trend line.

 
A longstanding (and totally valid) criticism of PHEVs is that their packs were never used. That made a lot of sense when a small pack only offered low-double-digit miles of range; however, a 24 kWh pack should be sufficient for most people's daily drive. It will be interesting to see at what point PHEVs stop being a distraction and start having a meaningful impact on carbon-emission reduction.
Yes. I view PHEV as a gateway to expose more people to the potential and benefits of all-electric travel (as my daughter’s Prius did for me in 2014). They will certainly be charged and used in all-electric mode by families already owning a BEV, e.g., drivers who want to capture most of the environmental benefits but avoid charging stops on longer trips (my wife, for one). An option for a larger battery to bring local range up to 50-60 miles would make a difference in sales volumes IMHO (I’m waiting on that to replace a hybrid SUV).
 
George Hotz manages to run his 3 camera input end-to-end NN on a mobile phone CPU (Qualcomm Snapdragon 845 for crying out loud) and drive to the nearest Taco Bell. So I highly doubt these "experts" who say that Tesla FSDBeta cannot run on one node.

It's not them "saying" it - it's them having actually modeled, measured, and observed the actual code and behavior. It's not an opinion - it's a fact. The current (and for the past 2+ years) code cannot run on one node and has been (in fact, not in guesswork) using extended compute (i.e. cross-node, with all the performance penalty that entails, because they had no other choice) all that time.

Again this has been covered, exhaustively, in the proper forum for it- it's baffling people keep throwing out random "I don't believe this thing I've read like 2 total paragraphs about ever no matter how many facts I'm unaware of prove it" in this thread when there's a wealth of knowledge available on it elsewhere if you actually care to avail yourself of it. Doubly so when it's such an important part of many folks future valuation models.

Also it should've been clear from just the Douma/Green cites- but it's specifically the NNs that are having this issue- which as a couple folks pointed out run on the NPUs not the CPUs... so getting rid of C code that was hitting the CPUs not only won't help this problem, the fact they're moving that work to NNs means the out-of-compute-for-NNs single node issue will be worse with that transition. (It might well improve the systems capabilities of course, but at cost of being even less able to ever fit back in a single node)



Not saying FSD computer didn't run out of compute 2 years ago, but this back and forth doesn't really matter as they optimize how they utilize their resources and have IMPROVED FSDB drastically over the last 2 years.

It does matter in terms of it ever being capable of robotaxi on HW3 though- because they need redundancy to do that and that's not possible in those constraints.


"The code is now more compute intensive today" implies that FSDb would be underperforming as it's being choked by the hardware that was obsolete 2 years ago but this is simply not the case.

It doesn't imply that at all.

It implies it was choked by ONE NODE, and they've made all those gains by using cross-node compute.

Which is great for improving things overall- and when they port that to HW4 and it all fits back in 1 node again that's great... But it implies it's never going to be redundant on HW3 cars. And since the system is still missing entire sections of capability needed for L3 or higher we've no idea if adding those eventually will push it beyond single-node on HW4 either. Nobody knows that yet because they haven't done it yet. We DO know it can't be done single-node on 3 though.



Performance drastically decreases if there's a severe bottleneck in the hardware as it runs out of resources. When a game exceeds available VRAM by 10%, the frame rate drops to single digits... it doesn't just slow down by 10%, but by 95%. So if the current code required 120% of what the FSD computer can give, then FSDB wouldn't even work. It doesn't work at 80% until there's enough compute in HW4 to make up the difference.

You're conflating traditional compute with NN compute. They're...very different as others have suggested.


On top of that, there's video now showing the camera inputs from HW3 vs HW4... keep in mind today FSDb cannot read most signs beyond a very few large, highly legible ones like standard speed, stop, and yield. There's like 100 traffic-control signs a driver needs to be able to read and obey in the federal manual - Tesla does like 3 of the 100 today. To ever operate L3 or higher it'll need to do all of them that might be encountered in whatever ODD they set. The footage makes it even clearer how many smaller (but needed to drive at L3 or above) signs are illegible with the HW3 cams but clearly visible in the high-res HW4 ones - so that'll be another limitation, but one you haven't seen yet because they haven't bothered addressing this capability with the HW3 cams that can't really do it properly anyway.
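Some purely illustrative pinhole-camera arithmetic on why resolution matters for sign legibility. The camera figures below are assumptions for the sake of the example (approximate resolutions and field of view, not confirmed Tesla specs):

```python
import math

def pixels_on_target(target_m, distance_m, hfov_deg, h_res_px):
    # Simple pinhole model: width of the scene captured at this distance,
    # then the fraction of the sensor's horizontal pixels the target spans.
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return target_m / scene_width_m * h_res_px

# Assumed figures for illustration only (not confirmed camera specs):
# a 0.75 m-wide sign at 100 m, 50 degree horizontal field of view.
hw3_px = pixels_on_target(0.75, 100, 50, 1280)   # ~10 px across the sign
hw4_px = pixels_on_target(0.75, 100, 50, 2896)   # ~23 px across the sign
```

Under these assumptions the same sign spans roughly twice as many pixels on the higher-resolution sensor, which is the gap between "a smudge" and "legible text" for small regulatory signs at distance.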


Anyway- markets open in a few minutes- again there's a world of info folks seem unaware of over here:

 
The world seems poised and ready for a better version of autonomous ride-hailing than is currently being offered.

SMR covers a CNBS report on Cruise and Waymo in San Francisco, bringing to light the problems these companies are facing and demonstrating flaws found in their routine operation.

Meanwhile, Tesla waits quietly in the wings, gathering data, until FSD is deemed ready for prime time. Once that moment is reached, it will be like watching a steamroller take out these geo-map-restricted services playing at the autonomy game.


Despite the enthusiasm for Cruise/Waymo expansion into other cities, and their desire for permission to operate at speeds up to 55 mph, it seems unrealistic to presume either will be able to successfully manage the edge cases which currently make for comic relief and frustrate other road users.

Bullish?