This is called “distributed computing”. It’s very much a real thing, and quite possibly a lucrative one. Pixar uses it to render its movies on “render farms”. Numerical weather models use it to crunch numbers and turn out new forecasts as quickly as possible. Stock analysis firms use it.

Why again is it ridiculous?

I thought about this back in December of 2012, when I got my first Tesla, and I’ve wondered ever since why it took them so long to finally think about it.

Years ago I set up my personal computer as a distributed computing node in the “SETI@Home” project, which used the idle processing power of thousands of privately owned computers to analyze radio signals from outer space and, through Fourier transforms and other tricks, identify whether any of them contained interesting signs of extraterrestrial intelligence.
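The signal-processing core of a work unit like that is conceptually simple. Here's a toy sketch in Python (my own illustration, not SETI@Home's actual pipeline): a narrow spike standing far above the noise floor of the power spectrum is "interesting", because natural sources tend to be broadband.

```python
import numpy as np

def find_narrowband_spikes(samples, sample_rate, threshold=20.0):
    """Flag frequency bins whose power stands far above the noise floor."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    noise_floor = np.median(spectrum)                   # robust noise estimate
    return freqs[spectrum > threshold * noise_floor]    # suspicious frequencies

# Toy data: one second of white noise with a faint tone buried at 1420 Hz
rate = 10_000.0
t = np.arange(0, 1.0, 1.0 / rate)
data = np.random.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 1420.0 * t)
print(find_narrowband_spikes(data, rate))               # should report ~1420.0
```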

It’s not ridiculous, it makes perfect sense, and it could actually become a service that Tesla offers. Someone who wants a lot of numbers crunched would hire Tesla for a computing job. If you opt in, Tesla uses your car’s FSD computer to do calculations while the car is parked in your garage. In return, Tesla pays you, either as Supercharger credits or as a literal credit to your account, while skimming a bit off the top. Millions of distributed GPUs available when needed, all reachable from Tesla’s servers. I can’t think of a single other company that could do this.
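To make the idea concrete, the plumbing could be a simple pull-based work queue. Everything below is speculative: the endpoint, the job format, and the vehicle hooks are all invented for illustration.

```python
import time
import requests  # third-party HTTP client (pip install requests)

QUEUE_URL = "https://example.com/compute/jobs"  # hypothetical endpoint

def conditions_ok():
    """Only donate cycles while parked, plugged in, and on Wi-Fi."""
    return True  # stand-in; a real client would read actual vehicle state

def crunch(payload):
    """Stand-in for dispatching a work unit to the idle FSD computer."""
    return {"answer": sum(payload["numbers"])}

def work_loop():
    while conditions_ok():
        job = requests.get(f"{QUEUE_URL}/next", timeout=30).json()
        if not job:
            time.sleep(600)  # queue empty; nap and check back later
            continue
        result = crunch(job["payload"])
        requests.post(f"{QUEUE_URL}/{job['id']}/result", json=result, timeout=30)
        # Each completed unit gets credited to the owner's account
        # (Supercharger credits or cash), with Tesla taking its cut.
```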

I would jump at that deal as a Tesla owner, and I think many people would.
They should at least offer free charging for those who allow them to use the car's computer.
 
Right - but the majority of the time Tesla can see your eyes just fine.
Just a guess here, but I doubt "the majority of the time" is good enough for the Feds. The only difference between other makes' eye monitoring and Tesla's is that the other makes shine an IR light on the driver's eyes. Given that (A) no car company is going to install an IR light without a reason, (B) those makes get to offer hands-free driving, and (C) Tesla wanted to go hands-free and was slapped down, it's a reasonable conclusion that the lack of an IR light is why the Feds are keeping the "hands on" requirement. Of course, alternate possibilities exist as well. It could be a punishment program for Tesla, since Tesla doesn't play the "bribe the politicians with campaign contributions" game. That is almost certainly taken into account by the Feds, but it seems likely to me that it's more of a bonus point than their primary motivation.
 
Still makes no sense - if you can see the person's eyes, you rely on them. If not, you revert to other modalities. Honestly, this is freshman-level stuff.
 
What happens if one positions the head towards the screen but keeps the eyeballs straight ahead on the road? What happens if one positions the head straight forward but keeps the eyeballs looking at the screen?
If I have sunglasses on (and the camera can't see my eyes), it seems to go off head position. If I'm just wearing regular glasses, it seems to track eye movement/gaze. At night it's less clear: my car doesn't have an infrared light to illuminate the cabin, so it seems to use the camera when there's enough light and revert to the steering wheel otherwise.
 
To bring it back to the question: that's likely not good enough to be the only means of attention detection, which is why the steering wheel is still in the loop.
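That matches the tiered fallback you'd expect. Here's a rough sketch of the logic; the tiers and names are mine, inferred from owner reports above, not anything Tesla has published.

```python
from dataclasses import dataclass

@dataclass
class CabinView:
    eyes_visible: bool  # False with sunglasses on or in the dark
    face_visible: bool  # False when the cabin is too dark for the camera

def attention_source(view: CabinView) -> str:
    """Pick the best available attention signal, degrading gracefully."""
    if view.eyes_visible:
        return "gaze"            # best case: track where the eyes point
    if view.face_visible:
        return "head_pose"       # sunglasses: fall back to head orientation
    return "steering_torque"     # too dark: fall back to hands-on-wheel nags

print(attention_source(CabinView(eyes_visible=False, face_visible=True)))
# -> head_pose
```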
 
Waive the $10 connectivity subscription and you can use me and my Wi-Fi all night long. (I'm not proud.)
 
I don't have FSD and probably wouldn't even if it worked perfectly, but I'd still like to try it again (I tried it once early in 2022 and it almost got me killed at one particular spot) when V12 is generally available. Guess I have to wait a few months....
 
New Whole Mars Catalog drive. City streets, clear air, 10:30 at night, so not much traffic to contend with. He provides a monologue about the earnings and such, so I muted it. I'm sure it's an upbeat message.


It's comical. The car finally moves after needlessly dawdling at an intersection and he looks for the most ridiculous cause. I think FSD saw that lady looking out the second floor window. 🤣
 
1) Because the customer's car is not Tesla's property, nor does it run on Tesla's electricity. You need to wake up quite a bit more than the FSD chip (CAN-bus controllers, cellular, etc.), so I'm not sure how energy-efficient this would be. Tesla would need to pay the customer for this, unless there is an army of stupid fan-boys giving Tesla consent for free.
2) The chip is designed for inference, not training.
3) In ML you need silly amounts of data to train. Should this data be transferred to the car, or should the in-car computer make thousands of high-latency remote calls over cellular? The former would be limited by car storage and cost; the latter would be too slow (see the rough numbers sketched below). The same goes for many other compute-intensive problems.
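Point 3 is easy to sanity-check. The dataset size and link speeds below are my own guesses, but the conclusion is not sensitive to them:

```python
# Back-of-envelope: how long to ship a training dataset to one car?
dataset_tb = 1000   # assume ~1 PB of curated training video (a guess)
lte_mbps = 50       # generous sustained LTE throughput
wifi_mbps = 300     # decent home Wi-Fi

def days_to_transfer(terabytes, mbps):
    bits = terabytes * 8e12             # TB -> bits
    return bits / (mbps * 1e6) / 86_400

print(f"LTE:   {days_to_transfer(dataset_tb, lte_mbps):,.0f} days")   # ~1,852
print(f"Wi-Fi: {days_to_transfer(dataset_tb, wifi_mbps):,.0f} days")  # ~309
```

Even with generous assumptions, moving the data to the car is the bottleneck before any compute happens.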

All in all, it's a stupid idea that some fool floated at an earnings call back in 2019-2020, and Elon is still running with it.
I think I agree with you practically that there are lots of problems here.

I would disagree with one thing, though: I don't think you would have to transmit "stupid" amounts of data. Sure, in the aggregate the training and validation datasets represent a huge amount of data. But the neural network is trained in passes. Theoretically, all you would need to perform one pass (including back-propagation) is the current (pre-processed) video data for one frame from each camera, the network's current weights and biases, and the back-propagation code (which would only have to be sent once per network and training set). That doesn't seem like much data to me. How you would break it all up and distribute it, however, could conceivably be much more complicated than it looks at first blush.
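Here's a toy sketch of what one such distributed pass could look like, with a tiny one-layer network standing in for the real thing (nothing below is Tesla's actual stack):

```python
import numpy as np

def forward(W, b, x):
    return np.tanh(x @ W + b)

def work_unit(W, b, x_batch, y_batch):
    """One car's job: run forward + back-prop on one mini-batch and return
    the gradients. Only the weights and one batch travel over the wire."""
    y_hat = forward(W, b, x_batch)
    err = y_hat - y_batch                    # dL/dy_hat for (1/2)*MSE loss
    grad_pre = err * (1.0 - y_hat ** 2)      # back-propagate through tanh
    grad_W = x_batch.T @ grad_pre / len(x_batch)
    grad_b = grad_pre.mean(axis=0)
    return grad_W, grad_b

# "Server" side: collect gradients from many cars, aggregate, and step.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), np.zeros(4)
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 4))
grad_W, grad_b = work_unit(W, b, x, y)       # one car's contribution
W, b = W - 0.1 * grad_W, b - 0.1 * grad_b    # SGD update after aggregation
```

The hard part is exactly that orchestration: versioning the weights, handling stragglers, and merging stale gradients from millions of intermittently connected nodes.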

You would certainly want to restrict opt-ins to cars connected via Wi-Fi, I imagine.
 
So are we generally coming to the conclusion here that v12 is just going to be a minor improvement?

Still just a mile or two between disengagements? (Exact rate dependent on many factors of course.)

I also don’t see rapid iteration occurring. It’ll most likely be faster than it has been, but I can’t imagine rollouts being more than twice as frequent, and I imagine each one will bring only the small improvements we’re accustomed to.

I would also guess that any patterns of problems that arise will persist through v12, similar to what we experienced with v11 (some problems were fixed, but certain categories of problems went persistently unfixed).

I guess the above would generally be consistent with expectations.
 
I think it's too early to draw any conclusions until people not on Tesla's payroll (so people other than Whole Mars) get to drive it in the real world.
 
I’m just basing this on what we’ve seen so far.

Looks like incremental improvement. (Currently it may also have regressions.)

Obviously once we get it we’ll know more, but historically all the problems have been identifiable from video before general release.

But confidence intervals have tightened substantially now that we have seen the nearly finished product.
 
All I want to see (not just read about) before I believe in V12 is steady improvement with training and updates. If it keeps improving, eventually it'll get good enough. Perhaps with Dojo firing on all cylinders the rate of improvement would be faster, but we haven't really seen steady improvement at any rate since approximately forever, so even limping along on Nvidia chips would show that improvement if the system really works.
 
but we haven't really seen steady improvement at any rate since approximately forever,
FSD is way better than it used to be. Over the last 6-9 months I would say improvement has slowed though.

But there has been improvement at a very slow rate.

In very simple situations, FSD can now drive straight ahead, with or without traffic, as long as you turn on minimal lane changes. And you need to be somewhat tolerant of jerkier stopping than you’re accustomed to.

It can also take turns smoothly in many cases now. That didn’t use to be the case even for simpler turns.

It used to routinely curb wheels. Haven’t seen a lot of that recently (I am sure it can still happen!).

It used to not have an occupancy network, and taking unprotected lefts was basically impossible. Now it can do them, with occasional success, as long as there is no traffic.

There are a lot of early videos and it is easy to compare.

Still just a couple miles between interventions (depends) though.