Say what?

[Screenshot attachment, 2024-04-05]
 
Good to see the truth finally coming out with the conventional wisdom being wrong.

And everyone here claimed the cars can't be trained by their drivers. Shows what they know. And it's now shown to be a personalized process!

Tesla has also confirmed the cars detect hand signals, another thing no one thought possible.

There is no stopping the robotaxis now.
 
I don't think Elon actually means that we are personally training our individual cars. For one, the FSD computer does not have enough compute to train large NNs while also driving the car. Second, we know Tesla collects data from the fleet and trains the NNs centrally on its large training compute. Third, we see no indication that my car learns from my personal driving, since we only see improvements when we get a new software update. If my car were learning from my personal driving, I would expect it not to keep repeating the same mistakes. We don't see that until a new software update comes.

I think Elon chose his words poorly and was likely just describing the existing process of collecting data from the fleet, training the NN, and then uploading the latest software back to the fleet. That process is repeated in a closed loop, so in a sense our cars are helping to train the NN indirectly. But the individual cars are not personally learning.
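
To make that closed loop concrete, here's a runnable toy sketch in Python. Every name in it is a hypothetical stand-in for the collect-data / train-centrally / push-update cycle described above, not Tesla's actual pipeline:

```python
# Toy simulation of the closed loop: cars log data, the "mothership" trains
# centrally, and behavior changes only after an update. Purely illustrative.
import random

class Car:
    def __init__(self):
        self.model_version = 0
        self.logged_clips = []

    def drive(self):
        # Inference runs on frozen weights; the car does NOT learn here.
        # It only records clips when some data-collection trigger fires.
        if random.random() < 0.3:
            self.logged_clips.append(f"clip@v{self.model_version}")

    def install_update(self, version):
        self.model_version = version  # behavior changes only at this point

def training_cycle(fleet, current_version):
    # Mothership gathers the fleet's clips and trains centrally.
    dataset = [clip for car in fleet for clip in car.logged_clips]
    print(f"central training on {len(dataset)} clips")
    for car in fleet:
        car.logged_clips.clear()
        car.install_update(current_version + 1)  # push the new build
    return current_version + 1

fleet = [Car() for _ in range(5)]
version = 0
for _ in range(3):              # three drive / train / update cycles
    for car in fleet:
        car.drive()
    version = training_cycle(fleet, version)
```

Note the property the post is pointing at: each car's behavior only changes at install_update(), never mid-drive.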
 
I don't think Elon actually means that we are personally training our individual cars.
I guess I have to use emojis. 😢

The more interesting thing, honestly, is Tesla themselves claiming that the cars read hand signals. (Maybe sometimes, but I doubt it is in any way reliable; then again, sometimes, rarely, would mean they are correct, I suppose.) But it's not something Elon tweeted about.
 
Um. So, here we are again, arguing about whether the NN stuff in a Tesla is trainable in any sense by the users.

Every half year or so, I raise this possibility, since the rough block diagram of a NN computer (and this is generalized, so it includes wetware as well as hardware) always shows feedback from the output of said NN back to the NN weights and all that.

This gets jumped on by Various Posters who state that they've seen tweets or talked with actual Tesla coders or whatever; most of the arguments along these lines rest on the stated fact that the load delivered to the car has a fixed checksum. Hence, if it's got a fixed checksum, those checksums are against the weights, and therefore the Weights Can't Change.

To which my comments, in amongst the shouting, have been, "So what? CS majors are inventive." One could imagine situations where the weights come at fixed values, but then are allowed to have offsets that are mangled up and down over a range over time.
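
To make that concrete, here's a toy Python sketch of how fixed, checksummed weights could coexist with mutable offsets. This is purely hypothetical CS-major inventiveness, not anything known about Tesla's stack: the checksum is pinned over the shipped weights only, while a separate offset table changes the effective behavior without ever touching them.

```python
# Hypothetical: checksummed base weights plus an adjustable offset table.
# The base always verifies; inference uses base + offset. Illustrative only.
import hashlib
import struct

base_weights = [0.12, -0.55, 0.80]          # shipped in the software update
offsets      = [0.0, 0.0, 0.0]              # lives outside the checksum

def checksum(weights):
    blob = b"".join(struct.pack("<f", w) for w in weights)
    return hashlib.sha256(blob).hexdigest()

SHIPPED_SUM = checksum(base_weights)        # pinned at install time

def effective_weights():
    # What inference actually uses; the base still matches its checksum.
    return [w + o for w, o in zip(base_weights, offsets)]

offsets[1] += 0.03                          # some local "tuning"
assert checksum(base_weights) == SHIPPED_SUM   # verification still passes
print(effective_weights())                  # behavior changed anyway
```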

And part of my comments have been driven by observation. It sure has seemed that on one day or drive the car acts this way, then, on another day or drive, it acts that way. Now, that's explainable by minor differences in traffic, big complicated feedback systems with strange internal variables, or whatever... but niggling along was the idea that the overall system was doing some self-training.

It may very well be that Elon's comment was all about the feedback going back to the Mothership and the weights being changed there. And if that's true, all of the above is moot. But... there's some room in there for a bit of autonomous training of this and that, with various results going back to the mothership to see how changing a particular weight, on an ensemble basis across many, many cars, affected how those cars drove.
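
For what it's worth, that ensemble idea has a textbook shape: it's basically an evolution-strategies-style search, where each car gets a slightly perturbed weight, reports a score, and the mothership keeps what worked. A runnable toy, with every number invented for illustration and no claim that Tesla does anything like this:

```python
# Evolution-strategies-style toy of the "ensemble across many cars" idea:
# perturb one weight per car, score each drive, update centrally.
import random

TRUE_BEST = 0.7                     # unknown optimum the search is hunting

def drive_score(weight):
    # Stand-in for "how well did this car drive with this weight?"
    return -(weight - TRUE_BEST) ** 2 + random.gauss(0, 0.01)

def fleet_ensemble_step(base_weight, n_cars=1000, sigma=0.05, lr=0.5):
    # Each car runs base_weight plus a small random offset and reports back.
    noises = [random.gauss(0, sigma) for _ in range(n_cars)]
    scores = [drive_score(base_weight + n) for n in noises]
    # Mothership nudges the base toward the perturbations that scored well.
    grad = sum(n * s for n, s in zip(noises, scores)) / (n_cars * sigma ** 2)
    return base_weight + lr * grad

w = 0.0
for _ in range(20):
    w = fleet_ensemble_step(w)
print(f"base weight converged near {w:.2f} (optimum {TRUE_BEST})")
```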

Fun.
 
To which my comments, in amongst the shouting, have been, "So what? CS majors are inventive." One could imagine situations where the weights come at fixed values, but then are allowed to have offsets that are mangled up and down over a range over time.


...that's... not how checksums work.

And part of my comments have been driven by observation. It sure has seemed that on one day or drive the car acts this way, then, on another day or drive, it acts that way. Now, that's explainable by minor differences in traffic, big complicated feedback systems with strange internal variables, or whatever... but niggling along was the idea that the overall system was doing some self-training.


We also know that each time a route is entered, the mothership sends down real-time map/routing/nav info for that specific route at that specific time. This isn't individual-training related, but it does potentially change the behavior of THAT drive compared to a previous one.
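
A toy sketch of why that matters: with identical frozen weights, a planner conditioned on freshly downloaded route context can still behave differently drive to drive. The fields and values below are invented for illustration, not an actual Tesla interface:

```python
# Same weights + different per-route context => different behavior,
# with no on-car learning involved. Everything here is hypothetical.
import time

def fetch_route_context(origin, dest):
    # Stand-in for the mothership download at navigation time. The
    # time-varying fields mean the same route can yield different
    # context on different days.
    return {
        "route": (origin, dest),
        "fetched_at": time.time(),
        "lane_closures": ["exit 23B"] if int(time.time()) % 2 else [],
        "speed_hints": {"I-10": 65},
    }

def plan_drive(weights_version, context):
    # The planner's weights are fixed; only its inputs changed.
    return f"plan(v{weights_version}) avoiding {context['lane_closures']}"

ctx = fetch_route_context("home", "work")
print(plan_drive(weights_version=12, context=ctx))
```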
 
Every half year or so, I raise this possibility, since the rough block diagram of a NN computer (and this is generalized, so it includes wetware as well as hardware) always shows feedback from the output of said NN back to the NN weights and all that.
Forget weights. Those are sacrosanct. These would be predefined input parameters to the neural network, provided by some customization component. That component might be heuristics, a neural network, or a combination of the two. For example, if the car detects that you always push it to 80 on I-10 in Texas, then it can provide that as one of the customization parameters. The neural network would be trained to respect those parameters, but only within the context of everything else going on.

The last thing we need is somebody jailbreaking the training on their car - and investigators not being able to figure out what they did to it because nobody can understand raw weights.
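
A minimal sketch of that split, assuming frozen weights and a per-driver parameter vector; the names and the 80-on-I-10 value are just illustrations of the idea above, not a real FSD interface:

```python
# "Frozen weights, personalized inputs": the network never changes
# per-driver; only the conditioning parameters do. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)              # the planner's weights are immutable
class FrozenPlanner:
    weights_checksum: str

    def plan(self, scene: dict, driver_params: dict) -> str:
        # driver_params is just another input; it can bias the output
        # (e.g., preferred cruise speed) but cannot rewrite the weights.
        target = min(driver_params.get("preferred_speed", 70),
                     scene["speed_limit"] + 5)   # context still wins
        return f"cruise at {target} mph"

# A customization component distills observed habits into parameters,
# never into weight updates.
driver_params = {"preferred_speed": 80}          # "always pushes 80 on I-10"

planner = FrozenPlanner(weights_checksum="sha256:abc123")
print(planner.plan({"speed_limit": 75}, driver_params))  # cruise at 80 mph
```

And because all the personalization lives in a small, readable parameter dict rather than in raw weights, an investigator could actually inspect what was changed.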