I disengaged this morning because V12 still moved too close and too fast to the creep line on a 50 mph street with heavy traffic and big trucks. I saw the creep line get too close to traffic on the screen. V11 had the same issue.
Was the creep line steady or jumping around? Does it seem like 12.2.1 is even making use of that visualized line? If end-to-end is not actively controlling based on the line, this would definitely be something I would be more proactive in disengaging 12.x.
 
Does running stop signs and red lights [...] count?
Not in the cases that I've seen. Those moves were safe. If FSD ever runs a stop such that it endangers someone, then there would be reason for worry. The greatest problem with the current software running stops is that it's not supposed to do that, safe or not.

pulling into oncoming traffic
When was that? I don't remember seeing it.

The one dangerous moment that comes to mind for me was that early case of Omar having to take over when the car wasn't swinging wide enough for a turn, and it was going to drive into/over a curb.
 
I have no doubt that AI systems will understand recursion quite well if they don't already. If a freshman Computer Science student can do it...
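(For anyone outside CS: recursion is just a function defined in terms of itself. A toy example of the freshman exercise in question, nothing to do with Tesla's code:)

```python
def factorial(n: int) -> int:
    """n! defined in terms of (n - 1)! -- the classic freshman recursion."""
    if n <= 1:                       # base case: stop recursing
        return 1
    return n * factorial(n - 1)      # recursive case

print(factorial(5))                  # prints 120
```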

But your answer has helped to shape my opinion on the matter, which is that generative video will definitely negate Tesla's data advantage. Eventually.

I guess a better question would be, "How long before generative video negates Tesla's data advantage?"
Interesting, because my point was actually the opposite.
For a generative NN to help train a driving NN, it would need to have already solved self-driving in order to create the correct action labels for the created scenario (on top of creating the new scenario).
Elon mentioned something like this previously: if you can accurately simulate the real world, you've already solved the problem.

The auto-labeling system is a pale form of this setup: a much larger, slower, more power-hungry NN that classifies the scenario, and its output is used to train the driving NN. However, it only deals in what was, not what could be.
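To make the circularity concrete, here's a toy Python sketch; every name in it is hypothetical and only illustrates the logical structure, not anything Tesla actually runs:

```python
# Hypothetical sketch of the circular dependency described above.

class GenerativeWorldModel:
    """Invents new driving scenarios (e.g. synthetic video)."""
    def create_scenario(self):
        return "novel scenario"            # placeholder for generated video

class DrivingPolicy:
    """Maps a scenario to controls -- the thing we want to train."""
    def act(self, scenario):
        return (0.0, 0.0)                  # placeholder (steering, accel)

def make_training_pair(world: GenerativeWorldModel, oracle: DrivingPolicy):
    scenario = world.create_scenario()
    # Labeling the synthetic scenario requires the *correct* controls for it,
    # i.e. an oracle policy that has already solved driving -- the circularity.
    label = oracle.act(scenario)
    return scenario, label

# The auto-labeler is the weaker offline version of the oracle: a big, slow,
# power-hungry NN that labels what *was* recorded, not what *could be*.
```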
 
Ah. I understand what you are saying now. It's a good point.

I believe that eventually, AI systems will be able to accurately simulate the real world. Or at least something we could mistake for the real world. It's kind of a higher-order Turing test.
 
Are you referring to the right-turn examples where it rolled through at 5-10 mph, treating the stop signs as yields like the other lead vehicles did (and not doing so dangerously)? Or is there an example of 12.2.1 running through while going straight?

I've only seen the 5-10 mph runs. Not seeing or responding appropriately to traffic controls is dangerous in my book. Of course, a V11 safety recall for the same thing wasn't that long ago.
 
When was that? I don't remember seeing it.

The one dangerous moment that comes to mind for me was that early case of Omar having to take over when the car wasn't swinging wide enough for a turn, and it was going to drive into/over a curb.

Regarding the oncoming traffic, I posted it in the v12 experience thread that was recently merged into this thread. It's Jillybean's broadcast from last night.

 
Was the creep line steady or jumping around? Does it seem like 12.2.1 is even making use of that visualized line? If end-to-end is not actively controlling based on the line, this would definitely be something I would be more proactive in disengaging 12.x.
The creep line was steady. But V12 moved the car to the creep line a bit too fast, although not as fast as V11 did. If the car moved slowly and gave me a chance to double-check, then I would not disengage. I think the car would not move into traffic, but who knows. It's too dangerous to take the risk.
 
From AI Driver's video, it is apparent that the visualizations are generated in parallel to the E2E driving system. His recent video had the car drive into a pedestrian shown on the visualization. Fortunately, it was a phantom pedestrian. If the visualization were coming out of the E2E system, then the car should have stopped for the phantom pedestrian.

So, I suspect that all of the visualizations are done separately from the E2E system and are little more than eye candy.
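If that's right, the structure would look something like the sketch below (a hypothetical PyTorch toy, not Tesla's architecture): a shared backbone with two independent heads, where the control output never reads the visualization output. That's exactly how a phantom pedestrian could be drawn on screen without the car reacting to it.

```python
import torch
import torch.nn as nn

class ParallelHeadsModel(nn.Module):
    """Hypothetical sketch: one shared backbone, two independent heads."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.control_head = nn.Linear(feat_dim, 2)         # steering, accel
        self.visualization_head = nn.Linear(feat_dim, 10)  # objects to draw

    def forward(self, frames: torch.Tensor):
        feats = self.backbone(frames)
        controls = self.control_head(feats)       # actually drives the car
        overlay = self.visualization_head(feats)  # only rendered on screen
        return controls, overlay                  # neither depends on the other

model = ParallelHeadsModel()
controls, overlay = model(torch.rand(1, 3, 64, 64))  # one fake camera frame
```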
 
Not when Tesla is training a "photons in" => "controls out" NN. The training system doesn't even use images anymore, just the raw data from the cameras. Generative AI isn't geared up to produce that kind of data.

This is not accurate.

Tesla's generative video AI shown by Ashok is literally just guessing at future RGB values based on the original real-world video fed as a seed (as explicitly stated in the presentation), so if that's all they are using to train on, then it's absolutely producing that kind of data.
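Strictly as an illustration of that training objective (guess the next frame's RGB values from a seed clip), here's a minimal toy in PyTorch; the model and shapes are made up, only the loss structure matters:

```python
import torch
import torch.nn as nn

class NextFrameModel(nn.Module):
    """Toy next-frame predictor: 4 seed frames in, 1 predicted frame out."""
    def __init__(self):
        super().__init__()
        # kernel depth 4 collapses the 4-frame time axis into one frame
        self.net = nn.Conv3d(3, 3, kernel_size=(4, 3, 3), padding=(0, 1, 1))

    def forward(self, seed_clip: torch.Tensor) -> torch.Tensor:
        # seed_clip: (batch, 3 RGB, 4 frames, H, W) -> (batch, 3, 1, H, W)
        return self.net(seed_clip)

model = NextFrameModel()
seed = torch.rand(1, 3, 4, 64, 64)      # real-world video fed as the seed
target = torch.rand(1, 3, 1, 64, 64)    # the actual next frame
loss = nn.functional.mse_loss(model(seed), target)  # penalize wrong RGB guesses
loss.backward()
```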
 
But it brings up the question of what the V12 control system is using to see. Is it truly a completely-new monolithic system that goes from pixels-in to controls-out?
That's basically what it is supposed to be. There are, of course, other inputs besides just cameras. But, supposedly, the system is not trained on how to identify a car, a pedestrian, or any other specific object.
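A toy version of that claim, for concreteness (hypothetical code, not Tesla's): the only supervision is the human driver's controls, so no object labels ever enter the loss.

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Toy 'pixels in, controls out' net; no car/pedestrian labels anywhere."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, 2),           # [steering, acceleration]
        )

    def forward(self, camera_frames: torch.Tensor) -> torch.Tensor:
        return self.net(camera_frames)

model = EndToEndDriver()
frames = torch.rand(8, 3, 64, 64)        # batch of raw camera input
human_controls = torch.rand(8, 2)        # what the human driver actually did
loss = nn.functional.mse_loss(model(frames), human_controls)
loss.backward()                          # imitating the driver is the only signal
```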
 
That's basically what it is supposed to be.
I never believed that they would make a wholesale replacement in one go, but I frequently underestimate how aggressive Elon's people can be. If it's truly a monolith, then I'm fairly astonished.

Edit: It also means that the system is going to be sensitive to each change of hardware, and that means getting training data for each set of hardware. It explains the lack of Cybertruck V12 and the distinct timings for HW3 and HW4. I know this isn't news to anyone, but I thought it worth repeating, given further evidence of a completely new, monolithic system. I can see a high-volume vehicle like the 'Model 2' accumulating lots of training data very quickly, but something like the Cybertruck should take significantly longer.
 
Hopefully this X video works. In summary, it's not the best roundabout performance, but it's also a busy roundabout.

At first, FSD was indecisive about entering the roundabout, and moments later it appears to have been disengaged by a hard brake. You can hear FSD being re-enabled shortly thereafter. Unfortunately, the driver doesn't give details.

 