Yeah I am sure 12.3 will address these issues.

Anyway, I just want some ASS (Actually Smart Summon). It looks promising according to the Costco parking lot video. And I am sure the EAP folks with HW2.5 want it too, so I'm looking forward to seeing the stack running on that.
This HW4 guy would be happy with dumb summon that only goes forward and back at this point.
 
Informative video from AI Driver

Now that is what I've been wanting to see (also love the production AI DRIVR brings). WOW, v12 is showing a LOT of potential and fantastic improvements over v11. Of course it still needs a lot of polish and has some glaring mistakes. I bet with it going to more people, Tesla will be able to train v12 quickly and we will see substantial improvements before most of us get it in the next few weeks.

The Costco lot driving was so much more human-like (smooooooth) than the "nervous" robot driving v11 does (or tries to do).

Please send it to DirtyTesla too. I'd also like to see Chuck get it.
 
Here's something I've been thinking about the past few days.

Will generative video negate Tesla's data advantage?
I don’t think so. Here’s why:

1. Generative video requires lots of video to train on. Unless there are databases of millions of clips of driving video from all Tesla camera angles online, you won’t see this.

2. Even if that video WERE online, I think you’d need the corresponding control inputs (steering wheel, accelerator, turn signals, suspension deflection, and whatever else Tesla is using).
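
To make point 2 concrete, here's a rough sketch (the field names are my own invention, not Tesla's actual log format) of what a usable end-to-end training sample has to look like: frames from every camera, plus the synchronized control signals that produced them. Video scraped off the internet gives you the first half and not the second.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ControlSample:
    timestamp_s: float
    steering_angle_deg: float       # wheel position
    accel_pedal_pct: float          # 0-100
    brake_applied: bool
    turn_signal: str                # "left" / "right" / "none"
    suspension_deflection_mm: float

@dataclass
class TrainingClip:
    camera_frames: Dict[str, list]  # camera name -> ordered list of frames
    controls: List[ControlSample]   # sampled at the same rate as the frames

def is_usable(clip: TrainingClip) -> bool:
    """A clip without matching control labels can't supervise an end-to-end policy."""
    if not clip.camera_frames:
        return False
    n_frames = min(len(frames) for frames in clip.camera_frames.values())
    return n_frames > 0 and len(clip.controls) == n_frames
```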
 
Will generative video negate Tesla's data advantage?

In time, yes, I think generative video will reduce the advantage of real-world data. With generative video, companies can simply create their own videos for end-to-end training and won't need real-world data as much. The other advantage of generative video is that it can create data far faster than collecting real-world data. We see this with Sora, where a simple text prompt can create videos in seconds.

Wayve is putting a lot of effort into generative video for their end-to-end training since they are a small start-up that lacks a lot of real-world data, so they are compensating with generative video.

I think the downside of generative video is that it may not always be accurate or realistic. We see this with some AI videos that have clear defects in them. Generative video needs to be hyper-realistic and faithful to the real world in order to be useful for end-to-end training, otherwise you will be training your system on bad data. But generative video will get more and more realistic over time. I imagine a not-too-distant future where an AV start-up could go from nothing to deploying autonomous driving on roads in just a few months, by generating a million clips of video and training their end-to-end system without any real-world data.
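
For the curious, here's one toy illustration of how a data-poor start-up might lean on synthetic clips: mix generated and real clips in each training batch, and turn up the synthetic share as fidelity improves. The function and parameter names below are my own, not anything Wayve or Tesla has described.

```python
import random

def sample_training_batch(real_clips, synthetic_clips,
                          batch_size=32, synthetic_fraction=0.3):
    """Draw a mixed batch of clips; a start-up short on real-world data
    could raise synthetic_fraction as its generative model gets more faithful."""
    n_synth = int(batch_size * synthetic_fraction)
    n_real = batch_size - n_synth
    batch = random.sample(synthetic_clips, min(n_synth, len(synthetic_clips)))
    batch += random.sample(real_clips, min(n_real, len(real_clips)))
    random.shuffle(batch)
    return batch
```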

Has anyone seen any 12.2.1 videos of it doing something dangerous?

No. But we have not seen enough videos yet. To put things in perspective, an excellent eyes-on system should have an MTBF of about 50+ hours. That would be the equivalent of watching 50 hours of FSD videos without V12 having a single safety-critical intervention. We are not there yet.
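
Quick back-of-envelope on why even 50 clean hours of video wouldn't prove a 50-hour MTBF: if interventions arrive roughly like a Poisson process (my assumption, the usual exponential one), a system whose true MTBF is exactly 50 hours still gets through 50 intervention-free hours about 37% of the time, and you'd need roughly 150 clean hours to be ~95% confident.

```python
import math

def clean_hours_needed(mtbf_target_h: float, confidence: float) -> float:
    """Assuming interventions are a Poisson process, how many intervention-free
    hours must we watch before an MTBF below mtbf_target_h becomes implausible?
    P(zero interventions in T hours | MTBF = m) = exp(-T / m)."""
    return -mtbf_target_h * math.log(1.0 - confidence)

print(clean_hours_needed(50, 0.95))  # ~150 hours of clean video for 95% confidence
print(math.exp(-50 / 50))            # ~0.37: a true 50 h MTBF system often survives 50 h
```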
 
You'd also have to define 'something dangerous'

We've seen it, and AI DRIVR calls it out, that on open, straight roads it seems to do a lot of random braking and slowing down -- a thing folks here insisted for years was dangerous because of the risk of getting rear-ended.

I also vaguely recall at least one or two reports of it crossing double yellows too (though without cars around, so illegal, but you could argue whether it was dangerous or not), but it's possible that was the previous 12.x instead of the current one.
 
Informative video from AI Driver


It is interesting to me how V12 is apparently good at the complex and apparently bad at the simple. It is like Tesla focused on training V12 on difficult cases and so it is really good at those cases. But it cannot do something super simple like maintain a constant speed on a straight, empty road.
 
I suppose it's possible that the visualization is kind of lifted intact from v11, but it would seem very odd if said visualization modules, as an output of the v11 perception, include calculation of creep limits.
Creep limit and the median crossover region could be trained perception outputs, but even if they're from 11.x control, we've seen the previous stack still around for freeways, so there's potential for it to be reused in other ways too. Maybe 11.x control is used for additional checks, sending back data on when it disagrees with 12.x, even if the old behavior doesn't directly affect the new driving behavior?
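
Purely speculative, but that kind of cross-check could be as simple as something like this (function name and thresholds are mine, not anything Tesla has confirmed): run the 11.x planner in the background and flag frames where its plan diverges from 12.x enough to be worth uploading.

```python
def disagreement_trigger(v11_steer_deg, v12_steer_deg,
                         v11_accel_mps2, v12_accel_mps2,
                         steer_thresh_deg=15.0, accel_thresh_mps2=1.5):
    """Return True when the legacy stack's plan diverges from the new stack's plan
    by more than the chosen thresholds, marking the frame for data collection."""
    steer_delta = abs(v11_steer_deg - v12_steer_deg)
    accel_delta = abs(v11_accel_mps2 - v12_accel_mps2)
    return steer_delta > steer_thresh_deg or accel_delta > accel_thresh_mps2
```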
 
Here's something I've been thinking about the past few days.

Will generative video negate Tesla's data advantage?
Not any more than simulation negates Tesla's data advantage.
Unless generative can extrapolate new edge cases from the data set, it will only create variations of what exists. Further, for each new scenario it also needs to properly create ego's reaction to the situation.
In other words: to understand recursion, you must first understand recursion.
 
Yeah I am sure 12.3 will address these issues
Presumably Tesla deployed 12.2.1 a bit wider to find the kinds of issues with end-to-end actively controlling the car that shadow mode wouldn't have found. For example, the confusion between being in the center turn lane vs. the right-most lane resulted in swerving back and forth a few times, and Tesla could have data-collection triggers that detect oscillating steering or acceleration.
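
A trigger like that could be pretty simple. Here's a hypothetical sketch (my own function and thresholds, not Tesla's actual campaign logic) that flags a clip when the steering direction reverses several times with meaningful amplitude, which is roughly what the lane-confusion swerving would look like in the data.

```python
def oscillating_steering(steering_angles_deg, min_swing_deg=5.0, min_reversals=3):
    """Flag a clip if the steering direction reverses several times with
    meaningful amplitude, e.g. swerving between two lane interpretations."""
    deltas = [b - a for a, b in zip(steering_angles_deg, steering_angles_deg[1:])]
    significant = [d for d in deltas if abs(d) >= min_swing_deg]
    reversals = sum(1 for a, b in zip(significant, significant[1:]) if a * b < 0)
    return reversals >= min_reversals

# e.g. steering angle sampled over a lane-confusion swerve vs. a smooth turn:
print(oscillating_steering([0, 10, -8, 9, -7, 8, 0]))  # True
print(oscillating_steering([0, 2, 4, 6, 8, 10, 12]))   # False
```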

So, any bets on when the majority of cars on 11.4.9 will get FSD V12 at this rate?
If Tesla is dependent on active usage of 12.2.x to find these issues to fix in 12.3.x, then this could help reveal how long their cycle time is for gathering data, training, verifying and deploying. It's unclear whether the training data needs a positive example of how a human driver would have stayed in the same lane, or whether the known bad behavior from the confusion is enough to adjust the neural networks.
 
Not any more than simulation negates Tesla's data advantage.
Unless generative can extrapolate new edge cases from the data set, it will only create variations of what exists. Further, for each new scenario it also needs to properly create ego's reaction to the situation.
In other words: to understand recursion, you must first understand recursion.
I have no doubt that AI systems will understand recursion quite well if they don't already. If a freshman Computer Science student can do it...

But your answer has helped to shape my opinion on the matter, which is that generative video will definitely negate Tesla's data advantage. Eventually.

I guess a better question would be, "How long before generative video negates Tesla's data advantage?"
 
Stop sign at the bottom of a hill, it handled very smoothly. Previous versions braked early and hard.
Oh, that's great to hear. I actively change navigation in both directions to keep 11.x from needing to deal with a steep hill, especially one with a stop at the bottom. It would stop too early, midway down, or not far enough down, and it was overall not smooth. Older FSD Beta even triggered a forward collision warning when it imagined a vehicle at the painted STOP road marking, and other times it would suddenly swerve to go around the phantom vehicle. Hopefully this means 12.x will generally handle hills and steep grades much more smoothly.
 
Being a whole new system approach and now going to MANY times more cars means the odds of finding bugs and problems go up exponentially.
My feeling is that the opposite is true: the rollout (IMO) would be measured to provide a steady, regular influx of issues at a rate the team can address. So the number of cars getting the update might go up exponentially after a while, and that would ideally result in about the same number of new issues coming up as in the earlier, narrower releases, since the low-hanging fruit has been picked...
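
To put that intuition in toy-model form (all numbers made up, just to show the shape): if each car independently surfaces any given latent issue with some small probability, the count of distinct issues discovered saturates, so exponential fleet growth late in the rollout surfaces far fewer new issues than the same growth did early on.

```python
import math

def expected_distinct_issues(fleet_size, total_issue_types=1000, hit_rate_per_car=0.0002):
    """Toy model: each car surfaces a given latent issue with a small independent
    probability, so distinct issues found grow like N * (1 - exp(-rate * fleet))
    and saturate as the fleet expands."""
    return total_issue_types * (1 - math.exp(-hit_rate_per_car * fleet_size))

for fleet in (1_000, 10_000, 100_000, 1_000_000):
    print(fleet, round(expected_distinct_issues(fleet)))  # ~181, ~865, ~1000, ~1000
```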