
FSD Beta Videos (and questions for FSD Beta drivers)

With all of the car's AI and NN processing, how can it believe that a vehicle can jump around like that?
I believe the simple answer is that highway Autopilot "manually" combines camera views with simple C++ code. The lane change code primarily evaluates object detection outputs from the repeater camera and visualizes those outputs without worrying too much about duplicates. Tesla is probably replacing these ad-hoc incremental highway features with the holistic FSD understanding shown at AI Day. Hopefully that means Tesla believes "city streets" with the 360° video neural networks (which understand that vehicles don't jump around like that) is good enough to replace the multi-billion-mile(?) real-world-tested production code, and that "soon" even basic Autopilot on highways will be improved (and will ignore radar even for vehicles that have it).
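Purely as an illustration of that "combine per-camera outputs without de-duplication" idea, here is a made-up Python sketch (not Tesla's actual code or data structures); the point is just why the same truck can show up twice and appear to jump between the two boxes:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str   # e.g. "truck"
    x: float    # longitudinal position in the vehicle frame, meters
    y: float    # lateral position in the vehicle frame, meters

def naive_visualization(per_camera_detections):
    """Old-style approach: concatenate every camera's outputs and draw them all.
    If two cameras each see part of the same truck, two boxes get drawn, and the
    'truck' appears to jump between them as each camera's estimate wobbles."""
    boxes = []
    for detections in per_camera_detections.values():
        boxes.extend(detections)
    return boxes

def deduplicated_visualization(per_camera_detections, merge_radius_m=2.0):
    """Fused approach: detections from different cameras that land close together
    in the vehicle frame are treated as one physical object."""
    merged = []
    for det in naive_visualization(per_camera_detections):
        for kept in merged:
            if (det.kind == kept.kind
                    and abs(det.x - kept.x) < merge_radius_m
                    and abs(det.y - kept.y) < merge_radius_m):
                break  # same object, already drawn
        else:
            merged.append(det)
    return merged

# One truck straddling two cameras' fields of view.
frame = {
    "left_repeater": [Detection("truck", -3.0, 2.5)],
    "left_pillar":   [Detection("truck", -2.6, 2.8)],
}
print(len(naive_visualization(frame)))         # 2 boxes -> "jumping"/duplicated truck
print(len(deduplicated_visualization(frame)))  # 1 box
```

The crude merge at the end is roughly what a fused bird's-eye view gives you for free: one physical object, one box.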
 
V9 was supposed to be mind-blowing and "coming soon."
Well, V9 was certainly a significant improvement over V8, so let's hope that V10 brings the same level of "driving" improvement over V9. That would be good. Add to that merging the highway and city/residential-street visualization and driving functionality, plus the new internal camera functionality available in 2021.32.5, and you'd have a pretty significant upgrade.
2021.32.5 Official Tesla Release Notes - Software Updates
 
I believe the simple answer is that highway Autopilot "manually" combines camera views with simple C++ code. The lane change code primarily evaluates object detection outputs from the repeater camera and visualizes those outputs without worrying too much about duplicates.

Yeah, essentially the "old" code that highways still use thinks the truck one camera sees isn't the same truck another camera sees, because it has no understanding of that.

My excitement about V10 has nothing to do with robotaxis, which I still don't think are coming anytime soon. It's about getting a much better idea of whether we'll see L3 or L4 highway driving anytime soon once the 4D rewrite gets applied there. That's the best I ever expected them to offer with this sensor package, and it's why I bought FSD back in 2018.
 
Never said they tweaked it by hand. But "overfitting", or making sure it has tons of training data for specific roads, is still something humans can do to improve performance in specific areas they want to focus on. In fact, it's one of the things you have to be actively careful about when training an NN; there are all sorts of biases you can introduce through the training sets you choose. Just the fact that the engineers are in CA, experience failures personally, and focus on fixing those is itself a kind of bias.
Well, your post seemed to imply that this is engineers deliberately catering the system to their boss, versus it more likely being simple bias in their data set (even the cars in the beta are mainly concentrated in CA).
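To make the data-set bias point concrete, here's a toy sketch with entirely made-up clip counts; the only point is how a regional skew in the training set turns into a regional skew in what the model optimizes for, and one common mitigation (per-region reweighting):

```python
from collections import Counter

# Made-up clip counts per region; only the imbalance matters here.
training_clips = ["CA"] * 9000 + ["TX"] * 500 + ["NY"] * 300 + ["WA"] * 200

counts = Counter(training_clips)
total = len(training_clips)

# With uniform sampling, ~90% of gradient updates come from CA driving,
# so the model is effectively tuned for CA roads, markings, and weather.
for region, n in counts.items():
    print(f"{region}: {n / total:.0%} of training examples")

# A common mitigation: weight each example inversely to its region's frequency
# so every region contributes roughly equally to the loss.
weights = {region: total / (len(counts) * n) for region, n in counts.items()}
print(weights)  # CA clips get a small weight, rare regions a large one
```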
What's your opinion of why Elon can experience 8.X or 9.X, call it fire and "blow your mind" and then within 60 minutes of the release we have videos of it driving straight for concrete pillars, and then a few weeks later Elon is calling it "not that good?"
I don't really care much for Elon's opinion of his personal drives in the first place, given the most he can do is relatively short drives. Even Tesla's internal dedicated test drivers are only going to have a fairly limited view of the system's actual performance. Expanding beyond that is the whole point of the FSD Beta program.

Plus as we have seen, the performance can vary even for the same exact version, driven in the same exact place. There can be multiple drives where it performs well, and then it just fails in others. The same can happen on Elon's personal drives too. The company's internal tracking of progress is probably going to be based more on their regression testing, simulation testing, and the various failure snippets sent back by the FSD Beta fleet. I highly doubt it's all based on a drive by Elon, even though the twitterverse likes to put a lot of emphasis on it.
 
Nice to get direct screen feed videos but disappointing to still see large trucks jumping around in the visualization with FSD Beta 9.2, which should be using Vision-only builds:

[Attachment 707187: screenshot of the FSD Beta visualization]

Although potentially that's what Elon Musk meant by combining the highway and city-street stacks: production (highway) Vision-only still focuses primarily on the forward cameras to predict velocity, while city streets was trained on all cameras for a consistent bird's-eye view. So hopefully Beta 10 will fix this.
This is on the highway, which doesn't use the same stack as the city part. As mentioned, it's not merging until Beta 10 at the earliest (and it might not even merge then).
 
This implies engineers are hand-tuning it to fit his roads specifically, but from what has been shown in presentations this is basically impossible. It is, however, likely that the NNs are "overfit" to the roads local to Tesla, given their internal testing is all done there (this would happen naturally without any deliberate action from engineers). But it isn't something you can really tweak by hand (you can only tweak some of the decision-making parts that are still hand-coded, and earlier in the year they had already started migrating those to NNs).
No, the driving policy is still ~99% hand-coded.
 
I highly doubt it's all based on a drive by Elon, even though the twitterverse likes to put a lot of emphasis on it.
Yeah, those dummies on Twitter paying attention to Elon, the CEO of the most valuable car company in the world, commenting on an upcoming product release, with zero other PR interface to Tesla at all. What idiots. They should know that his exposure to FSD is minimal.

But I do agree- of course their internal metrics are not driven by a drive by Elon. But that also means they know it's not fire or "going to blow your mind", and that when they release it people are immediately going to have videos of it doing very stupid things, despite the CEO claiming it is "Capable" of intervention free driving (all cars are if you get lucky!)

Your post history is just as full of references to things Elon has tweeted/"said" as anyone else's. Why are people supposed to ignore what he says about how the latest FSD "will blow your mind," but listen to him when he discusses other things?

Plus as we have seen, the performance can vary even for the same exact version, driven in the same exact place. There can be multiple drives where it performs well, and then it just fails in others.
This is not what I expect out of a "blow your mind" release. "Only tries to kill users 5% of the time you drive that road!"

Even Tesla's internal dedicated test drivers are only going to have a fairly limited view of the system's actual performance. Expanding beyond that is the whole point of the FSD Beta program.
The CA filing says they have 2,000 internal testers and 71 external "beta" testers. Does not compute that they are getting most of their data from the FSD beta program.
 
Yeah, those dummies on Twitter paying attention to Elon, the CEO of the most valuable car company in the world, commenting on an upcoming product release, with zero other PR interface to Tesla at all. What idiots. They should know that his exposure to FSD is minimal.

But I do agree- of course their internal metrics are not driven by a drive by Elon. But that also means they know it's not fire or "going to blow your mind", and that when they release it people are immediately going to have videos of it doing very stupid things, despite the CEO claiming it is "Capable" of intervention free driving (all cars are if you get lucky!)

Your post history is just as full of references to things Elon has tweeted/"said" as anyone else's. Why are people supposed to ignore what he says about how the latest FSD "will blow your mind," but listen to him when he discusses other things?
I thought my point was clear enough: his opinion of the system based just on his own short personal drives may not be worth much, but plenty of the things he posts are just facts, company policy changes, or hints of internal info, which are still valuable.
This is not what I expect out of a "blow your mind" release. "Only tries to kill users 5% of the time you drive that road!"
"Blow your mind" to most people is simply it being able to do something seemingly difficult at least once. Most people don't have the patience to examine long videos of it doing the same thing over and over again (until there is some rare case that it fails). That's just repetition and is boring to most people.
The CA filing says they have 2,000 internal testers and 71 external "beta" testers. Does not compute that they are getting most of their data from the FSD beta program.
That's not what it says. It said this about the March 9th call (paraphrasing): "Update on City Street Pilot: Currently there are 824 vehicles in the pilot program - 753 employees and 71 nonemployees. Pilot participants are across 37 states with majority of participants in California." We don't have an update after that, but the rest of the document suggests the bulk of the 800+ participants added afterwards would be non-employees (hence the discussion about the need for a training video). Also, to be clear, by "internal dedicated test drivers" I mean the limited subset of employees who do testing as part of their job in Tesla-owned vehicles, not the people participating in the beta with their personal vehicles on their own time (which I bet many, if not a majority, of those employee participants are; a lot of Tesla employees work in areas completely unrelated to the AP program). Getting FSD Beta access is just an employee perk (just like how the FSD option was offered free of charge to employees).
 
I don't know the exact percentage (do you have a reference?), but I do remember they were starting to move some of it already at the beginning of the year:
Is Tesla Migrating From Programming Logic To Neural Net For Self-Driving?


So there are three main elements. Driving is three steps:

Perception, which is ~100% NNs for Tesla.

Planning, which is ~90% traditional code for Tesla, with the last 9 months seeing the first non-traditional code finally creeping in (Elon made reference to this recently too, pointing out it's still only a small amount).

Control, which AFAIK is 100% traditional code.
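For anyone who wants that split spelled out, here's a toy Python sketch (nothing here resembles Tesla's actual architecture or code; it just shows where the NN vs. hand-written boundary is usually drawn in a perception → planning → control pipeline):

```python
def perceive(camera_frames):
    """Perception: in a real stack this is where the neural networks live,
    turning raw camera frames into objects, lanes and drivable space.
    Stubbed here with a fixed scene."""
    return {"lead_vehicle_distance_m": 40.0,
            "lead_vehicle_speed_mps": 25.0}

def plan(scene, ego_speed_mps, set_speed_mps=30.0, follow_time_s=2.0):
    """Planning: largely hand-written rules/heuristics (if/else logic like this),
    deciding a target speed from the perceived scene."""
    safe_gap_m = ego_speed_mps * follow_time_s
    if scene["lead_vehicle_distance_m"] < safe_gap_m:
        return min(scene["lead_vehicle_speed_mps"], set_speed_mps)
    return set_speed_mps

def control(ego_speed_mps, target_speed_mps, kp=0.5):
    """Control: traditional code (a proportional controller here) converting
    the plan into an accelerator/brake command."""
    return kp * (target_speed_mps - ego_speed_mps)

scene = perceive(camera_frames=None)
target = plan(scene, ego_speed_mps=28.0)
print("accel command (m/s^2):", control(28.0, target))
```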
 
Perception, which is ~100% NNs for Tesla

So if perception is done individually per camera and then overlaid without regard for duplicated elements, I can see how it is entirely NN-based. But to my way of thinking there is a difference between an NN perceiving the presence of an object in a given frame and the 'reasonableness' of what is perceived frame to frame.

Do these NNs track over a block of frames on a per-camera basis, over a block of frames from a combined 360° BEV, or just on single frames?
 
So if perception is done individually per camera and then overlaid without regard for duplicated elements, I can see how it is entirely NN-based. But to my way of thinking there is a difference between an NN perceiving the presence of an object in a given frame and the 'reasonableness' of what is perceived frame to frame.

Do these NNs track over a block of frames on a per-camera basis, over a block of frames from a combined 360° BEV, or just on single frames?


This is the fundamental change of the "rewrite" (which at this point appears to have been several rewrites)

In the original system, each frame of each camera was "perceived" all by itself.

So when the two side cameras would each see part of a big truck, each camera's NN might say "we're like 78% sure that's a truck," but with no understanding anywhere in the system that both were seeing the SAME truck.

So not only would you get a jumpy truck displayed; sometimes it'd show 2 trucks, maybe even overlapping.

The aspiration is to replace that not just with a 360° BEV, where the system can "understand" that the part of the truck it sees in one camera is the same truck it sees part of in another, but to understand this in 4D, with the 4th dimension being time.
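A rough sketch of what that buys you, in made-up Python (the association logic here is deliberately crude, just to show the idea of "same object across cameras, persistent over time"):

```python
import math
from itertools import count

class Track:
    """One physical object, persistent across cameras and across frames."""
    _ids = count(1)

    def __init__(self, kind, x, y):
        self.id = next(self._ids)
        self.kind, self.x, self.y = kind, x, y

    def update(self, x, y, alpha=0.5):
        # Blend the new measurement into the existing estimate.
        self.x = (1 - alpha) * self.x + alpha * x
        self.y = (1 - alpha) * self.y + alpha * y

def fuse_frame(tracks, detections, gate_m=3.0):
    """Associate this frame's detections (from *all* cameras, already projected
    into the vehicle frame) with existing tracks; spawn new tracks for the rest."""
    for kind, x, y in detections:
        best, best_dist = None, gate_m
        for track in tracks:
            dist = math.hypot(track.x - x, track.y - y)
            if track.kind == kind and dist < best_dist:
                best, best_dist = track, dist
        if best:
            best.update(x, y)              # same object seen again
        else:
            tracks.append(Track(kind, x, y))
    return tracks

tracks = []
# Frame 1: left repeater and left pillar each see part of the same truck.
fuse_frame(tracks, [("truck", -3.0, 2.5), ("truck", -2.6, 2.8)])
# Frame 2: the truck has moved forward slightly.
fuse_frame(tracks, [("truck", -2.4, 2.6)])
print([(t.id, round(t.x, 1), round(t.y, 1)) for t in tracks])  # one stable track, not three
```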

The time aspect also massively improves labeling (which was tremendously human-time-intensive in the original system, since humans had to label things in every frame for every camera): now, even if a human needs to label an object "truck" in frame 1 on a single camera, the system can self-label that same truck as "truck" for the rest of the video stitched together from all the cameras.
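The auto-labeling idea in miniature (a hypothetical sketch, nothing like Tesla's real tooling): a human labels the tracked object once, and the label rides along the track through every other frame it appears in:

```python
def propagate_label(track_by_frame, human_labels):
    """track_by_frame: {frame_index: track_id} for one tracked object.
    human_labels: {track_id: label} from a single human annotation.
    Returns a label for every frame the object appears in."""
    return {frame: human_labels.get(track_id, "unlabeled")
            for frame, track_id in track_by_frame.items()}

# The object with track id 7 was labeled "truck" once, in one frame of one camera...
human_labels = {7: "truck"}
# ...but the tracker says the same object (id 7) appears in all of these frames.
track_by_frame = {0: 7, 1: 7, 2: 7, 3: 7, 4: 7}
print(propagate_label(track_by_frame, human_labels))
# {0: 'truck', 1: 'truck', ...} -- one click labels the whole clip
```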

A fair bit of the above is discussed with examples in the AI Day presentations.
 
Thanks. I've followed all this, but without first-hand experience of Beta / City, all I see is the same old jumping around and duplicated artifacts from time to time. It is really helpful (for me at least!) to hear the same explanation from different perspectives.

It has always confused me where the overlap is between NN and conventional code/logic, and also between static frame-by-frame processing and a continuous 'certainty field' around the car that must always be kept very close to 100% certain, at least near and in the direct path of the car.

The time-based processing must draw on multiple frames, and again, in my mind you need to track and predict pathways for multiple significant objects in some kind of hierarchy.
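Something like this made-up sketch is what I picture: extrapolate each tracked object a couple of seconds ahead (constant velocity here, for simplicity) and attend first to whatever is predicted to come closest to the car:

```python
import math

def predict_path(obj, horizon_s=2.0, step_s=0.5):
    """Constant-velocity extrapolation of one tracked object's position."""
    steps = int(horizon_s / step_s)
    return [(obj["x"] + obj["vx"] * i * step_s, obj["y"] + obj["vy"] * i * step_s)
            for i in range(1, steps + 1)]

def closest_approach_m(path):
    """Smallest predicted distance to the ego vehicle (assumed at the origin)."""
    return min(math.hypot(x, y) for x, y in path)

objects = [
    {"name": "lead car",      "x": 30.0, "y": 0.0,  "vx": -2.0, "vy": 0.0},
    {"name": "parked truck",  "x": 15.0, "y": 6.0,  "vx": 0.0,  "vy": 0.0},
    {"name": "crossing bike", "x": 10.0, "y": -8.0, "vx": 0.0,  "vy": 4.0},
]

# The "hierarchy": rank objects by how close their predicted path comes to us.
for obj in sorted(objects, key=lambda o: closest_approach_m(predict_path(o))):
    print(f"{obj['name']}: closest approach {closest_approach_m(predict_path(obj)):.1f} m")
```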
 
One behaviour IMO completely missing from public release versions to date is gentle slowing of the car when its certainty envelope needs more time to catch up. The whole feel is that everything is either completely under control or overloaded. Some of the odd behaviour seen recently (like the steering wheel violently turning back and forth) seems like a potential consequence of trying to make the system progress (move forwards) with inadequate or contradictory sensor inputs.

When reasonable human drivers drive, they subconsciously have a concept of 'how safe am I?' and 'what's my safety margin?'. FSD may well include such concepts, but I haven't seen any evidence of them in my car.
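The 'slow down while the certainty envelope catches up' behaviour is at least easy to express in principle; here's a made-up sketch of the concept (all numbers and names invented), a speed cap that scales with how confident the system is about the space directly ahead:

```python
def speed_cap_mps(base_limit_mps, path_confidence, floor_fraction=0.3):
    """Scale the allowed speed with perception confidence along the planned path.
    Full confidence -> full limit; low confidence -> creep along (never below a
    floor fraction, so the car still makes progress while it gathers more frames)."""
    confidence = max(0.0, min(1.0, path_confidence))
    fraction = floor_fraction + (1.0 - floor_fraction) * confidence
    return base_limit_mps * fraction

# 13.4 m/s is roughly 30 mph.
for conf in (1.0, 0.7, 0.3):
    print(f"confidence {conf:.1f} -> speed cap {speed_cap_mps(13.4, conf):.1f} m/s")
```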
 
Apologies, too many posts in the thread to catch up on and I have lost track.

Is tonight FSD beta V10 release night, or another couple of weeks or versions?

I'm really looking forward to seeing the results of the combined city-street and highway stacks, irrespective of version naming. It's probably the first benefit we will get to see here in the UK.
 
Apologies, too many posts in the thread to catch up on and I have lost track.

Is tonight FSD beta V10 release night, or another couple of weeks or versions?

I'm really looking forward to seeing the results of the combined city-street and highway stacks, irrespective of version naming. It's probably the first benefit we will get to see here in the UK.
Tonight would be the night....unless there's been an updated tweet
 
Is it expected that one day the visualisations will be steady and glitch free?

I have never understood why the car displays visualisations that glitch if it is working to a stable interpretation for FSD / AP etc. that could be used to feed the driver display. A bit of lag is no problem, but why the jumping around, unless that's the best visualisation that exists?

With all of the car's AI and NN processing, how can it believe that a vehicle can jump around like that?
There seems to be no filtering done. Jittering should be smoothed out. The fact that objects flicker like that tells me they still have years left to go before their internal model is truly trustworthy.
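A minimal sketch of the kind of filtering being asked for, an exponential moving average over each object's displayed position (a real stack would more likely use something like a Kalman filter, but the effect on visual jitter is the same idea):

```python
class SmoothedPosition:
    """Exponentially smooth a noisy stream of (x, y) estimates so the rendered
    object glides instead of flickering from frame to frame."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # lower alpha = smoother display, more lag
        self.x = self.y = None

    def update(self, x, y):
        if self.x is None:        # first measurement: nothing to smooth yet
            self.x, self.y = x, y
        else:
            self.x += self.alpha * (x - self.x)
            self.y += self.alpha * (y - self.y)
        return self.x, self.y

# Noisy per-frame estimates of the same truck, jumping around by a meter or so.
raw = [(10.0, 3.0), (11.2, 2.1), (9.6, 3.8), (10.4, 2.7), (10.1, 3.1)]
smoother = SmoothedPosition(alpha=0.3)
for x, y in raw:
    smooth_x, smooth_y = smoother.update(x, y)
    print(f"raw ({x:.1f}, {y:.1f}) -> displayed ({smooth_x:.2f}, {smooth_y:.2f})")
```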