Welcome to Tesla Motors Club

New button to enable FSD Beta download?

Not true.

Perception, prediction, object and lane fusion, etc., are all improved in the FSD Beta on highways.
Source?

Everything I have seen shows the standard Navigate on Autopilot software and behavior (available today to everyone) when the FSD Beta is on the highway. I am happy to be wrong, but I haven't seen anyone mention it in any of the videos I've watched (I have not watched all of them, so I could have missed something).
 

Hard to tell if that's intentional, since my car does that sort of maneuver from time to time as well (swerve back in middle of lane change and then proceed). The maneuver in the video is a little bit different than what I've seen in my car, but it's difficult to conclude that it was intentionally avoiding the bag. It's possible it got scared of the bridge's / sign's shadow.
 
It would be interesting to set something up where you could do an A/B test with a FSD Beta car and a "current iteration of software" car to see if they react differently on the highway in the same situation.

Clearly in that video the car moves around the bag, but it is unclear whether that is because of the FSD software. As the video mentions, the visualization on the screen is the standard NoA one, so there is no insight into what the computer thinks it saw. It may be that all current software can do that and it is not an FSD Beta-specific feature.

I am positive that Tesla will eventually integrate the highway driving and city driving into one software package, but I remain skeptical this has already happened until further evidence is available.
 
Yeah this clearly does not show anything.

Eventually I would assume AP on the freeway will be improved, if perception can actually be solved. But it appears that hasn't happened yet.

But hopefully it won’t be dodging plastic bags! Definitely perception needs to be able to distinguish between dangerous and harmless obstacles. I whacked a plastic bag on the freeway last week. Saw it way beforehand. Plowed ahead, no slowing or swerving - way too dangerous for that. FSD will doubtless do the same. The question is when. Going to have to have a good physics prediction engine to help make the tough calls I guess.
 

I could be tripping balls but I actually seem to remember Elon talking about how radar is better than lidar because it won't freak out about a plastic bag.
 
Lidar is basically the same as vision in this specific regard, so it seems like both systems need predictive engines that look at the motion and size of objects and interpret how they behave in airflow to determine the danger (weight and aero properties should be fairly easy to infer). Just like a human does! Distinguishing between a bag and plywood could be difficult, though perhaps a small piece of plywood or drywall is relatively inconsequential (though not to your paint and bodywork). A 2x4 should be easy, though. In any case, I don't see any reason why lidar or vision would freak out about a plastic bag with that strong modeling underpinning the interpretation.
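The idea above (infer whether debris is dangerous from how it moves, rather than just what it looks like) can be sketched as a toy heuristic. This is purely illustrative, not anything Tesla ships: every function name, threshold, and number below is invented for the example.

```python
# Toy sketch (NOT Tesla's implementation): classify a tracked road object as
# hazardous or harmless from its observed motion. A very light object like a
# plastic bag sheds speed rapidly in air; dense debris (a 2x4) travels
# ballistically. All thresholds here are made-up illustrative values.

def estimate_drag_ratio(speeds_mps, dt_s):
    """Mean fractional speed loss per second across consecutive track frames."""
    losses = []
    for v0, v1 in zip(speeds_mps, speeds_mps[1:]):
        if v0 > 0:
            losses.append((v0 - v1) / (v0 * dt_s))
    return sum(losses) / len(losses) if losses else 0.0

def classify_obstacle(speeds_mps, size_m, dt_s=0.1):
    """Return 'harmless' for small objects decelerating so fast they must be
    very light, 'hazard' otherwise. Purely illustrative heuristic."""
    drag = estimate_drag_ratio(speeds_mps, dt_s)
    if size_m < 0.5 and drag > 0.5:   # small AND shedding >50% of speed/second
        return "harmless"             # behaves like a bag: plow ahead
    return "hazard"                   # dense enough to matter: avoid

# A bag tossed around by a car's wake: speed decays fast frame to frame.
print(classify_obstacle([8.0, 4.0, 2.0, 1.0], size_m=0.3))   # harmless
# Lumber off a truck: near-constant speed, larger object.
print(classify_obstacle([10.0, 9.9, 9.8, 9.7], size_m=1.2))  # hazard
```

A real system would of course fold in far more signal (shape, flutter, closing speed), but the structure of the call is the same: motion history in, go/avoid decision out.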
But anyway this is off topic from me. Back to Button Watch! Should be any day! Right?
 
Not true.

Perception, prediction, object and lane fusion, etc., are all improved in the FSD Beta on highways.
Can you link to a video demonstrating this on the freeway? I am curious since that is one of the primary reasons for purchasing FSD (city NoA seems much more aspirational).
verygreen's post in the other thread makes an interesting point (even though he doesn't necessarily have access to the FSD Beta code, he does for the production code). He basically says the NNs for the "Vision" module are shared among the different driving modes.

Which means it likely doesn't work like everyone here seems to be assuming:
1) in FSD mode in the UI it's running Beta code (including updated "Vision" code, like the 4D that Elon was talking about)
2) in NOA mode in the UI it's running the public code (including the old "Vision" code, the basic 2D/3D system Tesla had been using for a while)

Rather it's more like this:
1) in FSD mode in the UI it's running the Beta city streets lane changing/staying logic, which uses the new "Vision" code for perception
2) in NOA mode in the UI it's running the public highway lane changing/staying logic, which uses the new "Vision" code for perception

You can see that in the latter case, NOA in the Beta cars may behave exactly the same as public NOA in terms of making decisions (given that the logic has not changed), but it may have enhanced "Vision" just like it does on city streets.
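The structure being hypothesized here (one shared perception stack, separate per-mode planners) can be sketched as a toy module layout. Every class and name below is invented for illustration; none of this reflects Tesla's actual code.

```python
# Hypothetical sketch of the shared-"Vision" architecture described above:
# a single perception module feeds two separate planning stacks, so improving
# perception does not by itself change highway driving decisions.

class SharedVision:
    """Stand-in for the one perception stack used by every driving mode."""
    def perceive(self, camera_frames):
        # Return a unified scene representation (toy placeholder fields).
        return {"frames": len(camera_frames), "lanes": [], "objects": []}

class HighwayPlanner:
    name = "legacy NoA logic"          # public highway lane logic, unchanged
    def plan(self, scene):
        return f"{self.name} on {scene['frames']} frames"

class CityStreetsPlanner:
    name = "FSD Beta city logic"       # new city-streets lane logic
    def plan(self, scene):
        return f"{self.name} on {scene['frames']} frames"

def drive(mode, camera_frames, vision=None):
    vision = vision or SharedVision()
    scene = vision.perceive(camera_frames)   # same perception in every mode
    planner = CityStreetsPlanner() if mode == "city" else HighwayPlanner()
    return planner.plan(scene)               # only the planning logic differs
```

Under this layout, a Beta car on the freeway makes the same decisions as public NoA even though the perception underneath improved, which matches what the videos seem to show.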
 
That's interesting. I would think an astute observer would notice changes in behavior on the Freeway NOA, then. Of course, there don't seem to be tons of videos of this (the plastic bag example is really unclear and not definitive), since the focus has been city driving.

It seems like they COULD switch over to the old perception (swap the NNs in the vision module) when the car thinks it's on the highway (which that other thread claimed couldn't be happening), but of course we have no idea what they're actually doing. It doesn't seem like green can rule it out one way or the other; he's not actually saying definitively that the NNs in the vision module won't change based on context.

Like I said, would require some astute observation from an avid NOA user, probably. You'd think the visualizations (even though they're the old style) might show more cars, etc., in addition to behavior changes, since the vision in the city NoA seems to pick up a lot of them. But no idea (they might limit the number of cars displayed or whatever in that old visualization). I'd be trying to detect behavior changes.
 
As you point out, the limiting factor is that people are understandably focusing almost all their attention on the mode where the FSD visualization shows, and not caring much about comparing NOA. In the latter situation it'll be hard to tell the difference, especially if the old visualization was doing some "smoothing" in the first place (like you mention, maybe limiting the number of objects shown). Either way, they'll obviously have to get to updating the rest of NOA eventually, but that looks like it'll have to wait until after "City Streets" is released publicly.
 
Yeah, just in the process of driving; here's one that was posted here. There are a lot of examples posted in the various threads around here; it happens frequently. I would say it typically happens in situations where a very careless human driver would also hit the curb. This should not be a surprise to anyone; it's pretty much inevitable that this would happen!

Hahaha! Driver is so calm sounding...(car hits curb hard)...”hit that curb, that sucks...”
 
Totally baseless theory: the button is in 2021.4.12, but they're not going to enable it until most of the fleet is on this version. Currently it's deployed to 31.5% of TeslaFi vehicles.

So maybe they flip the switch when the majority of cars have it. The button would then opt you in for 8.3 when it drops later this week/next week.
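This theory amounts to a server-side feature flag gated on fleet adoption. A toy sketch of the gating check (the function name and the 50% threshold are invented; the 31.5% figure is just the TeslaFi sample quoted above):

```python
# Toy sketch of adoption-gated rollout: ship the button dark in 2021.4.12,
# flip it on remotely once enough of the fleet is running that build.

def button_enabled(fleet_on_target_build: float, threshold: float = 0.5) -> bool:
    """Flip the hypothetical feature flag once a majority of the fleet
    (fraction in [0, 1]) is on the target build."""
    return fleet_on_target_build >= threshold

print(button_enabled(0.315))  # False: 31.5% adoption, keep the button hidden
print(button_enabled(0.62))   # True: majority on the build, flip the switch
```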

EDIT: I know green has said it's not in there. But there seems to be some debate about whether it could be found anyway...