Welcome to Tesla Motors Club

FSD Beta Videos (and questions for FSD Beta drivers)

Thanks for finding this.

Not surprising, but it does make me wonder whether they’ll expand the beta without it (i.e. the button in a couple of weeks). I suppose it’s not a gating factor for widening the beta, as NoA is already out there as-is.

Tesla could wait for the single stack before releasing "the button" if they want the general public to have "feature complete". But I think a more important metric is safety, not whether highway driving is included or not. Early reports are showing some issues with V10. I think Tesla will want to fix those issues before releasing it wide.
 
I agree. They’ll definitely want a single stack before they push this to everyone, if for no other reason than consistency of user experience. But the criterion for widening the beta will be the safety of City Streets.

My guess is they’ll release the button as promised within the next 2-4 weeks. Because it has now been redefined as a ‘request’ button, there’s no reason not to. They can use it to get a better sense of demand and can then expand as slowly as they like - even if it’s just a dozen or so cars a month.
 

FYI, Elon just gave an update that the single stack for all should come in V10.1

 
The streets of Seattle seem pretty challenging: lots of hill crests, rough roads, islands, road / bike markings, monorail pillars, etc.
And e-scooters! It looks like FSD Beta got confused by a vulnerable road user "in the crosswalk" who wasn't actually using the crosswalk. Sudden braking from 21 mph to 3 mph, after already entering the intersection, to stop for the crosswalk:
[Attachment: scooter vru in crosswalk.jpg]


(Notice the blue path line switches to gray at the crosswalk indicating it wants to stop.)
 
At 15:20: "this is by far the best update"
It then proceeds to make a left turn from the wrong lane and almost runs into a curb... The illegal left was also across a bike lane; would FSD have stopped for a cyclist on the left?
To me the biggest issue with FSD Beta is that many people don't seem to notice that they're driving like a drunk when they use it. If the car is doing something wrong, disengage and report the bug!
 
To me the biggest issue with FSD Beta is that many people don't seem to notice that they're driving like a drunk when they use it.

Mostly in city centers, though. Even FSD developers with HD maps avoid city centers. The main limitation of FSD Beta seems to be perception. We've seen the performance improve "exponentially" as the visualizations become more confident and accurate.

During AI Day, Ashok said that he doesn't see why the FSD visualizations can't eventually be perfect, and that we're just on the way there (when asked about the jitteriness of cars in the production AP visualization). I hope he's right :)
 
Do you happen to know if the Tesla approach is fundamentally different from say the method Waymo and Mobileye are using? I presume they all use neural net processing.

They're all using NNs, though how they use them, what they use them for, and how they train them obviously differ.

Fundamentally Tesla is trying to build a generalized vision-only solution that works everywhere. Having maps "helps" for general navigation, and to provide high level info like not-visible-at-distance traffic control locations or lane information, but they don't need to know where every single curb and tree is in the maps.

The rest use fusion of multiple sensors (generally radar, lidar, and vision at minimum) plus HD maps that do provide super-fine detail, then use the fused sensors to localize within that ultra-high-res map. As a result, so far anyway, none of these folks have been able to scale their systems, and they typically offer nothing more than "pilot" test programs in a single city (with promises, now years late, to offer them in a very few other cities... though some are at least private-testing in a few other cities).

@diplomat33 can probably give you a deeper dive into the weeds on the differences between the "we need lidar" companies' approaches; I think he mentioned, for example, that Waymo does its fusion much earlier in the process than Mobileye.


Tesla's theory, in contrast, is that if you solve vision, there's nothing LIDAR can add... and if you haven't solved vision, then LIDAR and super-HD maps are just a crutch for a sub-optimal vision system.


(Personally I think Tesla's thinking here is GENERALLY correct -- but I disagree with them on radar, because radar objectively can add info that vision can't. The problem is that most cheap "we need a million+ of these a year" car radars are very low resolution, so trying to fuse that data with vision is difficult -- and Tesla ended up deciding that the relatively few edge cases where radar adds useful info you can't get with cameras are outweighed by the many more cases where it makes the system worse.)
 
So you agree. Listening to Elon will only cause you to be misled.

If you're trying to get deep-dive tech details, sure. Listening to the actual technical head of the program gets you more accurate info.

Just like you'd be "misled" listening to Toyota's CEO talking about how awesome hydrogen cars are, when the actual engineer in charge of the Mirai admits that just charging batteries directly, as Tesla does, makes way more sense than hydrogen fuel cells.
 
Mostly in city centers, though. Even FSD developers with HD maps avoid city centers. The main limitation of FSD Beta seems to be perception. We've seen the performance improve "exponentially" as the visualizations become more confident and accurate.

During AI Day, Ashok said that he doesn't see why the FSD visualizations can't eventually be perfect, and that we're just on the way there (when asked about the jitteriness of cars in the production AP visualization). I hope he's right :)
It's the performance of humans that's the issue. Maybe this guy is just a really bad driver who regularly makes illegal turns without noticing? It seems more likely that untrained humans are really bad at testing beta full self-driving software. It's not clear that improving the performance of the software will help either, as the human may become even less aware of the mistakes the system makes.

What has been the rate of exponential improvement?
My guesstimate is that the disengagement rate is about a tenth of what it was a year ago.
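As a quick sanity check on that guesstimate: if a 10x drop in disengagement rate over a year compounded at a constant monthly rate, it would work out to roughly a 17-18% reduction each month. A minimal sketch of the arithmetic (the 10x figure is the poster's guess, not a measured number):

```python
# Back-of-the-envelope: what constant monthly improvement compounds
# to a 10x annual reduction in disengagement rate?

def monthly_factor(annual_improvement: float, months: int = 12) -> float:
    """Per-month multiplier on the disengagement rate that compounds
    to the given annual improvement (e.g. 10.0 means 10x fewer/year)."""
    return (1.0 / annual_improvement) ** (1.0 / months)

factor = monthly_factor(10.0)
print(f"monthly multiplier: {factor:.3f}")   # ~0.825
print(f"monthly reduction:  {1 - factor:.1%}")  # ~17.5% fewer disengagements/month
```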
 
It seems more likely that untrained humans are really bad at testing beta full self-driving software. It's not clear that improving the performance of the software will help either, as the human may become even less aware of the mistakes the system makes.

It's not clear that the public fsd beta testers are actually contributing much input to the program anyway, as there aren't very many of them relative to Tesla-employee testers.

I think the public testers are more for publicity and Tesla AI team "ownership" motivation. The Tesla AI team would be more motivated if they knew their work would be on full display every two weeks.
 
You're welcome to point out where great progress has been made.


Many of the videos, which you reveal here you don't watch, already do so.

The V10 Lombard Street one is perhaps the most obvious example, but there are tons of others going back through 8->9, each update since, and now 9->10. Testers often repeat previous failures and find them working in newer versions, or in some cases merely "failing less badly," which is still progress.
 
I think we're going to have to start measuring these videos in traffic law violations per mile instead of disengagements:
  • Bus lane violations
  • Crossing double white lines
  • Rolling stops
  • Failure to yield (or at least not yielding soon enough to avoid scaring other drivers and pedestrians)
I mean, it did pretty well on the easy stuff. It's about as good as a 15-year-old student driver on unfamiliar roads. So that's not terrible. But it definitely isn't ready for unsupervised driving, or even close to it, and I kind of feel like a lot of their test drivers should be disengaging way more often than they are, so that those disengagements will be reported, and the problems that caused them can get fixed.
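The proposed metric is easy to sketch: tally violations by category for a drive and normalize per mile, alongside the usual disengagements-per-mile. A hypothetical illustration (the `DriveLog` structure and all counts are made up, not any real telemetry):

```python
# Hypothetical sketch of "violations per mile" as a metric for FSD Beta
# test videos, using the violation categories listed above.
from dataclasses import dataclass

@dataclass
class DriveLog:
    miles: float
    disengagements: int
    violations: dict  # category -> count observed in the video

def violations_per_mile(log: DriveLog) -> float:
    """Total traffic-law violations normalized by distance driven."""
    return sum(log.violations.values()) / log.miles

# Made-up numbers for a single test drive.
drive = DriveLog(
    miles=12.5,
    disengagements=3,
    violations={
        "bus_lane": 1,
        "double_white_line": 2,
        "rolling_stop": 4,
        "failure_to_yield": 1,
    },
)

print(f"{violations_per_mile(drive):.2f} violations/mile")          # 8 over 12.5 mi -> 0.64
print(f"{drive.disengagements / drive.miles:.2f} disengagements/mile")  # 3 over 12.5 mi -> 0.24
```

Tracking both numbers would expose the failure mode discussed in this thread: a drive can have zero disengagements while still racking up violations.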
 
They're all using NNs, though how they use them, what they use them for, and how they train them obviously differ.
Incorrect; they are using them the same way, for the same tasks, and they are trained mostly the same way, although others like Waymo use SOTA architectures while Tesla uses run-of-the-mill academic architectures.

Tesla is actually lagging in every category, and they are not even doing prediction yet in the released stack. They are running the hand-coded ego-car logic on other actors, so they are behind what others are doing in prediction and planning.
Fundamentally Tesla is trying to build a generalized vision-only solution that works everywhere.
This gives the illusion that Tesla's is a generalized system and others are not. This is 1000% wrong. Other systems are also generalized. They don't forget what a car looks like and start slamming into cars or running over pedestrians because they go from SF to Detroit, or forget how to make a left/right turn.
Having maps "helps" for general navigation, and to provide high level info like not-visible-at-distance traffic control locations or lane information, but they don't need to know where every single curb and tree is in the maps.
Tesla uses HD maps, and their HD maps actually do know where each curb is.
and also HD maps that do provide super fine detail, the using the fused sensors to know where in that ultra-high-res map they are. As a result, so far anyway, none of these folks have been able to scale their systems, and they typically offer nothing more than "pilot" test programs in a single city (with promises, now years late, to offer them in a very few other cities... though some are at least private-testing in a few other cities).
There's this myth about HD maps that has been debunked a thousand times: HD maps aren't what's stopping L4 systems from scaling. Secondly, others have systems that work anywhere in a country, or the world, because they have HD maps of entire countries (Mobileye's Drive & SuperVision, Huawei Autopilot). However, those systems are not L4 everywhere, because mapping isn't the gatekeeper.
Teslas theory in contrast is if you solve vision, there's nothing LIDAR can add....
Superman should give up his X-ray vision, because there's nothing X-ray vision can add...
and if you haven't solved vision then LIDAR and super-HD-maps is just a crutch for a sub-optimal vision system.
First, there's no such thing as "solving vision"; solving vision only happens with AGI. This is another of Elon's myths.
Secondly, Tesla USES super-HD maps; that's another Elon myth.
(personally I think Teslas thinking here is GENERALLY correct-- but I disagree with them on radar... because Radar objectively can add info that vision can't...
Lidar can objectively add info that vision can't. You'd just rather believe Elon's nonsense.
 
Mostly in city centers though.
What an unfortunate place for "City Streets Autosteer" to drive like a drunk in.

Kidding, I'd be surprised if we don't see it done within the next 3 months. It's probably easier than driving in downtown SF for an hour without disengagement.

The streets of Seattle seem pretty challenging: lots of hill crests, rough roads, islands, road / bike markings, monorail pillars, etc.
Like I said, the caveat to a "cross country drive" with FSD V10 is going to be that the route is planned to avoid any city streets, even though literally all the releases to this point have made no changes to highway code and are all about city streets.
 
The people who keep pointing out disengagements are missing the point of the FSD beta. Just because there continue to be disengagements doesn’t mean that the software isn’t making great progress.
It's the lack of disengagements that's the problem! Again, the problem isn't the software, it's the testers who don't seem to even notice when they're doing something illegal and/or dangerous.
Disengagement rate has almost nothing to do with the safety of FSD Beta because it still requires a human safety driver. What matters is the ability of the safety driver to prevent errors, a skill which seems to be lacking in many people.
 
FYI, Elon just gave an update that the single stack for all should come in V10.1
I'm calling it. V10.0 does not go wider than the current beta testers. Elon says "there's no reason to release it, because the changes in V10.1 are so big and fire." Then we get a series of V10.X releases that have some regressions on the highway, and Elon says the wider release will be V11 after some sort of rewrite.

Seriously, what is up with their version numbering if V10 is not "single stack" but V10.1 will be? It should be clear to everyone it's all marketing, not an indication of the scale of any actual code changes.
 
Disengagement rate has almost nothing to do with the safety of FSD Beta because it still requires a human safety driver. What matters is the ability of the safety driver to prevent errors which seems to be lacking in many people.
This is an excellent point, and one that we all contribute to as we fetishize disengagement rate. Both the supporters and skeptics focus on disengagements, which pressures the supporters that have "FSD" Beta to post videos with no disengagements and thus wait way too long to disengage, hoping the car will eventually figure it out.

We need to move to a metric more like "a real driver would never have done that, and doing it increased the risk to the population around the car," rather than using only the decisions of the single, biased driver behind the wheel as a performance metric.