FSD Beta Videos (and questions for FSD Beta drivers)

Elon says V9.2 is "not great" but adds that the AP team is rallying to improve it as fast as possible. He also adds that they are working on combining highway and city streets, but that it requires "massive retraining".


I am a little surprised that Elon would admit that FSD Beta is not great. Usually, he is hyping it up. He seems to be trying to downplay the hype in this tweet.

And I think "massive retraining" implies that Tesla has a lot of work left to do before we see a complete "highway+city" FSD. Elon seems to be trying to lower expectations.
That's a nice tweet for Tesla to use in any NHTSA investigation or Senate hearing. See how humble we are; we never said our system was perfect. Despite the many examples of them/him doing just that.

Classic PR tactic. Or a rare moment of honesty/doubt. Good to have that statement out there anyway.
 
Perhaps as a result of the NHTSA investigation? As you say, this isn't usual for Elon.
 
While I agree these are cherry-picked testers (as they should be), you need to be careful about this claim. You don't KNOW that the near miss would have been an actual accident without the intervention; that is just speculation. The testers are told to be vigilant, and it's quite possible that in some of these cases the car would have taken appropriate steps, but the tester disengaged before that. Presumably Tesla can look at the predictions in detail and determine this. Of course, there are many cases where it's clear the car was way out of line, but it's incorrect to assume that is always the case.
I definitely get your point, but assuming that a slight change in circumstance would have led to a real incident is the definition of a near miss. We can change that wording to "have stopped many near misses from likely becoming actual incidents", but emphasizing the gravity of the consequences might not be a bad thing when we're talking about events that likely sent the drivers' heart rates skyrocketing.
 
I didn't say anything about the sample size. And in fact I have already noted that the statistics from the FSD Beta would mean nothing, since the drivers are cherry-picked and are required to maintain a very high vigilance level, way beyond the general population.

And I also said nothing about the FSD standard. I have never argued against having a high bar for autonomous vehicles. My point has always been that those who argue that FSD will never appear do so using an artificially high standard of acceptance. But acceptance does not mean we should not strive to be better than that in the longer term. Of course we should.
post after post about what the car must be and how reliable it must be while more or less ignoring that humans would never be able to reach the levels they are arguing the cars must reach.
I assumed you were referring to all the criticism of FSD Beta's current performance since this is the FSD Beta video thread. If anything I think people here think it's much closer to human performance than it is.
Personally I think that FSD will be accepted when it matches human performance, because I think that most of the collisions will be the fault of other drivers when it reaches that point. I think "it has the same collision rate as humans, but most of the time it's the other guy's fault" will be acceptable to people.
 
No, my critique was aimed at several here who have posted claims that FSD needs actual human reasoning abilities before it can match human drivers (with no evidence to back that assertion). My intent was (is) to point out that most driving is very far from "intelligent" (which is not a bad thing since we rely on the autonomous part of our brain to do most of it).

Of course we are many, many years away from any genuine "artificial intelligence" (in the real sense of the term). We can't even actually define what it means. My feeling (for which I happily admit I have no empirical evidence) is that, for most daily driving, a sufficiently well-trained NN can do the mundane tasks and be sufficiently smart to spot tasks it cannot do and ask the human to take over. However, I don't think anyone knows what "sufficiently well-trained" actually means yet, and won't until we empirically get there.
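To make the "spot tasks it cannot do and ask the human to take over" part concrete, here is a rough sketch of the kind of confidence-gated handover I have in mind. Every name and number in it is made up by me purely for illustration; it is not anything from Tesla's actual stack.

from dataclasses import dataclass

@dataclass
class PlanResult:
    steering: float     # proposed steering command from the NN
    confidence: float   # the NN's self-reported confidence, 0..1

def plan_step(nn_output: PlanResult, takeover_threshold: float = 0.8):
    """Follow the NN when it is confident; otherwise hand control back."""
    if nn_output.confidence >= takeover_threshold:
        return ("EXECUTE", nn_output.steering)
    # The hard part hides in this branch: a "sufficiently well-trained"
    # network has to know when it is NOT confident, and nobody can define
    # that precisely yet.
    return ("REQUEST_HUMAN_TAKEOVER", None)

print(plan_step(PlanResult(steering=0.02, confidence=0.95)))  # mundane driving
print(plan_step(PlanResult(steering=0.40, confidence=0.35)))  # unusual scene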
 
Ah. I think it's more likely than not that for true "Full Self Driving" (i.e. Level 5) we will need general intelligence. I think for something with human assistance (like Waymo, Cruise, and probably every "robotaxi" company other than Tesla) it is possible to achieve human level safety with existing technology (meaning without major breakthroughs in AI).
How can one really have evidence about what will be necessary? Do we really know how the "autonomous" part of our brain works? It seems like we still haven't been able to match the performance of the human vision system (which feels very autonomous to me).
 

360 video from Frenchie in downtown Chicago. FSD struggled a lot with lane selection, which required a lot of disengagements.

The most significant was at 12:20 or so, where the car decided to try to go around the car in front of it, heading seemingly directly at a steel pillar. Another example of the car not acknowledging pillars, steel or concrete, and happily driving at them.
 
Interesting. Will they be refitting to work better in other areas? Could that happen in 9.3 or 9.4?

 
If you expect that to happen in a matter of weeks, well, I would recommend tempering your expectations.
I wouldn't say I expect it, but some changes would seem to be a possibility. I say that because the training data is already in place for this point in time, and adjusting the fit would seem to be different from retraining.

As an example, we have seen in many videos that California (rolling) stops are the norm. FSD is already trained to recognize and stop (roll) at stop signs. Fitting for other areas might mean changing the setting to come to a complete stop, which is a lot easier than the original NN training. It has also been speculated that there might be a user setting for full stop vs. rolling stop. Programming the user option is more effort than adjusting the 'stop or roll' setting, but it's still easier than base NN training and all of the iterations that requires.

Let me add that I'm not an expert by any means and these are only my observations/thoughts after watching and reading a lot on this whole process. I raise the issue for others who have expertise to contribute their thoughts.
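Purely to illustrate the distinction I'm drawing between "adjusting the fit" and retraining, here is a toy sketch. The parameter names are hypothetical and have nothing to do with Tesla's actual code; the point is only that the stop-sign behavior could live in a post-perception setting rather than in the perception network itself.

STOP_BEHAVIOR = "full_stop"   # could be a regional default or a user setting:
                              # "full_stop" or "rolling_stop"

def target_speed_at_stop_line(behavior: str) -> float:
    """Speed in mph the planner aims for at the stop line."""
    # The NN has already detected the stop sign either way; only this
    # downstream number changes, which is far cheaper than retraining.
    return 0.0 if behavior == "full_stop" else 3.0

print(target_speed_at_stop_line(STOP_BEHAVIOR))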
 
Regarding the steel pillar at 12:20 in the Chicago video: if you watch the path line, it's jumping all over the place, sometimes going around on either side of the pillar. For a very brief time the pillar even becomes a pedestrian. So it's not completely ignoring it; I think it just has a lot of trouble figuring out where it is. Wasn't there mention at AI Day of how many seconds of video the perception NN is taking in? After sitting there for a while, is it possible that it "forgets" about the pillar because it can only accurately place it while moving?
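Here's a toy version of that speculation, just to show the mechanism I'm imagining (the window length and frame rate are made up by me; this is not how Tesla actually buffers video). With a fixed-length window, anything the network can only localize while moving has to be re-established from stationary frames once the car has sat still longer than the window.

from collections import deque

WINDOW_SECONDS = 2    # assumed window length, purely for the example
FPS = 36              # assumed camera frame rate
frame_window = deque(maxlen=WINDOW_SECONDS * FPS)

# Sit still for 10 seconds: the older (moving) frames silently fall out.
for t in range(10 * FPS):
    frame_window.append({"t": t, "car_moving": False})

print(len(frame_window), "frames retained, all from the last",
      WINDOW_SECONDS, "seconds of sitting still")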
 
There are other players in the self-driving space that are completely throwing out a human-coded policy/planning layer and going full end-to-end machine learning (video of driving in, action of driving out). For a system like this, it doesn't even need to perceive and segment everything in the scene. It isn't important to count 35 pedestrians in the intersection and predict each of their paths when it only takes one to block your path. I'm specifically talking about OpenPilot.
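For anyone curious what "video in, action out" looks like in its most stripped-down form, here is a minimal sketch in PyTorch. The layer sizes are invented by me and this is not OpenPilot's (or anyone else's) real model; it just shows that there is no object list and no hand-coded planner anywhere in the loop.

import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self, frames: int = 4):
        super().__init__()
        # a short stack of recent RGB frames goes in as extra channels
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * frames, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)   # [steering, acceleration]

    def forward(self, clip):           # clip: (batch, 3*frames, H, W)
        return self.head(self.encoder(clip))

policy = EndToEndPolicy()
dummy_clip = torch.randn(1, 12, 128, 256)   # 4 stacked RGB frames
steering, accel = policy(dummy_clip)[0]
print(float(steering), float(accel))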
 
What does "overfitting" mean in this context, as Elon put it?
At least for training neural networks, this usually means the networks learn certain aspects of the training data that aren't generalizable to other situations. Perhaps one example of that: in Palo Alto, left turn lanes have solid white lines on both sides of the turning lane:
[image: Palo Alto left turn lane with solid lines on both sides]


However, near Chuck Cook, all lanes (turning or straight) have solid lines just before the intersection. So the neural network might have overfit, believing the current straight lane is actually a left turn lane, and incorrectly predict a path to turn left to "continue."
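To make the overfitting idea a bit more tangible, here's a toy example (the feature encoding is entirely made up by me and has nothing to do with Tesla's actual networks): fit a classifier only on Palo Alto-style lanes, where solid lines on both sides reliably mean a left-turn lane, and watch the learned rule misfire on a straight lane that happens to share the same cue.

from sklearn.tree import DecisionTreeClassifier

# features: [solid_line_on_left, solid_line_on_right]
palo_alto_X = [[1, 1], [1, 1],           # left-turn lanes
               [0, 1], [1, 0], [0, 0]]   # ordinary straight lanes
palo_alto_y = ["left_turn", "left_turn", "straight", "straight", "straight"]

clf = DecisionTreeClassifier(random_state=0).fit(palo_alto_X, palo_alto_y)

# A straight lane near Chuck Cook's intersection also has solid lines on
# both sides just before the light, so the Palo Alto-fitted rule misfires:
print(clf.predict([[1, 1]]))   # -> ['left_turn'], wrong for this lane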
 
Re: NDA for FSD Beta testing

To all official FSD Beta testers: can one of you post a [edit: SAFELY conducted] video of your car driving directly towards a car-sized collection of soft objects (boxes, inflatable Santas, whatever) at >30 mph on a public street/alley/parking lot and - entirely because of FSD Beta's action/inaction - crashing through them?

If no one posts a video in 5 days, then we may assume that is because there IS a Non-Disclosure Agreement (verbal, implied, and/or written) and you are not allowed to do this.

Or even easier, if there is an NDA, can you post the full terms (verbal, implied, and written) to the best of your understanding.

Thank you :)
 