Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD Beta Videos (and questions for FSD Beta drivers)

That was pretty minor and I didn't really mind that one. But when he complains later in the vid about FSD screwing up a stop sign by sitting there too long while the car on the cross street was turning right, and doesn't intervene by hitting the go pedal or even report it, it frustrates me, because Tesla wants that data and it doesn't look like he's giving it to them. And that's nothing compared to how flabbergasted and pissed I was to see him just let the car screw up the stop sign later in the video, when it ignored the car turning left that had the right of way and just proceeded. Good on the other car for saving both of them by stopping, but if the car is driving like a dick, it's on the human steward to stop it and let the FSD team know that it's doing something wrong.

…and this is why giving the FSD beta to social media personalities more concerned with their "influencer" status instead of people with software engineering and testing experience is stupid.
 
  • Like
Reactions: pilotSteve
…and this is why giving the FSD beta to social media personalities more concerned with their "influencer" status instead of people with software engineering and testing experience is stupid.
Well I sure as hell am glad they didn't give it to you!
And the bit about "social media personalities" has been proven BS over and over again: a new person on Twitter or YouTube with only a handful of followers posts a video with FSD, and only then do they pick up a bunch of followers.
 
AI DRIVR's video was very informative and really showed how much FSD has improved. Of course it made mistakes, but rather than being nitpicky, I think we should give Tesla credit for the overall improvements.
Feels like we're maybe a month or two away from expanding the beta program.
 
  • Like
Reactions: mikes_fsd
FSD beta recognizing and responding to some makeshift construction speed limits:

Screenshot_20210120-220415_YouTube~2.jpg


At 9:18
 
…and this is why giving the FSD beta to social media personalities more concerned with their "influencer" status instead of people with software engineering and testing experience is stupid.
Hmmm ... who should be asked to test software? Typical users or techies?

I’d choose typical users, every time.

After all, the dev team and a lot of other Tesla engineers are testing as well.
 
Hmmm ... who should be asked to test software? Typical users or techies?

I’d choose typical users, every time.

After all, the dev team and a lot of other Tesla engineers are testing as well.

Depends what you're looking for. Historically, in my experience, you want external black-box testers in addition to your internal SWENG/SWQA teams: people who look at things with a technical eye so they can report issues and give useful feedback on reproducibility, context, etc., without being biased by internal knowledge of process and function. These are the "professional testers who don't work directly for the company," which is what I would expect the people with the FSD beta to be, in a sane world.

Instead, Tesla seems to be using this as a blend of marketing and kinda-testing, at least for some of the social media influencer sorts.


To give an example outside of the automotive world that I think a number of people will be familiar with, look at how games work these days. 25 years ago, being an external beta tester would usually end with your name in the credits, as part of a small team of volunteers who ran the product through the wringer in an organized and useful fashion, under a very strict NDA that wouldn't allow you to disclose anything, ever, or your family would be turned into meals for the homeless or whatever. Now, beta access is handled wildly differently and is part of the marketing strategy for games.

I'm not going to say that either approach is "better", but I can say that the testers that I appreciate at work are the ones who give brutal, honest feedback about the problems they experience, as well as context for reproducing the events, etc. Improvement comes from fixing problems, not by being told it's good.
 
Last edited:
  • Like
Reactions: pilotSteve
#FSDBeta 10 - 2020.48.35.1 - Unprotected Left Turns
You asked for it so here it is. This is a Beta 10 test on several unprotected left turn scenarios.
Ah, as expected, the reality check video. A regression in that it is now going into the middle of the road on two-way streets, sigh. A couple of good turns, but more bad ones, albeit on the very tough and/or complex turns that Chuck is testing.

edit - interesting conflict of information here from DirtyTesla. Chuck says Tesla currently is NOT capturing information when the accelerator is used, while DirtyTesla says he was told they were.
 
Last edited:
  • Like
Reactions: Matias
Depends what you're looking for.
Indeed. I would have loved to help, as Elon previously implied, by finding interesting reproducible test cases and providing detailed feedback; but from @Chazman92's most recent video/comment, it sounds like Tesla is in a different phase now: "All Tesla is asking us to do is drive our cars. They keep giving feedback that says 'You don't need to test certain scenarios' as we just did today in this video. They would rather us just get out and drive and capture disengagements. That is what they're after."

To me it seems like they're trying to reach some aggregate miles-between-disengagements threshold at this time. Unclear if it's a regulatory requirement or some internal number before a wide release.

Edit: I just realized, Tesla already has some baseline metric of miles between disengagements on city streets with people using Autopilot off highways, and one would think it should be easy to surpass as non-FSD-beta Autopilot can't even make most turns. So maybe they're looking for something like 10x more miles between disengagement or even 100x. Unfortunately, most driving is simply going straight, so public release Autopilot can perform reasonably well on this metric, and indeed testing complicated but interesting FSD beta maneuvers can concentrate a lot of disengagements in a short distance. (Yes, I realize this is a flawed metric for many reasons, but it is an existing metric that at least CA DMV has used.)
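To put toy numbers on that 10x/100x idea (every figure here is made up purely for illustration, not real Tesla data):

```python
# Hypothetical illustration of a miles-between-disengagements threshold.
# All numbers are invented; the point is only the 10x/100x arithmetic.

def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """Average miles driven between disengagements."""
    return miles_driven / disengagements

# Suppose the existing city-streets Autopilot baseline were 50,000 miles
# of driving with 500 disengagements:
baseline = miles_per_disengagement(50_000, 500)  # 100 miles per disengagement

# A "10x better" or "100x better" bar for the FSD beta would then be:
target_10x = 10 * baseline    # 1,000 miles between disengagements
target_100x = 100 * baseline  # 10,000 miles between disengagements
```

Which also shows why concentrating testing on hard maneuvers hurts the metric: cramming several disengagements into a few miles drags the average way below what mostly-straight driving would produce.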
 
Last edited:
@Mardak I need your opinion on something. As you may remember, during my 1st FSD Beta 10 run, I faced a ROAD CLOSED sign and the car wanted to go through it. I saw that Kim Paquette let her car sleep for a bit and connect to WiFi, and then she said FSD got better for her. So I did the same and went for a drive today. In the past that ROAD CLOSED would fail 100% of the time. I tested it today and I get a 50% improvement. Here are the screenshots of the 1st day and today.

Screen Shot 2021-01-21 at 2.51.49 PM.png
Screen Shot 2021-01-21 at 2.57.17 PM.png


Top photo is from Day 1. Bottom photo is from today. The difference I see is an additional fence in the 1st picture. It's hidden so I put a green arrow to show you. That was the only "significant" difference I saw. I don't want to jump to the conclusion that FSD Beta 10 got better so I'd be curious about your opinion (and others). If this isn't easy to read/look at, I can create a quick video and post it on YouTube.

Let me know your thoughts!
 
  • Informative
Reactions: Matias
#FSDBeta 10 - 2020.48.35.1 - Unprotected Left Turns
You asked for it so here it is. This is a Beta 10 test on several unprotected left turn scenarios.
If someone physically moved their b-pillar camera to a position ahead of the driver, would a recalibration be able to accept this configuration? Just wondering if, as a test, that might improve the visibility of traffic.
 
As you may remember, during my 1st FSD Beta 10 run, I faced a ROAD CLOSED sign and the car wanted to go through it.
If you're asking why Autopilot wanted to drive through barriers in the first place, I believe the path planner knows it needs to make a right turn and looks at the neural network output to "select the best path that goes right" (as opposed to the highest predicted path likely going straight). As you recorded:
frenchie barriers.jpg

and others have recorded:
raj closed.jpg


The predicted path can happily ignore other predictions such as barriers, road edges or medians/islands:
brandon island.jpg


In this last example, the path planner wants something going straight, and the neural network, not having trained enough on intersections with islands, falls back to what it knows: "a going-straight path through an intersection generally looks like this," even if that path goes right through a separately predicted median. Long term, Autopilot engineers probably want to train the network to predict a curving path around the island as the most likely straight path, but in the short term, I wouldn't be surprised if FSD beta worked around the issue with logic like "select the best path that goes straight -- but ignore those that go through medians."
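If that workaround exists, it might look something like this toy sketch (all names and scores invented; this is speculation about the logic, not Tesla's actual code):

```python
from dataclasses import dataclass

@dataclass
class CandidatePath:
    direction: str        # "left" | "straight" | "right"
    confidence: float     # network's score for this predicted path
    crosses_median: bool  # path overlaps a separately predicted median/island
    crosses_barrier: bool # path overlaps a predicted barrier or cone line

def select_path(candidates, desired_direction):
    """Pick the highest-confidence path matching the route's desired turn,
    discarding any path that runs through a predicted median or barrier."""
    viable = [
        p for p in candidates
        if p.direction == desired_direction
        and not p.crosses_median
        and not p.crosses_barrier
    ]
    if not viable:
        return None  # nothing safe matches; fall back / hand off to driver
    return max(viable, key=lambda p: p.confidence)

paths = [
    CandidatePath("straight", 0.9, crosses_median=True,  crosses_barrier=False),
    CandidatePath("straight", 0.6, crosses_median=False, crosses_barrier=False),
    CandidatePath("right",    0.7, crosses_median=False, crosses_barrier=False),
]
# Planner wants "straight": the top-scoring path is rejected because it
# runs through the island, so the 0.6 curving path wins instead.
best = select_path(paths, "straight")
```

The same filter would explain the barrier case: the highest-scoring "turn right" path gets vetoed as soon as it overlaps a predicted barrier, even if nothing better is available.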

In the past that ROAD CLOSED would fail 100% of the time. I tested it today and I get a 50% improvement. … I don't want to jump to the conclusion that FSD Beta 10 got better so I'd be curious about your opinion (and others).
If you're asking how FSD beta 10 could improve even without a full firmware update, which could include neural network and path selection logic updates… Some have speculated that map updates could happen between firmware updates, and those generally require being on WiFi to download. One type of map update could be to indicate there are no turn lanes to the smaller Randolph Street segment from Green Street thus trying to have navigation and path planner avoid selecting a turn at the first intersection.

However, even with that type of map update, it's up to the neural network perception to predict which lanes are associated to which intersections, so that could be why it's not 100% success. Indeed according to the map data, there are 3 intersections in close proximity that are all "Randolph & Green," and the barriers make the predictions less confident.

So similar to the potential short term fix of ignoring paths through medians, there could be a firmware update to avoid paths through temporary barriers (assuming those are existing neural network predictions and/or those would need to be trained to better accuracy). Higher level, this probably is a whole set of "what to do with construction and road closures" that would also affect how navigation selects what routes to take, and maybe Tesla has decided this is something to address after a wide release.
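The three-intersections-one-name ambiguity can be pictured with a toy nearest-intersection association (coordinates, names, and the margin are all made up for illustration):

```python
import math

# Three map intersections sharing one name, in close proximity.
# Coordinates are invented for illustration.
intersections = {
    "Randolph & Green (west)":   (0.0, 0.0),
    "Randolph & Green (middle)": (40.0, 5.0),
    "Randolph & Green (east)":   (80.0, 0.0),
}

def associate_lane(lane_xy, intersections, margin=10.0):
    """Associate a detected lane with its nearest intersection, but flag
    the match as ambiguous when the runner-up is nearly as close."""
    ranked = sorted(
        intersections.items(),
        key=lambda kv: math.dist(lane_xy, kv[1]),
    )
    (best_name, best_xy), (_, next_xy) = ranked[0], ranked[1]
    ambiguous = math.dist(lane_xy, next_xy) - math.dist(lane_xy, best_xy) < margin
    return best_name, ambiguous

# A lane ending roughly between two of the intersections matches the
# nearest one, but the runner-up is close enough to flag ambiguity:
name, ambiguous = associate_lane((22.0, 2.0), intersections)
```

Barriers degrading the perception confidence would only widen that ambiguity, which fits the roughly 50/50 outcome.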
 
  • Informative
  • Like
Reactions: Matias and Frenchie
These are the "professional testers who don't work directly for the company", which is what I would expect the people with the FSD beta to be, in a sane world.
That's not a "sane" world - just an old world. Yes, I've been there and done that too. In fact, we used to have external "validators" at one of the companies.

Within the company there would be multiple different types of testers - some who are dev/design with intimate knowledge of the system - and as you move out from the core dev team, people with less and less inside information. Some of them might just be typical external users ... but with a stricter NDA.

So, I don't think the idea that external beta testers should ideally be *not* typical users has much validity.
 
  • Like
Reactions: run-the-joules
That's not a "sane" world - just an old world. Yes, I've been there and done that too. In fact, we used to have external "validators" at one of the companies.

Within the company there would be multiple different types of testers - some who are dev/design with intimate knowledge of the system - and as you move out from the core dev team, people with less and less inside information. Some of them might just be typical external users ... but with a stricter NDA.

So, I don't think the idea that external beta testers should ideally be *not* typical users has much validity.

Fair fair. I will also freely admit that I am incredibly biased against "social media influencers", especially who got their cars after I did, and incredibly biased towards myself, in terms of who I think should have the beta. :D
 
If you're asking why Autopilot wanted to drive through barriers in the first place, I believe the path planner knows it needs to make a right turn and looks at the neural network output to "select the best path that goes right" (as opposed to the highest predicted path likely going straight). As you recorded:
View attachment 629509
and others have recorded:
View attachment 629510

The predicted path can happily ignore other predictions such as barriers, road edges or medians/islands:
View attachment 629514

In this last example, the path planner wants something going straight, and the neural network, not having trained enough on intersections with islands, falls back to what it knows: "a going-straight path through an intersection generally looks like this," even if that path goes right through a separately predicted median. Long term, Autopilot engineers probably want to train the network to predict a curving path around the island as the most likely straight path, but in the short term, I wouldn't be surprised if FSD beta worked around the issue with logic like "select the best path that goes straight -- but ignore those that go through medians."

If you're asking how FSD beta 10 could improve even without a full firmware update, which could include neural network and path selection logic updates… Some have speculated that map updates could happen between firmware updates, and those generally require being on WiFi to download. One type of map update could be to indicate there are no turn lanes to the smaller Randolph Street segment from Green Street thus trying to have navigation and path planner avoid selecting a turn at the first intersection.

However, even with that type of map update, it's up to the neural network perception to predict which lanes are associated to which intersections, so that could be why it's not 100% success. Indeed according to the map data, there are 3 intersections in close proximity that are all "Randolph & Green," and the barriers make the predictions less confident.

So similar to the potential short term fix of ignoring paths through medians, there could be a firmware update to avoid paths through temporary barriers (assuming those are existing neural network predictions and/or those would need to be trained to better accuracy). Higher level, this probably is a whole set of "what to do with construction and road closures" that would also affect how navigation selects what routes to take, and maybe Tesla has decided this is something to address after a wide release.


I think that makes sense. It was more about how it "improved" without a firmware update. I still think changing conditions affect the car's decisions a lot: add or remove a cone and the decision the car makes is different. I'll still put together the video and share it. Might be good for people to see which conditions can result in different outcomes. Thanks for sharing your thoughts!
 
  • Like
Reactions: Matias
…and this is why giving the FSD beta to social media personalities more concerned with their "influencer" status instead of people with software engineering and testing experience is stupid.

Why do you think it's only in the hands of YouTube posters etc? By definition, you would not know about other beta testers that are not on social media ... because they are not on social media!