Welcome to Tesla Motors Club

Wiki MASTER THREAD: Actual FSD Beta downloads and experiences

In many states (Utah being one of them), the law requires that when you turn left, you enter the leftmost through lane after completing the turn. One reason for this is so that opposing traffic turning right can turn into the lane next to you at the same time you're turning left.

My wife wasn't aware of this law and got a ticket for it around 25 years ago. It was the exact scenario described above: she was turning left and turned into the rightmost lane, cutting off an unmarked police car that was turning right. It cost her two weekends of traffic school.
But motorcycles can't (legally) lane split at 70mph in Utah like they routinely (illegally) do in urban California, so there's that. Since motorcycle cops seem to be among the worst offenders... :oops:
 
I had the FSD Beta pushed to my car with a 99 score on Friday, and I've had some short, interesting drives since then. Overall the quality is about what I expected, and I am really impressed. It's easy to be critical when it makes a mistake every 1–2,000 decisions, but there is so much new functionality here that I give it some latitude on decisions it is making completely on its own. I can see robotaxis in two years for sure, with some restrictions, like a remotely connected driver for trouble spots.

I live on a one-lane unmarked road, so it's quite a challenge even for a human to drive the speed limit safely, and a supreme challenge for FSD to negotiate. This is where I see the most need for improvement. It also strikes me just how sophisticated my brain is, and that I have the luxury of remembering my road and its particular shape.

In particular, one moment when it had to negotiate an oncoming car that looked like it was coming right for us made me appreciate just what it was trying to understand, without the luxury of memory.

Things that seemed to work pretty well:
Well-marked roads, divided roads, dedicated turn lanes.
Highway 17 is a tricky test as well, but it's my daily commute. FSD beta is amazingly better at this challenge. Regular Autopilot treats every yellow light and turn as a reason to slow down to like 40 mph, which really pisses off the road warriors and tests my patience. It happily navigated The Hill at 5 over the speed limit without being confused by the dangers of cross traffic yellow light warnings, or the warnings around any of the famously bad corners on that highway.
Obstacle avoidance: our road has places where it has painted lines, but is too narrow to safely drive in them. FSD will happily cross the double yellow when it sees this case.
Freeways always worked pretty well, and now additionally some of the short on-ramp interchanges that NOA would give up on work perfectly on FSD.
Driver monitoring: now when I am not attentive it alerts me almost immediately, and when I am attentive it hardly nags me at all, which is super nice. This feels an order of magnitude safer and could keep some careless Autopilot users from turning into dangerous missiles.
Visualizations: it feels really good to see so much detail. It's mostly for videos and passengers, though; I am watching the road very closely.

Things that don't work so well:
Lane holding: there are moments where it crosses the double yellow for no reason. Though this hasn't happened with oncoming traffic, it is not confidence-inspiring. I was literally entirely on the other side of a double yellow for at least 2 seconds, with no sign the car was correcting it and no hazard to trigger it.
Jerkiness: the car is really making decisions many times per second. Sometimes when going slowly it swings from straight to a full 90-degree turn within a fraction of a second, a pretty sickening feeling on very sharp turns.
Turning wide at 90-degree intersections: FSD feels like it's towing a trailer sometimes, and I hope it doesn't confuse someone behind me into trying to pass.
Phantom braking is pretty bad at times. There's no reason to stop that I can see, and the accelerator doesn't easily override it.
Kids, dogs, and pedestrians: I'm not seeing the slow-down behavior I want around these hazards. We typically slow to about 10–15 mph when someone is walking along the one-lane road I live on. It looked like it would have completely ignored the dog that darted out in front of me, but I didn't wait around to find out.
Defaulting to high beams with FSD seems asinine; I'm learning to be quick to turn this off.

Things I am not used to:
Changing lanes: I was used to requiring confirmation for this, and I wish that were an option in the Beta street stack. I've seen the comments that the blinker can be used to cancel the change if you catch it in time. Still, it is confusing that the car changes lanes without confirmation on a highway like 17, until it gets to the bottom where it's a freeway again and I have to confirm; I have to remember to change my behavior at each freeway/highway transition depending on the Autopilot mode.
Disengagement set speed: when I use the steering wheel to disengage, I want TACC to default to the speed I am going now. As it stands, it defaults to the speed limit, which makes for an unsafe feeling in a situation that already felt unsafe enough to override FSD. I am training myself to use the brake to disengage everything in these cases.
Follow distance on a highway like 17 needs to be set further out, without coupling it to the profile's aggression level. I want to be able to set the FSD follow distance on streets the same as on freeways.
Activating the wipers and windshield cleaning on its own: this makes sense but was a bit of a surprise.
No longer being able to shift from D to R without my foot on the brake: I can see the reason for this, to avoid mode confusion.
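The disengagement set-speed item above is simple enough to state as code. A minimal sketch of the two behaviors, with entirely hypothetical function names (this is not Tesla's actual implementation or API):

```python
def tacc_set_speed_current(current_speed, speed_limit):
    """Behavior as described in the post: after a steering-wheel
    disengagement, TACC resets to the posted speed limit."""
    return speed_limit

def tacc_set_speed_preferred(current_speed, speed_limit):
    """Preferred behavior: TACC holds the speed the car is actually
    traveling at the moment of disengagement."""
    return current_speed

# Example: overriding FSD at 63 mph in a 55 zone
print(tacc_set_speed_current(63.0, 55.0))    # resets to 55
print(tacc_set_speed_preferred(63.0, 55.0))  # holds 63
```

The unsafe feeling comes from the gap between the two return values: the car suddenly targets a speed other than the one the driver chose during the override.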
 
That's the neural net religion, though. Belief without proof that, if we just have enough data flow through the neural nets, then they'll magically figure things out.
Religion and magic?!? Hahaha, that's one way to look at it, but not my way.

Time will tell how Tesla's approach turns out compared to others'. If changes need to be made, Tesla will learn and adjust.
 
That's the neural net religion, though. Belief without proof that, if we just have enough data flow through the neural nets, then they'll magically figure things out. That works to a certain extent, but not any further.
Then, according to you, all extrapolations are "religion".

The way to think about it currently is that "if we just have enough data flow through the neural nets, then they'll magically figure things out" is the hypothesis. We have seen it work in other circumstances. FSD is not solved yet, so obviously there can't be a "proof".
 
That's the neural net religion, though. Belief without proof that, if we just have enough data flow through the neural nets, then they'll magically figure things out. That works to a certain extent, but not any further.

I didn't buy the full report, which is expensive, but the ranking agrees with at least one other I've seen.

I think the fundamental flaws with the Tesla system are:
Some important camera positions are missing (the four corners, pointing straight left and right), so there's no cross-traffic awareness. Also, humans not only have binocular vision on a swiveling head covering a wide range; we have three mirrors that cover a pretty good range too.
Thinking that, if we just get enough data, the neural nets will solve the engineering problem for us. I can't disprove this, but from what I've read it would take a lot more processing power and deeper stacks of neural nets to get there. And even then it's a bit of a crapshoot.
Dropping radar. I get repeated warnings when in areas of bright light and shadows. It's silly.
Avoiding LIDAR. It solves a major vision problem easily. Why avoid it?

In actual real-world use, this all shows up in countless ways:
I can see the cars ahead of the car ahead of me, and I can slow down when needed.
I can see the brake lights half a mile down the road, or the bunching of traffic, and give some extra following distance.
I can see the traffic light half a mile away and anticipate.
I can see the guy drifting in his lane and about to signal he's moving over, so I give him room before he signals.
I can see the look on the face of the woman across the intersection and know I should go or she should go.
I can see the guy on the other side of the parked truck, walking rapidly towards the street, and swing wide for when he walks into the road, even though I can't see him for most of this interaction.

As far as I can tell, the Tesla system does none of this. As far as I can tell, the Tesla system barely sees past its own nose. The vision recognition is poor, only showing a fraction of cars on the road that I can see nearby, let alone far away. Does it know about that mid-corner bump coming? Definitely not. A significant amount of my driving involves anticipating and avoiding road imperfections, and Tesla doesn't do this at all.

So I think Tesla took the dark side approach to automotive AI. They took the easy path on the low hanging fruit, but they don't seem to have much of a long term technological strategy beyond more data and more processing.

In sharp contrast, have you looked at all at what Waymo does? It's far more sophisticated.

I hope you're right. I don't see any evidence that you are, though.
I'm curious if you own a Tesla and have used FSD Beta. Regardless, you can't really believe your observations here haven't already occurred to the brilliant AI team at Tesla and been put under a microscope and evaluated from countless perspectives. And Waymo?! Really?! Tesla will be the ones to solve autonomy, and it will be sooner rather than later.
 
Because when Tesla started AP, there were no lidars that could be put on consumer cars cheaply.

And, humans don’t use lidar. So, the theory was, with cameras and CNN, we should be able to solve FSD.
And also because Tesla's vision-only approach can create a point cloud similar to LIDAR's, complete with distances. Plus, color is included. So LIDAR wouldn't give Tesla anything except higher cost and less attractive vehicles. Vision is also less affected by rain, snow, and fog than LIDAR.
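The "vision can produce a LIDAR-like point cloud" claim refers to estimating a depth value per pixel and unprojecting it into 3D. A hedged sketch of just the geometry, assuming a pinhole camera model; the intrinsics and the depth map here are toy placeholders, not Tesla's actual network outputs:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a per-pixel depth map (meters) into camera-frame 3D
    points with the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# Toy example: a flat wall 10 m away seen by a 100x100-pixel camera
depth = np.full((100, 100), 10.0)
cloud = depth_to_point_cloud(depth, fx=50.0, fy=50.0, cx=50.0, cy=50.0)
```

Unlike a LIDAR return, each point can also carry the source pixel's color; the trade-off is that Z comes from a learned estimate rather than a direct time-of-flight measurement.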
 
For those recently added beta FSD'ers: how long did it take to get the download after you completed 100 miles and hit a Safety Score of 99–100?
It seems to be more a factor of when Tesla decides to run their numbers. We had a 100 score with 100 miles when 10.3 was released, but we didn't get added until 3 days after that. Back when they were doing daily additions, it sounded like some people got in less than a day after reaching 100. So the trend so far seems to be: get to the target score by the Thursday before the expected Friday-night release, to minimize the time spent maintaining the score. Outside of the two-week "scheduled" releases, there don't really seem to be any patterns yet.

[Attachment: day 21 fsd beta estimate.png]

Here's an estimate built from Teslascope data (Update History | Teslascope). Anybody have a better scaling factor to use? I picked 1400 for now, based on Elon's tweet of 1200 plus a buffer for the eager FSD Beta population. This estimate would suggest enrolling just under 500/day.
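The scaling arithmetic behind that estimate can be sketched directly. Only the 1400 calibration figure comes from the post (Elon's ~1200 tweet plus a buffer); the sample counts below are hypothetical placeholders, since the actual Teslascope numbers aren't given in the thread:

```python
def fleet_estimate(sample_count, initial_sample, initial_fleet=1400):
    """Scale a Teslascope-observed install count up to a fleet-wide
    estimate. Calibrated so the initial-release sample maps to
    ~1400 cars (Elon's ~1200 tweet plus an eager-owner buffer)."""
    cars_per_tracked_install = initial_fleet / initial_sample
    return sample_count * cars_per_tracked_install

# Hypothetical numbers: if 5 tracked cars got the initial release and
# Teslascope now sees ~1.7 new installs per day, the fleet-wide rate
# comes out just under 500/day.
daily = fleet_estimate(1.7, initial_sample=5)  # ~476 cars/day
```

The estimate is only as good as the assumption that Teslascope's tracked cars are a fixed, representative fraction of the FSD Beta fleet.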
 
Has anyone received the beta this weekend? I reached 100+ mi with a 100 Safety Score Friday afternoon on our 2021 MY LR, have driven about 20 mi over the weekend so far, and still haven't received an email or update. I haven't seen any news on the forums of anyone getting the update, so I'm wondering if it's because of the weekend.
 
Waymo is already at L4 in Chandler, in the Phoenix area. Geofencing is allowed for L4, not L5.
Lots of brains and lots of cash does not equal success. It helps, sure.

Tesla will certainly be the first to claim something, but they won't be the first to achieve anything special in self-driving other than pushing a product to market far too early.

But the thing is, Tesla is behind all the other major players. Why? Everyone else has better strategies to get to L4 and beyond. Tesla was ahead early on, getting the basics of staying in the lane and following at an appropriate distance while on the freeway. Tesla took the low hanging fruit, and good for them. However, the system is just awful at pretty much everything else. From Guidehouse:

[Attachment: Guidehouse leaderboard chart, 73f3ec9c-4084-4e1d-9eb3-1ef1b4fc4250-lb-ads-21.png]

I think Guidehouse probably got it right.

2045 is an optimistic timeframe for L4 or L5 from Waymo and similar, with Tesla far behind.

Speech to text is still mediocre to awful across the board. Oh, if you speak slowly and enunciate, it's OK until there's a homophone. Then it's a clustercluck. How about two homophones in a row? Oh dear. Now try speaking at a normal speed, without extra enunciation.

And speech to text is a far easier problem than computer vision and decision-making in a car.

Tesla tries to solve the problem with cheap sensors and semi-expensive processing, essentially believing in the religion of neural nets. Actual L5 self-driving cars will likely take not only a lot more sensors, some of them expensive, but also an increase in processing power of several orders of magnitude. No relatively simple algorithm is going to solve this multi-faceted puzzle.

Either there's going to be some genius AI breakthrough from somebody, which is possible but unlikely, or there's just going to need to be oodles and oodles of processing, with the problem broken down into countless tiny steps and sensors, all orchestrated together.
 
You can probably relate to this. I've been in Castro Valley a lot and live between Industrial and Whipple, and FSD is pretty bad when it needs to make turns on these narrow streets; many turns are blind, with large vehicles parked right up to the corners. I don't find my car learning from the trips I make, but that's OK. The car has headed straight for a curb at low speed 3 times now, slowly enough that I can disengage before I hit it. I hope it's learning, but cramped residential neighborhoods aren't handled well. I'm sending lots of videos, like when it turned into a left-turn lane and stopped 3 car lengths behind the light with someone behind me. Video sent. It did manage to make a jerky left turn.

The good part of the FSD Beta so far is that it's like learning the strong points of the previous Autopilot: I pretty much know where it will do well, and if I have concerns, I disengage and drive normally. I disengage a lot when other cars are behind me; there was more traffic than I expected this weekend. Exploring FSD's limits is like a game of chicken, but I won't do it with other drivers in my path just yet, not until I know more about how it behaves. Watching the plotted path helps a lot.
 
Lots of misinformation in your post; trollish. Waymo is already at L4 in Chandler, in the Phoenix area. Geofencing is allowed for L4, not L5.
 
Has anyone received the beta this weekend? I reached 100+ mi with a 100 Safety Score Friday afternoon on our 2021 MY LR, have driven about 20 mi over the weekend so far, and still haven't received an email or update. I haven't seen any news on the forums of anyone getting the update, so I'm wondering if it's because of the weekend.
Yes. Got it Friday night with a recent uptick to 99. I signed up a month ago, but I'm not sure that matters.