
FSD Beta Videos (and questions for FSD Beta drivers)

Given that the version number only increased by 5 in the last digit, I would expect to see few visible changes (and still 6-week-old code). On the other hand, if this code is from 6 weeks ago, why would they put out a .1 release followed by a .5 release a few days later?

Guess we’ll have to wait and see.

A quick turnaround like that is probably a bug fix, possibly not even related to FSD per se.
 
Kinda scary that the neural network predicted it should "follow" oncoming traffic, even for just 1 frame (16ms):

[Attachment: wrong follow.jpg]

Even continuing through the turn, the lack of clear double yellow lines from this angle resulted in many predictions of a turn into the "inner-most left turn lane," even though map data should have indicated there were 3 lanes for oncoming traffic. This is most likely the type of source for "two steps forward, one step back," where something that used to rely on map data gets biased more toward vision-based predictions, which hopefully have high success rates but can have unexpected failures.
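
Not that this is how Tesla actually does it, but here's a toy sketch of why a single 16ms glitch doesn't have to matter if the map prior and a few frames of history get a vote. Every name and weight below is made up for illustration:

```python
# Minimal sketch (hypothetical, not Tesla's code): fuse per-frame vision
# estimates of the oncoming-lane count with a map prior, so one bad frame
# can't flip the final decision.

from collections import Counter

MAP_PRIOR_LANES = 3    # map data says 3 oncoming lanes (assumed)
MAP_PRIOR_WEIGHT = 5   # how many "virtual frames" the map prior is worth

def fused_lane_count(recent_vision_estimates: list[int]) -> int:
    """Majority vote over recent per-frame estimates, with the map prior
    counted as a fixed number of extra votes."""
    votes = Counter(recent_vision_estimates)
    votes[MAP_PRIOR_LANES] += MAP_PRIOR_WEIGHT
    return votes.most_common(1)[0][0]

# One glitchy frame predicting a single oncoming lane gets outvoted:
print(fused_lane_count([3, 3, 1, 3, 3]))  # -> 3
```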
 
I simply can't understand why it would want to drive through a roadblock. Shouldn't there be software 1.0 code saying "don't drive through a roadblock"?
I would guess the Autopilot team wants to offload as much as possible to the neural network, and as we can see in the barrier example as well as my immediately previous post, different neural network outputs/predictions aren't necessarily based on each other.

[Attachment: predict one-way.jpg]


Even after the neural network realized the vehicles on the left side of the road are oncoming traffic, the lane/intersection part of the neural network still makes its predictions based on what it sees (or fails to see) on the ground. A human would most likely infer that oncoming traffic means those are not lanes for me (as opposed to "all these vehicles are driving the wrong way"), but the neural network thinks "no double yellow lines? all lanes are for my direction!"

The specific problem of rerouting because of roadblocks is a bit tricky right now because navigation is probably almost all software 1.0 code. Also, this is fairly uncommon, so maybe the Autopilot folks just haven't gotten around to writing up "don't drive through roadblock."
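
For what it's worth, the kind of hand-written "software 1.0" guard being asked about could be as simple as something like the sketch below. The detection classes, the Obstacle type, and the planner hand-off are all hypothetical:

```python
# Minimal sketch of a hand-coded guard rule: if any detected obstacle of a
# blocking type overlaps the planned path, stop and ask navigation to reroute,
# no matter how confident the path-prediction network is.

from dataclasses import dataclass

BLOCKING_CLASSES = {"barrier", "road_closed_sign", "cone_row"}

@dataclass
class Obstacle:
    cls: str                   # detected object class from the vision network
    blocks_planned_path: bool  # geometry check done elsewhere

def path_is_drivable(obstacles: list[Obstacle]) -> bool:
    """Hard-coded rule: never drive through something that closes the road."""
    return not any(
        o.cls in BLOCKING_CLASSES and o.blocks_planned_path for o in obstacles
    )

obstacles = [Obstacle("barrier", blocks_planned_path=True)]
if not path_is_drivable(obstacles):
    print("stop and reroute")  # hand over to navigation for a new route
```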
 
Kinda scary that the neural network predicted it should "follow" oncoming traffic, even for just 1 frame (16ms):

[Attachment 629721]
Even continuing through the turn, the lack of clear double yellow lines from this angle resulted in many predictions of a turn into the "inner-most left turn lane," even though map data should have indicated there were 3 lanes for oncoming traffic. This is most likely the type of source for "two steps forward, one step back," where something that used to rely on map data gets biased more toward vision-based predictions, which hopefully have high success rates but can have unexpected failures.
Yes. It should be relatively easy to program “if you see cars driving towards you, use the right side of the road”?
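
Something like this toy heuristic, presumably. The lane numbering and the detection input are made up purely for illustration, not how Tesla implements it:

```python
# Sketch of "if you see cars driving towards you, use the right side of the
# road": any lane at or left of a detected oncoming vehicle is excluded.

def drivable_lanes(num_lanes: int, lanes_with_oncoming: set[int]) -> list[int]:
    """Lanes are numbered 0 (leftmost) to num_lanes - 1 (rightmost)."""
    if not lanes_with_oncoming:
        return list(range(num_lanes))
    rightmost_oncoming = max(lanes_with_oncoming)
    return [lane for lane in range(num_lanes) if lane > rightmost_oncoming]

# Six lanes total, oncoming traffic seen in lanes 0-2 -> keep to lanes 3-5.
print(drivable_lanes(6, {0, 1, 2}))  # -> [3, 4, 5]
```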
 
#FSDBeta 10.1 2020.48.35.6 - Lots of suicide dual turn lanes tested. Check it out.

Thanks for your hard work and nerves of steel! Some nice turns in this video.

Chuck also updated and clarified that using the accelerator is indeed treated as an intervention and the data is sent to Tesla even though FSD isn’t disengaged, which confirms what DirtyTesla said. Glad to hear it, since that just makes sense.
 
It seems Elon is saying that the version in his car is 8.1, which is the 8th major update. Not sure what's currently in the testers' cars.

Also, I still don't think fsd beta is going wide any time soon, based on Elon's response. He avoided addressing the release schedule. I'm still hoping it'll be widely released in the USA in 3-6 months.
 
Wow, only 1 disengagement in 22 miles because of an edge case. Also, it was impressive that it recognized the gate.

In the video, he says there were also 4 interventions. We need to count interventions too. So it was really 5 interventions/disengagements in just 22 miles.
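
Just to put a number on that:

```python
# Quick arithmetic on the numbers from the video:
# 1 disengagement plus 4 interventions over 22 miles.
miles = 22
disengagements = 1
interventions = 4

events = disengagements + interventions
print(f"{events} events in {miles} miles "
      f"= one every {miles / events:.1f} miles")  # one every 4.4 miles
```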

It seems Elon is saying that the version in his car is 8.1, which is the 8th major update. Not sure what's currently in the testers' cars.

Elon has the bleeding edge. So 8.1 could be a higher version than what even the FSD Beta testers have. Or the FSD beta testers also have 8.1. Hard to say, especially since Elon says that they are switching to a completely new numbering system.

Also, I still don't think fsd beta is going wide anytime soon, based on Elon's response. I'm still hoping it'll be widely released in the USA in 3-6 months.

The video you showed had 1 intervention/disengagement per 4-5 miles. Tesla is going to need to improve that rate by A LOT before they can even hope to release it wide to the general public. So, I agree a wide release is pretty far off. But 3-6 months is probably a bit optimistic IMO.
 
The video you showed had 1 intervention/disengagement per 4-5 miles. Tesla is going to need to improve that rate by A LOT before they can even hope to release it wide to the general public. So, I agree a wide release is pretty far off. But 3-6 months is probably a bit optimistic IMO.
So if you consider this an unacceptable rate, what is an acceptable one, and why? Remember an intervention can be as simple as a pressed accelerator because the driver felt the car could be going a bit faster.

I think it’s important to distinguish between different classes of error here.

— the car was doing or apparently about to do something unsafe that risked the safety of people or property.
— the car did something that was considered poor and/or dubious driving but was not a safety hazard.
— the car did something that was ok, but the driver would have preferred it to have done it differently.
 
So if you consider this an unacceptable rate, what is an acceptable one, and why? Remember an intervention can be as simple as a pressed accelerator because the driver felt the car could be going a bit faster.

I think it’s important to distinguish between different classes of error here.

— the car was doing or apparently about to do something unsafe that risked the safety of people or property.
— the car did something that was considered poor and/or dubious driving but was not a safety hazard.
— the car did something that was ok, but the driver would have preferred it to have done it differently.

I completely agree that there are different classes of errors. The first is the most critical since it is safety related. The second is still bad: even though you say it is not a direct safety hazard, it involves poor driving that could become a problem. The last is trivial and unimportant, since it is neither bad driving nor a safety issue. So the first two classes of errors need to be minimized as much as possible. It is hard to put an exact number on it, but the first class, which is safety related, would probably need to be in the range of 1 per 100k-200k miles IMO.

Ultimately, autonomous driving means no human intervention is allowed. So the autonomous driving system needs to operate the vehicle safely and smoothly even when no human intervention is possible (think of removing the steering wheel and pedals).
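
To put the gap in perspective, here's the back-of-the-envelope math, using roughly one event per 4-5 miles today as the baseline. That overstates the problem somewhat, since not all of those events were safety-critical, but it gives a sense of scale. The numbers are the thread's, not Tesla's:

```python
# Rough comparison: today's observed rate (~1 event per 4-5 miles) versus
# the suggested 1-per-100k-200k-miles range for safety-critical errors.
current_miles_per_event = 4.5
target_miles_per_event = (100_000, 200_000)

for target in target_miles_per_event:
    factor = target / current_miles_per_event
    print(f"target 1 per {target:,} miles -> ~{factor:,.0f}x improvement needed")
# -> roughly 22,000x to 44,000x fewer safety-critical events per mile
```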
 