
FSD Beta Videos (and questions for FSD Beta drivers)

I wonder whether Tesla will ever make it easier to blend in driver input. It's very difficult to smoothly disengage with a deadzone. In situations like this the only real option is to push up the gear shift lever to disengage, and of course you have to be fast.

It's one thing to have their currently garbage system on the freeway, but quite another in tight spaces. I'm sure as long as they have sufficient precision on their torque sensing they could blend it in. But of course they use it as a driver monitor, so they can't.
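Just to make the idea concrete, here's a minimal sketch of what torque blending could look like, assuming the torque sensing is precise enough to separate real driver input from noise. All the names and thresholds are made up by me, not anything from Tesla's actual stack:

```python
# Hypothetical sketch only; thresholds and names are invented, not Tesla's.

DEADZONE_NM = 0.5   # ignore torque readings below this as sensor noise
OVERRIDE_NM = 3.0   # above this, hand full control back to the driver

def blended_steering(ap_torque_nm: float, driver_torque_nm: float):
    """Blend Autopilot and driver torque instead of a hard disengage.

    Returns (commanded_torque_nm, still_engaged).
    """
    driver = driver_torque_nm if abs(driver_torque_nm) > DEADZONE_NM else 0.0
    if abs(driver) > OVERRIDE_NM:
        # Driver clearly wants control: disengage and pass input through.
        return driver, False
    # Fade Autopilot's authority linearly as driver torque grows, so
    # resisting the wheel bends the path instead of fighting it.
    authority = 1.0 - abs(driver) / OVERRIDE_NM
    return ap_torque_nm * authority + driver, True
```

With something like this, resisting the wheel in that left turn would bend the path instead of requiring a fast reach for the stalk.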
I'm not sure it's that hard to disengage when you're holding the wheel like you're supposed to. I do it occasionally with Autopilot when it wants to drive too close to things. When the car started turning too tight on the left turn he could have just resisted and it would have disengaged somewhat smoothly. Certainly more smoothly and more safely than waiting for it to get into a position where the turn was impossible to complete while staying in his lane.
I'd really like to drive a car with OpenPilot which apparently does not disengage on steering input (and of course uses OEM electric power steering racks).
This also didn't look like a perception problem. The car actually saw the line just fine and decided to drive a full car width inside it.
[Screenshot: Screen Shot 2020-11-07 at 8.58.23 PM.png]
 
I'm not sure it's that hard to disengage when you're holding the wheel like you're supposed to. I do it occasionally with Autopilot when it wants to drive too close to things. When the car started turning too tight on the left turn he could have just resisted and it would have disengaged somewhat smoothly.

Yeah probably. It's still kind of garbage. I'd love to be able to have more input.

The car actually saw the line just fine and decided to drive a full car width inside it.

Having more blended input could have "solved" this problem. As long as the car thought the driver input was still an acceptable solution, anyway.

On another topic, not that there was any debate, but that video sure does prove without a doubt that the car uses quite detailed map data to generate the base visualizations. (The steep approaches to the intersections which still show full detail including crosswalks, even when there would be no way for the high-mounted camera to see the intersection, make it very clear.) I guess they blend in the perception onto that base data or something. Presumably if the perception is deemed good enough and it conflicts, then that takes priority?

[Screenshot: Screen Shot 2020-11-07 at 9.30.36 PM.png]
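If it works the way I'm guessing, the blending logic could be as simple as this sketch; the fields and the confidence threshold are pure speculation on my part:

```python
from typing import Optional

# Pure speculation about how map priors and live perception might be
# reconciled for the visualization; fields and threshold are invented.
PERCEPTION_TRUST = 0.8  # assumed confidence needed to override the map

def intersection_geometry(map_feature: dict, detection: Optional[dict]) -> dict:
    """Prefer confident live perception; otherwise fall back to map data.

    This would explain the steep blind approaches: the map prior is drawn
    in full detail until the cameras can actually see the intersection.
    """
    if detection is not None and detection["confidence"] >= PERCEPTION_TRUST:
        return detection["geometry"]
    return map_feature["geometry"]
```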
 
This guy is gonna be the first to have an accident with beta FSD. All the others who have posted appear to understand the responsibility of testing beta software. Hands nowhere near the wheel on narrow streets. We've seen what it does in similar conditions in Brandon's videos. It's great to see the progress, but it's really not that hard to do it properly.
More people need to be using the system like Brandon and James Locke. Ready and willing to override anytime they don’t feel comfortable or the car is doing something that would impede other drivers on the road. Brandon mentioned in his latest vid that the car has swerved with cars in adjacent lanes and he was able to override before it crossed any lines (though it did scare the car next to him), and I would imagine it would be hard for this guy to do so with his hands off the wheel the vast majority of the time.

Side note: Glad to see Brandon start to do some editing of his videos to cut out parts where he isn't providing commentary or the car isn't doing something interesting.
 
that video sure does prove without a doubt that the car uses quite detailed map data to generate the base visualizations

I don't think that video proves it without a doubt. For the cases I see in that video they could detect crosswalks as features of the intersection based on the brightly lit red hand / white walking person crosswalk indicators. Just like a human ;)
 

Lol. I guess in that example it could work for one crosswalk (and somehow it gets the number of lanes on the cross street correct?). Ok, probably not without a doubt. I mean, it could make inferences about what exists, but I would be SHOCKED if it worked that well, consistently. I'm sure there are ways to test this though - drive up to a more complex/unusual intersection on a blind hill in a San Francisco neighborhood. It's not uncommon for those to be something other than a simple 4-way intersection.

I’m sure if you watch more of these videos in San Fran you could draw a more solid conclusion. I’ve seen enough, my priors are set. ;) It’s not a hill I would die on.
 
For the cases I see in that video they could detect crosswalks as features of the intersection based on the brightly lit red hand / white walking person crosswalk indicators.
Maybe more simply, most intersections with crosswalks have crosswalks on each side, so most of the time this prediction is correct. Indeed the neural network could pick up the crosswalk signal, and perhaps it'll pick up the white "NO PED CROSSING" signs with enough training data after a wider rollout to correctly predict no crosswalk on the left:

[Image: no crossing.jpg]
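That prior-plus-override idea is simple enough to express as a toy function; everything here is hypothetical, just to show the shape of the prediction:

```python
# Toy version of the prior: assume a crosswalk on every leg of a signalized
# intersection unless a "NO PED CROSSING" sign was detected on that leg.
# All names are hypothetical.

def predict_crosswalks(legs: list, no_crossing_signs: set) -> dict:
    return {leg: leg not in no_crossing_signs for leg in legs}

# Example: a no-crossing sign detected on the left leg only.
print(predict_crosswalks(["left", "right", "near", "far"], {"left"}))
# -> {'left': False, 'right': True, 'near': True, 'far': True}
```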
 
4D should be much better at tricky light conditions. It's not as dependent on frame-by-frame labelling but can use better information from future frames to label all frames; for example, it can use the side cameras to label a stop sign instead of just the forward-facing camera.
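A rough sketch of that hindsight labelling, assuming you have the vehicle's camera pose history and intrinsics: project the confidently detected sign's world position back into every earlier frame. The setup and names here are mine, not anything confirmed about Tesla's pipeline:

```python
import numpy as np

# Illustrative only: once a sign is confidently detected in a later frame
# (e.g. by a side camera at close range), project its world position back
# into every earlier frame using the recorded camera poses.

def backfill_labels(sign_world_xyz, poses, intrinsics):
    """Return the sign's pixel location in each frame, or None if not visible.

    sign_world_xyz: (3,) world position of the sign
    poses:          list of 4x4 world-from-camera matrices, one per frame
    intrinsics:     3x3 camera matrix
    """
    labels = []
    point_h = np.append(np.asarray(sign_world_xyz, dtype=float), 1.0)
    for world_from_cam in poses:
        cam_point = np.linalg.inv(world_from_cam) @ point_h
        if cam_point[2] <= 0:        # sign is behind the camera in this frame
            labels.append(None)
            continue
        u, v, w = intrinsics @ cam_point[:3]
        labels.append((u / w, v / w))
    return labels
```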
 
cross traffic may not be stopped by “protected” left turning traffic - it’s common for there to be flashing yellow left arrows
Yeah, so perhaps it's not so much that the left turn is preventing the cross traffic from continuing straight; it's more that the left-turning vehicle decided there was a big enough window to enter the intersection, which makes it more likely that a right turn on red could share that window. Sure, a right turn stays in the path of the crossing traffic while the left turn exits the cross road, so Autopilot would need to correctly predict the cross-traffic speed while also knowing how fast it will be going after making the right turn.

Perhaps more simply, in either the "protected" or "unprotected" case, Autopilot needs to accurately predict which vehicles will be in the lane it wants to be in, and maybe things are just slow for the initial FSD beta because Tesla is being cautious. Then again, there have been videos of the software turning into the closer lane while traffic was moving quickly in the outer lane, and for some testers this was too dangerous a maneuver, as the outer-lane vehicle could easily switch into the closer lane, potentially leaving Autopilot stopped in the middle of the intersection.
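The gap-acceptance part of that reasoning boils down to a time comparison. Here's a back-of-the-envelope version; the threshold and names are invented, and the real planner is surely far more involved:

```python
SAFETY_MARGIN_S = 2.0  # assumed buffer; the real value is unknown to me

def gap_is_acceptable(cross_car_distance_m: float,
                      cross_car_speed_mps: float,
                      time_to_clear_s: float) -> bool:
    """True if cross traffic arrives later than we need to clear its lane.

    Note the asymmetry from the thread: a left turn exits the cross road,
    but our right turn puts us *in* the cross lane, so time_to_clear_s has
    to include getting up to speed, not just crossing the lane.
    """
    if cross_car_speed_mps <= 0.1:           # effectively stopped
        return True
    time_to_arrival_s = cross_car_distance_m / cross_car_speed_mps
    return time_to_arrival_s > time_to_clear_s + SAFETY_MARGIN_S
```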

[Image: left turn.jpg]


Or… maybe Autopilot just didn't detect the vehicle and incorrectly thought it was safe. :confused:
 
Wasn't that because someone was waiting behind him? Not making a right turn on red when safe is super annoying to other drivers.
Yup, and currently as a driver assist feature, the driver can choose to accelerate sooner than what Autopilot would have. Personally, I adjust the speed with the accelerator or scrollwheel relatively often on city streets with current Autopilot, but I could see people just letting Autopilot do its thing or just completely turning it off because the behavior isn't what they would do.

Tesla may do a wider release but I don't understand the point of doing so.
I suppose that's mainly a data collection question. Can Tesla figure out problematic situations with its private beta testers and then deploy firmware updates or remote data-collection triggers for the wider fleet to capture? E.g., one of Brandon's problems seems to involve rail lines running in the same direction as lane lines, which the fleet could find. Or do some types of disengagements (or lack of disengagement) require a wider release to calculate aggregate statistics?
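For the lane-line-vs-rail-line case, a remote trigger could be as simple as a shipped predicate that flags clips for upload when the two detections nearly coincide. This is pure guesswork about what such a trigger might look like:

```python
from typing import Optional

# Guesswork at what a fleet data-collection trigger could look like: a
# shipped predicate that flags a clip for upload when rail tracks run
# nearly parallel to detected lane lines. All names are hypothetical.

def should_upload(lane_heading_deg: float,
                  rail_heading_deg: Optional[float],
                  max_delta_deg: float = 5.0) -> bool:
    if rail_heading_deg is None:
        return False  # no rail detection, nothing ambiguous to capture
    return abs(lane_heading_deg - rail_heading_deg) < max_delta_deg
```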