FSD Beta Videos (and questions for FSD Beta drivers)

After watching AI DRIVR's video, my conviction in Tesla's approach is even higher, lol.
I really don't like his videos because he basically refuses to intervene at all, even if his car is impeding other drivers. The move he praised so much in the middle of his vid, where his car was trying to pass a stopped vehicle with an oncoming car, was also stupid. Yes, it's nice to see that the car tried to fix its mistake, but he never should have let it make the mistake in the first place. And on that last right turn in his video, his car crept so far forward it scared the oncoming driver into stopping and giving up the right of way to avoid what they thought would be a collision if they kept driving.
 
  • Disagree
Reactions: scottf200
I really don't like his videos because he basically refuses to intervene at all, even if his car is impeding other drivers. The move he praised so much in the middle of his vid, where his car was trying to pass a stopped vehicle with an oncoming car, was also stupid. Yes, it's nice to see that the car tried to fix its mistake, but he never should have let it make the mistake in the first place. And on that last right turn in his video, his car crept so far forward it scared the oncoming driver into stopping and giving up the right of way to avoid what they thought would be a collision if they kept driving.
Yeah, that first mistake was bad. At residential speeds (25 mph) it wasn't as bad as it could have been, but it still wasn't great. Like the guy in the video said, the auto-corrective behavior was nice to see, but ultimately the car should not have tried to, or been allowed to, make that mistake. We can clearly see the oncoming car; did the Tesla not see it, think it was going slow/stopped, or just ignore it?

I would have said the second mistake was huge, but I rewatched the video, and you see that the light actually turns green before the car goes. That other car definitely saw a yellow light as it was approaching and may or may not have been intending to stop where it stopped anyway. Had there been a collision, the Tesla may have been in the right.

As a human driver, though, I wouldn't have crept that far ahead.
 
  • Like
  • Informative
Reactions: mhan00 and Matias
I really don't like his videos because he basically refuses to intervene at all, even if his car is impeding other drivers. The move he praised so much in the middle of his vid, where his car was trying to pass a stopped vehicle with an oncoming car, was also stupid. Yes, it's nice to see that the car tried to fix its mistake, but he never should have let it make the mistake in the first place. And on that last right turn in his video, his car crept so far forward it scared the oncoming driver into stopping and giving up the right of way to avoid what they thought would be a collision if they kept driving.

He does explain why it's important to see fsd beta try to fix its own mistakes. Perfection shouldn't be the goal of fsd, as the world is imperfect and figuring out how to deal with your or others' mistakes is crucial. I've seen two recent Waymo examples where it simply gets stuck (like the infamous simple cone fiasco) because it needs the world to fit into its rigid programming. It's nice to see V9 eventually figure out certain issues.
 
We can clearly see the oncoming car; did the Tesla not see it, think it was going slow/stopped, or just ignore it?
I'm not sure the center-mounted camera could see the oncoming car. Of course this is a major user-comfort issue, since the car will have to peek out in some situations where the driver can clearly see an oncoming car (AI DRIVR's camera is somehow mounted on his head?). This is another case where the display needs to have a range longer than 200 ft...
[Attached screenshot of the in-car driving visualization]
 
  • Like
Reactions: idriveacar
He does explain why it's important to see fsd beta try to fix its own mistakes. Perfection shouldn't be the goal of fsd, as the world is imperfect and figuring out how to deal with your or others' mistakes is crucial. I've seen two recent Waymo examples where it simply gets stuck (like the infamous simple cone fiasco) because it needs the world to fit into its rigid programming. It's nice to see V9 eventually figure out certain issues.
Well yeah, except if the penalty of not dealing with a mistake is death.

Spoiler Alert: The penalty is death.
 
  • Like
Reactions: daktari and Matias
Interesting that it shows the stop sign but doesn’t connect it to why the car in front might be stopped. Software issue imo.

I remember when FSD Beta was first released, seeing Brandone's issues, like the wobbly pathing and mini islands, and thinking they were fairly severe, but two or three updates later they were fixed to a large extent. I think we're going to see that same level of progress now that Tesla has fully committed to vision only.

Don't get me wrong, it's still possible that Tesla will max out the HW3 CPU before achieving 2-5x human safety stats, but at least that's fixable.
 
  • Like
Reactions: qdeathstar
Interesting that it shows the stop sign but doesn’t connect it to why the car in front might be stopped. Software issue imo.
Here is a situation that FSD Beta struggles with. It wants to pass cars that are stopped at stop signs:

A lot of people have been talking about these behaviors in the recent releases. They seem to be a result of complaints that the car would just sit dumbly behind a parked or double-parked car. So now it's overly aggressive, and it doesn't really know which is moving traffic and which is a stopped or double-parked vehicle.

I thought about how to explain to someone how to figure that out, and it's really not so easy. It's the old "I know it when I see it" answer. Sometimes human drivers will be unsure too, but then someone waves them around with not much more than a hand/wrist flip, and that can be only subtly different from a "hold up for a moment" gesture. And in all such cases, the risk of going around is on you, not on the person who waved you along.

Sometimes the stopped vehicle is displaying flashers, but that's also far from a reliable clue. They could be off, or they could mean there's a serious hazard just ahead.

Not an easy problem for any AV.
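
To make the "which clues might matter" idea concrete, here is a toy sketch in Python. Every feature name and threshold below is invented for illustration; it is not a claim about how FSD Beta (or any AV stack) actually makes this call, just a way of showing that the judgment hinges on several weak signals rather than any single one.

Code:
# Hypothetical "is that stopped car parked, or just waiting in traffic?" heuristic.
# All fields and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class StoppedVehicle:
    seconds_stopped: float            # how long we've watched it sit still
    hazards_on: bool                  # flashers: suggestive, but ambiguous
    gap_to_stop_line_m: float         # distance from the vehicle to the stop line ahead
    lead_traffic_moving: bool         # is anything ahead of it creeping forward?
    offset_from_lane_center_m: float  # double-parked cars often hug the curb

def likely_safe_to_go_around(v: StoppedVehicle) -> bool:
    """Crude scoring of weak signals - illustrative only."""
    score = 0.0
    if v.seconds_stopped > 30:
        score += 1.0   # long dwell suggests parked, not queued
    if v.hazards_on:
        score += 0.5   # could also mean a hazard just ahead
    if v.gap_to_stop_line_m > 20:
        score += 1.0   # big empty gap argues against "waiting at the sign"
    if v.lead_traffic_moving:
        score -= 2.0   # traffic ahead is moving: it's a queue, stay put
    if abs(v.offset_from_lane_center_m) > 1.0:
        score += 0.5   # pulled toward the curb looks like a drop-off
    return score >= 2.0

# Example: a delivery van, flashers on, 40 m short of the stop line, nothing moving ahead
print(likely_safe_to_go_around(StoppedVehicle(120.0, True, 40.0, False, 1.2)))  # True

And even when the signals line up, as noted above, the risk of going around stays with the driver; no scoring function makes that go away.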
 
  • Like
Reactions: diplomat33
A lot of people have been talking about these behaviors in the recent releases. They seem to be a result of complaints that the car would just sit dumbly behind a parked or double-parked car. So now it's overly aggressive, and it doesn't really know which is moving traffic and which is a stopped or double-parked vehicle.

I thought about how to explain to someone how to figure that out, and it's really not so easy. It's the old "I know it when I see it" answer. Sometimes human drivers will be unsure too, but then someone waves them around with not much more than a hand/wrist flip, and that can be only subtly different from a "hold up for a moment" gesture. And in all such cases, the risk of going around is on you, not on the person who waved you along.

Sometimes the stopped vehicle is displaying flashers, but that's also far from a reliable clue. They could be off, or they could mean there's a serious hazard just ahead.

Not an easy problem for any AV.

Agreed. In fact, we've seen clips from other AV companies where remote assistance had to tell the AV it was ok to go around a double parked delivery truck because the AV was just sitting there and was not sure what to do.
 
Here is a situation that FSD Beta struggles with. It wants to pass cars that are stopped at stop signs:

That's why the logic Tesla is using to determine when to go around a vehicle is broken. I don't want to say it's common, but it happens often enough in the vids I've seen to be a real concern - not in every video, but often enough that it's a real problem.
 
Agreed. In fact, we've seen clips from other AV companies where remote assistance had to tell the AV it was ok to go around a double parked delivery truck because the AV was just sitting there and was not sure what to do.
Thinking about it a bit further, I believe the perception of whether and when to go around can be dependent on long-persistence data. A delivery truck on a city street with flashers on and the back door open - fairly clear you might go around, but not if one or more cars are ahead of you, with drivers inside and occasionally creeping forward.

There are many related possibilities, but you have to wait to see how the situation develops over tens of seconds or more. I think this is a fundamental issue for these machine-learning nets, because they don't seem to have any kind of long-persistence subloop. I thought this was coming to Tesla FSD, and I think it actually has, but the temporal understanding only spans a few seconds at best.

It's clearly not possible to store a one-minute (or longer) pipeline of full-resolution, full frame-rate video, so if truly long persistence is to be achieved, it has to be based on a pipeline of highly processed perceived-object data from a downstream NN layer. Not necessarily coded explicitly - it will likely become part of the ML "Software 2.0" solution - but the flow architecture needs to include some loop-back access to a fairly long (on the order of minutes) history of the scene elements. It's no good IMO if the system is making a completely fresh assessment every few seconds as a traffic jam, accident, demonstration, or whatever other slowly-developing situation unfolds. Unfortunately, I think this is exactly a limitation of many of these NN pipelines, including Tesla's - though clearly they're aware of this issue. The question is whether they have the flexibility to create, and refer back to, such long-term inputs for their NN decisions with the present hardware setup.

There are corollary topics here, like the ability to recognize obstacles or difficult traffic and modify the nav route with at least a bit of history: a few minutes to a few days to "remember" new or evolving construction, or at least a few minutes to help resolve Chuck Cook's nav loop (where it gives up on a difficult/inadvisable left turn, goes around the block, immediately forgets what just happened, and tries again, over and over). And BTW, Tesla hasn't made it very easy for the owner/occupant to advise the car or request a better route.

It seriously can't be human-like or even robotically successful in traffic if it can only remember the last few seconds of its own life history, no matter how good the training set was.
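
For what it's worth, the "loop-back access to a long history" idea doesn't require storing raw video; a compact ring buffer of already-processed object tracks would do. The sketch below is only an illustration of that data structure - the field names, 60-second window, and 10 Hz rate are assumptions, not anything Tesla has described.

Code:
# Sketch of a long-persistence "scene history": keep a bounded ring buffer of
# compact, post-perception object snapshots instead of a minute of raw video.
# Field names, window length, and rates are illustrative assumptions.
from collections import deque
from dataclasses import dataclass

HISTORY_SECONDS = 60
FRAMES_PER_SECOND = 10

@dataclass
class TrackSnapshot:
    track_id: int
    timestamp: float       # seconds
    position_m: tuple      # (x, y) in the ego frame
    speed_mps: float

class SceneHistory:
    """Ring buffer of per-frame object snapshots, bounded in memory."""
    def __init__(self):
        self.frames = deque(maxlen=HISTORY_SECONDS * FRAMES_PER_SECOND)

    def push(self, snapshots):
        """Append the current frame's processed object list."""
        self.frames.append(list(snapshots))

    def seconds_stationary(self, track_id: int, now: float) -> float:
        """How long this track has been (nearly) motionless, within our window."""
        stationary_since = now
        for frame in reversed(self.frames):
            snap = next((s for s in frame if s.track_id == track_id), None)
            if snap is None or snap.speed_mps > 0.5:
                break
            stationary_since = snap.timestamp
        return now - stationary_since

# A planner with this kind of history could reason over minutes, e.g.
# "that truck has not moved in 45 s while the lane beside it keeps flowing",
# instead of re-deciding from scratch every few seconds.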
 
  • Love
Reactions: AlanSubie4Life
Thinking about it a bit further, I believe the perception of whether and when to go around can be dependent on long-persistence data. A delivery truck on a city street with flashers on and the back door open - fairly clear you might go around, but not if one or more cars are ahead of you, with drivers inside and occasionally creeping forward.

There are many related possibilities, but you have to wait to see how the situation develops over tens of seconds or more. I think this is a fundamental issue for these machine-learning nets, because they don't seem to have any kind of long-persistence subloop. I thought this was coming to Tesla FSD, and I think it actually has, but the temporal understanding only spans a few seconds at best.

It's clearly not possible to store a one-minute (or longer) pipeline of full-resolution, full frame-rate video, so if truly long persistence is to be achieved, it has to be based on a pipeline of highly processed perceived-object data from a downstream NN layer. Not necessarily coded explicitly - it will likely become part of the ML "Software 2.0" solution - but the flow architecture needs to include some loop-back access to a fairly long (on the order of minutes) history of the scene elements. It's no good IMO if the system is making a completely fresh assessment every few seconds as a traffic jam, accident, demonstration, or whatever other slowly-developing situation unfolds. Unfortunately, I think this is exactly a limitation of many of these NN pipelines, including Tesla's - though clearly they're aware of this issue. The question is whether they have the flexibility to create, and refer back to, such long-term inputs for their NN decisions with the present hardware setup.

There are corollary topics here, like the ability to recognize obstacles or difficult traffic and modify the nav route with at least a bit of history: a few minutes to a few days to "remember" new or evolving construction, or at least a few minutes to help resolve Chuck Cook's nav loop (where it gives up on a difficult/inadvisable left turn, goes around the block, immediately forgets what just happened, and tries again, over and over). And BTW, Tesla hasn't made it very easy for the owner/occupant to advise the car or request a better route.

It seriously can't be human-like or even robotically successful in traffic if it can only remember the last few seconds of its own life history, no matter how good the training set was.
From the most recent presentation this year, the snippets being sent to Tesla are 10 seconds long, so I presume that is the current persistence. It also appears they only recently switched to persisting previous data; I discussed this a bit here. Previously they were working frame by frame and only using a smoothing function to clean things up. Now they are treating the input more like video and getting velocity and acceleration data from it (thus being able to replace radar).
Tesla.com - "Transitioning to Tesla Vision"

I don't know how far this persistence extends into the rest of the car's decision-making, however.
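
As a rough illustration of what treating the input "like video" buys you (per the Tesla Vision link above): even a short buffer of per-frame range estimates lets you compute velocity and acceleration by finite differences plus smoothing, which a single frame cannot. The 10 Hz rate, buffer size, and smoothing constant below are assumptions for the sketch, not Tesla's numbers.

Code:
# Estimate closing speed and acceleration of a tracked object from a short
# history of vision-derived range samples. Parameters are illustrative.
from collections import deque

class RangeRateEstimator:
    def __init__(self, dt: float = 0.1, alpha: float = 0.3):
        self.dt = dt                    # time between frames (s), ~10 Hz assumed
        self.alpha = alpha              # exponential smoothing factor
        self.ranges = deque(maxlen=10)  # ~1 s of range history
        self.velocity = 0.0             # m/s, positive = opening
        self.accel = 0.0                # m/s^2

    def update(self, range_m: float):
        self.ranges.append(range_m)
        if len(self.ranges) < 2:
            return self.velocity, self.accel
        raw_v = (self.ranges[-1] - self.ranges[-2]) / self.dt
        prev_v = self.velocity
        # Smooth the noisy per-frame difference instead of trusting each frame alone
        self.velocity = self.alpha * raw_v + (1 - self.alpha) * self.velocity
        self.accel = (self.velocity - prev_v) / self.dt
        return self.velocity, self.accel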
 
I see you on that, and feel the same.

But city driving, with all its chaos and complexity, is so different from highway driving. FSD Beta is so far away from anything like AP on an uncomplicated highway.

I wish they had focused on getting to Level 4 on the highway first, instead of city driving.

Same here. I wish they would at least fix things like swerving into the middle of a merging lane, or get it to manage regular curves without freaking out.
 
  • Like
Reactions: daktari
Luckily that seems to be only a policy error, since the car knows there's a stop sign (it is visible in the visualization).
The thing that strikes me is that there is no awareness of the cars stopped at the limit line that are causing the cars in front of the Tesla to be stationary. If you notice, it visualizes two cars ahead, but there's a huge gap between the first of the two and the limit line.

I could see a situation where passing would make sense - if that first visualized car were disabled and there really were a gap between it and the stop sign, traffic would indeed go around it. But that's an exceptional situation, whereas this appears to simply be a failure to perceive the one or two vehicles in front of the two shown on the display, in which case passing would not be an option.
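
One way to picture that failure: the empty stretch between the visualized lead car and the limit line is itself evidence of unperceived queued vehicles, not an invitation to pass. A toy check along those lines (the car length and example numbers are assumptions, purely for illustration):

Code:
# If a lead car is stopped well short of the stop line and the gap could hold
# one or more cars we don't see, treat the gap as a probable hidden queue.
CAR_LENGTH_M = 5.0

def gap_suggests_hidden_queue(lead_car_to_stop_line_m: float,
                              vehicles_seen_in_gap: int) -> bool:
    """True if the empty space ahead of the stopped lead car could hold unseen cars."""
    space_unaccounted = lead_car_to_stop_line_m - vehicles_seen_in_gap * CAR_LENGTH_M
    return space_unaccounted >= CAR_LENGTH_M

# The clip discussed above: two cars visualized, a long empty stretch to the limit line
print(gap_suggests_hidden_queue(lead_car_to_stop_line_m=20.0, vehicles_seen_in_gap=0))  # True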