This clip is only 12 seconds long but the reduction in speed is well done.
> After watching AI DRIVR's video, my conviction in Tesla's approach is even higher, lol.

I really don’t like his videos because he basically refuses to intervene at all, even if his car is impeding other drivers. The move he praised so much in the middle of his vid, where his car was trying to pass a stopped vehicle with an oncoming car, was also stupid. Yes, it’s nice to see that the car tried to fix its mistake, but he never should have let it make the mistake in the first place. And on that last right turn in his video, his car crept so far forward that it scared the oncoming driver into stopping and giving up the right of way, to avoid what they thought would be a collision if they kept driving.
Yeah, that first mistake was bad. At residential speeds (25 mph) it wasn't as bad as it could have been, but it still wasn't great. Like the guy in the video said, the auto-corrective behavior was nice to see, but ultimately the car should not have tried to, or been allowed to, make that mistake. We can clearly see the oncoming car; did the Tesla not see it, think it was going slow/stopped, or just ignore it?
I'm not sure the center-mounted camera could see the oncoming car. Of course this is a major user-comfort issue, since the car will have to peek out in some situations where the driver can clearly see an oncoming car (AI DRIVR's camera is somehow mounted on his head?). This is another case where the display needs to have longer than 200 ft range...
> He does explain why it's important to see FSD Beta try to fix its own mistakes. Perfection shouldn't be the goal of FSD, as the world is imperfect and figuring out how to deal with your or others' mistakes is crucial. I've seen two recent Waymo examples where it simply gets stuck (like the infamous simple cone fiasco) because it needs the world to fit into its rigid programming. It's nice to see V9 eventually figure out certain issues.

Well yeah, except if the penalty of not dealing with a mistake is death.
Interesting that it shows the stop sign but doesn’t connect it to why the car in front might be stopped. Software issue imo.
> Here is a situation that FSD Beta struggles with. It wants to pass cars that are stopped at stop signs:

A lot of people have been talking about these behaviors in the recent releases. It seems to be a result of complaints that the car might just sit dumbly behind a parked or double-parked car. So now it's overly aggressive, and it doesn't really know which is moving traffic and which is a stopped or double-parked vehicle.
I thought about how to explain to someone how to figure that out, and it's really not so easy. It's the old "I know it when I see it" answer. And sometimes human drivers will be unsure, but then someone waves them around with not much more than a hand/wrist flip - and that can be just subtly different from a "hold up for a moment" gesture. And in all such cases, the risks of going around are on you, not on the person who waved you along.
Sometimes the stopped vehicle is displaying flashers, but that's also far from a reliable clue. They could be off, or could mean there's a serious hazard just ahead.
Not an easy problem for any AV.
That’s why the logic Tesla is using to determine when to go around a vehicle is broken. I don’t want to say this is common, but it happens often enough in the vids I’ve seen to be a real concern. It doesn’t happen in every video, but often enough that it’s a real problem.
> Agreed. In fact, we've seen clips from other AV companies where remote assistance had to tell the AV it was OK to go around a double-parked delivery truck because the AV was just sitting there and was not sure what to do.

Thinking about it a bit further, I believe the perception of whether and when to go around can be dependent on long-persistence data. A delivery truck on a city street with flashers on and the back door open: fairly clear you might go around, but not if one or more cars are ahead of you, with drivers inside and occasionally creeping forward.
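To make the ambiguity concrete, here is a minimal sketch of the kind of go-around decision being discussed. Everything here is hypothetical (the observation fields, thresholds, and function names are illustrative, not anything Tesla or Waymo actually uses); the point is that the useful signals only exist after watching the stopped vehicle for a while.

```python
from dataclasses import dataclass

@dataclass
class StoppedVehicleObservation:
    """Signals accumulated over many seconds of watching one stopped vehicle."""
    seconds_stationary: float      # how long it has been stopped
    hazards_flashing: bool         # four-way flashers on
    lead_gap_m: float              # gap between it and whatever is ahead of it
    near_stop_sign: bool           # a stop sign / limit line is just ahead of it
    crept_forward_recently: bool   # any small forward motion in the window

def should_go_around(obs: StoppedVehicleObservation) -> bool:
    """Very rough heuristic; a real system would weigh far more signals."""
    if obs.crept_forward_recently or obs.near_stop_sign:
        return False               # looks like queued traffic: wait behind it
    if obs.hazards_flashing and obs.seconds_stationary > 30 and obs.lead_gap_m > 10:
        return True                # likely double-parked / making a delivery
    # No flashers: demand much more evidence before committing
    return obs.seconds_stationary > 120 and obs.lead_gap_m > 10
```

Note how the stop-sign and creeping checks encode exactly the failure modes described above: without persistent observation, a car stopped at a stop sign and a double-parked delivery truck look identical in a single frame.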
I agree, it's good to see how well it handles multiple situations, and seeing how it recovers was a bonus.
From the most recent presentation this year, the snippets being sent to Tesla are 10 seconds long, so I presume that is the current persistence. It also appears they only recently switched to persisting previous data; I discussed this a bit here. Previously they were doing things frame by frame and only using a smoothing function to smooth things out. Now they are treating the input more like "video" and getting velocity and acceleration data from it (thus being able to replace radar).
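The "velocity and acceleration from video" idea reduces, at its simplest, to finite differences over a short buffer of per-frame positions. This is a toy illustration of the principle, not Tesla's actual method (which would involve learned estimators and filtering):

```python
def finite_diff_kinematics(positions, dt):
    """Estimate per-step velocity and acceleration of a tracked object from
    its per-frame positions (metres along the lane), sampled every dt seconds.
    Toy finite-difference version of deriving motion from video frames."""
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accelerations = [(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]
    return velocities, accelerations
```

With real detections, raw differences like this would be far too noisy; the point is only that a temporal buffer of positions contains the same information radar used to supply.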
There are many related possibilities, but you have to wait to see how things develop over tens of seconds or more. I think this is a fundamental issue for these machine-learning nets, because they don't seem to have any kind of long-persistence subloop. I thought this was coming to Tesla FSD, and I think it actually has, but the temporal understanding only spans a few seconds at best.

It's clearly not possible to store a one-minute (or more) pipeline of full-resolution, full-frame-rate video, so if truly long persistence is to be achieved, it has to be based on a pipeline of highly processed perceived-object data from a downstream NN layer. Not necessarily coded explicitly; it will likely become part of the ML "Software 2.0" solution, but the flow architecture needs to include some loop-back access to a fairly long (on the order of minutes) history of the scene elements.

It's no good, IMO, if the system is making a completely fresh assessment every few seconds as a traffic jam, accident, demonstration, or whatever slowly-developing situation unfolds. Unfortunately I think this is exactly a limitation of many of these NN pipelines, including Tesla's, though clearly they're aware of this issue. The question is whether they have the flexibility to create and refer back to such long-term inputs for their NN decisions with the present hardware setup.
There are corollary topics here, like the ability to recognize obstacles or difficult traffic and modify the nav route with at least a bit of history: a few minutes to a few days to "remember" new or evolving construction, and at least a few minutes to help resolve Chuck Cook's nav loop (where it gives up on a difficult/inadvisable left turn, only to go around the block, immediately forget what just happened, and try again, over and over). And BTW, Tesla hasn't made it very easy for the owner/occupant to advise or request a better route.
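The nav-loop fix being suggested can be sketched as a time-decayed penalty on recently failed maneuvers, so the router stops retrying the same turn for a while but eventually forgets. All names and numbers here are hypothetical, just to show the shape of the idea:

```python
class TurnMemory:
    """Remember recently failed maneuvers so route planning can avoid
    immediately retrying them; the penalty halves every half_life_s seconds,
    giving minutes-scale memory that fades on its own."""

    def __init__(self, half_life_s: float = 600.0):
        self.half_life_s = half_life_s
        self.failures = {}  # maneuver id -> (timestamp, base penalty)

    def report_failure(self, maneuver: str, t: float, penalty: float = 100.0) -> None:
        self.failures[maneuver] = (t, penalty)

    def penalty(self, maneuver: str, now: float) -> float:
        """Extra routing cost for attempting this maneuver again right now."""
        if maneuver not in self.failures:
            return 0.0
        t0, base = self.failures[maneuver]
        return base * 0.5 ** ((now - t0) / self.half_life_s)
```

A router that adds `penalty()` to a turn's edge cost would send the car a different way right after a failed attempt, which is exactly the behavior missing from the loop described above.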
It seriously can't be human-like or even robotically successful in traffic if it can only remember the last few seconds of its own life history, no matter how good the training set was.
I don't know how this persistence extends however to the other decision making of the car beyond that.
Luckily that only seems to be a policy error, since the car knows there’s a stop sign (it is visible in the visualization).
I see you on that, and feel the same.
But city driving, with all its chaos and complexity, is so different from highway driving. FSD Beta is so far away from anything like AP on an uncomplicated highway.
I wish they focused on getting to level 4 highway first instead of city driving.
The thing that strikes me is that there is no awareness of the cars stopped at the limit line that are causing the cars in front of the Tesla to be stationary. If you notice, it visualizes two cars ahead, but there's a huge gap between the first of the two and the limit line.