That's not what this discussion is about. The question is: why isn't this behavior of FSD Beta predictable in simulation? Why wasn't a scenario close enough to this caught by Monte Carlo simulation?

A simulation can only respond to the inputs given for the scenario. It cannot simulate the irrational, unpredictable behavior of humans.
Why wasn't a scenario close enough to this caught by Monte Carlo simulation?
I'd venture that it wasn't caught because the safety performance of any given maneuver can be really subjective.

You can definitely create a model for how safe a human feels. They have a crazy amount of disengagement data to analyze and see how comfortable people are with cars coming toward them at high speed. Watching the video again, I do see your point that if Chuck hadn't disengaged, there probably would not have been a collision even with no evasive action by the other car. That's an extremely rude way to drive, though, and not really practical for a system that needs to be monitored by humans.
For that particular case, it seems to me that Chuck took over because he thought the vehicle was too slow to completely enter the median. But the only oncoming vehicle was in the middle lane, so the vehicle was in no imminent danger of a collision. You can model this scenario in a simulation and figure out how often the maneuver results in a collision, but you cannot use a simulation to model how safe a human in the car feels during the maneuver.
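The "figure out how often the maneuver results in a collision" part is the kind of thing a Monte Carlo run answers directly. Here's a minimal sketch of the idea; the distributions and numbers (gap times, lane-clearing time) are purely illustrative assumptions, not anything from Tesla's actual simulator:

```python
import random

# Toy Monte Carlo sketch of an unprotected left turn.
# All parameters below are illustrative guesses, not real-world values.
def simulate_turn(trials=100_000, seed=0):
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        gap_s = rng.uniform(2.0, 10.0)   # time gap to the oncoming car (s)
        clear_s = rng.gauss(4.5, 0.8)    # time ego needs to clear the lane (s)
        if clear_s >= gap_s:             # ego still in the lane when the car arrives
            collisions += 1
    return collisions / trials

rate = simulate_turn()
```

Sampling many randomized scenarios like this gives a collision frequency, but notice that nothing in the loop captures whether a rider would have felt the need to disengage; that's the subjective part.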
In the video comments, the dude keeps saying it was just a bump and that his rim is fine.
No way that was a dip in the road; he full-on hit that protruding curb at speed and went up a good 8". I'm afraid I don't believe him.
View looking back at the curb
but you cannot use a simulation to model how safe a human in the car feels during the maneuver.

You can definitely simulate that! You just have to make sure margins are sufficient.
Anyway this can definitely be simulated.
Do we feel that FSD Beta saw the pole though?

Based on the visualization and driving behavior, it seems like FSD Beta 10.69 did not see the sign / pole / concrete base. But maybe it just wasn't actually that close. The keynote about the occupancy network does show that skinny poles aren't necessarily a problem.
You can model subjective feelings of safety or discomfort, but how else do you validate your model other than testing it in real life?

There are some obvious parameters you can use to evaluate passenger feelings of safety. You can measure the levels of lateral jerk, acceleration, and deceleration; these are key areas that cause people to feel less confident about the car's driving. You can certainly measure the car's proximity to other objects in the simulation (cars, people, UFOs, etc.). It is not difficult to come up with minimum proximity values for different types of objects based on speed and situation. You run your simulation and assess whether the car exceeded any of these comfort factors.
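As a rough sketch of that kind of post-hoc comfort check: take the simulated trajectory samples and flag any that exceed the thresholds. The threshold values and the sample format here are made-up assumptions for illustration, not known limits from any real system:

```python
# Illustrative comfort thresholds -- guesses, not real tuning values.
LATERAL_JERK_MAX = 2.0   # m/s^3
DECEL_MAX = 3.0          # m/s^2
MIN_GAP_M = {"car": 1.5, "pedestrian": 3.0}  # minimum proximity by object type

def comfort_violations(samples):
    """samples: list of dicts with lateral_jerk, decel, and gaps {type: meters}.
    Returns (sample_index, violated_factor) pairs."""
    violations = []
    for i, s in enumerate(samples):
        if abs(s["lateral_jerk"]) > LATERAL_JERK_MAX:
            violations.append((i, "lateral_jerk"))
        if s["decel"] > DECEL_MAX:
            violations.append((i, "decel"))
        for kind, gap in s["gaps"].items():
            if gap < MIN_GAP_M.get(kind, 1.0):
                violations.append((i, f"gap:{kind}"))
    return violations
```

A run with zero violations would count as "comfortable" under the model; the open question in the post above still applies, since the thresholds themselves have to be validated against real riders.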
Maybe they have run this scenario through a simulation, and their model predicted that occupants would feel safer slowly approaching the dense traffic ahead in scenarios where the traffic behind the vehicle is light. No reason that behavior couldn't have already come from a simulation, but if it did, the assumptions they programmed into the simulation have now been proven incorrect.
So we can estimate that FSD first diverted its path for the pole about 30 feet prior to it, and it was first rendered on the screen about 20 feet prior to it

Good catch about something related to the pole showing up in the visualization, but I have my doubts that it diverted its path due to the pole vs. the curb ahead where the parking spots end. The "pole" actually appears twice in the visualization, different from the screenshot you shared:
In all fairness, that pole isn't the easiest to see for me. It may have been easier in real life, but in the static picture it kind of blends into the other poles on the other side of the sidewalk.

For future reference, you can frame-reverse / frame-advance through a YouTube video with the comma and period keys on your keyboard, respectively.
At about 25 frames prior to the pole being approximately at the front of the vehicle, you can see the intent-line make a deliberate shift to the left. And then twice at about 17 frames prior to the pole, and 13 frames prior to the pole, you can see it render on the FSD visualization.
This video runs at 30 FPS, and 25 MPH is about 37 feet per second. So we can estimate that FSD first diverted its path for the pole about 30 feet prior to it, and it was first rendered on the screen about 20 feet prior to it. EDIT: I also just realized that this segment is running at 8x speed or faster, judging by the speed of the hand gestures. So it's possible it first saw the pole closer to 80 yards away, and first rendered it on screen closer to 50 yards away.
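The frame-counting arithmetic above works out like this (using the 25-frame path shift and the 17-frame first render; the 8x playback factor is the eyeballed estimate from the post, not a measured value):

```python
# Back-of-envelope check of the frame counting above.
FPS = 30
FT_PER_S_PER_MPH = 5280 / 3600            # 1 mph = 1.467 ft/s
speed_ftps = 25 * FT_PER_S_PER_MPH        # ~36.7 ft/s at 25 MPH
ft_per_frame = speed_ftps / FPS           # ~1.22 ft of travel per video frame

divert_ft = 25 * ft_per_frame             # path shift: ~31 ft before the pole
render_ft = 17 * ft_per_frame             # first render: ~21 ft before the pole

# If the clip is sped up ~8x (an eyeballed guess), multiply by 8:
divert_real_ft = divert_ft * 8            # ~244 ft, i.e. ~81 yards
render_real_ft = render_ft * 8            # ~166 ft, i.e. ~55 yards
```

That lines up with the "about 30 feet / 20 feet" estimates, and with "closer to 80 yards / 50 yards" once the playback speed is factored in.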
So there's no question that FSD saw the pole in a parking lot while traveling at 25 MPH. Here's a screenshot from 13 frames prior.
I get the feeling that you're overestimating the abilities of simulation and/or don't understand how much goes into a simulation. (Don't take this wrong; I don't mean it disparagingly.) I work in healthcare, and we do simulations, but even with decades of experience the simulations are still poor models of a complex biological system. This is similar: it's a highly complex and dynamic system with numerous inputs that Tesla is still developing and has relatively limited experience with. If you don't understand the complexities at this point, there's not much point in continuing the conversation.

Maybe. I'm just curious why simulation doesn't work in this case. It's hard for me to imagine them getting to the performance required for driverless operation if they haven't figured out how to use data from the fleet to solve Chuck's ULT.
What I would like to know is why it often signals when going around a curve (sometimes even the wrong signal direction) BUT rarely signals when moving into a turn lane.
For things like roundabouts here you are required to signal, even if you're going in a constant circle.

Where is 'here'? In MN you're not.
Almost no one does though.
Tesla will work them when and if they decide to work them. You don't necessarily work all the simple ones first.

Yup, totally agree, and as I would expect, it seems like Tesla is prioritizing what's important to them, although frequency of an issue could be an additional differentiator. Pretty high on their list is safety, such as improving predictions about intersections so that FSD Beta doesn't drive into oncoming traffic. There can also be "uninteresting" improvements that satisfy business needs, such as preparing for single-stack highway driving, where the marginal benefit to users might seem small given that Navigate on Autopilot is already quite capable.
Why signal out of courtesy? What benefit does that serve? Honestly, it's discourteous, as it is extraneous noise that could interfere with legitimate signals.

Because you are warning anyone in the other lane that you are merging? Doesn't seem extraneous to me.
Elon clarifying that the "early Beta" 1,000 is mostly employees: