Also known as "following too close"
Actually, you want to be far enough back to detect the pothole on your own even if the car in front of you hits it directly. Particularly if it does.
If that's the best they can do, it's not ready to use. And I don't think it's the best they can do; this isn't a computing power problem, this is a fundamental design error. This is *bad driving*. If I saw a human following this algorithm (and to be fair, I have), I would tell the human to cut it out.
I'm glad the instrument cluster shows what AP is "thinking", but it's doing a crap job, because it's been programmed by incompetent drivers.
Primary guidance when driving should be, first, the *detected stationary objects* which you are avoiding, followed by the *calculated blind spots* which you are *also* avoiding (because they might contain things you can't see). This is how I learned to drive. After that you start dealing with moving objects which you are avoiding, and finally the route you're actually trying to take.
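As a minimal sketch of that ordering in Python (the names and the priority table are my own hypothetical illustration, not anyone's actual code):

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    kind: str          # "stationary", "blind_spot", "moving", or "route"
    distance_m: float  # distance to the hazard, in meters

# Lower number = higher priority when steering constraints conflict.
PRIORITY = {
    "stationary": 0,  # detected stationary objects: avoid these first
    "blind_spot": 1,  # calculated blind spots: avoid as if occupied
    "moving":     2,  # moving objects you are avoiding
    "route":      3,  # the route you're actually trying to take
}

def rank_constraints(hazards: list[Hazard]) -> list[Hazard]:
    """Order steering constraints so obstacles always outrank the route."""
    return sorted(hazards, key=lambda h: (PRIORITY[h.kind], h.distance_m))
```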
I'm glad that there are so many people here who seem to have a good understanding of how Tesla's Autosteer currently works. As currently done, it's really unacceptable and should be scrapped. Thankfully, getting a decent system working actually seems like a simple problem -- the error is that they've told it to do the wrong thing, so tell it to do the right thing and you can probably get it working.
There is something here about Bollinger Bands...
Anyway,
1) The car should be creating a vector field (let's say seen in the top view) from all the information coming in. This would include the trajectories of cars on all sides, front and back; the lane markers on each side, plus the momentum and legitimate curvature of dashed lines or dots (consider how lane definition works through intersections, where 90-degree corners are optional); and oil drips, tire paths, residual gravel that piles up where tires don't go, and the grass that lives in the middle of a rutted gravel road. The car is moving through a field of directional information.
2) The car needs to weight that directional information differently depending on which way it is trying to go (see the sketch after this list). At the fatal gore point, it should have been set to weight (track) the right lane marker rather than the left, and the trajectories of the cars on the right rather than on the left. At every gore or decision point the system needs to know which way you want to go. Autopilot cannot work well without knowledge of that preference.
3) On a country road with a crumbling shoulder, the dashed center line is the guiding vector.
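To make points 1) and 2) concrete, here's a minimal sketch in Python. Everything in it -- the names, the 0..1 strengths, the 2x side boost -- is my own hypothetical illustration of the idea, not a claim about Tesla's actual implementation. Each cue contributes a local heading vector, and the side preference re-weights cues before they're blended:

```python
import math
from dataclasses import dataclass

@dataclass
class Cue:
    heading_rad: float  # direction this cue suggests, in the top view
    strength: float     # confidence in the cue, 0..1
    side: str           # "left", "right", or "center"

def blended_heading(cues: list[Cue], prefer: str, boost: float = 2.0) -> float:
    """Blend directional cues into one heading, over-weighting one side.

    With prefer="right", the right lane marker and the trajectories of
    cars on the right dominate the result, as item 2) asks for.
    """
    x = y = 0.0
    for c in cues:
        w = c.strength * (boost if c.side == prefer else 1.0)
        x += w * math.cos(c.heading_rad)
        y += w * math.sin(c.heading_rad)
    return math.atan2(y, x)

# Example: at a gore point the left marker peels off toward the exit while
# the right marker and the car ahead-right continue straight. Preferring
# "right" keeps the blended heading near straight ahead (~3 degrees here).
cues = [
    Cue(heading_rad=math.radians(15), strength=0.9, side="left"),   # diverging marker
    Cue(heading_rad=math.radians(0),  strength=0.9, side="right"),  # through-lane marker
    Cue(heading_rad=math.radians(0),  strength=0.7, side="right"),  # car ahead on the right
]
print(math.degrees(blended_heading(cues, prefer="right")))
```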
... There are a couple of points here.
A) Most voices here are expressing valid positions.
B) Autopilot is not well-posed if no directional/side preference is expressed before gore points/singularities.
Fear of Elon and a lack of common vocabulary may have prevented Tesla employees from effectively explaining item B to Elon - they were not saying it right.
Gladwell wrote a book called "Outliers" that details the cockpit communication culture that caused Korean Air to crash more frequently than other airlines.
Edit to add:
[Chapter Seven - Turnaround in the Skies: "Captain, the weather radar has helped us a lot."]
Some of the recent lossiness at Tesla is caused by attenuated communication between [too-]narrow specialists and management... the shared vocabulary might be too narrow.
Anyway, Autopilot is not well-posed without a directional preference before gore points. With that preference, the Mountain View accident would not have happened. The car would have tracked the right lane marker and the cars on the right, avoiding the gore point altogether.
This is a simple fix for Tesla (Elon may have already alluded to this during the annual meeting; at least that is how I read/guessed it).
[edit: upon reflection, you can set the side-tracking preference based on radar return, as in over-weighting the vectors on the side where less ground clutter is being filtered out. This should help with some construction-zone issues.]
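A minimal sketch of that edit, with hypothetical clutter counts standing in for whatever the radar pipeline actually reports:

```python
def side_preference(clutter_left: int, clutter_right: int, margin: int = 10) -> str:
    """Prefer the side returning noticeably less filtered-out ground clutter."""
    if clutter_right + margin < clutter_left:
        return "right"
    if clutter_left + margin < clutter_right:
        return "left"
    return "center"  # no clear winner: keep the weighting symmetric

# e.g. construction debris on the left produces heavy clutter there,
# so the cleaner right-side returns win the tracking preference:
print(side_preference(clutter_left=120, clutter_right=35))  # -> "right"
```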