Compared with your unmarked dirt-track example, it is intuitively far easier to train a NN to drive in the centre of a well-defined highway lane with clear white lines and cat's-eyes every 10 metres. But it's not obvious to me why you think it would be impossible to train a NN to recognise where the road edge ends and the grass verge begins. That seems far easier to achieve than tarmacking and line-painting every obscure rural road in the world.
The bit that I can see is trickier is the behaviour response: for example, driving on a very narrow country road when an oncoming agricultural vehicle cannot pass without you first reversing and pulling halfway off the road onto the grass bank. But as I said in my previous message, what percentage of journeys carry a greater-than-zero probability of this type of event?
IMHO, understanding the area surrounding the road, and the implications of driving off the road, are absolutely critical to making a self-driving system. There are lots of places around here where if you go off the road, you're going off a cliff to your death. Human drivers drive much more cautiously in such places, and AI drivers need to as well. The system needs to be able to calculate a "maximum width which will safely bear a vehicle" at the edge of the road, plus the penalties for being wrong, and adjust its driving accordingly. The weighting factor for how bad the shoulder is should be multiplied by how bad the road and weather conditions are (i.e., how likely you are to unintentionally end up on the shoulder in the first place). The car should also be able to assess a negative-width shoulder - e.g. where an obstruction, or undercutting of the road surface, has extended into the road itself.
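To make the idea concrete, here is a toy sketch of such a shoulder-risk score. Every name, threshold, and constant here is invented for illustration - this is not how any real system computes it - but it shows the multiplication described above: a width deficit, scaled by the cost of leaving the road, scaled again by how likely you are to drift there.

```python
from dataclasses import dataclass

@dataclass
class ShoulderSegment:
    # Usable width beyond the lane edge, in metres. Negative means the
    # hazard (undercut surface, fallen rock) intrudes into the road itself.
    usable_width_m: float
    # Penalty for leaving the road here: e.g. 1.0 = soft verge, 10.0 = cliff.
    departure_penalty: float

def shoulder_risk(seg: ShoulderSegment, condition_factor: float) -> float:
    """Toy risk score for one shoulder segment.

    condition_factor >= 1.0 scales up with poor road surface or weather,
    i.e. the likelihood of unintentionally going onto the shoulder at all.
    """
    # Treat anything under ~1 m of usable shoulder as increasingly
    # dangerous; negative widths are worse still.
    width_deficit = max(0.0, 1.0 - seg.usable_width_m)
    return width_deficit * seg.departure_penalty * condition_factor
```

So a wide, safe verge scores zero regardless of conditions, while a zero-width shoulder above a cliff in bad weather scores very high - which is exactly the signal the planner would need to slow down or bias away from that edge.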
On bad roads, or in bad weather, around here where there are unsafe shoulders, it's standard for drivers to drive down the middle of the road (even where there are centre lines) unless oncoming traffic is approaching. AP needs to learn to do this too.
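That behaviour rule is simple enough to state as code. This is a minimal sketch under assumed inputs (the risk score and threshold are hypothetical), not a claim about how AP actually decides lane position:

```python
def lane_position(shoulder_risk_score: float, oncoming_traffic: bool) -> str:
    """Pick a lateral driving position given shoulder danger and traffic.

    shoulder_risk_score: output of some shoulder assessment (hypothetical).
    """
    RISK_THRESHOLD = 5.0  # assumed tuning constant, for illustration only
    if shoulder_risk_score > RISK_THRESHOLD and not oncoming_traffic:
        # Straddle the centreline, keeping well clear of the bad shoulder.
        return "road-centre"
    # Normal keep-to-lane behaviour.
    return "lane-centre"
```

The key point is that the rule is conditional on *both* factors: a dangerous shoulder alone is not enough; the road must also be clear of oncoming traffic.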
The day I see, on the AP display, road shoulders accurately modelled, colour-coded by danger, with collision hazards on them (rocks, trees, human structures, etc.) flagged by severity, and the current road's driving conditions accurately flagged, is the day I'll think, "driverless self-driving is probably ready."
Thankfully, I imagine that Tesla does care about this and is working on it, if only because they want the ability to pull a car off the road in an emergency (or, at present, when the driver is unresponsive). Shoulder assessment needs to be applied more broadly than that, but it would be a critical first step.