Not directly related to op but seems like this thread is in the right area:
Can someone explain what to me feels like a huge missing element in the AP / FSD model?
Raven S 10.2.1 hw3.
Compared with my own driving style, Tesla's automated driving feels very literal and shallow. The car will suddenly decide that an object or road feature has magically appeared from nowhere, without prior evidence. When I'm driving unaided, I imagine a constantly changing probability / certainty / likelihood 'mask'. When I sense decreasing certainty about a potentially critical aspect of the road ahead, I slow down.
The car's keenness to drive right up to the posted speed limit, based on only a short view of the road ahead, makes for a white-knuckle experience!
As the driver, I feel I need confidence indicators to tell me how solid the car is in its current environment and how vigilant I need to be:
Long / medium / short range confidence.
Route confidence.
System internal confidence (compromised sensors / actuators).
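Just to illustrate what I mean, a per-frame readout along those lines could be as simple as the following sketch. Every name and field here is hypothetical; this is not anything Tesla actually exposes:

```python
from dataclasses import dataclass

@dataclass
class ConfidenceReport:
    """Hypothetical per-frame confidence readout for the driver display.

    All values in [0, 1], where 1.0 means fully confident.
    """
    short_range: float   # immediate surroundings
    medium_range: float  # next few seconds of road
    long_range: float    # far road geometry / map agreement
    route: float         # confidence in the planned route
    system: float        # sensor / actuator health

    def vigilance_needed(self):
        # The weakest link dictates how closely the driver must watch.
        return 1.0 - min(self.short_range, self.medium_range,
                         self.long_range, self.route, self.system)
```

The point of the `min` is that one compromised channel (say, a blinded camera dropping `long_range` to 0.4) should dominate the warning, however good the rest looks.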
Without this information, especially when I can be pretty certain the AP systems will already be compromised by poor weather or road surface, it is impossible to decide at what point to take over control.
If the car perceives lower certainty at long / medium range, it should slow down, or at least warn the driver, somewhat along the lines of a CPU load indicator.
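What I have in mind is a speed cap scaled by perception confidence. A minimal sketch, assuming hypothetical confidence values in [0, 1] (again, not a real API):

```python
def speed_cap(posted_limit_mph, medium_range_conf, long_range_conf):
    """Scale the target speed by perception confidence (hypothetical sketch).

    The car only drives the full posted limit when it is confident
    about both the medium- and long-range view of the road.
    """
    # The worse of the two views limits how fast it is safe to go.
    scale = min(medium_range_conf, long_range_conf)
    floor_mph = 20  # assumed minimum so the car doesn't crawl on an open road
    return max(floor_mph, posted_limit_mph * scale)
```

So on a 60 mph lane where the long-range view is only half-trusted, the target would drop to 30 mph instead of charging ahead at the limit.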
Also, repeated driver inputs, such as corrections to road positioning, should be able to bias the car's behaviour slightly. If the car knows it has reduced confidence in the near-side lane marking, the driver should be able to trim the positioning accordingly.
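That trimming could be as simple as slowly folding repeated driver corrections into a clamped lane-position bias. A sketch, with all units and limits assumed for illustration:

```python
def update_lane_bias(bias_m, driver_offset_m, gain=0.05, max_bias_m=0.3):
    """Fold a driver's steering correction into a persistent lane bias.

    Hypothetical sketch: offsets in metres from lane centre. A low gain
    means one-off nudges are ignored, while repeated corrections in the
    same direction gradually shift where the car holds its line. The
    clamp stops the bias from ever pushing the car near a lane edge.
    """
    bias_m += gain * (driver_offset_m - bias_m)
    return max(-max_bias_m, min(max_bias_m, bias_m))
```

With a gain of 0.05, the driver has to repeat roughly the same correction for many frames before the car adopts it, which is exactly the "repeated inputs" behaviour I'd want.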
Many small roads in Europe have a 60mph limit, even country lanes. The car's determination to drive at the speed limit whenever its rather short-range view suggests this is possible results in a completely worthless and dangerous algorithm.
Having random cones and other objects repeatedly appear and disappear on the IC display suggests there is no averaging of "confidence": the NNs are quite prepared to tell me that objects are appearing from nowhere. Reflections (from glass surfaces or wet roads) seem especially good at generating phantom objects.
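The kind of averaging I'd expect is standard temporal smoothing with hysteresis: an exponential moving average over the per-frame detection score, plus separate show/hide thresholds so a one-frame reflection never flickers onto the display. A sketch (all thresholds assumed, nothing here is Tesla's actual pipeline):

```python
class SmoothedDetection:
    """Exponentially smoothed confidence for one tracked object (sketch).

    The object is shown only once its smoothed confidence crosses
    show_at, and hidden only once it drops below hide_at. The gap
    between the two thresholds (hysteresis) prevents flicker.
    """
    def __init__(self, alpha=0.3, show_at=0.6, hide_at=0.3):
        self.alpha = alpha
        self.show_at = show_at
        self.hide_at = hide_at
        self.conf = 0.0
        self.visible = False

    def update(self, raw_conf):
        # EMA over per-frame detector scores.
        self.conf = self.alpha * raw_conf + (1 - self.alpha) * self.conf
        if not self.visible and self.conf >= self.show_at:
            self.visible = True
        elif self.visible and self.conf <= self.hide_at:
            self.visible = False
        return self.visible
```

With these numbers, a single-frame spike from a wet-road reflection only pushes the smoothed confidence to 0.3 and never reaches the display, while a real cone detected for three consecutive frames does.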
Do NNs intrinsically, and therefore automatically, deal with 'confidence'? Are they biased towards self-confidence or self-doubt? Is there a way of tapping into the NN to see its stress / confidence level dynamically?
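For what it's worth: classification-style networks do emit per-class scores via a softmax, which look like probabilities, but these are well known to be poorly calibrated and usually err towards over-confidence rather than self-doubt. One crude "stress" signal you can compute from any softmax output is its normalised entropy, 0 for a totally certain prediction and 1 for a maximally uncertain one. A minimal sketch (generic ML, nothing specific to Tesla's stack):

```python
import math

def softmax(logits):
    # Numerically stable softmax over raw network outputs.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def normalized_entropy(probs):
    """0 = totally confident, 1 = maximally uncertain (uniform)."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))
```

So in principle a per-object "confidence" is already sitting there to be tapped; the catch is that a net fooled by a reflection will often report high confidence in the phantom, which is why raw softmax scores alone aren't a trustworthy vigilance indicator.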