Ok - granted this title is a little clickbaity, but hear me out.
While no one knows for sure how v12 works, Tesla (I suppose mainly Elon) has claimed that it is 'photons in, controls out'. This implies that there are *no* heuristic rules anywhere in the stack. Every action taken by v12 is the result of learning how (good) humans drive.
This is clearly true in many situations, where v12 seems surprisingly human-like. However, two counter-examples make it clear that there are still heuristics in place, and these heuristics overrule whatever the NN might want to do:
- Stop signs. There's no way they trained this thing on millions of examples of humans stopping at stop signs and it learned to come to a complete stop every single time. (Yes, I know this was requested by the NHTSA. But my point remains - that hard-coded rule must still be in place today.)
- Creeping at intersections. v12, just like earlier versions, stops way back from the actual intersection first, and then begins a very gradual creep. No human drives like that. I understand why - it's clear that the b-pillar cams are the Achilles' heel of FSD. But even a driver with that same limited visibility wouldn't stop that far back first and then begin a gradual creep.