Re: crossing the double yellow and trying to pass a line of cars at a stop sign, almost causing a head-on collision (above):
1. Let's say this behavior somehow gets "trained out," based on, IDK, continued testing. How can anyone be sure it hasn't just been suppressed enough to pass the tests that were actually run, while a latent condition still sits in the NN, waiting to trigger the behavior again when, IDK, the angle of the sun is different from every previous drive, some latent set of nodes gets happy, and boom, it does it again? (See the toy sketch after this list.)
2. Why shouldn't I be terrified that there are 1,000 latent conditions lurking in the NN, just waiting for that lucky day when the car drives me off a cliff? Really, nobody can probe the NN for the exact function of every combination of inputs and outputs? How can anyone validate anything? What fundamental understanding am I missing? (Back-of-envelope on the input space below.)
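
On question 1, to make "latent condition" concrete: here's a toy, hand-built two-unit ReLU "net" over a single scalar input (think: normalized sun angle). Every weight and the trigger value are invented for illustration; this is a sketch of the failure mode, not anyone's real driving stack. It passes a dense 1,000-point test sweep and still harbors a dormant trigger:

```python
import numpy as np

TRIGGER = 0.5371234  # hypothetical input value nobody happened to test

def decide(x):
    """Return True if the toy 'net' decides to cross the double yellow."""
    h1 = np.maximum(0.0, x + 1.0)                          # always-active feature
    h2 = np.maximum(0.0, 1.0 - 1e6 * np.abs(x - TRIGGER))  # dormant feature, ~2e-6 wide
    score = -1.0 * h1 + 10.0 * h2                          # h2, when awake, overwhelms h1
    return score > 0.0

# "Validation": sweep the input space with 1,000 evenly spaced tests.
tests = np.linspace(0.0, 1.0, 1000)
assert not decide(tests).any()   # every test passes -> behavior "trained out"?

# The latent condition is still sitting there, two millionths of a unit wide:
print(decide(TRIGGER))           # True -> boom, does it again
```

The point isn't that real nets hide a literal magic number; it's that nothing in a finite test sweep can distinguish "trained out" from "narrowed below your sampling resolution."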
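
On question 2's "probe the NN for the exact function of every combination of inputs and outputs," a quick back-of-envelope shows why exhaustive probing is off the table. The sensor resolution below is my assumption, not any particular car's; the conclusion doesn't depend on it:

```python
import math

# One 8-bit grayscale camera frame at an assumed 1280x960 resolution.
pixels = 1280 * 960                          # 1,228,800 pixels per frame
digits = pixels * math.log10(256)            # log10 of 256**pixels possible frames
print(f"~10^{digits:,.0f} distinct frames")  # ~10^2,959,245
```

For scale, the observable universe holds roughly 10^80 atoms, and that count ignores color, video, and every other sensor feeding the net. So any honest validation has to be statistical; enumerating the input space isn't just impractical, it's unphysical.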