I have a question on this oft-expressed idea/assumption/sentiment -- that FSD becoming statistically safer than a human will drive massive rapid public acceptance.
It seems to me, given how almost every fire in a Tesla makes USA national news, that FSD is going to have to be almost perfect - far better than "just beats a human". Every single fatal accident involving FSD is guaranteed to make the national news at first, reinforcing what the general public has been preconditioned by the FUD to believe: that you cannot trust the robots.
Stats show that in the USA in 2021, we averaged over 100 fatal crashes a day (roughly 43,000 traffic deaths for the year). If even 1 of those per WEEK is related to FSD in its first release year, we will be hit over and over, every week, with a national-level story of robot-caused deaths. And even if the news stories note "statistically these cars are safer" in a quick sentence at the end, statements like that are, in the viewing public's collective mind, more than offset by a video of a burning Tesla, an ambulance, and grim-looking first responders.
The argument that "the statistics show it's safer per mile" convinces very few airplane-fearers to take a plane rather than a car, and I think that same argument will convince very few low-information drivers that taking their hands off the wheel and putting the AI in charge is safer.
I grudgingly think the rollout of FSD will therefore be slow, helped along only as public attitudes change, and that pace may well be glacial. I am actually encouraged that other automakers are advertising at least limited autonomy, as this may help drive the shift in public attitude needed to make us all safer in the long run.
Can someone talk me off the ledge here?