Yes, we all want autonomous cars to reduce deaths. But the question is: do we want a single company deciding how much risk to take on the way there, with no public auditing? Do we want them to be able to do it without producing any public data on the risk/reward of this public operation?
Tesla has already indicated that FSD is a risk: they require a "safety score" (their name) in order to get "FSD." Hence, we know for a fact that they are putting the public at some increased risk.
What we don't know: How did they evaluate the acceptable risk? How does their tool identify low-risk testers? What benefits is Tesla getting from the risk they are taking? What benefits is society getting? Are there alternatives to the risk?
This is exactly the role regulators play all over the USA. The FAA, FDA, EPA, etc. do this all the time: they audit whether what a private company is doing is a reasonable risk/reward trade. Yet when it comes to Tesla and FSD, it seems many people want to allow them to make all decisions internally and opaquely, and believe that public FSD beta testing must be some magic tipping point on the way to autonomous cars that are safer than humans. We have zero proof of that, and Tesla isn't even trying to argue it.
There isn't a single company, that I'm aware of, deciding how much risk to take. There are multiple companies testing various SAE levels on public roads, all under the regulations of various agencies. They are required to report incidents to NHTSA, and there are reports from NHTSA on that data. There have been recent discussions on these forums about the number of accidents on Tesla AP/NoA/FSD in relation to the number of cars on the road, miles driven, etc., based on the most recent June 2022 NHTSA report. One interesting finding in that report is that Tesla is one of the few companies that provides very complete data, because telemetry data is kept on the vehicle and sent to Tesla remotely. The report showed that other companies relied almost solely on user reports, or that telemetry wasn't available after the incident.
How did they evaluate the acceptable risk?
Like other companies testing various SAE levels, the primary goal (prime directive, if you will) is safety: don't hit things, and follow the traffic laws. They start in simulations, then move to test tracks, then to limited locations with select testers (employees), and then, when they have all the data they can collect from controlled locations, they move the test to uncontrolled locations (either with safety drivers for higher SAE levels, or public testers for lower SAE levels). They collect data either with our knowledge (pressing a report button) or without it (telemetry is automatically uploaded to Tesla). If incidents increase after an update, the software rollout halts, and in some cases is rolled back. We've seen this with previous updates. A rough sketch of that kind of gating logic is below.
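To make the halt-on-regression idea concrete, here's a minimal sketch of how such a gate could work. To be clear, this is not Tesla's actual pipeline; the function names, thresholds, and numbers are all hypothetical, just to illustrate the concept:

```python
# Hypothetical halt-on-regression gate for a staged software rollout.
# All names, thresholds, and numbers here are invented for illustration;
# nothing below reflects Tesla's actual internal process.

def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Normalize incident counts by fleet mileage so builds are comparable."""
    return incidents / (miles / 1_000_000)

def should_continue_rollout(baseline_rate: float,
                            new_rate: float,
                            tolerance: float = 1.10) -> bool:
    """Continue only if the new build's incident rate stays within
    10% of the previous build's rate (a made-up tolerance)."""
    return new_rate <= baseline_rate * tolerance

# Example: the previous build logged 12 incidents over 40M fleet miles;
# the new build has 9 incidents over its first 20M miles.
baseline = incidents_per_million_miles(12, 40_000_000)   # 0.30 per M miles
candidate = incidents_per_million_miles(9, 20_000_000)   # 0.45 per M miles

if not should_continue_rollout(baseline, candidate):
    print("Incident rate regressed: halt rollout, consider rolling back.")
```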
How does their tool identify low-risk testers?
Much discussion has been had regarding the Safety Score program. Some think it's an excellent tool, and others think it's a charade. I think the answer lies somewhere in the middle (like most things). The Safety Score program prepares people by making them pay more attention to their driving habits. People become hyper-aware of the metrics being monitored, such as following distance and aggressive stopping/turning. The most common phrase you hear is "driving like a grandma". Does it really gauge good vs. bad drivers? I can't answer that, except to say that it forces people to watch their driving more than they normally would, and they have to do so for some number of miles over some amount of time. In some cases, it may change habits that some people had. It also weeds out some drivers - I've read many on the forums who have shown disgust with the program and opted out so they could drive normally. So, the Safety Score has had some effect. On the other side, I also believe it's there for regulatory scrutiny. There must be some CYA for any company testing hardware/software in public. Regulations, currently, do not require a safety driver for L2 testing. But I'm sure there must be some gatekeeping that has to occur to satisfy watchful eyes.
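Just to illustrate the mechanics (and only the mechanics), here's a toy version of a metric-based driving score. The metrics mirror the kinds of behaviors mentioned above, but the weights, penalty formula, and qualifying threshold are all invented; this is not Tesla's published Safety Score formula:

```python
# Toy driving-score gate. The metric names echo behaviors discussed in
# the post (hard braking, aggressive turning, following distance), but
# every weight and threshold here is made up for illustration.

QUALIFY_THRESHOLD = 95.0  # hypothetical cutoff for beta access

def toy_safety_score(hard_brakes_per_100mi: float,
                     aggressive_turns_per_100mi: float,
                     pct_time_tailgating: float) -> float:
    """Start from a perfect 100 and subtract weighted penalties."""
    score = 100.0
    score -= 4.0 * hard_brakes_per_100mi
    score -= 3.0 * aggressive_turns_per_100mi
    score -= 0.5 * pct_time_tailgating
    return max(score, 0.0)

# "Driving like a grandma" vs. driving aggressively:
grandma = toy_safety_score(0.2, 0.5, 2.0)      # 96.7 -> qualifies
aggressive = toy_safety_score(3.0, 4.0, 15.0)  # 68.5 -> does not

print(grandma >= QUALIFY_THRESHOLD)     # True
print(aggressive >= QUALIFY_THRESHOLD)  # False
```

The point of a gate like this isn't precision; it's that simply knowing the metrics are monitored changes behavior, which matches what people report.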
What benefits is Tesla getting from the risk they are taking?
All but the most fervent naysayers will admit that the FSD Beta program has improved over time. Regressions do occur, but they are usually ironed out in a future update; new issues emerge, and then they are ironed out in turn. Phantom braking is getting better for many people, and confidence on turns is increasing. The benefit Tesla gets is data. They get to see how their code and neural nets are working in the real world, and they make adjustments and tweaks to the software/hardware based on the data they've received. Their end goal is to have a product that drives more safely than a human, with minimal human interaction. And, obviously, to charge a price for this feature.
What benefits is society getting?
Cars that drive more safely than a human. Fewer accidents. Fewer deaths. And possibly, less traffic.
Are there alternatives to the risk?
There are always alternatives. We could stay with the horse and buggy and never get the next technological advancement. Or we could slow down progress and take our time getting there - perhaps we'd have FSD in 50 or so years. But mankind is on a road of discovery, and I don't think it's going to slow down.