Of course they should. The regulators are not going to force everyone to buy a Tesla simply because it's 5X safer than a human! They wouldn't do that any more than they forced everyone to buy a Volvo in the 1970s just because Volvos were multiple times safer than anything else available.
You misread. I posited that Tesla's tech is 10X safer than a human and 5X safer than a competitor's. You would need to argue that the competitor's tech, being just 2X safer than a human, is good enough. In either case, a regulator is never forcing everyone to buy a Tesla; the competitor always has the option to sell a vehicle to be driven by humans, without the questionable autonomy tech.
Now some people have speculated that once autonomy tech becomes 10X safer than a human, human driving would be banned. That would take us into a whole new ballgame, because demonstrating that your tech is merely as good (1X) as a human would no longer be good enough; it would be banned along with unaided human driving. In this situation, it becomes debatable whether even 2X safer than a human is good enough for regulators. Indeed, some human drivers could argue that they themselves are 2X safer than the average human driver and ought to be allowed to drive unaided as well. This would be a tough situation for the competitors, because they would no longer have the option of marketing vehicles for human driving.
Personally, I doubt that governments will ever get to the point of banning unaided human driving. Market forces will strongly favor the best autonomy tech, and it will become ubiquitous without a ban. As the fraction of vehicles on the road with 10X-or-better autonomy rises, the roads will become incrementally safer, even for hapless human drivers. For example, if you're the only human driver on the road and every other vehicle has autonomy good enough to avoid an at-fault accident with you, then the only accidents you get into are the ones where you yourself are at fault. So your total risk of having an accident falls as the risk from other vehicles drops toward zero. To the extent that unaided human driving becomes less risky over time in response to autonomy uptake, governments have even less motive to ban it and face the ire of autonomy-resistant drivers.
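To make that "lone human driver" point concrete, here's a minimal back-of-envelope sketch. All the numbers are invented for illustration, and it assumes (as above) that autonomous vehicles avoid all at-fault collisions with the human driver:

```python
# Hypothetical illustration (numbers invented): how a human driver's total
# accident risk falls as vehicles that never cause at-fault collisions
# replace the other human drivers on the road.

def human_driver_risk(own_fault_risk, other_fault_risk, autonomy_fraction):
    """Total annual accident risk for an unaided human driver.

    own_fault_risk:    baseline risk of accidents this driver causes
    other_fault_risk:  baseline risk of being hit by another at-fault human
    autonomy_fraction: share of other vehicles running autonomy assumed to
                       avoid all at-fault collisions
    """
    return own_fault_risk + other_fault_risk * (1 - autonomy_fraction)

baseline   = human_driver_risk(0.02, 0.02, 0.0)  # all-human roads: 0.04
mostly_avs = human_driver_risk(0.02, 0.02, 0.9)  # 90% autonomous: 0.022
print(baseline, mostly_avs)
```

The human driver's own-fault risk never goes away, but the half of their risk that came from other drivers shrinks in proportion to autonomy uptake, which is the incremental safety gain described above.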
So I guess I net out to a position where most governments continue to allow unaided drivers but hold autonomy tech to a higher standard. Governments will likely want to protect unaided human drivers from accidents in which another vehicle's autonomy is at fault. Thus, autonomous tech would need to prove that it can avoid most accidents with human drivers, and especially any at-fault accidents. Of course, autonomy must also avoid accidents with pedestrians, animals, and stationary objects. I suspect that any autonomous tech that can achieve this is likely more than 2X safer than human drivers.

Indeed, any autonomy tech that is merely 1X as safe as a human on average is probably less safe than humans in about as many scenarios as it is more safe. This raises the question: in which scenarios would regulators tolerate an autonomy system that is less safe than a human? If the answer is none, then the average safety must be much higher than 1X. Indeed, NHTSA appears to be doing some data mining on Tesla data to find out where, if anywhere, Tesla's autonomy tech might be inferior to human drivers. If they find anything, Tesla will have to improve on it and demonstrate superior performance going forward. So nominally I think Tesla needs to be at least 2X safer than a human on average just to pass current regulatory scrutiny.
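The average-vs-scenario distinction is easy to see with a toy calculation. All numbers below are invented; the only point is that a system can be exactly 1X as safe as a human on average while being clearly worse in a minority of scenarios:

```python
# Hypothetical illustration (all numbers invented): a 1X-average system
# that is still much less safe than a human in one scenario.

# (scenario, share of miles, safety multiple vs. a human in that scenario)
scenarios = [
    ("highway",     0.6, 3.0),    # 3X safer than a human
    ("city",        0.2, 1.0),    # on par with a human
    ("bad weather", 0.2, 1 / 3),  # 3X *less* safe than a human
]

# Simplifying assumption: humans have the same per-mile accident rate in
# every scenario. Then the system's relative accident rate in a scenario
# is share / multiple, and the overall safety multiple is the reciprocal
# of the sum across scenarios.
overall_multiple = 1 / sum(share / mult for _, share, mult in scenarios)
worst_multiple = min(mult for _, _, mult in scenarios)

print(f"overall: {overall_multiple:.2f}X")   # 1X on average
print(f"worst scenario: {worst_multiple:.2f}X")  # well below 1X
```

If regulators refuse to tolerate any scenario below 1X, the bad-weather deficit has to be closed, which pushes the achievable average well above 1X, consistent with the 2X-or-better threshold suggested above.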
Meanwhile, Ford Blue needs to be able to navigate a bend in the road.