This is the Musk/Hotz argument, and I'd guess one adopted by much of the Silicon Valley community working on deep-learning vision AI. The OP
@Trent Eady is a Tesla-positive writer on Seeking Alpha, whose opinion certainly aligns with this vision.
It is not a bad argument, but it should be noted there is a lot of hyperbole behind it - and a self-serving purpose. Let me explain.
The self-serving purpose is that this "vision AI" community wants to - and believes it can - jump ahead of the more comprehensive autonomous driving projects (think multi-redundant sensor suites, redundant computers and systems, rigorously implemented and tested driving policy, responsibility for the driving taken by the car company, etc.) through cheap sensor suites (at the extreme, think cell-phone-level cameras), fleet learning and especially deep learning for driving (think Tesla FSD and comma.ai, with no responsibility for the driving taken by the maker of the system).
Why is this a self-serving purpose? Because this vision AI community does not have the resources or the time (they are behind on both) to go the comprehensive route. To jump ahead, they must rely on aggressive deep learning and fleet validation, which they see as their opportunity - the disruptive opening that more traditional players are missing with their redundant sensors and lidars and manual labelling/teaching and what not.
Taken to the extreme, their idea goes something like this: strap cheap cameras on cars, hook them up to a barely-powerful-enough generic CPU/GPU running a neural network, drive them a lot (on the roads and perhaps in simulators) so the NN learns to drive, deploy this to a massive fleet with data collection, hand the fleet data to regulators (from shadow mode and/or real mode), rinse and repeat enough times and we're there. That's basically the disruptive idea. It is not a bad idea.
But on the regulatory side, the success of this disruptive idea depends on selling regulators the notion that fleet data from this approach will be sufficient proof of the system's safety. Not controlled "clinical" trials, not a comprehensive approach, but an aggressive machine-learning approach relying on commodity hardware deployed as quickly as possible across vast consumer fleets.
And that's one reason guys like Musk/Hotz/Eady talk about how unethical it would be to deny this route: the success of the concept they are rooting for depends on it. I'm not saying they don't believe it; I'm just saying this additional angle colors the opinion a lot. Just as a company with a more rigorous approach might advocate for more rigorous testing prior to approval.
Now, of course all of these players will mix and match. Some "vision AI" guys will have more secondary sensors and redundancy than others - while some traditional players will certainly also employ techniques similar to those of the vision AI guys. In the end they might even all arrive at the same place, having just taken wildly different routes to get there.
We shall see who succeeds, who gets there first and who is right. I do share the concern about what an early, high-profile crash might do to autonomous driving efforts. Let's hope no one moves too fast and sets the whole field back for everyone.