It's a "copy the incompetent" technique. It is literally impossible to make a good self-driving car this way.
Behavior cloning is fine. The problem is, quite specifically, whose behavior you're cloning, and who evaluates the behavior being cloned.
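To make the point concrete, here's a minimal sketch of behavior cloning as plain supervised learning: fit a policy to (state, action) pairs logged from demonstrators. Everything here is invented for illustration (the toy state encoding, the linear policy); the point is that the fitted policy faithfully reproduces whoever supplied the data, good or bad.

```python
# Minimal behavior-cloning sketch: supervised learning that maps observed
# states to the demonstrator's actions. If the logs come from bad drivers,
# the policy learns bad driving -- the technique copies its data source.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: state = (speed, gap to lead car), action = brake pressure.
# Pretend these logs came from the "average driver".
states = rng.normal(size=(1000, 2))
actions = states @ np.array([0.3, -0.8]) + rng.normal(scale=0.1, size=1000)

# Fit a linear policy by least squares -- the essence of behavior cloning.
policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

def act(state):
    """Imitate the demonstrator: predict their action for this state."""
    return state @ policy
```

The learned `policy` recovers whatever mapping the demonstrators used; nothing in the objective asks whether that mapping is safe.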
If you're going by "the majority of people on the road", well, we already know the majority of people on the road are bad drivers.
This is correct. Unfortunately, they're getting data from bad human driving, and that data is then being evaluated by bad human drivers.
That's all correct, but they're training it wrong. (To be clear, they're probably doing more or less OK on the vision / object-identification side; it's the driving-policy decision side where I don't see any work.)
To be clear, they can take the same neural network architecture and retrain it, probably in a matter of weeks, using competent people -- which may put them well ahead of any "competitors" -- but they haven't started yet.
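The "retrain with competent people" idea amounts to filtering the demonstration logs to a vetted set of drivers before fitting the same network. A hypothetical sketch, where the driver IDs and the expert whitelist are invented for illustration:

```python
# Hypothetical sketch: keep the architecture, change the demonstrators.
# Filter logged (driver_id, state, action) records to vetted experts
# before retraining; all names here are made up for illustration.

def filter_to_experts(logs, expert_ids):
    """Keep only (state, action) pairs recorded from vetted drivers."""
    return [(s, a) for driver_id, s, a in logs if driver_id in expert_ids]

logs = [
    ("driver_a", (30.0, 12.0), 0.1),  # average driver
    ("pro_1",    (30.0, 12.0), 0.4),  # vetted professional
    ("driver_b", (55.0, 5.0),  0.0),  # tailgater
    ("pro_2",    (55.0, 5.0),  0.9),  # vetted professional
]
expert_data = filter_to_experts(logs, expert_ids={"pro_1", "pro_2"})
# expert_data now holds only the professionals' state/action pairs,
# ready to retrain the same policy network on better demonstrations.
```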
As far as I can tell, he's totally unqualified to make those decisions. Zero relevant qualification on his LinkedIn profile -- zero. They need someone who actually knows what the vehicle really ought to do, not some random engineer.
They're making path-planning decisions based on "average driver behavior" (terrible) and on the random whims and tastes of engineers at Tesla with no particular driving qualification (perhaps better, but still very bad). That's just wrong. Hire some professionals to answer the path-planning questions, and let the engineers stick to implementing the answers.
I don't think the field of safe driving commands huge salaries; Tesla should be able to afford a few specialists.