I think the disparate views we're seeing on driver assistance are just an extension of the age-old problem that a lot of innovative products have: designing an effective man-machine interface.
I managed a fairly large aircraft procurement about 16 or so years ago, where the same problem arose. The technical difficulties in presenting an operator with a large amount of information from various sensors and weapons systems, and allowing the operator to interact with it all effectively, were huge. It came to a head when the techies presented a large touch screen as their preferred solution. I simply couldn't believe they could be so damned stupid, as there was absolutely no way their interface could be operated in an aircraft in combat, with the crew wearing normal flying clothing and SE.
The answer was to stick some of these interface designers in the aircraft, fly them around, and show them how challenging it is to make small, precise movements on a screen whilst subject to the vibration, acceleration and so on that are normal in flight. It was apparent that the design team simply had no understanding of the environment, hence their enthusiasm for their preferred solution. To their credit, they then built a simulator so they could try out different interfaces, and they pulled in a couple of retired aircrew as advisors.
By the same token, some designers of things like phone apps need to understand that not everyone has prehensile thumbs that are only 3mm wide at the tip, and that not everyone has 20/20 near vision. Haptic feedback on any user control is extremely helpful. For decades, aircraft secondary controls were designed so that pretty much everything could be done by touch. Deliberately removing the ability to input information by fingertip feel alone inevitably makes things less precise, and more challenging for the user.
It often seems to me that the Model 3 is very similar to that prototype cockpit I was shown all those years ago. The technology is impressive, but the driver's ability to interface with it isn't as good as it could be. It's almost as if those designing the interface don't drive, or at least don't drive an RHD car when they're right-handed, so can't use their dominant hand to operate the secondary controls. Instead of designing the cabin around the most effective way to present the driver with key information, including that from the driver assist systems, and optimising the way the driver can interact with it, the cabin seems to have been designed around the idea of a clean look, with many of the things related to actually driving the car given a lower priority.
It may be that this shift in emphasis away from the driver's needs is deliberate, as greater autonomy comes closer, but right now the car has to be driven by a driver. I can understand why organisations like NCAP are critical of the man-machine interface. Whilst it may well be a good solution at some point in the future, when the level of autonomy is higher than it is now, it seems sub-optimal for this interim period, when all of the self-driving systems are just driver assistance features. I suspect that's also why some (most?) other EV manufacturers are sticking with displays in the driver's eye line, or are using HUDs, at least for now. During this interim phase, before we get fully autonomous vehicles, there's an even greater need to present the driver with clear information, so that the driver can quickly understand what the driver assist features are doing and easily respond if they aren't doing exactly what's wanted.