As of 2024, almost all machine-learning hardware implements neural networks as binary digital logic machines. They are not built from analog neurons with associated DACs, multipliers and ADCs (or specialized nonlinear analog decision comparators or the like). Nor are they pulse-density analog machines.
There have indeed been such concepts, and research performed in that vein - I'm interested in it myself and think it could have a real future - but that isn't how it's being done in any serious large-scale commercial ML/NN deployment that I'm aware of.
The ML computers are very high speed arrays of familiar synchronous-logic processing units making up the "NPU", architecturally specialized to perform multiply-accumulate (dot product) operations, with associated high-speed memory to hold dynamically changing intermediate results as well as the NN "program" in the form of the weights. These use relatively coarse fixed-point-like numerical representations (I read that Tesla came up with their own preferred number format, but I don't know where in the training/inference universe it's actually being used). The closest widely-deployed silicon that met these requirements, as of a few years ago, was found in graphics cards and their core GPUs. That's why you often hear and read about giant GPU training clusters, and why there's so much business for Nvidia, the leading example of a company that somewhat lucked into the huge hardware market for the current AI boom.
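To make the multiply-accumulate idea concrete, here's a minimal sketch in Python of what one of those NPU processing elements does, conceptually, billions of times per second. This is purely illustrative - not any vendor's actual design - assuming int8 inputs with a wider 32-bit accumulator, which is a typical fixed-point inference arrangement:

```python
def int8_dot_product(activations, weights):
    """Dot product of two int8 vectors, accumulated in 32 bits.

    A hardware MAC array does many of these in parallel; the wide
    accumulator prevents overflow while summing small products."""
    assert len(activations) == len(weights)
    acc = 0  # hardware would hold this in a 32-bit accumulator register
    for a, w in zip(activations, weights):
        assert -128 <= a <= 127 and -128 <= w <= 127, "values must fit int8"
        acc += a * w  # one multiply-accumulate (MAC) step
    return acc


def relu_requantize(acc, scale=2**-7):
    """After the dot product: cheap nonlinearity, then rescale the wide
    accumulator back down to int8 range for the next layer.

    The scale factor here is an arbitrary illustrative choice."""
    out = max(acc, 0)                 # ReLU activation
    out = int(out * scale)            # requantize toward int8 range
    return max(-128, min(127, out))   # saturate to int8
```

The whole "coarse fixed-point" point is visible here: everything is integer math plus a cheap rescale, which is far smaller and lower-power in silicon than full floating-point units.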
With each generation of development beyond the early graphics-card arrays, I think the architecture is becoming more refined toward purpose-built ML computing. I personally don't know a lot about this, nor at what point people will stop calling them "GPUs". Tesla's own Dojo project is a non-Nvidia example of custom silicon and extensive support hardware, but again it's an extremely high bandwidth digital computer module intended for efficient expansion, and in the meantime Tesla is buying tons of Nvidia along with most if not all of the other big players.
On a smaller scale, but with impressive computing power and efficiency, the same comments hold for the inference processors within the car. Tesla designed a fairly impressive and power-efficient autopilot computer for HW3, a better (but higher-power) one for HW4, and I wouldn't be surprised if they already have prototype silicon for HW5. In terms of volume deployment I think Tesla is currently the leader in this regard. Nvidia, Qualcomm, probably Intel with Mobileye's EyeQ chips, and others including Huawei et al. in China, are working on these things.
Most of these projects are not just silicon processor development; I think these companies all have in-house self-driving platform efforts beyond just selling chips or computer boards to carmakers. I think some of the recent and existing-generation robotaxi companies don't have particularly efficient in-car computing; we hear about trunks stuffed full of computers and cooling equipment. But their volume is currently low and I'm sure they will be taking advantage of supplier developments in this space.
To finish this by throwing in the requisite v12 content: a big question hanging over the v12 approach is whether the couple of million HW3 computers already out there, or even the faster HW4, have enough compute (inference) power to achieve the goal. People make a lot of pronouncements here in the forum, but I don't think the answer is completely known even inside Tesla, much less by the rest of us. There's a lot of talk these days about the ratio of training compute and data size to the inference compute assets. The field is moving rapidly, and there are very encouraging reports that a massive and properly targeted training effort can yield a very compact, efficient and capable inference implementation, i.e. one that could work very well on HW3. The counterpoint is that the training investment needed to reach that goal could be too high, and that a more tractable training infrastructure (and training cycle time) would be enabled if the in-car hardware were better than HW3/HW4 by some factor. I'm far from knowledgeable enough to make a prediction, but of course I have my hopes!
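For anyone who wants to see why the "enough inference power" question is hard to answer from the outside, here's a back-of-envelope sketch of the budget arithmetic involved. All the numbers and the function itself are my own illustrative assumptions, not Tesla specs; it also assumes each weight is used once per frame (fully-connected style), whereas conv layers reuse weights and cost more ops per parameter:

```python
def fits_inference_budget(params_millions, frames_per_sec, budget_tops,
                          utilization=0.5):
    """Rough feasibility check: can a network of this size run at this
    frame rate within a fixed compute budget (in tera-ops/sec)?

    Assumptions (all illustrative): ~2 ops per parameter per frame
    (one multiply + one add), and only a fraction of peak TOPS is
    achievable in practice (the utilization factor)."""
    ops_per_frame = params_millions * 1e6 * 2
    needed_tops = ops_per_frame * frames_per_sec / 1e12
    return needed_tops <= budget_tops * utilization
```

The punchline is that the answer swings entirely on the unknowns: how big the trained network ends up, what frame rate is really needed, and what utilization the in-car silicon actually achieves - which is why I don't think anyone outside (or maybe even inside) Tesla can state it with certainty.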