So let's look at this from a business standpoint. Tesla knew 2 years ago or so that they needed massive compute power to train the NNs. What were their options?
1. Try to use off-the-shelf hardware (servers/cloud/GPUs etc) and deploy massive numbers of these.
2. Find someone who has built the kind of custom hardware they need.
3. Build it in-house.
#1 isn't efficient .. general-purpose CPUs/GPUs can do it, of course, but at a huge power expense (which means $$$$ and time). #2 there are limited options; pretty much the only choice is Google. #3 means dedicating serious resources to the design. No doubt there was some "ooh we want to build that" from the design team (after all, they already had chip design experience from HW3), but in-house designs always overrun, and have a huge long-term maintenance cost.
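To make the "$$$$ and time" point in #1 concrete, here's a minimal back-of-envelope sketch. Every figure in it (unit counts, wattage, prices, duration) is an illustrative assumption, not anything Tesla or Google has disclosed; the point is only that power and hardware costs scale with fleet size, so a less efficient architecture needs more units and burns more money.

```python
# Back-of-envelope cost sketch for a training cluster.
# ALL numbers below are illustrative assumptions, NOT real Tesla/Google figures.

def training_cost(num_accelerators: int,
                  watts_per_unit: float,
                  utilization: float,
                  months: float,
                  dollars_per_kwh: float,
                  dollars_per_unit: float) -> dict:
    """Rough capital + electricity cost for running a cluster for `months`."""
    hours = months * 30 * 24
    energy_kwh = num_accelerators * watts_per_unit * utilization * hours / 1000
    return {
        "hardware_$": num_accelerators * dollars_per_unit,
        "electricity_$": round(energy_kwh * dollars_per_kwh),
    }

# Hypothetical comparison: a large fleet of off-the-shelf GPUs vs. a smaller
# fleet of purpose-built accelerators assumed to do the same training work.
gpu_cluster    = training_cost(10_000, 400, 0.7, 12, 0.10, 10_000)
custom_cluster = training_cost(3_000, 500, 0.7, 12, 0.10, 15_000)

print("off-the-shelf:", gpu_cluster)
print("custom silicon:", custom_cluster)
```

Under these made-up assumptions the smaller custom fleet wins on both hardware and electricity; swap in your own numbers and the conclusion can flip, which is exactly the bet Tesla had to make.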
So, why didn't they use Google? I think it's pretty obvious. They would be taking (indirectly) a dependency on their #1 competitor .. Waymo. From a business perspective, this is high-risk. Also, the Tesla mind-set and company culture are built around vertical integration .. they want to own the entire stack, so they can fine-tune and control it going forward as a competitive edge over less integrated competitors.
I don't know if this will turn out to be a good idea or not, but I'd put money on something like the above being part of their decision-making process.