Cool. My understanding also is that there is no “learning” in the vehicle - the Model is computed centrally - then runs on individual cars - there is no local AI.
Thing is, Everybody Says That. But I have my doubts. First: I do have a background in some serious DSP. And for some time I kept up with the beginnings and development of neural networks.
Feedback in neural networks, where the weights interior to the network change and a selection of the network's outputs feed back into the input data array, has been part of the model since Day One. Heck, we of the wetware persuasion do this all the time.
Doing this sans constraints is likely a good way to descend into madness. But multivariate constraints on multivariate input, multivariate output, and multivariate feedback have been a thing for a very, very long time. One can constrain interior state variables, the range of inputs, the range of outputs, and the ranges of feedbacks pretty much as one sees fit, without even resorting to what goes on in the various stages of neural networking. And, yeah, I've done this, non-neural-network style, on and off as part of work for decades.
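To make that concrete, here's a minimal sketch of what I mean (my own toy, not anybody's real system; all the weights, dimensions, and clamp ranges are made up): a few interior state variables, a couple of outputs fed back into the input array, and hard constraints clamped on both the state and the outputs at every step.

```python
import numpy as np

# Toy constrained-feedback system: 3 external inputs, 2 outputs that
# feed back into the input array, 4 interior state variables.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 5))   # maps [inputs + fed-back outputs] -> state
W_out = rng.normal(size=(2, 4))  # maps state -> outputs

def step(state, u, y_prev):
    x = np.concatenate([u, y_prev])          # inputs plus fed-back outputs
    state = np.tanh(W_in @ x) + 0.5 * state  # interior dynamics with memory
    state = np.clip(state, -1.0, 1.0)        # constrain interior state vars
    y = np.clip(W_out @ state, -2.0, 2.0)    # constrain the outputs, too
    return state, y

state = np.zeros(4)
y = np.zeros(2)
for t in range(50):
    u = np.array([np.sin(0.1 * t), 0.5, 1.0])  # some arbitrary external input
    state, y = step(state, u, y)
```

The point isn't the particular numbers; it's that the feedback loop never blows up because every variable is clamped to a range chosen by the designer.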
It gets even weirder. I mentioned the word, "State Variables". Let's see if I can give an example. There's this class of problems called, "System Identification". A classic one was this Soviet-era problem involving predicting the water level of a large river system with feeder rivers in Northern Siberia, the better to predict flooding, set dam flow levels, and all that. One might have a bunch of rain gauges scattered across several thousand square miles as basic input and some automated water level monitors here and there. But complete monitoring was pretty much impossible: Way too much area and not enough money to instrument it all.
So, create models. Lots of models. Create the models algorithmically. The main intent is to feed input data into the model, look at the data coming out, compare that data with real, measured data from the field, then run algorithms that re-run the model as a function of time: change weights, function types, and the number of time-varying state variables, modifying the model to minimize the error between the predicted and actual outputs. Do enough of this, stepping the complexity of the model up as one goes, and, eventually, it can be used to predict what the various water levels in the river system are going to be due to rainfall here, there, and everywhere, given a starting configuration of water levels at T=0. As I said, this is a problem in system identification.
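A stripped-down illustration of the idea (my own toy problem, nothing to do with the actual Soviet model): posit a "true" river whose level follows a simple recurrence driven by rain, pretend we can only measure rain and level, and then search over candidate model parameters to minimize the error between predicted and measured levels.

```python
import numpy as np

# "True" system (unknown to the modeler): level[t+1] = a*level[t] + b*rain[t]
rng = np.random.default_rng(1)
true_a, true_b = 0.9, 0.4
rain = rng.random(200)
level = np.zeros(201)
for t in range(200):
    level[t + 1] = true_a * level[t] + true_b * rain[t]

def simulate(a, b):
    # Run a candidate model over the same rainfall record.
    pred = np.zeros(201)
    for t in range(200):
        pred[t + 1] = a * pred[t] + b * rain[t]
    return pred

# Brute-force search for the (a, b) that best matches the measurements.
best, best_err = None, np.inf
for a in np.linspace(0.5, 1.0, 51):
    for b in np.linspace(0.1, 0.8, 71):
        err = np.sum((simulate(a, b) - level) ** 2)
        if err < best_err:
            best, best_err = (a, b), err
```

Real system-identification work uses far cleverer searches than a grid, and steps the model structure up in complexity as it goes, but the minimize-the-prediction-error loop is the same shape.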
Thing is, the model itself has state variables that change over time.
But those state variables don't have to have a basis in reality. They may not be water flow rates, pond levels, resistance to water flow, or anything else: so long as the end result works, what values those variables take doesn't matter. Further, if constraining those state variables makes the model work better, then, well, why not?
Interestingly, the starting values for those not-based-in-reality state variables can make big differences in what the end results might be.
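Here's about the simplest demonstration of that I can think of (a textbook logistic map standing in for "some nonlinear model with an internal state variable"; the specific numbers are arbitrary): two runs of the identical constrained model, differing only in the starting value of one state variable, end up in noticeably different places.

```python
# Identical model, identical constraints, two different initial state values.
def run(s0, steps=30):
    s = s0
    out = []
    for _ in range(steps):
        s = 3.7 * s * (1.0 - s)        # nonlinear state update (logistic map)
        s = min(max(s, 0.0), 1.0)      # constrain the state variable to [0, 1]
        out.append(s)
    return out

a = run(0.20)
b = run(0.21)
diffs = [abs(x - y) for x, y in zip(a, b)]
```

A starting gap of 0.01 in the state variable opens up into a much larger gap in the trajectories, even though nothing about the model itself changed.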
So, now let's switch back to Tesla. Big, complicated algorithms making now-you-see-it, now-you-don't decisions on which way to steer, how to set the accelerator, when to hit the brakes, with a fast-processing neural network built in, multitudinous interior state variables both inside the neural network and outside it, feedback of all sorts up the wazoo, and constraints everywhere. Finally, the whole business is a research project:
NOBODY has done stuff like this before. Research is sometimes jokingly referred to as, "The process of running up alleys to find out if they're blind."
I've mentioned the idea that there might be feedback-sensitive neural network algorithms built into Tesla's software and hardware. Others have pulled up tweets and such from people inside Tesla who have stated that They Don't Do That. Well, that may or may not be completely true. But, even if it is,
initial values count. So do constraints. And.. Tesla needs data on how well all this works in the real world.
As an example of how this might work: seed the state variables at fixed intervals (on power-up? before each drive? who knows?) with a random number generator. With or without constraints; with or without correlation with how other state variables are set. See how it all works against performance criteria that Tesla designs and tracks. Report numbers back to the mothership. Use that to change stuff going forward: either with the next software release or, if one wants to get strange, on the next drive.
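Sketched out, that speculation might look something like this. To be loud about it: this is entirely hypothetical, the scoring function and the telemetry format are my inventions, and nothing here is confirmed Tesla behavior. It's only meant to show the shape of the loop: seed, drive, score, phone home.

```python
import random

# Hypothetical stand-in for a performance criterion; in reality this
# would be whatever drive-quality metrics the designers track.
def score_drive(state):
    # Score by how close the seeded state lands to a nominal operating
    # point (0.5 for every variable) -- higher (less negative) is better.
    return -sum((s - 0.5) ** 2 for s in state)

reports = []
for drive in range(10):
    rng = random.Random(drive)                         # per-drive seed
    state = [rng.uniform(0.0, 1.0) for _ in range(5)]  # constrained seeding
    reports.append((drive, score_drive(state)))        # phone the mothership

# Back at the mothership: pick the seeding that scored best and favor
# it in the next software release (or, if one wants to get strange,
# on the next drive).
best_drive = max(reports, key=lambda r: r[1])[0]
```

Swap the toy scoring function for real drive telemetry and the RNG seeding for whatever the fleet actually does, and you have the experiment I'm speculating about.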
Thing is: proving that Tesla does or doesn't do any of the above is hard. Except.. I've certainly seen different behavior on different days. Just watch our guy with his unprotected left turns: are minor changes in the environment all that it is.. or is it deeper than that?
Fun.