Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

So I took the time to read her paper. It advances fairly generic criticisms of AI systems, which should be nothing new to AI practitioners. I think her notion of top-down (reasoning, interpretation, even "intuition") versus bottom-up (perception, sensing, classifying) is helpful for identifying her concern. Essentially both are needed. The vehicle needs to accurately perceive what is in its immediate environment, but then top-down reasoning needs to challenge those perceptions and adapt to uncertainty. She is pessimistic about any AI system being able to integrate both bottom-up and top-down cognition, and believes that a serious "rethink of reasoning under uncertainty" needs to happen. Likely this is because she has not seen such a system yet, and she would set a very high bar for evaluating one.

Now when I watched Tesla's most recent AI Day, I was thoroughly impressed that Tesla has in fact been rethinking quite a lot. They have done enormous work to stitch the eight video feeds together into a coherent view of objects in 3D and temporal space. In particular, the vectorized layer that does immediate interpretation of the video feeds is used to refine the lower levels of analysis. Here we see an example of integrated bottom-up (video feed up) and top-down (vector rendering down) reasoning. This integration stabilizes the vehicle's field of perception, and Tesla is able to demonstrate just how much uncertainty and instability is resolved by this breakthrough integration.

But the FSD system goes much further. The system is able to model and predict the behaviors (or future observations) of vectorized objects. For example, the system makes predictions about the future path of the road to imagine what the portions of road presently out of view might look like. So the vehicle is constantly predicting what the upcoming segment of road will look like. Here causal analysis is not required; rather, a predictivist approach is quite suitable. As the system makes predictions of future road observations, it is able to evaluate how those predictions change as the vehicle perceives more of the road. This is a critical step, because error in prediction is a way to measure uncertainty. The system can actually register that predictions of the road ahead are presently unstable and unreliable, and this measure of uncertainty becomes a factor in how the vehicle proceeds. Obviously, if the road ahead is hard to perceive and predict because snow is falling or some other issue causes uncertainty, the vehicle can be trained to drive more cautiously.
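To make the idea concrete, here is a toy sketch of prediction error as an uncertainty signal that caps speed. This is entirely my own invention for illustration, not anything Tesla has published; the point metric, speed formula, and numbers are all made up:

```python
import numpy as np

def uncertainty_from_prediction_error(predicted_pts, observed_pts):
    """Mean distance between predicted and later-observed road-edge points.

    Purely illustrative: Tesla's real uncertainty metrics are not public.
    """
    return float(np.mean(np.linalg.norm(predicted_pts - observed_pts, axis=1)))

def speed_cap(base_speed, uncertainty, scale=5.0):
    """Reduce the allowed speed as prediction error grows (hypothetical rule)."""
    return base_speed / (1.0 + uncertainty / scale)

# Predicted road-edge points (x, y) for the segment ahead, in meters.
pred = np.array([[0.0, 3.5], [10.0, 3.5], [20.0, 3.6]])

# In clear conditions the observations land close to the predictions;
# in snow they diverge, so measured uncertainty is higher.
obs_clear = pred + 0.05
obs_snow = pred + np.array([[0.0, 2.0], [0.5, 2.5], [1.0, 3.0]])

u_clear = uncertainty_from_prediction_error(pred, obs_clear)
u_snow = uncertainty_from_prediction_error(pred, obs_snow)
assert u_clear < u_snow                                  # snow scene is more uncertain
assert speed_cap(30.0, u_snow) < speed_cap(30.0, u_clear)  # so the car slows down
```

The key property is that no causal story about *why* the scene is hard to predict is needed: large prediction error alone is enough to trigger caution.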

It is not absolutely necessary that the system perform causal analysis that would, for example, recognize that falling snow causes uncertainty about the road ahead. Nevertheless, I would not be at all surprised if Tesla has trained FSD to detect meteorological conditions. The system may well register that snow is falling, and such information can factor into how it conducts the vehicle. This would mirror human causal reasoning: "Hey, it's snowing. I should drive more slowly, adjust the lights, avoid hard braking, etc. And I know the road looks different because of all this snow." So yes, this is top-down reasoning, and it can be integrated into the whole driving system.

Tesla goes even further than what Cummings imagines, or maybe this is the "intuition" part. Tesla's planning is based on observing vectorized objects, e.g., "small human," "trash can," etc. It has predictive models of behavior for each kind of object. Using these predictive models, it can simulate (imagine) future actions. For example, the "small human" can dart out into the roadway, or the "trash can" could just remain in place. Simulating all these possible behaviors simultaneously generates scenarios. Each scenario is an imagination of what could be.

This ability to imagine is very important. First, it can correct misperceptions. Suppose, for example, that the "trash can" darted out into the road. This behavior immediately signals an inadequacy in the system's cognition: in no scenario did the "trash can" dart into the road. The implication is that the vehicle does not have a valid cognitive grasp of the situation. That "trash can" could in fact be a small human, or there could be some force knocking a genuine trash can into the road that the system had not detected or created a scenario for. So when something happens that is too far off from the distribution of generated scenarios, the system can register high uncertainty, slow down, and recalibrate its cognition.

The second use of scenarios is to plan action. There will be scenarios where the "small human" darts into the road. The vehicle needs to conduct itself in a manner that allows it to safely stop under such scenarios. Human drivers do this all the time. When you see a small child near a road, you can imagine that the child might dart out into the road, so you slow down and drive more cautiously to maintain a substantial margin of safety when passing. Here again a causal interpretation is not absolutely necessary; rather, a predictive model of behavior allows the system to generate scenarios and set a course that is safe under all of them.

Prediction drives simulation, and simulation is a kind of imagination or intuition that enables the system to adapt to complex contingencies. Causation, if it applies here, would basically apply to the vector representation. The object is a "small human" (cause); therefore, the following simulated behaviors (effects) may occur: the object darts into the road, the object does not move, etc. So causation is most helpful if the object is properly identified. This is actually a limitation of the causal model: if the "trash can" darts into the road, the system has to reject the causal labeling immediately. Fortunately, the system can process observations, actions, and cognition much faster than humans. Thus, within milliseconds of the "trash can" darting into the road, the system has recalibrated its cognition to respond to "object darting into road." This happens seconds before a human driver would be able to react to the change in cognition. Indeed, it is not entirely necessary to properly classify the object darting into the road. Human drivers can react and hit the brakes a few seconds before they correctly identify the object as a child; the driver only needs to respond to the emergency of something darting into the road to take appropriate evasive action. Likewise, the self-driving system can take evasive action before it is able to correctly reclassify objects and update its prediction-based simulations.
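The scenario logic above can be sketched in a few lines. This is purely illustrative: the behavior models, class labels, and tolerance are my own hypothetical stand-ins, not Tesla's actual planner:

```python
import random

random.seed(0)  # deterministic sampling for this illustration

# Hypothetical per-class behavior models: each returns a possible lateral
# move toward the road over the next second, in meters. (Invented values.)
BEHAVIOR_MODELS = {
    "small_human": lambda: random.choice([0.0, 0.5, 1.5, 3.0]),  # may dart out
    "trash_can":   lambda: 0.0,                                  # expected to stay put
}

def generate_scenarios(label, n=100):
    """Sample n imagined behaviors for an object of the given class."""
    return [BEHAVIOR_MODELS[label]() for _ in range(n)]

def is_surprising(label, observed_move, tolerance=0.25):
    """True if the observed move falls outside every generated scenario."""
    return all(abs(observed_move - s) > tolerance for s in generate_scenarios(label))

# A "trash can" darting 2 m toward the road matches no scenario, so the
# system should register high uncertainty and recalibrate its cognition.
assert is_surprising("trash_can", 2.0)
# A "small human" darting 1.5 m is within the imagined distribution,
# so the planner should already have kept a safe stopping margin for it.
assert not is_surprising("small_human", 1.5)
```

Note how both uses of scenarios fall out of the same sampling step: surprising observations flag misclassification, while unsurprising-but-dangerous ones constrain the plan in advance.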

I'm quite impressed that Tesla has been able to jump into on-the-fly simulation (scenario generation) capabilities. This is definitely integrated top-down reasoning. Perhaps what we call "intuition" is simply our own ability to rapidly generate scenarios in our imagination and quickly recognize critical scenarios even before we actually perceive them play out physically. I recall that as AlphaGo got really good, it started playing in ways that human players thought were based on god-like intuition. The training of AlphaGo against itself had explored so many novel strategic options that it went beyond what human players had encountered, so it was able to arrive at "intuitive" plays that human players could not account for in their own reasoning. My hope is that Cummings will have an opportunity to see more clearly how Tesla has advanced its own framing of the AI model. I very much suspect that they are accomplishing the breakthroughs she would like to see. Top-down and bottom-up integration is definitely happening at multiple layers of Tesla's FSD.
 
(#2 relates to service and superchargers. )
Congratulations! The best service is no need for service, etc.
But, having had to wait for 5 weeks for a new battery during total radio silence, I can certainly relate to that question.

However, all service discussion is considered totally off topic and bannable. So again, congratulations on your fortune so far.
This is why I think FCF should be spent on service. The management will know where the weaknesses are and ought to provide the funds to improve the situation.
If Tesla grows 50% every year and aims to sell 20 million vehicles a year at some point, then service will have to expand exponentially as well.
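A quick back-of-the-envelope check on that growth math. The ~1.5M vehicles/year starting volume is just my assumption for illustration:

```python
def years_to_reach(start, target, growth=0.5):
    """Count years of compound growth needed to go from start to target."""
    years, volume = 0, start
    while volume < target:
        volume *= 1 + growth
        years += 1
    return years

# Assuming ~1.5M vehicles/year today, 50% annual growth hits 20M in ~7 years,
# and the service network would have to compound at a similar rate to keep up.
print(years_to_reach(1.5e6, 20e6))  # → 7
```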
 