Hi all, I’ve read through everyone’s comments and concerns, and unfortunately I feel there’s a lot of misleading information on both sides of the fence here. In my opinion, the evidence currently sits somewhere in the middle.
Firstly, there’s a lot I’m going to cover in this post, so please bear with me.
Let’s cover ‘Phantom Braking’ first. Yes, this is a real thing which unfortunately still occasionally occurs in Teslas. However, from my findings, I’ve noticed it usually occurs in specific locations, which leads me to believe it could be a mismatch between the speed limit stored in the system and the actual speed limit of the road. It happens so suddenly and passes so quickly that it appears the vehicle is reacting to a sudden speed limit decrease in that area; the reading is false, and the car then corrects itself. I’ve also noticed this in a very high-traffic area, where traffic is backed up 50-60% of the time. Additionally, that area was under construction for nearly two years, and the speed limit was lowered via signs. My assumption is the vehicle is slowing down because it still treats the area as a construction zone, even though the signs have been removed and the speed limit is no longer reduced. If either of these is the case, I believe this should be fixed, or on the path to being fixed, with the coming updates to detect and read speed limit signs, as the current data in the system is giving a false reading for that location.
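To make the mismatch I’m describing concrete, here’s a purely hypothetical sketch of how stale map data versus a freshly read sign could be reconciled. The function name, thresholds, and logic are all my own illustration, not Tesla’s actual code:

```python
# Hypothetical sketch: reconcile a (possibly stale) map speed limit with a
# camera-detected sign before commanding a slowdown. Illustrative only.

def target_speed(map_limit_mph, detected_sign_mph, current_limit_mph):
    """Pick which speed limit to trust for cruise control."""
    if detected_sign_mph is not None:
        # A freshly read sign overrides stale map data (e.g. a construction
        # zone whose reduced-speed signs were removed long ago).
        return detected_sign_mph
    if abs(map_limit_mph - current_limit_mph) >= 15:
        # A large, sudden drop in the map limit with no visible sign is
        # suspicious: hold speed instead of hard-braking ("phantom braking").
        return current_limit_mph
    return map_limit_mph

# Stale map says 45 in a zone that is actually back to 65, no sign read yet:
print(target_speed(45, None, 65))   # keeps 65 rather than braking to 45
# A real sign is detected:
print(target_speed(45, 55, 65))     # follows the sign: 55
```

The point of the sketch is just that sign reading gives the system a way to invalidate outdated map data, which is why I think the coming sign-reading updates could address these spots.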
Regarding the main thread discussion, a ‘Quantum Leap’: I do not believe it will be a quantum leap in the actual capabilities of the vehicles. Tesla has always put out updates in incremental steps, and tests features gradually to gather better information and improve accuracy. This situation will likely be no different. I’m not saying you won’t receive a rough ‘feature complete’ version of the software in the next update; I just don’t believe it will be a literal quantum leap. Yes, it may be a feature complete version, but there will likely be a lot of interventions required by drivers. That’s okay, there’s nothing wrong with this, as it incrementally progresses closer to the end goal. I feel a lot of people are anticipating a near-perfect system with this rewrite and new software, but that will not likely be the case. I could be wrong, but based on previous statements, software updates, etc., it’s likely there will be some great new features that require more driver input than most are expecting. A good example came from one user pointing out that even Waymo has interventions from time to time, sometimes simply because the rider is unsatisfied. That’s to be expected, especially with a brand new software rewrite and updated features. So be vigilant out there.
What I believe the quantum leap Elon is referring to is the data that will be collected and the information gathered from this new rewrite. Instead of the ‘2.5D’ information, the rewrite will allow Tesla to collect data more efficiently and more accurately, in order to feed it into the neural network. The network will be able to analyze this data at higher rates and recognize errors or flaws quicker. However, this doesn’t mean the next update after the rewrite will make drastic improvements. Of course it might, but not based on the Project Dojo timeline. We are anticipating Project Dojo to ramp up over the next year, giving Tesla one of the most powerful computers on earth (at this current time) to drive its neural network and machine learning. Once that system is fully operational is when we will see the ‘March of 9’s’ start to take place. Until that time, we can only expect a rough Level 3 autonomous vehicle from Tesla’s vision system. Again, I could be wrong, but we know a vision system requires far more data than the traditional vision + radar + lidar suite. (Yes, I’m aware Tesla uses radar and ultrasonic sensors, but let’s just call it a ‘vision system’ for ease, and let’s call anything with lidar just ‘lidar’.) We will likely see some great new features and updates roll out over the next 3-10 months, but until Dojo is fully operational and analyzing data from the upcoming software rewrite, it’s a bit naive to assume Tesla will achieve Level 4 or 5 with just the rewrite. Thus, I believe the rewrite will allow the quantum leap to occur once Dojo is operational.
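To put the ‘March of 9’s’ in concrete terms: each added 9 of reliability means a tenfold reduction in failures. The numbers below are illustrative arithmetic only, not Tesla data:

```python
# Illustrative only: each extra "9" of per-mile reliability multiplies the
# expected miles between interventions by ten. Figures are hypothetical.

for nines in range(1, 6):
    miles_per_intervention = 10 ** nines           # 10, 100, ..., 100000
    reliability = 1 - 1 / miles_per_intervention   # 0.9, 0.99, ..., 0.99999
    print(f"{reliability:.5f} -> ~1 intervention per "
          f"{miles_per_intervention:,} miles")
```

This is why the march is slow: going from a system that needs help every 100 miles to one that needs help every 100,000 miles is three full nines of improvement, and that’s the kind of grind I’d expect Dojo to be for.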
That being said, I do believe the current hardware will be capable of Level 4 and, based on the NHTSA standard, most Level 5 situations comparable to current human capability. To break it down: the argument that human drivers currently operate on a vision system, so a vehicle should be able to do so too, is a true statement. One can always argue that neural nets are not as accurate as human brains, or that the technology might not be as good as a human. However you would like to state these arguments, they ignore the progress of technology. We currently have camera technology that sees better than humans, and computers that function at greater capacity than us for dedicated tasks. Self driving is a dedicated task; we are not also teaching the car to write a book, jump rope, or grow a garden. A computer with a dedicated task can certainly outperform a human. I’m not stating that these cameras or computers are currently in Teslas, only that it’s inaccurate to assume a vision system will never work for self driving, especially with 360° cameras compared to a human head with two eyes on a swivel. Additionally, we have seen how Autopilot improves the safety of the vehicle and allows it to drive on the highway at arguably Level 3 if you remove the steering wheel nag. The current system suggests the problem is mostly software. Highway driving is less demanding and an easier problem to solve, which is why it’s available first in Tesla’s system compared to city driving. They are working their way up to more difficult problems, and city driving, with more complex lanes, cyclists, pedestrians, etc., is far more difficult than highway driving. More difficult, however, is not necessarily impossible with a vision system; it could simply require better software. At this time, it is my opinion that the current hardware in Teslas will be capable of full self driving, to the degree of a perfectly accurate and always 99.999...% functioning human being.
I cannot expect a vision system to see much more than I can, but that is how our current driving world is designed anyway. I mentioned Tesla will be capable of self driving at Level 5 in most situations (based on the NHTSA standards), as long as those situations are comparable to current human driving conditions: snow, rain, etc. As for ‘all conditions’, I anticipate that means driving in a white-out blizzard or a similar situation where a human is best to stop. I do not see a Tesla exceeding conditions such as that. This may one day be possible, but I believe different hardware would be required for driving in situations where a human cannot visibly see anything. Again, I cannot expect a vision system to see much more than what a human is capable of seeing.
Solving the self driving problem is much more difficult when only using a vision system. As one user mentioned, it’s like teaching a human to see, and to learn everything about our crazy visual world and driving at the same time. It takes time and a lot of data. A lidar system is more reliable and easier to develop; we’ve seen this with Waymo. Yes, Waymo does have Level 4 self driving vehicles. That is a fact which some are trying to argue isn’t real. Just because none of us have experienced self driving doesn’t mean Waymo or even Tesla doesn’t have a system capable of it; based on the data and what is taking place on the road, Waymo has Level 4 autonomy available in very limited amounts. Of course you can’t buy any vehicle with full self driving, so let’s leave that out of the conversation. Let’s leave prices out as well, because we all know advances in technology and increases in production and availability bring costs down. Now before I get too far ahead, it’s important to mention that in 2018 Waymo’s vehicles with safety drivers had been involved in dozens of accidents, mostly at low speeds, and most of the autonomous miles collected by Waymo had occurred with a safety driver in place. That doesn’t mean the vehicle was at fault in these collisions; it could have been a human taking over and causing an incident as well. The one thing that concerns me is, I hope Waymo has far more than only 10 million miles of collected data. For a company operating in more than 25 cities, with some systems running 24/7, I would hope it’s 5-10x that. I could be wrong about Waymo; I’m just going off the limited research I’ve done and what I’ve read here. However, I would be curious to see updated Waymo data for collisions, takeover rate, and the percentage of miles driven with and without safety drivers. Nonetheless, Waymo is out there self driving right now, and the reason for that has to do with lidar, not the amount of data they have.
Lidar makes the system a lot more accurate and requires a lot less data. Instead of teaching the car to see visually in the real world and base everything off that, lidar adds an extra sense and allows the vehicle to reach out and touch things. It’s easier to train something that has more capabilities, and it allows for more redundancy. Basing everything on a vision system alone is far more complex, which is also why Tesla is not as far along as Waymo when it comes to self driving. Arguing that Tesla doesn’t have extra redundancies is fair, but it’s silly to ask what would happen if a camera failed on a Tesla. By the same token, what would happen if the lidar failed on a Waymo? I assume the car would slow down and try to pull to the side of the road with its hazards on, if it can do so safely; ultimately it will stop the car in the safest way possible. Redundancies are great, but you also have to anticipate failure in everything. Thus the vehicle should do its best to keep any occupants and nearby drivers as safe as possible.
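The failure handling I’m imagining looks something like the sketch below. The sensor names and the pull-over/stop policy are entirely my assumptions, not Waymo’s or Tesla’s actual behavior:

```python
# Hypothetical minimal-risk-maneuver policy on sensor failure.
# Sensor names and rules are illustrative assumptions only.

def fallback_action(healthy_sensors):
    """Decide what the vehicle should do given which sensors still work."""
    required = {"front_camera", "lidar"}
    if required <= healthy_sensors:
        return "continue"        # full perception available
    if "front_camera" in healthy_sensors:
        return "pull_over"       # degraded: find a safe shoulder, hazards on
    return "stop_in_lane"        # effectively blind: stop as safely as possible

print(fallback_action({"front_camera", "lidar", "radar"}))  # continue
print(fallback_action({"front_camera", "radar"}))           # pull_over
print(fallback_action({"radar"}))                           # stop_in_lane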
Basing everything on vision is a different approach from lidar, and certain scenarios work better with vision. For a quick example, identifying a faint tire-track trail leading to someone’s country house would require a vision system; HD maps or lidar alone wouldn’t catch it. Of course a lidar system that also has vision could recognize this, but it’s unknown whether the companies using lidar are working on this problem or are focused on other scenarios. This is a scenario Tesla is or will be focused on, as it’s considered a driving path and will need to be driveable, especially if it leads to an owner’s house. I believe situations like these are why we see Waymo mapped to specific areas, because out beyond the traditional city, the real world can get even weirder. (I actually know someone who has a small dirt path like this leading to their house, with a thin wooden bridge over a set of train tracks so you can always cross safely... Talk about weird.) These are unique scenarios where Tesla hopes to have greater success with its vision system. Both companies are trying to solve the same problem, approaching it in similar yet different ways. One approach has so far led to a quick jump ahead with minimal progress since, whereas the other has been slow and tedious. I believe once Project Dojo is operational, sometime mid to late next year, is when we will really see the fine accuracies come to fruition from Tesla’s approach. Until then, the rewrite will give us some new features and capability to get there, but they will still require intervention at a typical rate. Again, I could be wrong, and one could argue a rough version of ‘feature complete’ is a quantum leap compared to what we have now, but it’s a bit presumptuous to assume it will jump from Level 2 to Level 4 soon after this rewrite.
We may get a lot of amazing features all at once, but not without staying vigilant and keeping those hands on the wheel for the nag. The quantum leap will likely come when Dojo is able to analyze the rewrite’s data and information, and accurately carve out a better picture of our wacky world. Take it with a grain of salt, but I think we’ll see the start of the March of 9’s sometime later in 2021. And if the Dojo Project is the quantum leap capability, I think we’ll see it quickly improve from there, in all real world cases.