
Blog: Musk Touts ‘Quantum Leap’ in Full Self-Driving Performance



A “quantum leap” improvement is coming to Tesla’s Autopilot software in six to 10 weeks, Chief Executive Elon Musk said in a tweet.

Musk called the new software a “fundamental architectural rewrite, not an incremental tweak.”






Musk said his personal car is running a “bleeding edge alpha build” of the software, which he also mentioned during Tesla’s Q2 earnings call.

“So it’s almost getting to the point where I can go from my house to work with no interventions, despite going through construction and widely varying situations,” Musk said on the earnings call. “So this is why I am very confident about full self-driving functionality being complete by the end of this year, is because I’m literally driving it.”

Tesla’s Full Self-Driving software has been slow to roll out relative to the company’s promises. Musk previously said a Tesla would drive from Los Angeles to New York using the Full Self-Driving feature by the end of 2019. The company didn’t meet that goal, so it will be interesting to see the state of Autopilot at the end of 2020.

 
Last edited by a moderator:
Hi all, I’ve read through everyone’s comments and concerns, and unfortunately I feel there’s a lot of misleading information on both sides of the fence in this situation. In my opinion, the evidence currently sits somewhere in the middle.

Firstly, there are a lot of things I’m going to cover in this post, so please bear with me.

Let’s cover ‘Phantom Braking’ first. Yes, this is a real thing which unfortunately still occasionally occurs in Teslas. However, from my findings, I’ve noticed it usually happens in specific locations, which leads me to believe it could be a mismatch between the speed limit stored in the system and the actual speed limit of the road. It occurs so suddenly and passes so quickly that it seems the vehicle is reacting to a sudden speed limit decrease in that area, registering a false reading, and then correcting itself. I’ve also noticed this in a very high-traffic area, where traffic is backed up 50-60% of the time. Additionally, that area was under construction for nearly two years, and the speed limit was lowered via signs. My assumption is the vehicle is slowing down because it still treats the area as a construction zone, even though the signs have been removed and the speed limit is no longer reduced. If either of these is the case, I believe this should be fixed, or on the path to being fixed, with the coming updates that detect and read speed limit signs, since the current data in the system holds an incorrect value for that location.
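
To make the map-versus-sign hypothesis above concrete, here’s a tiny illustrative sketch in Python. It’s entirely made up (nothing to do with Tesla’s actual code): if the car falls back on stale map data whenever no sign has been read recently, you get exactly the kind of brief, self-correcting slowdown described above, and camera-based sign reading would override the stale entry.

```python
# Purely illustrative sketch of the hypothesis above; all names are invented.

def target_speed_kph(map_limit: int, sign_limit: int | None) -> int:
    """Prefer a freshly read sign; otherwise fall back to (possibly stale) map data."""
    return sign_limit if sign_limit is not None else map_limit

# Construction signs removed, but the map still carries the reduced limit:
print(target_speed_kph(map_limit=50, sign_limit=None))  # -> 50: brief, unexpected slowdown
# With camera-based sign reading, the stale map entry is overridden:
print(target_speed_kph(map_limit=50, sign_limit=80))    # -> 80: no phantom braking
```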

Regarding the main thread discussion, a ‘Quantum Leap’: in this regard I do not believe it will be a quantum leap in the actual capabilities of the vehicles. Tesla has always put out updates in incremental steps, and tests features gradually to gather better information and improve accuracy. This situation will likely be no different. I’m not saying you won’t receive a rough ‘feature complete’ version of the software in the next update, I just don’t believe it will be a literal quantum leap. Yes, it may be a feature-complete version, but there will likely be a lot of interventions required by drivers. That’s okay, there’s nothing wrong with this, as it incrementally progresses closer to the end goal. I feel a lot of people are anticipating a near-perfect system with this rewrite and new software, but that will not likely be the case. I could be wrong, but based on previous statements, software updates, etc., it’s likely there will be some great new features that still require more driver input than most are expecting with this rewrite. A good example was one user pointing out that even Waymo has interventions from time to time, simply because the driver is unsatisfied. That’s to be expected, especially with a brand new software rewrite and updated features. So be vigilant out there.
What I believe the quantum leap Elon is referring to is the data that will be collected and the information that will be gathered thanks to this new rewrite. Instead of ‘2.5D’ information, the rewrite will allow Tesla to collect data more efficiently and more accurately, in order to feed it into the neural network. The network will be able to analyze this data at higher rates, and recognize errors or flaws quicker. However, this doesn’t mean the next update after the rewrite will make drastic improvements. Of course it might, but not based on the Project Dojo timeline. We are anticipating Project Dojo to ramp up over the next year, and for Tesla to have one of the most powerful computers on earth (at this current time) to power its neural network and machine learning. Once that system is fully operational is when we will see the ‘March of 9’s’ start to take place. Until that time, we can only expect a rough Level 3 autonomous vehicle from Tesla’s vision system. Again, I could be wrong, but we know a vision-only system requires far more data than the traditional vision + radar + lidar suite. (Yes, I’m aware Tesla uses radar and ultrasonic sensors, but let’s just call it a vision system for ease, and let’s call anything with lidar just ‘lidar’.) We will likely see some great new features and updates roll out over the next 3-10 months, but until Dojo is fully operational and analyzing data from the upcoming software rewrite, it’s a bit naive to assume Tesla will achieve Level 4 or 5 with just the software rewrite. Thus, I believe the rewrite will allow the quantum leap to occur once Dojo is operational.
That being said, I do believe Level 4, and (based on the NHTSA standard) most Level 5 situations, are achievable at a level comparable to current human capability. To break it down: the argument that human drivers currently operate on a vision system, so a vehicle should be able to do so too, is a fair one. One can always argue that neural nets are not as accurate as human brains, or that the technology might not be as good as a human; however you would like to state these arguments, they ignore the progress of technology. We currently have camera technology that sees better than humans, and computers that function at greater capacities than us for dedicated tasks. Self-driving is a dedicated task; we are not also teaching the car to write a book, jump rope, or grow a garden. A computer with a dedicated task can certainly outperform a human. I’m not stating that these cameras or computers are currently in Teslas, only that it’s inaccurate to assume a vision system will never work for self-driving, especially with 360° cameras compared to a human head with two eyes on a swivel. Additionally, we have seen how Autopilot improves the safety of the vehicle and allows it to drive on the highway at arguably Level 3 if you remove the steering wheel nag. This current system suggests the remaining problem is mostly software. Highway driving is less demanding and an easier problem to solve, which is why it’s available first compared to city driving in Tesla’s system. They are working their way up to more difficult problems, and city driving, with more specific lanes, cyclists, pedestrians, etc., is far more difficult than highway driving. More difficult, however, is not necessarily impossible with a vision system; it could easily be that only better software is required. At this time, it is my opinion that the current hardware in Teslas will be capable of full self-driving to the degree of a perfectly attentive, 99.999...% reliable human being. I cannot expect a vision system to see much more than I can, but that is how our current driving world is designed anyway. I said Tesla will be capable of self-driving at Level 5 in most situations (based on the NHTSA standards) as long as those situations are comparable to current human driving conditions: snow, rain, etc. As for ‘all conditions’, I anticipate that means driving in a white-out blizzard or a similar situation where a human is best to stop, and I do not see a Tesla exceeding conditions such as that. It may one day be possible, but I believe different hardware would be required for driving in situations where a human cannot visibly see anything. Again, I cannot expect a vision system to see much more than what a human is capable of seeing.
Solving the self-driving problem is much more difficult when only using a vision system. As one user mentioned, it’s like teaching a human to see, and to learn everything about the crazy visual world and driving, at the same time. It takes time and a lot of data. A lidar system is more reliable and easier to develop; we’ve seen this with Waymo. Yes, Waymo does have Level 4 self-driving vehicles. That is a fact which some are trying to argue isn’t real. Just because none of us have experienced self-driving doesn’t mean Waymo or even Tesla doesn’t have a system capable of it, and based on what is actually on the road, Waymo has Level 4 autonomy available in very limited amounts. Of course you can’t buy any vehicle with full self-driving, so let’s leave that out of the conversation. Let’s leave prices out as well, because we all know advances in technology and increases in production and availability bring costs down. Now, before I get too far ahead, it’s important to mention that in 2018 Waymo’s vehicles with safety drivers had been involved in dozens of accidents, mostly at low speeds, and most of the autonomous miles collected by Waymo had occurred with a safety driver in place. It doesn’t mean the vehicle was at fault in these collisions; it could have been a human taking over and causing an incident as well. The one thing that concerns me is that I hope Waymo has far more than only 10 million miles of collected data. For a company operating in more than 25 cities, with some systems running 24/7, I would hope it’s 5-10x that. I could be wrong about Waymo, as I’m basing this off the limited research I’ve done and what I’ve read here. However, I would be curious to see updated data for Waymo on collisions, takeover rate, and the percentage of miles driven with and without safety drivers. Nonetheless, Waymo is out there self-driving right now, and the reason for that has to do with lidar, not the amount of data they have. Lidar makes the system a lot more accurate, and it certainly requires a lot less data. Instead of teaching the car how to see visually in the real world and base everything off that, lidar adds an extra sense and allows the vehicle to reach out and touch things. It’s easier to train something that has more capabilities, and it allows for more redundancy. Basing everything only on a vision system is far more complex, which is also why Tesla is not as far along as Waymo when it comes to self-driving. Arguing that Tesla doesn’t have extra redundancies is fair, but it’s silly to ask what would happen if a camera failed on a Tesla. At the same time, what would happen if a lidar system failed on a Waymo? I assume the car would slow down and try to pull to the side of the road with its hazards on, if it could do so safely. Ultimately it would stop the car in the safest way possible. Redundancies are great, but you also have to anticipate failure in everything. Thus the vehicle should do its best to keep any occupants and nearby drivers as safe as possible.
Basing everything on vision is a different approach from lidar, and certain scenarios work better with vision. For a quick example, identifying a faint tire-track trail to someone’s country house would require a vision system; HD maps or lidar alone wouldn’t do it. Of course a lidar system which also has vision could recognize this, but it’s unknown whether the companies using lidar are working on this problem or are focused on other scenarios. This is a scenario Tesla is, or will be, focused on, as it’s considered a driving path and needs to be drivable, especially if it leads to an owner’s house. I believe situations like these are why we see Waymo mapped to specific areas, because out beyond the traditional city the real world can get even weirder. (I actually know someone who has a small dirt path like this leading to their house, with a thin wooden bridge over a set of train tracks so you can always safely cross to get there... talk about weird.) These are unique scenarios which Tesla hopes to handle better with its vision system. Both companies are trying to solve the same problem, but they are approaching it in similar yet different ways. One approach has so far led to a quick jump ahead with minimal progress since, whereas the other has been slow and tedious. I believe that once Project Dojo is operational, sometime mid to late next year, is when we will really see the fine-grained accuracy come to fruition from Tesla’s approach. Until then, this rewrite will give us some new features and capability to get there, but they will still require intervention at a typical rate. Again, I could be wrong, and one could argue a rough version of ‘feature complete’ is a quantum leap compared to what we have now, but it’s a bit presumptuous to assume it will jump from Level 2 to Level 4 soon after this rewrite. We may get a lot of amazing features all at once, but not without staying vigilant and keeping those hands on the wheel for the nag. The quantum leap will likely be when Dojo is able to analyze the rewrite’s data and information, and accurately carve out a better picture of our wacky world. Take it with a grain of salt, but I think we’ll see the start of the March of 9’s sometime later in 2021. And if the Dojo project is the quantum leap capability, I think we would quickly see it improve from there, in all real-world cases.

I see this is your 2nd message. Welcome to the forum. But may I suggest you format your text with some breaks? It's just one big wall of text, and it's kinda hard to read.
 
Definitely. Thanks!

I’m on mobile, and wasn’t sure how it would look. I’ll see if I can edit it properly when I get a chance on the desktop.

By the way, Waymo now has 20M autonomous miles.

If the lidar had a failure, a Waymo car also has plenty of other sensors, so it would probably be able to keep going. Waymo has 29 HD cameras for redundant 360-degree coverage, a main 360-degree lidar, perimeter lidar all around and perimeter radar all around! So if the main roof lidar failed, Waymo would still have 29 cameras plus perimeter lidar and perimeter radar, and I think it could still drive just fine. That's the whole point of redundancy: Waymo can completely self-drive even if a lidar or a camera fails, because of the extra sensors. It would probably take multiple sensor failures at the same time to cripple the self-driving, although it is possible that Waymo's safety procedures would still dictate slowing down out of an abundance of caution.

By the way, if you have not already done so, I would recommend you read Waymo's safety report. It has a ton of great information on how Waymo's FSD works. Go here and then click on the pdf: Safety Report – Waymo
 
Last edited:
@S3XY CARS Just to add to my last post, here is a quick little video on Waymo's new 5th-gen FSD hardware. If you pause the video, you can get some good views of the different sensors. You can see that Waymo has large HD cameras for the front and 360-degree cameras in the roof pod just below the main roof 360-degree lidar. You can also see additional perimeter cameras, lidar and radar in the front fenders, in the back and on the sides. So I think the cars would be able to self-drive just fine even with a lidar failure. And the sensors are self-cleaning and self-heating to prevent dirt obstruction or freezing in the winter. Like I said, I think you would need pretty catastrophic sensor failures all at once to cripple the FSD. So yeah, the multiple sensors make Waymo's FSD very robust against sensor failures.

 
I've followed most of this thread and am kinda in agreement that lidar would probably make Tesla's job a little easier. But I suspect not by very much, in no small part due to the complete lack of vehicle-to-vehicle and vehicle-to-infrastructure communication. The need for secure communication of genuine speed restrictions and other messages (visual/human and system/infrastructure originated) to avoid pranks and other nefarious activities is IMO essential, and without it, all vision systems (lidar or visible spectrum) will hit similar obstacles.

Another concern I have is dealing with sensor conflicts. More sensors = more data, but data from different sources often represents a different 'view', and depending on what the differences are, you have to either give up immediately or somehow decide which 'view' to go with. The complexity suddenly shoots up a couple of orders of magnitude, and if you have a simple 'lidar system is better' (more trustworthy) or 'visible light system is better' algorithm, then arguably you could be worse off than at least working with harmonised data from a coordinated set of similar sensors. If you had to work with only lidar or only visible light, which would be 'better'? I think I would go visible.


Edit: How does Waymo handle interactions between their cars at junctions? Do they have any v2v communication?
 
Last edited:
  • Disagree
Reactions: mikes_fsd
The complexity suddenly shoots up a couple of orders of magnitude, and if you have a simple 'lidar system is better' (more trustworthy) or 'visible light system is better' algorithm, then arguably you could be worse off than at least working with harmonised data from a coordinated set of similar sensors.

I don't think this is true with ML. The machine works with a complex function (filter) of the many sensor inputs, and this results in probabilities of different outcomes. The only algorithmic decision is choosing between trusting x% that the object in front is a pedestrian and y% that the object in front is a vehicle at speed z (i.e. how defensively you intend to design).

The weighting of each sensor when applied to each scenario set 'just' drops out of the training process.
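
To illustrate (just a toy sketch in PyTorch, not anyone's real stack): a single network ingests features from all the sensors and outputs class probabilities, so the 'trust' placed in each sensor is simply whatever the learned weights end up encoding. The only hand-written part is the threshold applied to those probabilities, i.e. how defensively you want to react.

```python
# Toy late-fusion sketch; dimensions, classes and threshold are arbitrary examples.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Concatenate per-sensor features and output class probabilities."""
    def __init__(self, cam_dim=256, lidar_dim=128, radar_dim=32, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(cam_dim + lidar_dim + radar_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),  # e.g. pedestrian / vehicle / clutter
        )

    def forward(self, cam, lidar, radar):
        fused = torch.cat([cam, lidar, radar], dim=-1)  # sensor weighting lives in the learned weights
        return torch.softmax(self.head(fused), dim=-1)  # probabilities per class

model = FusionClassifier()
probs = model(torch.randn(1, 256), torch.randn(1, 128), torch.randn(1, 32))
if probs[0, 0] > 0.2:  # the only hand-tuned decision: how defensively to treat "maybe pedestrian"
    print("treat as pedestrian: slow down / yield")
```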
 
The only algorithmic decision is choosing between trusting x% that the object in front is a pedestrian and y% that the object in front is a vehicle at speed z (i.e. how defensively you intend to design).

I expected that could be an answer!

Even so, it feels counter-intuitive that a certain sensor in certain situations could not contribute misleading data. But I can see that if you just look for repeated detectable patterns, you can train a system to recognise them.

I guess the idea is that when one sensor type goes against a strong probability from another, you could drop it from your inputs. Still, it's hard to visualise. And would you weight different sensors' credibility differently? Under different perceived levels of visibility?
 
Last edited:
Edit: How does Waymo handle interactions between their cars at junctions? Do they have any v2v communication?

I am not aware of Waymo having any v2v communication. I suspect they treat other Waymo cars just like any other vehicle on the road: the Waymo car perceives them with its sensors, predicts their paths, and reacts accordingly.

Another concern I have is dealing with sensor conflicts. More sensors = more data, but data from different sources often represents a different 'view', and depending on what the differences are, you have to either give up immediately or somehow decide which 'view' to go with. The complexity suddenly shoots up a couple of orders of magnitude, and if you have a simple 'lidar system is better' (more trustworthy) or 'visible light system is better' algorithm, then arguably you could be worse off than at least working with harmonised data from a coordinated set of similar sensors. If you had to work with only lidar or only visible light, which would be 'better'? I think I would go visible.

There are a lot of companies doing autonomous driving with sensor fusion from different sensors (cameras, lidar, radar). They all have software to fuse the data together and handle discrepancies. It requires extra software but it is doable.
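
As a rough picture of what "handling discrepancies" can look like (a minimal, hypothetical example, not any particular company's implementation): each sensor independently reports a probability that an object is present, and the fused estimate combines them instead of declaring one sensor "better" outright.

```python
# Minimal, hypothetical late fusion of per-sensor detection probabilities.
import math

def fuse_detections(probs):
    """Combine independent per-sensor detection probabilities via log-odds."""
    log_odds = sum(math.log(p / (1 - p)) for p in probs)
    return 1 / (1 + math.exp(-log_odds))

# Camera fairly sure, lidar very sure, radar unsure -> fused estimate is very confident:
print(round(fuse_detections([0.8, 0.95, 0.55]), 3))  # ~0.99
# Camera and lidar flatly disagree -> fused estimate sits in the middle, and a planner
# can respond conservatively rather than trusting either sensor outright:
print(round(fuse_detections([0.9, 0.1]), 3))         # 0.5
```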
 
sensor fusion from different sensors

If it is possible (as in, companies specifically developing that subsystem), that would further support the argument for the benefits of additional sensors. All you are doing is building the highest-confidence 'image' by whatever means.

The merging process must surely be AI-based itself, unless you give some inputs absolute priority to force a 'stop', e.g. ultrasonics when parking. But it's not so easy in 4D, where you are committing to driving decisions in advance based on predictions. If you have already committed based on a very high level of certainty, what does it take to 'force a last-minute change'?

Anyway, clearly all stuff that highly paid brains are grappling with daily.
 
If it is possible (as in, companies specifically developing that subsystem), that would further support the argument for the benefits of additional sensors. All you are doing is building the highest-confidence 'image' by whatever means.

Yes, it is possible. Waymo, Cruise and others have already done it.

The merging process must surely be AI-based itself, unless you give some inputs absolute priority to force a 'stop', e.g. ultrasonics when parking. But it's not so easy in 4D, where you are committing to driving decisions in advance based on predictions. If you have already committed based on a very high level of certainty, what does it take to 'force a last-minute change'?

Anyway, clearly all stuff that highly paid brains are grappling with daily.

Yes, I imagine it is AI based. Companies like Waymo already have sensor fusion software that works on a "4D rewrite".
 
I expected that could be an answer!

Even so, it feels counter-intuitive that a certain sensor in certain situations could not contribute misleading data. But I can see that if you just look for repeated detectable patterns, you can train a system to recognise them.

I guess the idea is that when one sensor type goes against a strong probability from another, you could drop it from your inputs. Still, it's hard to visualise. And would you weight different sensors' credibility differently? Under different perceived levels of visibility?

You could fairly easily have a scenario where, if a vision sensor detects what might be rain, lidar is de-prioritised and radar given a higher weighting - but only if all sensors are consistent with rain (otherwise you might just have a faulty sensor or mud on the lens).

By training on imperfect data, you allow the filter to determine which weights are most often likely to give the right results - there isn't necessarily any design input into working out when to adjust the weights.
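
Just to make the rain example concrete, here is what a hand-written version of that rule might look like (purely hypothetical numbers); the point above is that a trained filter would arrive at this kind of re-weighting on its own rather than having it designed in.

```python
def sensor_weights(camera_sees_rain: bool, radar_agrees: bool, lidar_agrees: bool) -> dict:
    """Hypothetical hand-tuned re-weighting; a learned system would discover this implicitly."""
    if camera_sees_rain and radar_agrees and lidar_agrees:
        # All sensors consistent with rain: lean on radar, trust lidar less.
        return {"camera": 0.3, "lidar": 0.2, "radar": 0.5}
    if camera_sees_rain:
        # Only the camera "sees" rain: more likely a dirty lens than actual weather.
        return {"camera": 0.1, "lidar": 0.45, "radar": 0.45}
    # Clear conditions: default weighting.
    return {"camera": 0.4, "lidar": 0.4, "radar": 0.2}

print(sensor_weights(True, True, True))    # rain confirmed by all sensors
print(sensor_weights(True, False, False))  # probably just mud on the lens
```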
 
All very niche edge cases.

Yes, LiDAR in iPad Pros and coming to the iPhone 12 is ‘niche’, LOL. Dismissing an entire kind of technology out of hand is akin to ‘there’s a worldwide market for 5 computers’ (Watson, 1943) or ‘people will get tired of staring at TVs’ (Zanuck, 1946). The human brain receives, analyzes and interprets vastly different kinds of input in order to make it possible for you to walk through a doorway without banging into it (unless you are 13): sound, light, heat, smell, vibration (low-frequency sound, technically), etc.

Dismissing LiDAR is like saying, ‘I don’t need to hear to be able to navigate my living room’, until the power goes out on a moonless night; then it’s a whole different ballgame. Laser rangefinders were big and expensive when they first came out. Now, like cellphones, they are small and cheap (unless you are getting a smartphone).

IMNSHO, it might be more appropriate to say something on the order of, ‘LiDAR is too [expensive | immature] at this time; we’ll continue to evaluate it, and when it makes sense, we may include it.’
 
Yes. And I can see how a system could learn that. It's weird because human actions must be guided by a very similar process of recognition and confidence, yet we are so unaware of it!
And I think you nailed the solution right there: we need AI/ML algorithms that allow the car to receive various superhuman-level signals and interpret them quickly and accurately, in order to build a coherent picture of the environment it’s in and decide what it should do next based on that.
 
  • Like
Reactions: Battpower
2020.36 will have some nice AP updates:

Speed Assist Improvements
"“Speed Assist now leverages your car’s cameras to detect speed limit signs to improve the accuracy of speed limit data on local roads. Detected speed limit signs will be displayed in the driving visualization and used to set the associated Speed Limit Warning.”

Green Light Chime
"A chime will play when the traffic light you are waiting for turns green. If you are waiting behind another car, the chime will play once the car advances unless Traffic-Aware Cruise Control or Autosteer is active When Traffic Light and Stop Sign Control is activated, a chime will play when you can confirm to proceed through a green traffic light. To enable, tap Controls > Autopilot > Green traffic Light Chime"

TACC
"Quickly adjust the Traffic-Aware Cruise Control or Autosteer set speed to the current speed by simply tapping the cluster speedometer. You can still tap the speed limit sign to adjust the set speed to the speed limit"

Tesla releases new software update to visually detect speed limit signs, and more - Electrek
 
Green Light Chime
"A chime will play when the traffic light you are waiting for turns green. If you are waiting behind another car, the chime will play once the car advances unless Traffic-Aware Cruise Control or Autosteer is active When Traffic Light and Stop Sign Control is activated, a chime will play when you can confirm to proceed through a green traffic light. To enable, tap Controls > Autopilot > Green traffic Light Chime"
This is an interesting one. I've been told that it is unsafe to use your phone at a stop light because it makes you lose situational awareness. In fact it is illegal to do so in California (though I admit to having done it). It seems like that would be the primary use case for this feature.
 
  • Like
Reactions: diplomat33
This is an interesting one. I've been told that it is unsafe to use your phone at a stop light because it makes you lose situational awareness. In fact it is illegal to do so in California (though I admit to having done it). It seems like that would be the primary use case for this feature.
Are you considered to be driving a vehicle when you're stopped? I was stopped at an intersection, playing with my phone, when a police motorcycle that was lane-splitting drove by me and the officer just shook his head "no".