
Musk Touts ‘Quantum Leap’ in Full Self-Driving Performance



A “quantum leap” improvement is coming to Tesla’s Autopilot software in six to 10 weeks, Chief Executive Elon Musk said in a tweet.

Musk called the new software a “fundamental architectural rewrite, not an incremental tweak.”

Musk said his personal car is running a “bleeding edge alpha build” of the software, which he also mentioned during Tesla’s Q2 earnings call.

“So it’s almost getting to the point where I can go from my house to work with no interventions, despite going through construction and widely varying situations,” Musk said on the earnings call. “So this is why I am very confident about full self-driving functionality being complete by the end of this year, is because I’m literally driving it.”

Tesla’s Full Self-Driving software has been slow to roll out against the company’s promises. Musk previously said a Tesla would drive from Los Angeles to New York using the Full Self-Driving feature by the end of 2019. The company didn’t meet that goal, so it will be interesting to see where Autopilot stands at the end of 2020.

 
Tesla defines FSD for us through Karpathy's and Elon's comments. Their definition of FSD is being able to put an arbitrary point anywhere in the world and have the car navigate to that point on publicly accessible roads. I'm assuming no off-road situations are considered in the definition.

I think that the challenge of FSD for any company is the same. The challenge isn't the sensors. The challenge is to have vision be good enough that you can reliably depend on it to navigate the car. I think that vision is a necessary foundation for FSD. We require vision to make driving decisions. Any FSD approach will eventually need to read signs, identify an object as an obstacle versus something that can be run over, respond to human traffic controllers, see traffic light colors and other traffic controls, etc. It doesn't really matter how many other non-vision sensors you have; you need to solve vision in order to drive in a human world.
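To make that list concrete, here is a minimal sketch (the names and thresholds are purely hypothetical, not anyone's actual API) of the kind of vision-derived signals a planner would need before it could make even a basic "proceed or not" decision:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class TrafficLightState(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"
    UNKNOWN = "unknown"


@dataclass
class DetectedObject:
    label: str          # e.g. "pedestrian", "plastic bag", "traffic cone"
    distance_m: float   # estimated range from the ego vehicle
    is_obstacle: bool   # must-avoid (cone) vs. drivable-over (bag)


@dataclass
class VisionFrame:
    """Everything the planner gets from vision for one moment in time."""
    objects: List[DetectedObject] = field(default_factory=list)
    traffic_light: TrafficLightState = TrafficLightState.UNKNOWN
    sign_text: Optional[str] = None           # e.g. "SPEED LIMIT 35", "ROAD CLOSED"
    human_directing_traffic: bool = False     # e.g. an officer waving cars through


def can_proceed(frame: VisionFrame) -> bool:
    """Toy decision rule: every input here comes from vision, which is the point."""
    if frame.human_directing_traffic:
        return False  # defer to the human traffic controller
    if frame.traffic_light in (TrafficLightState.RED, TrafficLightState.UNKNOWN):
        return False
    # stop if any genuine obstacle is within 30 m (arbitrary threshold)
    return not any(o.is_obstacle and o.distance_m < 30.0 for o in frame.objects)
```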

The main challenge with vision right now is that there's not that much evidence / research to show that vision is capable of the level of accuracy and precision required to enable reliable FSD. So far, based on the evidence and implementations available, I think Tesla is the furthest in its vision system for reliability and accuracy. That's mostly the reason why I think Tesla is the furthest in pursuit of FSD. Essentially, if you agree that vision is a foundation for FSD, it's kind of difficult to say that another company is further or has more potential in vision data than Tesla.

I'm not sure why/how you are disconnecting sensors from vision. Any device or organism uses its sensors to synthesize its vision. We use all of our senses--sight, sound, touch, taste, smell--to create our vision of the world around us, and we use that vision as part of our decision-making process along with our memory and other mapping data. Same with cars. Our Teslas use their various sensors (cameras, sonar, radar) to create the vision we see onscreen, which the computers use to make decisions along with mapping data. For sure Waymo, and any car with LIDAR and 3D HD maps, has a more accurate vision of the space around it for decision-making, which is how they approach true self-driving. Read any legitimate article about autonomous/self-driving cars and you will understand that Tesla is at Level 2 and Waymo is at Level 4.
 
No, I don't.

Here's the problem. With only one sensor, you cannot afford a single critical mistake since there is no backup. Keep in mind the complexity of driving, too. Driving through a complex intersection in the city, there might be a lot of cars moving in different directions, pedestrians, cyclists, traffic lights, yielding to other vehicles, etc., and your camera alone has to handle it all without a single critical mistake. It will take a lot longer to achieve that level of reliability. In fact, solving all the problems in driving to that level of reliability with camera-only is a daunting challenge. But if you have multiple sensors, they can help each other out, and it's easier to solve the problem. And we already see FSD with lidar that works. Several companies have already been able to demo FSD with lidar, with no driver in the car at all. We have not seen that with camera-only yet. Mobileye has a great camera-only demo, but they say that camera-only is not reliable enough to fully remove driver supervision, so they plan to include lidar. So I do think that we will see lidar FSD become ubiquitous before camera-only FSD is solved.
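To put rough numbers on the redundancy point, here's a back-of-the-envelope sketch. The miss rates are made up, and it assumes the sensors fail independently, which fog, glare, etc. can easily break, so treat it as an illustration of the argument rather than real reliability math:

```python
from functools import reduce

def combined_miss_rate(per_sensor_miss_rates):
    """Probability that *every* sensor misses the same object,
    assuming the failures are independent (a strong assumption)."""
    return reduce(lambda p, r: p * r, per_sensor_miss_rates, 1.0)

camera_only = combined_miss_rate([1e-4])                # camera misses 1 in 10,000
camera_radar_lidar = combined_miss_rate([1e-4, 1e-3, 1e-3])

print(f"camera only:        {camera_only:.1e}")         # 1.0e-04
print(f"camera+radar+lidar: {camera_radar_lidar:.1e}")  # 1.0e-10
```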



IMO, Hotz is a joke. He's a hacker who pulled off a cheap knock-off of AP1. He has no FSD. I don't value his opinion at the same level as that of the many experts working on autonomous driving who actually have FSD. And they are including lidar in their sensor suite, and they say it is needed.

And if it requires driver monitoring then it is not FSD, it is a driver assist. Hotz is talking about doing driver assist that drives around, not FSD. Yes, you can absolutely do driver assist with camera only which Tesla is doing now. And that driver assist may even be able to drive around a bit but that is not FSD. FSD is defined as L4 or L5 where the car can drive without any human intervention or monitoring.


...so just to be clear...in the absolute sense...you think that no matter how well developed and sophisticated tesla's AI is or could be...the current sensor suite cannot meet FSD criteria ever?
 
...so just to be clear...in the absolute sense...you think that no matter how well developed and sophisticated tesla's AI is or could be...the current sensor suite cannot meet FSD criteria ever?

No I am not saying that. It would be absurd to say that something will never ever be possible. Given infinite time, I am sure Tesla would eventually achieve FSD on the current sensor suite. Although, I do think the Tesla sensor suite has some weaknesses. I was merely responding to whether Tesla can "solve FSD" before lidar becomes ubiquitous. I think FSD with camera+radar+lidar+HD maps is a much quicker and more reliable path to get to FSD. In fact, we can say that camera+radar+lidar+HD maps will be the first to achieve FSD because it already has.
 
Including you.

@DanCar not sure where you're getting your data from, but this seems inaccurate. Provide your source or show one article indicating Waymo no longer has a dependency on geofencing. That's just how Lidar works IIRC.

It's also not scalable because lidar cost and size are both enormous... so it's a moot point IMO until both scale down massively.
 
That's just how Lidar works IIRC

100% false. Lidar does not require geofencing. Lidar works by sending out thousands of laser pulses and measuring the time it takes each pulse to reflect back in order to detect objects and measure how far away they are. You don't have to geofence to use lidar.
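The distance part is just time-of-flight arithmetic, nothing to do with maps or geofences. A toy illustration (the timing number below is made up):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def lidar_range_m(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface: the pulse travels out and back,
    so divide the round-trip time by two."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# A return arriving ~333 nanoseconds after the pulse left corresponds
# to an object roughly 50 m away.
print(lidar_range_m(333e-9))  # ≈ 49.9 m
```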

Waymo is not dependent on geofencing. Waymo has cars in 25+ cities in the US. They choose to geofence their ride-hailing service because they don't want passengers to take their robotaxis in areas that they have not fully validated yet.

It's also not scalable because lidar cost and size are both enormous... so it's a moot point IMO until both scale down massively.

100% false. Velodyne has a compact high-res lidar for only $100.
Velodyne releases $100 lidar sensor.
 
@DanCar not sure where you're getting your data from, but this seems inaccurate. Provide your source or show one article indicating Waymo no longer has a dependency on geofencing. That's just how Lidar works IIRC.

It's also not scalable because lidar cost and size are both enormous... so it's a moot point IMO until both scale down massively.
Thanks for asking. It's from a talk Chris Urmson, then head of Google's self-driving project, gave about 4 years ago. He said that Google can drive pretty well without a map, but pretty well isn't good enough. Amnon Shashua from Mobileye said something similar: they can drive without a map, but they add one for redundancy.

Changing the subject: Here is Chris talking in 2015. He is talking about capabilities that Tesla still doesn't have.
Skip to the nine minute mark for more interesting stuff.
Another video from Chris that I find impressive:
 
...so just to be clear...in the absolute sense...you think that no matter how well developed and sophisticated tesla's AI is or could be...the current sensor suite cannot meet FSD criteria ever?

I will go there and say, yes, as long as Elon refuses HD mapping and LIDAR, current sensors and computers that can fit in a car with reasonable power consumption will never get to Level 4-5. I don't see it even getting fully to Level 3 but I will say never to 4 or 5.
 
Here is Chris talking in 2015. He is talking about capabilities that Tesla still doesn't have.
And Tesla "Can talk about capabilities that Waymo still does not have, and probably will not have for a decade, like perfected vision without lidar"
Pretty demos and speeches do not an FSD make.

FYI, still waiting to be approved to ride Waymo One service.
 
And Tesla "Can talk about capabilities that Waymo still does not have, and probably will not have for a decade, like perfected vision without lidar"
Pretty demos and speeches do not an FSD make.

Tesla does not have any capabilities that Waymo does not also have. Waymo already has vision that can work without lidar. Waymo does have a lot of capabilities that Tesla does not have.
 
And Tesla "Can talk about capabilities that Waymo still does not have, and probably will not have for a decade, like perfected vision without lidar"
Pretty demos and speeches do not an FSD make.

FYI, still waiting to be approved to ride Waymo One service.
Just to give a little more color. Waymo has the best machine learning scientists available to it, for example the same people who invented GPT-3 and GANs. I'd be surprised if Waymo's vision neural network is surpassed by anyone else's. The number of people working on vision at Google and Waymo is likely 10x the number of people working on the same at Tesla. Google has had dojo capabilities already for a few years and keeps stretching its capabilities: Tensor processing unit - Wikipedia
I wonder how Google's edge TPU compares to Tesla's hardware 3 chip? Tensor processing unit - Wikipedia
 
No I am not saying that. It would be absurd to say that something will never ever be possible. Given infinite time, I am sure Tesla would eventually achieve FSD on the current sensor suite. Although, I do think the Tesla sensor suite has some weaknesses. I was merely responding to whether Tesla can "solve FSD" before lidar becomes ubiquitous. I think FSD with camera+radar+lidar+HD maps is a much quicker and more reliable path to get to FSD. In fact, we can say that camera+radar+lidar+HD maps will be the first to achieve FSD because it already has.
Do you think 1,000 cars with 5,000 lidar lasers would be good for pedestrians' eyes?
 
Thanks for asking. It's from a talk Chris Urmson, then head of Google's self-driving project, gave about 4 years ago. He said that Google can drive pretty well without a map, but pretty well isn't good enough. Amnon Shashua from Mobileye said something similar: they can drive without a map, but they add one for redundancy.

Changing the subject: Here is Chris talking in 2015. He is talking about capabilities that Tesla still doesn't have.
Skip to the nine minute mark for more interesting stuff.
Another video from Chris that I find impressive:

Great videos. I am NOT a Google fan, but Chris Urmson is smart, honest, and funny, and their technology is infinitely far ahead of Tesla's. You cannot put it in years because, with current Tesla sensors, stubbornness, and dishonesty, they will simply never get there.
 
Waymo has the best Machine Learning Scientist available to it
And Tesla has the best ML/AI available to it... anyone can claim the "best"; claiming it does not make it so.
Results, results, results. That will show the best. Not demos, not speeches.
The number of people working on vision at Google and Waymo is likely 10x the number of people working on the same at Tesla.
Yup, and here they are a decade into their little project with all the funding they could desire, and their "BEST" has produced very little that's useful to the end user. Like I said elsewhere, they are a great PhD thesis project, not a business plan.

Google has had dojo capabilities already for a few years and keeps stretching its capabilities: Tensor processing unit - Wikipedia
I wonder how Google's edge TPU compares to Tesla's hardware 3 chip?
A training supercomputer should not be compared to the AI chip in the Tesla car itself; it should be compared to whatever actually goes into the Dojo supercomputer.