
Elon: "Feature complete for full self driving this year"

Every time the ML team gets an idea of how to do more in the machine-learning space and less in C++ code, there is a rewrite. There will be more rewrites.

Yes, I know that rewrites are a normal part of development. I only asked the question because this rewrite seems more foundational than just a tweak to improve things. Choosing between 8 distinct camera outputs and a single output fused from all 8 cameras seems like a fundamental perception design decision, one you would assume would be made at the beginning of the project.
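To make the distinction concrete, here is a minimal sketch of the two designs. It is purely illustrative; the module names, shapes, and channel counts are my own assumptions, not Tesla's actual networks:

```python
# Purely illustrative sketch: per-camera perception vs. one fused output.
# Shapes, channel counts, and module names are assumptions, not Tesla's stack.
import torch
import torch.nn as nn

NUM_CAMERAS = 8

class PerCameraPerception(nn.Module):
    """Old style: each camera produces its own independent output, and any
    cross-camera stitching happens downstream in hand-written code."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # e.g. per-pixel curb score

    def forward(self, images):  # images: (batch, NUM_CAMERAS, 3, H, W)
        return [self.head(torch.relu(self.backbone(images[:, i])))
                for i in range(NUM_CAMERAS)]  # eight separate outputs

class FusedPerception(nn.Module):
    """New style: features from all cameras are fused so the network
    emits a single output for the whole surround view."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(16 * NUM_CAMERAS, 16, kernel_size=1)
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, images):  # images: (batch, NUM_CAMERAS, 3, H, W)
        feats = [torch.relu(self.backbone(images[:, i]))
                 for i in range(NUM_CAMERAS)]
        return self.head(torch.relu(self.fuse(torch.cat(feats, dim=1))))
```

The second design is what makes cross-camera reasoning (like a curb spanning two camera views) learnable rather than hand-coded.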

But to be fair to Tesla, this rewrite was only possible on AP3. So the old way was probably the best Tesla could do on AP2, and now that AP3 is available, Tesla is doing a rewrite that is better.

This new rewrite does appear to be the correct way to do perception. So I am definitely glad that Tesla is doing it.
 
Here is part of the Third Row Podcast that I think gives some good insight into the process of developing Smart Summon:

Q: "What's so hard about curbs?"

A: "When we first started on Smart Summon, and you all know since you have the cars, it was laggy. Not perfect. We were just trying as much as we could to make it smooth. With the older Autopilot system that we had before the rewrite, using the cameras, for curbs and stuff, you can only use the side repeater cameras and maybe the back-up camera and the angle at which you are seeing the curb, 80% seeing the actual curb and the rest you are guessing because of the distance, how high the curb is, and all that. That is why it such a hard thing to label and teach the neural net. When doing all the labeling, it is very very complicated. The accuracy has to pretty much be 100% or else there is going to be some flaw in the car driving around the curb or seeing the curb. With the new rewrite, it will be a lot better because we can look around, zoom in, zoom out, and label that curb in 3D and make it much easier."


I guess now we know why Smart Summon sucked when it was released. He basically admits that the cameras could only see 80% of the curb!! WOW!! Frankly, it is a bit shocking that Tesla released Smart Summon with this obvious flaw!

I think his answer is also a perfect microcosm of AP development as a whole. Tesla releases features before they work reliably; the sensors are not really good enough, but Tesla makes them work as best it can; and then, eventually, Tesla figures out how to do the feature the right way. I am not being anti-Tesla. He admits it!!!

It is encouraging though that the rewrite will apparently allow the system to zoom in and out and thus make it possible to see curbs better which should lead to better Smart Summon. Hopefully, the rewrite will indeed lead to much better Smart Summon and much better AP.

@nepenthe This quote illustrates why I make "absolutist" statements about Tesla's hardware not being good enough for L5. This is a former Tesla employee who worked on AP, admitting that the car only has the side repeaters and maybe the backup camera to see curbs when it is backing out in Smart Summon, and that the cameras could only see about 80% of the curbs!! That is not good hardware for doing FSD! Now, Tesla is apparently going to fix this in the upcoming rewrite, but it is telling that the current sensors have these issues. With additional parking cameras, this problem could have been fixed immediately and Tesla could have avoided all the problems with Smart Summon. And of course, lidar would have solved this problem as well since lidar would have painted a 3D map and shown exactly and accurately where the curbs are. My point is that with more and better sensors, Tesla could have much higher confidence in solving perception and solving FSD.
 
I guess now we know why Smart Summon sucked when it was released. He basically admits that the cameras could only see 80% of the curb!! WOW!! Frankly, it is a bit shocking that Tesla released Smart Summon with this obvious flaw!

I'm shocked, shocked I tell you! Do you think you are able to see more than 80% of a curb when you park? Do you drive a motorcycle, or is it a doorless Jeep with extra mirrors?

And of course, lidar would have solved this problem as well since lidar would have painted a 3D map and shown exactly and accurately where the curbs are.

I'm also amazed that lidar can apparently see curbs through snow and leaves.
 
I'm also amazed that lidar can apparently see curbs through snow and leaves.

I usually support Tesla's vision system in these discussions, but if you can train a neural network to approximate where curbs would be under leaves and snow, it shouldn't matter whether the network input is camera data or lidar points.
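A minimal sketch of that point (all shapes and encoders here are made up, not from any real system): the curb-predicting head is the same module regardless of which encoder feeds it, so camera pixels and lidar points are interchangeable inputs at the architectural level.

```python
# Minimal sketch: the same prediction head works on features from either a
# camera encoder or a lidar encoder. All shapes here are illustrative.
import torch
import torch.nn as nn

class CurbHead(nn.Module):
    """Predicts a coarse curb-occupancy grid from a shared feature vector."""
    def __init__(self, feat_dim=64, grid=32):
        super().__init__()
        self.decode = nn.Linear(feat_dim, grid * grid)

    def forward(self, feats):
        return torch.sigmoid(self.decode(feats))

camera_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))  # image in
lidar_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))   # points in
head = CurbHead()

# Either modality yields the same kind of output; what teaches the network to
# guess curbs hidden under snow is the training target, not the input type.
curbs_from_camera = head(camera_encoder(torch.rand(1, 3, 64, 64)))
curbs_from_lidar = head(lidar_encoder(torch.rand(1, 1024, 3)))
```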

Sensor discussion aside, I think Tesla's greatest asset is actually Andrej Karpathy. He's probably the world's foremost expert in computer vision and practical machine learning. It would be great to see collaboration between Tesla and Waymo at some point, if for no other reason than to get all these experts working together to solve this issue.
 
I usually support Tesla's vision system in these discussions, but if you can train a neural network to approximate where curbs would be under leaves and snow, it shouldn't matter whether the network input is camera data or lidar points.

I don't disagree that, with sufficient processing, either could do it. However, that was not what was claimed:
With additional parking cameras, this problem could have been fixed immediately and Tesla could have avoided all the problems with Smart Summon. And of course, lidar would have solved this problem as well since lidar would have painted a 3D map and shown exactly and accurately where the curbs are.

Additional cameras and/or lidar don't solve the problem; they only provide more data.
 
Additional cameras and/or lidar don't solve the problem; they only provide more data.

I think they can. Parking cameras positioned correctly could see more of the curb, and a lidar on top of the car like Waymo's, scanning 360 degrees from a high vantage point, would certainly be able to map the surrounding area, including the curbs, more accurately than a side repeater camera that only sees 80% of the curb.
 
I think they can. Parking cameras positioned correctly could see more of the curb, and a lidar on top of the car like Waymo's, scanning 360 degrees from a high vantage point, would certainly be able to map the surrounding area, including the curbs, more accurately than a side repeater camera that only sees 80% of the curb.
Actually, I don't think this matters too much for static environment features such as curbs. I see a pretty straightforward development pathway for accounting for things you cannot currently see but have previously seen:

1. Make a real-time 3D map of your surroundings using image data with classification data (software exists for this, and Tesla has demonstrated its use)

2. Rather than replacing the 3D map, you update it. Things you've seen multiple times are reinforced, things you see differently gradually fade, things completely out of your view are left unchanged, and things farther away than a specific distance are forgotten.

3. Accurately track your vehicle's position within that 3D map. This should be fairly easy using a combination of wheel movement/steering angle, the movement of known landmarks in the camera views, and GPS. It should be accurate to ~10 cm.

Now the car should be able to navigate around curbs it cannot currently see, but did see on the way in before parking, because the driving logic uses the 3D map for decision-making (steps 2 and 3 are sketched below).
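As a toy illustration of steps 2 and 3 (every name, constant, and grid size below is my own assumption, not anything from Tesla's stack):

```python
# Toy sketch of steps 2 and 3: a persistent occupancy grid that is reinforced
# or decayed by new observations, plus simple odometry pose tracking.
# All constants are illustrative guesses.
import numpy as np

GRID = 200        # 200 x 200 cells around the car
CELL = 0.1        # 10 cm per cell
REINFORCE = 0.2   # confidence gained when a cell is observed occupied
DECAY = 0.05      # confidence lost when a visible cell looks empty

occupancy = np.zeros((GRID, GRID))  # 0 = free/unknown .. 1 = curb/obstacle

def update_map(observed_occupied, observed_visible):
    """observed_*: boolean (GRID, GRID) masks from the current frame."""
    # Reinforce cells seen occupied; decay cells seen empty.
    occupancy[observed_occupied] = np.minimum(
        1.0, occupancy[observed_occupied] + REINFORCE)
    seen_empty = observed_visible & ~observed_occupied
    occupancy[seen_empty] = np.maximum(0.0, occupancy[seen_empty] - DECAY)
    # Cells outside the current view keep their last value, which is exactly
    # what lets the car remember a curb it saw on the way in but can no
    # longer see.

def dead_reckon(pose, speed, steering_rate, dt):
    """Step 3, crudely: track (x, y, heading) from wheel speed and steering.
    In practice this would be fused with visual landmarks and GPS to stay
    near the ~10 cm accuracy mentioned above."""
    x, y, theta = pose
    theta += steering_rate * dt
    x += speed * np.cos(theta) * dt
    y += speed * np.sin(theta) * dt
    return (x, y, theta)
```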


What I'm more concerned about is dynamic objects like people or small things. Your car knows the curb because it saw it before parking and going to sleep, but it cannot see the kid right in front of the car after waking up. Ultrasonics help somewhat, but have proven to be pretty unreliable.
 
Regarding the possibility of hands-free driving, someone has finally hacked the Model 3 enough to get an image out of the interior-facing camera: green on Twitter

I think that would provide sufficient resolution for eye tracking/some basic attentiveness tracking.
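For what it's worth, a common low-cost attentiveness signal is the eye aspect ratio (EAR) computed from eye landmarks. A toy sketch, assuming some upstream model already finds six landmarks per eye; the camera's resolution mostly determines how reliably those landmarks can be located, and the thresholds below are illustrative guesses:

```python
# Toy attentiveness check via eye aspect ratio (EAR); assumes an upstream
# detector supplies six landmarks per eye. Thresholds are illustrative.
import numpy as np

def eye_aspect_ratio(pts):
    """pts: (6, 2) landmarks around one eye, ordered corner, top, top,
    corner, bottom, bottom. EAR drops toward 0 as the eye closes."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def looks_attentive(ear_history, closed_thresh=0.2, max_closed_frames=15):
    """Flag inattention if the eyes stay closed for too many frames in a row."""
    run = 0
    for ear in ear_history:
        run = run + 1 if ear < closed_thresh else 0
        if run >= max_closed_frames:
            return False
    return True
```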

Yes, I saw that. I am very excited about it. I really hope that Tesla uses it for hands-free driving.

Interesting also that Elon is not backing down on the robotaxi claims. I thought the AP rewrite meant the timelines even for pure functionality would be pushed back, but he’s still maintaining this year: Elon Musk on Twitter

I'll give Elon this, at least he is consistent with his wildly optimistic timelines.
 
Interesting also that Elon is not backing down on the robotaxi claims. I thought the AP rewrite meant the timelines even for pure functionality would be pushed back, but he’s still maintaining this year: Elon Musk on Twitter
Tired of hearing it. Sick of broken AP promises. I wish he would stop talking about it prior to the next major update and let the product do the talking.
 
Interesting also that Elon is not backing down on the robotaxi claims. I thought the AP rewrite meant the timelines even for pure functionality would be pushed back, but he’s still maintaining this year: Elon Musk on Twitter

There are probably 2 motivations for this tweet:

1) Elon is making an educated guess based on the current state of AP development, especially the progress with the new rewrite. Based on that progress, Elon believes robotaxis can still happen by the end of this year.

2) As the CEO, Elon is known for imposing tough and ambitious deadlines as a way of getting stuff done. Elon is most likely giving the AP team the internal deadline of making robotaxis happen by the end of this year, whether or not it can actually be done, in order to push them hard to accomplish the goal.
 
I usually support Tesla's vision system in these discussions, but if you can train a neural network to approximate where curbs would be under leaves and snow, it shouldn't matter whether the network input is camera data or lidar points.

Sensor discussion aside, I think Tesla's greatest asset is actually Andrej Karpathy. He's probably the world's foremost expert in computer vision and practical machine learning. It would be great to see collaboration between Tesla and Waymo at some point, if for no other reason than to get all these experts working together to solve this issue.

He's actually not. In fact, his hire as a director caused an uproar in the ML community (r/machinelearning) because he had ZERO experience.

There's nothing that Andrej has done other than teach a basic CNN 101 course that got really popular. He has no papers, and he didn't invent any new NN technique or architecture. Of course Tesla fans see him as some savior. Look up the list of NN advancements in the last 8 years and see if his name is on any of it.

Let me break it to you: all NN advancements and breakthroughs (RL, GANs, Transformers, AlphaGo, AlphaGo Zero, AlphaStar, NLP, WaveNet, Duplex, etc.) have come from Google Brain and DeepMind engineers, and I mean ALL of it. Why in the world would Waymo partner with Andrej? It's like partnering with a random college student.
 
He's actually not. In fact, his hire as a director caused an uproar in the ML community (r/machinelearning) because he had ZERO experience.

There's nothing that Andrej has done other than teach a basic CNN 101 course that got really popular. He has no papers, and he didn't invent any new NN technique or architecture. Of course Tesla fans see him as some savior. Look up the list of NN advancements in the last 8 years and see if his name is on any of it.

Let me break it to you: all NN advancements and breakthroughs (RL, GANs, Transformers, AlphaGo, AlphaGo Zero, AlphaStar, NLP, WaveNet, Duplex, etc.) have come from Google Brain and DeepMind engineers, and I mean ALL of it. Why in the world would Waymo partner with Andrej? It's like partnering with a random college student.

I wasn't following Tesla in 2017, but most of the contemporary articles seem to refute your account:

Tesla hired a top AI expert to lead a critical aspect of Autopilot — here's what we know

"Karpathy is considered a leading expert in computer vision. He received a pHd in machine learning and computer vision from Stanford University. During his time at Stanford, Karpathy designed the University's very first course on Deep Learning."
 
He's actually not. In fact, his hire as a director caused an uproar in the ML community (r/machinelearning) because he had ZERO experience.

There's nothing that Andrej has done other than teach a basic CNN 101 course that got really popular. He has no papers, and he didn't invent any new NN technique or architecture. Of course Tesla fans see him as some savior. Look up the list of NN advancements in the last 8 years and see if his name is on any of it.

Let me break it to you: all NN advancements and breakthroughs (RL, GANs, Transformers, AlphaGo, AlphaGo Zero, AlphaStar, NLP, WaveNet, Duplex, etc.) have come from Google Brain and DeepMind engineers, and I mean ALL of it. Why in the world would Waymo partner with Andrej? It's like partnering with a random college student.
The Dunning-Kruger effect is strong with you. Perhaps do some research before spouting off.

Andrej didn't just teach a class, he co-created it from scratch.
He also worked at Google, including interning at DeepMind.

From his bio, Andrej Karpathy Academic Website:
Previously, I was a Research Scientist at OpenAI working on Deep Learning in Computer Vision, Generative Modeling and Reinforcement Learning. I received my PhD from Stanford, where I worked with Fei-Fei Li on Convolutional/Recurrent Neural Network architectures and their applications in Computer Vision, Natural Language Processing and their intersection. Over the course of my PhD I squeezed in two internships at Google where I worked on large-scale feature learning over YouTube videos, and in 2015 I interned at DeepMind on the Deep Reinforcement Learning team. Together with Fei-Fei, I designed and was the primary instructor for a new Stanford class on Convolutional Neural Networks for Visual Recognition (CS231n). The class was the first Deep Learning course offering at Stanford and has grown from 150 enrolled in 2015 to 330 students in 2016, and 750 students in 2017.
 
He's actually not. In fact, his hire as a director caused an uproar in the ML community (r/machinelearning) because he had ZERO experience.

There's nothing that Andrej has done other than teach a basic CNN 101 course that got really popular. He has no papers, and he didn't invent any new NN technique or architecture. Of course Tesla fans see him as some savior. Look up the list of NN advancements in the last 8 years and see if his name is on any of it.

Let me break it to you: all NN advancements and breakthroughs (RL, GANs, Transformers, AlphaGo, AlphaGo Zero, AlphaStar, NLP, WaveNet, Duplex, etc.) have come from Google Brain and DeepMind engineers, and I mean ALL of it. Why in the world would Waymo partner with Andrej? It's like partnering with a random college student.

I agree that NN advancements have come from Google, but I think you are downplaying Karpathy. Maybe he doesn't meet your standard, but he does have a lot of expertise in machine learning.

And certainly, he has done wonders for Tesla. He pretty much saved Tesla's entire AP development, which was floundering at the start of AP2.
 
The Dunning-Kruger effect is strong with you. Perhaps do some research before spouting off.

Andrej didn't just teach a class, he co-created it from scratch.
He also worked at Google, including interning at DeepMind.

From his bio, Andrej Karpathy Academic Website:
Previously, I was a Research Scientist at OpenAI working on Deep Learning in Computer Vision, Generative Modeling and Reinforcement Learning. I received my PhD from Stanford, where I worked with Fei-Fei Li on Convolutional/Recurrent Neural Network architectures and their applications in Computer Vision, Natural Language Processing and their intersection. Over the course of my PhD I squeezed in two internships at Google where I worked on large-scale feature learning over YouTube videos, and in 2015 I interned at DeepMind on the Deep Reinforcement Learning team. Together with Fei-Fei, I designed and was the primary instructor for a new Stanford class on Convolutional Neural Networks for Visual Recognition (CS231n). The class was the first Deep Learning course offering at Stanford and has grown from 150 enrolled in 2015 to 330 students in 2016, and 750 students in 2017.

You literally just confirmed every single thing I said: he taught a CNN 101 class that got really popular. That is his only achievement, his claim to fame, and why he was hired by Elon. In fact, at Autonomy Day, Elon basically said that. Yet Tesla fans hail him as some savior. He hasn't done ANYTHING in the ML community. He has not invented anything; he hasn't made any breakthroughs or advancements. Do you know how many others have created classes better than that? Thousands.

Like I said, Waymo partnering with Andrej would be equivalent to partnering with a random college student who interned here and there, which all college students do.
 
I agree that NN advancements have come from Google, but I think you are downplaying Karpathy. Maybe he doesn't meet your standard, but he does have a lot of expertise in machine learning.

And certainly, he has done wonders for Tesla. He pretty much saved Tesla's entire AP development, which was floundering at the start of AP2.

It's not my standard. It's more the standard that Tesla fans hold him to. I watched a recent Tesla video where a prominent $TSLA personality said "no one has a guy like Andrej".

To ascribe such stature to someone who has done nothing is just another item on the list of absurdities. It's easy to work iteratively on something that has already been invented and laid out. For example, I'm a software engineer who develops websites and mobile applications; there's no app you can point me to that I couldn't recreate. But what I'm doing is different from what the people who invent the underlying frameworks are doing, pushing the entire industry forward. Andrej is simply using tools that were invented years ago. There's literally no novel research currently being done by the AP team.

This is in contrast to others who are actually producing novel research and breakthroughs that push the entire industry forward.

Sure, Andrej might have done wonders for Tesla, but so could anyone else given the time he has had. Remember that Chris Lattner was only given 6 months. Given the amount of time Andrej has had, he has actually disappointed me; Tesla is way further behind than I expected. Furthermore, what Andrej has done so far is nothing unique. We know because we can see the networks running in the cars and their architecture. For the longest time they were using the Inception (GoogLeNet) architecture that Google created in 2014.
 