
The «Full» in Full Self-Driving Capability

IMO, the rate of progress depends on whether — particularly after HW3 launches — Tesla can use its training fleet of hundreds of thousands of HW3 cars to do for autonomous driving what AlphaStar did for StarCraft. That is, use imitation learning on the state-action pairs from real world human driving. Then augment in simulation with reinforcement learning.

AlphaStar took about 3 years of development, with little to no publicly revealed progress. The version of AlphaStar that beat MaNa — one of the world’s top professional StarCraft II players — was trained with imitation learning for 3 days, and reinforcement learning for 14 days (on a compute budget estimated around $4 million). So that’s a total of 17 days of training.
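To make the analogy concrete, here is a minimal sketch of that two-phase recipe: behavior cloning on logged state-action pairs, followed by reinforcement-learning fine-tuning in a simulator. Everything here is an illustrative assumption (the toy environment, reward, network shape, and the state/action dimensions); it is not Tesla's or DeepMind's actual setup.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 16, 2   # assumed: small feature vector in, [steer, accel] out

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM), nn.Tanh(),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Phase 1: imitation learning (behavior cloning) on logged (state, action) pairs.
states = torch.randn(4096, STATE_DIM)                      # stand-in for logged sensor features
human_actions = torch.tanh(torch.randn(4096, ACTION_DIM))  # stand-in for recorded driver inputs
for _ in range(200):
    loss = nn.functional.mse_loss(policy(states), human_actions)  # clone the human behavior
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: reinforcement-learning fine-tuning (REINFORCE) in a toy "simulator".
def rollout(policy, steps=50):
    """Fake simulator: random states, reward for small/smooth actions (illustrative only)."""
    state = torch.randn(STATE_DIM)
    log_probs, rewards = [], []
    for _ in range(steps):
        dist = torch.distributions.Normal(policy(state), 0.1)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        rewards.append(-action.pow(2).sum().item())   # made-up reward signal
        state = torch.randn(STATE_DIM)                # made-up next state
    return torch.stack(log_probs), torch.tensor(rewards)

for _ in range(50):
    log_probs, rewards = rollout(policy)
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])  # reward-to-go
    rl_loss = -(log_probs * returns).mean()
    optimizer.zero_grad()
    rl_loss.backward()
    optimizer.step()
```

The real systems obviously differ in every detail, but the structure (supervised bootstrap, then RL refinement) is the part the AlphaStar analogy rests on.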

In June, Karpathy will have been at Tesla for 2 years. He joined in June 2017. Since at least around that time (perhaps earlier, I don’t know), Tesla has been looking for Autopilot AI interns with expertise in (among other things) reinforcement learning. Karpathy himself spent a summer as an intern at DeepMind working on reinforcement learning. He also worked on reinforcement learning at OpenAI.

The internship job postings also mention working with “enormous quantities of lightly labelled data”. I can think of at least two interpretations:

1. State-action pairs for supervised learning (i.e. imitation learning) of path planning and driving policy.

2. Sensor data weakly labelled by driver input (e.g. image of traffic light labelled as red by driver braking) for weakly supervised learning of computer vision tasks.
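A rough sketch of interpretation 2, where the driver's own inputs become weak labels for a vision task. The FrameLog fields, thresholds, and label names here are invented for illustration; real fleet data would be far messier.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameLog:
    image_path: str        # camera frame captured while approaching an intersection
    brake_pressure: float  # normalized 0..1 driver brake input
    speed_mph: float
    came_to_stop: bool     # vehicle reached ~0 mph shortly after the frame

def weak_traffic_light_label(log: FrameLog) -> Optional[str]:
    """Guess the light state from what the human driver did (noisy on purpose)."""
    if log.brake_pressure > 0.3 and log.came_to_stop:
        return "red"    # driver braked to a stop, so the light was probably red
    if log.brake_pressure < 0.05 and log.speed_mph > 20:
        return "green"  # driver sailed through, so the light was probably green
    return None         # ambiguous; better to skip than to mislabel

logs = [
    FrameLog("frame_001.jpg", 0.6, 12.0, True),
    FrameLog("frame_002.jpg", 0.0, 34.0, False),
    FrameLog("frame_003.jpg", 0.15, 18.0, False),
]
print([(log.image_path, weak_traffic_light_label(log)) for log in logs])
# [('frame_001.jpg', 'red'), ('frame_002.jpg', 'green'), ('frame_003.jpg', None)]
```

The labels are noisy, which is exactly why the job postings would call the data "lightly labelled": you trade label quality for enormous quantity.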

Autopilot and FSD are different from AlphaStar in that Tesla has a plan to roll out features to customers incrementally, so progress is a lot more publicly visible. We didn’t get to see the agents that DeepMind trained, say, 6 months ago. So, we don’t really know how fast the agents went from completely incompetent to superhuman. What’s cool and interesting, though, is that Demis Hassabis seemed totally surprised after AlphaStar beat MaNa:


I don’t think I would be super surprised if, 3 years from now, Tesla is way behind schedule and progress has been plodding and incremental. I would be amazed, but not necessarily taken totally off guard, if 3 years from now Tesla’s FSD is at an AlphaStar-like level of performance on fully autonomous (unsupervised) driving.

We can’t predict how untried machine learning projects will turn out. That’s why researchers publish surprising results; we wouldn’t be surprised if we could predict what would happen in advance. The best I can do in my lil’ brain is draw analogies from completed projects like AlphaStar to what Tesla is doing (or might be doing), then try to identify what relevant differences might change the outcome in Tesla’s case.
 
I almost clicked «Like» after skimming your post, but then I read the last paragraph again and realized you wrote «lil’» brain, not «ill».

J/K :D I agree with you on the unpredictability and information-source issue. I wish Tesla & Co. were more open about their accomplishments and challenges. I guess fierce competition is the reason they don’t provide more insight.
 
Yeah, I wish Tesla operated like DeepMind or OpenAI or an academic institution and just published tons of papers and code and gave talks about what they’re doing. Maybe one day years from now they’ll do a little more of that.

Kinda the tragedy of this technology is that as this stuff moves from academia into commercial products, all the public knowledge vanishes into black boxes. Apollo’s open-source approach is kinda admirable: they don’t just publish their code, they also make online courses to teach people how to become self-driving car engineers.
 
He said they are only now starting to do intersections, or some such; I don't remember the exact quote. In other words, they don't have working intersection code! And as you can imagine, intersections are kind of important in city driving.

Elon: “I'm driving right now the development version of Autopilot and it works extremely well in terms of recognizing traffic lights and stop signs and is now starting to make turns effectively in complex urban environments.”

Source: Tesla Feb 28 Secret Conference Call Transcript - Full Self Driving; Model 3 & More

By the way, sounds like these press calls won’t be secret in the future: Elon Musk on Twitter

Aren't urban left turns what some other companies are still having trouble with?
 
Aren't urban left turns what some other companies are still having trouble with?

It's been reported that unprotected left turns have been a struggle for Waymo's minivans. Ars Technica interviewed a Waymo passenger and got statements from Waymo spokespeople:

Richardson also said Waymo cars sometimes seemed to plan routes that allowed them to avoid tricky situations like unprotected left turns or freeway driving.

"I've watched them come out of parking lots and go right to go a long way around the block to avoid a left turn," he said.

In a statement, Waymo stressed that Waymo vehicles do make left turns on a regular basis.

"Our vehicles complete unprotected left turns in autonomous mode regularly," a spokeswoman told us by email. "However, we've always said that unprotected lefts on high-speed roads are amongst the most difficult driving maneuvers. As our technology is new, we are going to be cautious because safety is our highest priority."
It's hard to put this in perspective without any statistical information. A 5% failure rate is unacceptably high. So is a 1% failure rate, and a 0.1% failure rate. 5% to 0.1% is a 50x spread. If the failure rate were 50x higher or 50x lower, we might be getting the exact same information from Waymo passengers and spokespeople. So, it doesn't seem like we have enough information to make even a ballpark statistical estimate.

"Working" is a fuzzy term. If "working" means ready for commercial deployment without human supervision, then Waymo doesn't have working intersections software. If "working" doesn't mean that... well, then what does it mean? In 2007, we had cars that could autonomously drive through four-way stops:


Just demoing something a few times doesn't mean it's gonna be ready for commercial deployment within 10 years. So when all we really have are demos... how do we judge the state of the technology, and its rate of progress?
 
You have no F clue what I know or drive and your hero worship is.... creepy.

Umm....

 
Unprotected left turns are hard for sure but realistically we have no insight into how a Tesla might handle them. Historically when Elon Musk describes something like this positively the eventual shipping reality is quite different...

Are you talking about NoA smoothness, or something else? Elon runs sub-alpha-level code, so we may not be at his level yet, especially on features currently in development.
 
Elon: “I'm driving right now the development version of Autopilot and it works extremely well in terms of recognizing traffic lights and stop signs and is now starting to make turns effectively in complex urban environments.”

Source: Tesla Feb 28 Secret Conference Call Transcript - Full Self Driving; Model 3 & More

By the way, sounds like these press calls won’t be secret in the future: Elon Musk on Twitter
Yup, it does not say it does intersections (also, hear the ARK Invest podcast where he says intersection support is not there yet?). It says the car is STARTING to make turns effectively, ok? So it's not doing it yet.
 
I have no visibility into progress of other companies.

I was alluding to post #86 without doing the work of looking it up.
It's been reported that unprotected left turns have been a struggle for Waymo's minivans. Ars Technica interviewed a Waymo passenger and got statements from Waymo spokespeople:

Yup, it does not say it does intersections (also, hear the ARK Invest podcast where he says intersection support is not there yet?). It says the car is STARTING to make turns effectively, ok? So it's not doing it yet.

Effectively in complex urban environments. Could have been doing them fine in non-complex environments.
From the third-hand test driver leak, the driver disengaged during a three-lane left turn when another car encroached (though disengaging might not have been necessary).
 
Effectively in complex urban environments. Could have been doing them fine in non-complex environments.
Oh yeah? What sort of "complex urban environments" do you think Elon is visiting in his personal car nowadays?

I am no longer giving Tesla much benefit of the doubt after the whole EAP-by-end-of-2016 and C2C-by-end-of-2017 thing. Now, to make any Tesla statement real, it must be accompanied by some evidence. No evidence = it did not happen.
 
Yeah I bet it works really well driving around SF and LA.

The thing about Elon is that he uses these flowery statements to talk about things that barely work or are completely nonexistent.
Such as saying the Model 3 steering wheel looks like a spaceship, or saying in the AP2 announcement that "what we have will blow you away, it's blowing me away" and then releasing a 30 mph lane-keeping assist months later.

I almost clicked «Like» after skimming your post, but then I read the last paragraph again and realized you wrote «lil’» brain, not «ill».

J/K :D I agree with you on the unpredictability and information-source issue. I wish Tesla & Co. were more open about their accomplishments and challenges. I guess fierce competition is the reason they don’t provide more insight.

Here's the thing, though. Back in 2015 there was only one company saying that RL is the only solution for self-driving. Now, in 2019, there are dozens of companies turning to RL, from Volvo to Waymo, each with different views on implementation. Whether they are seriously tackling it with full resources is another question.

If Tesla were more transparent it would expose Elon's BS hype statements.

Most people give SDC companies other than Tesla crap because they are geo-fenced to a particular city, and often only to a slice of that city ranging from 10 to 100 square miles. But if your goal is to prove that RL will work for SDCs, you don't try to work with an infinite state space, which is what you'd have if you tried to build a Level 5 system that works in every city, state, and country all at once. Instead, you limit your operating domain: you get it to work in one city, and if it works, you scale up.

This is the same approach DeepMind has always used, with AlphaGo and with AlphaStar. With AlphaStar they used one small map and limited the race options. Your goal is not to make the job hard on the first go-around; it's to make the job easy enough while also getting the smallest fully functioning form of it working, one that can scale up. This is why DeepMind used the limitations it did. It is also why DeepMind used 500k games of supervised learning to bootstrap the RL. That was only done for exploration purposes, to solve the problem. They could have easily used games from the bots, or even just 1k human games. Why? Because one of the cool things about machine learning is that data can be faked. This is called data augmentation. You can take 1k pictures and turn them into 10k useful, different pictures, or 1k games into 10k games, by making changes to the data.
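A toy illustration of that augmentation point: one labeled image becomes several training examples through simple transforms. This NumPy-only sketch just shows the idea; a real pipeline would use a proper augmentation library and more careful transforms.

```python
import numpy as np

def augment(image):
    """Return several altered copies of one H x W x 3 image (values 0..255)."""
    variants = [image]
    variants.append(image[:, ::-1, :])             # horizontal flip
    variants.append(np.clip(image * 1.2, 0, 255))  # brighten
    variants.append(np.clip(image * 0.8, 0, 255))  # darken
    h, w, _ = image.shape
    variants.append(image[h // 10:, w // 10:, :])  # crude crop (would be resized back in practice)
    return variants

img = np.random.randint(0, 256, size=(96, 96, 3)).astype(np.float32)
print(f"1 image -> {len(augment(img))} training examples")
```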

Knowing what we know about DeepMind and how they operate, I'm pretty sure we will see an AlphaStar Zero just like we saw an AlphaGo Zero. Now, suppose we use AlphaStar as a blueprint, as some are advocating:

AlphaStar was trained on 500,000 games, with each StarCraft game taking 15 minutes on average. That translates to 125,000 hours of gameplay. Translated to driving at an average of 35 mph, 125,000 hours would generate 4,375,000 miles. So 500k StarCraft games are roughly equivalent to 4,375,000 miles of driving at an average of 35 mph. Case in point: you don't need 'billions of miles'.

A car driving 24 hours a day will cover 840 miles per day (at an average of 35 mph).
100 cars driving 24 hours would translate to 84,000 miles driven in a day.
100 cars driving 24/7 for a full month (30 days) would generate 2,520,000 miles.
You would need less than two months of driving to collect the equivalent of the StarCraft data.
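A quick arithmetic check of those numbers, just to make the conversion explicit:

```python
# Rough numbers from the paragraph above; nothing here is Tesla data.
games = 500_000
minutes_per_game = 15
hours_of_play = games * minutes_per_game / 60          # 125,000 hours of StarCraft

avg_speed_mph = 35
equivalent_miles = hours_of_play * avg_speed_mph       # 4,375,000 miles

miles_per_car_per_day = 24 * avg_speed_mph             # 840 miles/day
fleet_miles_per_day = 100 * miles_per_car_per_day      # 84,000 miles/day for 100 cars
days_needed = equivalent_miles / fleet_miles_per_day   # ~52 days

print(hours_of_play, equivalent_miles, fleet_miles_per_day, round(days_needed, 1))
```

That works out to roughly 52 days of fleet driving, i.e. under two months.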

So if we are following the AlphaStar blueprint, then no, Tesla is not the only company positioned with the capability to attempt to replicate it.

The point is, Elon has been saying it's 2 years away since 2015. Nothing that Elon says is actually based on internal facts; if you look at the facts, Elon and Tesla are actually miles behind companies like Mobileye, even though the automakers have been holding Mobileye back.

Another example is HD maps from production cars: https://i.imgur.com/TxauL6C.png

Tesla is miles behind what is needed for running RL in a realistic simulator. Yet Elon's statements never have and never will reflect the actual reality, because he loves the attention; it feeds his ego to be seen as the best, so he will say anything to get it, and no media ever calls him out. Next year he will be saying end of 2021 again. Rinse and repeat.
 