
Elon: "Feature complete for full self driving this year"

Talking about the future of technology requires conjecture about things that don't exist today, because future technology doesn't exist today.

- - - - - - - - - - -

Let me try to summarize the discussion so far. I put forward an idea that I'll call the imitation learning thesis. The imitation learning thesis says:

If/when Tesla solves perception (which will require, at a minimum, HW3, and probably more NN training after HW3's launch), it will be able to collect state-action pairs from HW3 vehicles at essentially whatever scale it wants. These state-action pairs require less bandwidth and storage than raw sensor data and, unlike raw sensor data, don't require costly human annotation.
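
To make the idea concrete, here is a rough sketch of what one logged state-action pair might look like. The field names are hypothetical (this is not Tesla's actual telemetry format); the point is just that a perception-level state plus the driver's control inputs is a few hundred bytes, versus megabytes per second for raw video:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedObject:
    """One road user as output by the perception stack (hypothetical fields)."""
    kind: str          # "car", "pedestrian", "cyclist", ...
    x: float           # position relative to ego vehicle, metres
    y: float
    vx: float          # velocity, m/s
    vy: float

@dataclass
class StateActionPair:
    """State = mid-level perception output; action = the driver's control inputs."""
    timestamp: float
    ego_speed: float            # m/s
    lane_offset: float          # metres from lane centre
    objects: List[TrackedObject]
    # Action: what the human driver actually did at this instant.
    steering_angle: float       # radians
    accel_pedal: float          # 0..1
    brake_pedal: float          # 0..1
```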

State-action pairs can be used for imitation learning. Imitation learning is an approach used and endorsed by Waymo for autonomous driving. According to Waymo's head of research, Drago Anguelov, the reason Waymo doesn't use imitation learning more is a lack of training examples from the long tail of human driving behaviour — something Tesla could, in the future, have the ability to collect. A paper published by Waymo showed (on page 14) that imitation learning achieved an 85-100% success rate on certain driving challenges in simulation. Waymo noted the human success rate for these challenges is unknown (e.g. a human approaching a stopped car at high speed might also be unable to avoid a bad outcome). Waymo also did a few successful trial runs in the real world. Waymo's neural network, ChauffeurNet, was trained on ~1,440 hours (~60 days) of human driving.​
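
Imitation learning in its simplest form, behavioural cloning, is just supervised learning: the perception state is the input and the human's recorded action is the label. Below is a minimal PyTorch sketch of that idea under my own simplifying assumptions (a flattened fixed-length state vector and a mean-squared-error loss); it is not ChauffeurNet's actual architecture or training objective:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: a flattened perception state and a (steer, accel, brake) action.
STATE_DIM, ACTION_DIM = 256, 3

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_behavioural_cloning(loader, epochs=10):
    """loader yields (state, human_action) tensor batches built from logged state-action pairs."""
    for _ in range(epochs):
        for states, human_actions in loader:
            optimizer.zero_grad()
            predicted = policy(states)                 # what the network would do
            loss = loss_fn(predicted, human_actions)   # penalise deviation from the human
            loss.backward()
            optimizer.step()
```

The scale argument is then just about how many (state, action) rows feed that loop, and in particular how many come from the long tail of rare situations.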

Two Waymo researchers argue that imitation is both 1) intrinsically useful as a way to train a car to drive and 2) instrumentally useful as a way to set up reinforcement learning in simulation:

"...doing RL requires that we accurately model the real-world behavior of other agents in the environment, including other vehicles, pedestrians, and cyclists. For this reason, we focus on a purely supervised learning approach in the present work, keeping in mind that our model can be used to create naturally-behaving “smart-agents” for bootstrapping RL."
In a different domain, StarCraft, DeepMind took this exact approach. First, it used imitation learning to attain performance estimated to be roughly at the human median for competitive play. Second, it used imitation learning to bootstrap reinforcement learning and achieved professional-level performance. StarCraft is in some ways unlike driving, but it has more in common with driving than, for instance, Go: it involves real-time strategic and tactical action in a 3D environment with a continuous action space, imperfect information, and a vastly higher number of possible moves at any time interval.
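
Schematically, the "bootstrap RL" hand-off that both the Waymo quote and AlphaStar describe looks like the sketch below: start the reinforcement-learning policy from the imitation-learned weights instead of random ones, then keep improving it against a simulator reward. The `sim` object with a reset()/step() interface is a hypothetical stand-in for a driving simulator, and real systems would use far more sophisticated algorithms than plain REINFORCE:

```python
import copy
import torch
import torch.nn as nn

# `bc_policy` stands in for the imitation-learned network from the previous sketch.
bc_policy = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 3))

# Start RL from the imitation-learned weights rather than from random initialisation.
rl_policy = copy.deepcopy(bc_policy)
rl_optimizer = torch.optim.Adam(rl_policy.parameters(), lr=1e-5)

def run_episode(sim, std=0.1, max_steps=1000):
    """Roll out one simulated drive with exploration noise around the policy's action."""
    state = sim.reset()                                    # hypothetical Gym-like simulator
    log_probs, rewards = [], []
    for _ in range(max_steps):
        mean = rl_policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Normal(mean, std)
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        state, reward, done = sim.step(action.numpy())     # hypothetical simulator API
        rewards.append(reward)
        if done:
            break
    return log_probs, rewards

def reinforce_update(log_probs, rewards):
    """Simplest possible policy-gradient (REINFORCE) step on one episode's return."""
    episode_return = sum(rewards)
    loss = -episode_return * torch.stack(log_probs).sum()
    rl_optimizer.zero_grad()
    loss.backward()
    rl_optimizer.step()
```

As the Waymo researchers note, the other agents inside that simulator need to behave like real humans for the reward signal to mean anything, which is where the imitation-learned "smart agents" come back in.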

ChauffeurNet and AlphaStar are promising proofs of concept for imitation learning. For Tesla, imitation learning could be 1) intrinsically useful and 2) instrumentally useful as a way to bootstrap reinforcement learning. If Tesla can solve perception, it will be positioned to collect state-action pairs for imitation learning on a unique scale: billions of miles per year. This unique position could be a competitive advantage in autonomous driving.
Now let me try to summarize objections to the imitation learning thesis from this thread.

Objection #1: Tesla hasn't solved perception, so the point is moot.

My response: Tesla hasn't solved perception, but the point is not moot. If/when Tesla solves perception, it can apply imitation learning on a scale no one else can.


Objection #2: Billions of miles of state-action pairs data isn't required to attain human-level driving.

My response: We don't know how much data is required.


Objection #3: Other companies are collecting, or will in the future collect, more state-action pairs than Tesla.

My response: What evidence is there that this is true?


Objection #4: Tesla's efforts at solving perception will be made more difficult because it doesn't use lidar.

My response: Yes, in some cases, such as road user detection, this is true. However, it's not true when the perception task requires seeing features that are flat or light-emitting, such as lane lines, traffic lights, signs, and turn signals. In these cases, only cameras can be used. This is why Mobileye, for instance, is pursuing a camera-first approach to autonomy.


Objection #5: Talking about how Tesla might use imitation learning in the future is just speculation.

My response: Talking about the future of autonomous vehicles is inherently speculative. Investigative reporting from Amir Efrati says that Tesla is using, and plans to use, imitation learning. So the premise is not purely speculative.


Objection #6: Tesla doesn't have a full self-driving simulator, which is a necessary part of training.

My response: Job postings indicate Tesla began looking to hire people to work on a full self-driving simulator no later than November 2017.


Objection #7: Fully autonomous driving may require human perceptual or cognitive capabilities that are fundamentally just impossible for the current machine learning paradigm.

My response: True, but unless we can prove this, we should try anyway. (Also, this objection applies equally to Waymo, Mobileye, Cruise, Zoox, et al. as to Tesla.)​
 
You need to change that "will" into "may". So far we have mostly only seen promises.

Fair. I believe Karpathy is telling the truth, and if I'm wrong then I won't trust him in the future. It is easy to believe what he's saying because it doesn't seem to be a controversial idea in deep supervised learning that a ~10x more computationally intensive neural network would perform better.
 
I believe Karpathy is telling the truth

Crucially, Karpathy’s claim is not a prediction about something in the future. It’s a statement about the current performance of the new, as yet undeployed neural networks. I presume that these NNs were tested on Tesla’s test dataset, and showed better accuracy than the NNs currently running in HW2 cars. So, this is a statement about a quantitative, empirical matter of fact, not just a prediction or a goal. I believe Karpathy is telling the truth, and not lying about that.

When making a prediction, you can be honest and wrong. (Like Elon.) I presume that Karpathy knows the exact error rates of all Tesla’s NNs, so if he’s being honest then he must be right.
 
Let's be clear about what we're talking about here. Is Mobileye, or BMW, or any other company (besides Tesla) collecting state-action pairs from production cars? If so, I would like to know! If you're aware that they are, please provide a source. If you can't provide a source, then how do you know it's happening? Or do you even claim it's happening? This is a very specific factual question, not a general assessment of progress or capability.

As I have pointed out, Amnon has said it himself; it's also included in Mobileye's published papers and granted patents. Mobileye has a collaboration with BMW on driving policy, and we don't know the full breadth of data that is being collected. What we don't have is a @verygreen with access to a 2019 BMW to see everything that's being uploaded.

"One way to validate is to build a generative model of how human drive. Similar to GANs that create realistic pictures. You can create realistic trajectories of how humans drives by collecting a-lot of data. Using the HD Maps, create a computer game where you have agents driving on realistic roads and the trajectory of their driving paths are mimicking human drivers including reckless human drivers. Then you take our vehicle with our robotic driving policy and you drive in the simulator an infinite number of times (millions) and prove that we don't have accidents."​

HD map data is different from state-action pairs. HD maps only include fixed features of the environment (roads, signs, lights, lane lines), not road users (cars, bikes, pedestrians). HD map data also doesn't include driver input (steering, braking, accelerating).
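
To illustrate the distinction, here are two hypothetical records (invented field names, purely illustrative): a map element describes the static road, while a state-action pair describes the road users around the ego car and what the driver did:

```python
# A map element is static: it describes the road, not who is on it or what the driver did.
hd_map_element = {
    "type": "lane_boundary",
    "geometry": [(37.7749, -122.4194), (37.7751, -122.4189)],  # fixed world coordinates
    "marking": "dashed_white",
}

# A state-action pair is dynamic: road users around the ego car plus the driver's inputs.
state_action_pair = {
    "objects": [{"kind": "pedestrian", "x": 12.3, "y": -1.8, "vx": 0.0, "vy": 1.4}],
    "steering_angle": 0.04,
    "accel_pedal": 0.0,
    "brake_pedal": 0.35,
}
```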

It would be incredibly interesting to me if Mobileye is collecting data on road users and on driver input from production cars with EyeQ4. I've watched a good number of Mobileye talks and read a few of their papers, but I haven't come across that yet.

Amnon himself said "the data... is not only for building maps"

So we can deduce what IS being uploaded. Yes, we don't know if steering/pedals are being uploaded, but we do know that you don't need it: Mobileye already uses a technique to retrieve driven trajectories. In fact, based on their patents we know that every single trajectory of every car is uploaded. We also know that they upload the speed of the car, based on the requirements of the REM Map. We also know they upload pedestrians, as they have a feature that shows you pedestrian hot spots and behavior, and we know they upload detected cars, because the REM Map has features for car behavior, speeding, real-time parking, hot spots, open/taken spots, cyclist and bike behavior, etc.

Some of the features of the REM Map:
  • How much people speed in the city
  • Whether people slow down near pedestrians
  • Construction area behavior
  • Pedestrians on the highway
  • Jaywalking
  • Public transportation passenger load
  • Cyclist behavior
  • Bike behavior
So no, they don't only upload fixed features, but also road users and their behavior.



Conversely, Bladerskb previously asserted that there is no evidence of Tesla collecting any state-action pair data from the customer fleet, and verygreen (as I understand it) stated that, actually, Tesla has set up collection of the NN mid-level representation (state) and driver input (action). Again, this is a specific factual claim.

This statement makes it look like what I stated was untrue, but that's not accurate. The evidence of Tesla collecting the specific data you outlined DID NOT exist; if it did, you would have quoted it in your thesis. I even specifically mentioned green and wk so they could elaborate on whether this was taking place. WK said that steering/pedal data are recorded but never uploaded, and green said only about 2 minutes of steering/pedal data were included in the 0.01% of uploaded driving data. This is the first time that evidence has been presented, and it was due to me asking directly for it.

However, @verygreen didn't say anything concerning the NN output (mid-level representation)?

AlphaStar and autonomous driving are just an analogy. StarCraft and driving are different tasks. I think you were right the first time when you said we don't know how much data might be needed.

In my mind, the point of the analogy is just that AlphaStar shows imitation learning can handle complex, long-term, real time tasks in a 3D environment with elements of strategy, tactics, and multi-agent interaction. I don't think comparing hours of StarCraft play to hours of driving allows us to predict exactly how much data is needed for driving. It's just an analogy. For example, AFAIK, StarCraft doesn't have the long tail that driving does.

Well, we know that Nvidia's DAVE-2 was trained on 72 hours of driving (just 3,000 miles) and it's able to drive like a human in all weather conditions and in every state, yet it was only trained in California. So the assumption that billions of miles will be needed is not aligned with what we already know. Based on what we know, you only need tens of millions of miles of driving data to create a human-like driving agent, which you can then use to initialize a policy or use in adversarial self-play.

One thing worth pointing out is that Waymo's 10-15 million miles (whatever it is right now) are miles driven while in automation, which are completely useless for what we are discussing.



This is according to @verygreen
 
but HW3 will enable improvements in the perception NNs

You need to change that "will" into "may". So far we have mostly only seen promises.

If HW3 has more processing power than HW 2.x, then the only ways to not have a better perception NN are:
  • Tesla already has the best NN possible
  • Karpathy is totally incompetent (unable to improve the current NN at all, even with a larger/faster NN)

(Plus @strangecosmos 's observation that they are running HW3 in testing, so have data to back up the claim)
 
As I have pointed out, Amnon has said it himself; it's also included in Mobileye's published papers and granted patents. Mobileye has a collaboration with BMW on driving policy, and we don't know the full breadth of data that is being collected.

Wait, so is Mobileye and/or BMW collecting state-action pairs from production cars, or do we not know what data is being collected from production cars? None of the talks, papers, or patents I have seen so far specify what data is collected from production cars.

"A lot of data" doesn't necessarily mean data from production cars. 1 million miles from engineering cars is a lot of data.

In fact, based on their patents we know that every single trajectory of every car is uploaded.

Can you please quote specifically where it says the trajectory of every production car (not engineering car) is uploaded? I know you have cited a patent already, and I looked through it, but I have not seen any language saying specifically this.

The evidence of Tesla collecting the specific data you outlined DID NOT exist; if it did, you would have quoted it in your thesis.

I quoted Amir Efrati's report that Tesla uploads the data necessary for imitation learning. I wasn't just deducing it.

I checked with verygreen and they confirmed Amir's report is right. So, verygreen's confirmation isn't the first evidence, it's corroborating evidence.

You yourself have cited Amir Efrati's reporting as evidence, so I know you consider it valid evidence.

Well, we know that Nvidia's DAVE-2 was trained on 72 hours of driving (just 3,000 miles) and it's able to drive like a human in all weather conditions and in every state, yet it was only trained in California. So the assumption that billions of miles will be needed is not aligned with what we already know.

It's essentially just doing lane keeping, correct? What's most important for reinforcement learning in simulation is modelling how road users will react to your vehicle's behaviour, especially in complex urban scenarios.

One thing worth pointing out is that Waymo's 10-15 million miles (whatever it is right now) are miles driven while in automation, which are completely useless for what we are discussing.

That is a good point that has previously been overlooked on this forum. Manual driving is what's relevant for imitation learning, not automated driving.

This is according to @verygreen

Verygreen, is it true that Navigate on Autopilot doesn’t use HD maps?
 
Wait, so is Mobileye and/or BMW collecting state-action pairs from production cars, or do we not know what data is being collected from production cars? None of the talks, papers, or patents I have seen so far specify what data is collected from production cars.

"A lot of data" doesn't necessarily mean data from production cars. 1 million miles from engineering cars is a lot of data.

Yes, they record road users and static features. They use part of the data to create the REM Map, and then use insights from the data to provide information for smart cities, which includes road users, pedestrians, cars, bikes, and cyclists and their behaviors.

All of this is done automatically, both the REM Map creation and the smart-cities info.
This is then plugged into HERE Maps, which is owned by BMW and VW.

Here is one clip where Amnon talks about it.

22mins 0 secs

This isn't the first time Amnon has talked about it. The REM Map's purpose wasn't just to feed self-driving cars; it was to be used for much more than that.


Can you please quote specifically where it says the trajectory of every production car (not engineering car) is uploaded? I know you have cited a patent already, and I looked through it, but I have not seen any language saying specifically this.

This is how the crowdsourced drivable trajectory is created and updated in near real time...
There's literally no question about this.


It's essentially just doing lane keeping, correct? What's most important for reinforcement learning in simulation is modelling how road users will react to your vehicle's behaviour, especially in complex urban scenarios.

Well, it will stop for cars ahead, cut-ins, and merging cars, and avoid parked cars, cars encroaching on its lane, objects, etc., because that's what it was trained on. If it was trained on more things, like changing lanes, then it will also do that. My point is that getting something that drives like a human in all facets of the driving task doesn't require billions of miles of data. The goal is never to get it to go 500,000 miles without an accident. The goal is to get agents that drive and interact like humans.

 
Good of you to engage a wider range of people @strangecosmos. I think that is fruitful. One quick comment.

Objection #7: Fully autonomous driving may require human perceptual or cognitive capabilities that are fundamentally just impossible for the current machine learning paradigm.

My response: True, but unless we can prove this, we should try anyway. (Also, this objection applies equally to Waymo, Mobileye, Cruise, Zoox, et al. as to Tesla.)

It does not apply equally, because others are using more and/or a more diverse set of sensors to offset this issue. What humans compensate for with their brains, computers can compensate for with superhuman sensors. Tesla’s sensor suite in particular seems quite limited when it comes to seeking car-responsible autonomy.

Tesla can of course add sensors in future products, but what also matters to many people here is what will happen to Tesla’s 2016 promise of ”Level 5 capable hardware” and their AP2/2.5 cars... and of course this also matters from Tesla’s volume perspective, given that their current and growing fleet runs this sensor suite, not a future suite...
 
So recently, Elon tweeted that we would receive "confirmation-free" Navigate on Autopilot in a March 15 wide release. That day has passed and there hasn't been a peep about it. Who was saying that they were relying on his tweets to make purchasing decisions?

Elon also tweeted in November 2018 that the advanced Summon would come in around 6 weeks. That was six months ago.

Speaking of six months, on January 23rd, 2017 Elon tweeted about that, too. Definitely.
 
Verygreen, is is true that Navigate on Autopilot doesn’t use HD maps?

"HD" is in the eye of the beholder. the maps are used for NoA of course.

we would receive "confirmation-free" Navigate on Autopilot in a March 15 wide release. That day has passed and there hasn't been a peep about it

19.8.1 is it; it lists the ULC in the release notes (as you can see in my reddit post on this topic). But they disabled the functionality server-side, so the cars have the capability now; Tesla just needs to flip an "enable" bit and suddenly all cars with this release would be able to do it.
 
"HD" is in the eye of the beholder. the maps are used for NoA of course.

There's a big difference between HQ (high-precision) maps and HD (high-definition) maps, and it's not subjective. What makes an HD map HD is that it includes ALL semantic information.

The current Tesla HQ maps (below) would be useless for RL self-play, which is what the current discussion revolves around.

[Image: screenshot of Tesla's current map data]
 
19.8.1 is it; it lists the ULC in the release notes (as you can see in my reddit post on this topic). But they disabled the functionality server-side, so the cars have the capability now; Tesla just needs to flip an "enable" bit and suddenly all cars with this release would be able to do it.

Saw your post about it earlier; very interesting find. You have more insight on this than I do, but it seems like they could simply be laying groundwork in advance, while the decision to actually activate the update is independent and could happen three months from now. So his tweet might be technically correct, but the implication was that it would be available for use from the day of the wide release.
 
Well, we know that they use their own internal map for AP that's built from GPS data from the cars (the Tesla picture I linked above), but it's not an HD map, which is a big difference.

Like ”full self-driving”: yes, sure, if we redefine every word...

It is plain as day from NoA’s mistakes that it evaluates lanes visually and basically takes exits based on proximity rather than an exact understanding of the lane that exits or the lane it is on.

Even Musk let this slip when he said the complex NoA was not working out but the simple one was showing promise. (This was before they launched it.)

Autopilot has access to much more data than they use. The reason they don’t use much of the data that is there is because it is not reliable yet.
 
They actually have HD/HQ maps, though the use is inconsistent. They used to download them and then stopped (or my area is no longer covered by the newest releases? Or they just fully banned me? NoA stopped working on my car today; it worked perfectly two days ago).
 
They actually have HD/HQ maps, though the use is inconsistent. They used to download them and then stopped (or my area is no longer covered by the newest releases? Or they just fully banned me? NoA stopped working on my car today; it worked perfectly two days ago).

I am aware of Tesla collecting what they call precision mapping data; I am simply saying NoA does not use it to do its thing. That is why it is pretty much mechanical and stupid in its actions, like confusing non-exits for exits, etc. It is very simple.

Maybe the automated city driving will finally use it; I can’t see how they could implement that without it. Then again, Tesla talks of having a fully general solution, so maybe they think they can do it without any mapping.
 
"HD" is in the eye of the beholder. the maps are used for NoA of course.

They actually have HD/HQ maps

I’m assuming these are 3D maps (or 3D models) that include lane lines and traffic barriers/road dividers?

Here is one clip where Amnon talks about it.

Watched a few minutes starting at 22:00. Didn’t hear where Amnon says all trajectories driven by all production cars are uploaded. Do you have an exact timecode, down to the second?

This is how the crowdsourced drivable trajectory is created and updated in near real time...
There's literally no question about this.

I can’t find any language in the patent that says all trajectories of all production cars are uploaded. Can you please quote the exact part of the patent that says this?

goal is never to get it to go 500,000 miles without an accident. The goal is to get agents that drive and interact like humans.

There are three distinct goals a company could have:

1) Get to superhuman autonomous driving with imitation learning.

2) Use imitation learning to bootstrap reinforcement learning, and get to superhuman autonomous driving with reinforcement learning.

3) Do (1), then do (2) to make it even better.

For (1), I don’t think Nvidia’s vehicle would qualify. For (2), maybe, but I’m skeptical. For instance, how does it do on crowded streets in a city? How does it handle rare edge cases on the highway?

It does not apply equally, because others are using more and/or a more diverse set of sensors to offset this issue.

If fully autonomous driving requires something that is fundamentally impossible with today’s AI paradigm, then fully autonomous driving is impossible under today’s AI paradigm.

If something is possible just with a different set of sensors, then it’s not fundamentally impossible with today’s AI paradigm.
 