Welcome to Tesla Motors Club

Elon: "Feature complete for full self driving this year"

The bolded part (I bolded it) is not wrong. Tesla is positioned very well to collect this data. I am not so sure about the actual usefulness of it, but the collection could be done just fine.

Tesla is not positioned to collect the volume of such data from consumer cars as suggested by @strangecosmos thesis — especially so because they are not in a position to collect reliable abstracted data such as a compact but accurate 3D model of the world.
 
Whereas you can see a pet or toddler below the hoodline? What about between the wheels on the passenger side, under the car? :rolleyes:

You start from the notion that human-level ability would be sufficient for car-responsible self-driving. Maybe it will be, maybe it won't, but Tesla has a vision blind spot in this regard that others have more robust sensors for.

The thing is two-fold, of course. Regulators and acceptable corporate liability levels might require superhuman perception at the front-bumper level. But there is also simply the fact that computer "brains" are not as advanced as humans'. Whereas a human can mentally take into consideration that curb or whatever without seeing it right now, this task can be harder for a computer.
 
Tesla is not positioned to collect the volume of such data from consumer cars as suggested by @strangecosmos thesis — especially so because they are not in a position to collect reliable abstracted data such as a compact but accurate 3D model of the world.
Well, I must disagree here.

Tesla has today:
1. hundreds of thousands of customer cars
2. software that they solely supply for those cars
3. the ability to tap into the cars' internet connectivity, AND they have indoctrinated users into letting the cars use their home internet (so it's free for Tesla)
4. software that can collect all sorts of ongoing in-car things, from raw car and user input streams to interpreted results and debug data
5. enough storage to buffer sizeable amounts of the stuff in #4
6. server infrastructure for the cars to push collected data to
7. car-side software to receive instructions on what data to collect and when.
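Points 4-7 describe, in effect, a trigger-driven collection pipeline. A minimal sketch of how such a car-side collector could work, with a fixed storage budget and server-pushed collection campaigns (all class and field names here are hypothetical, not Tesla's actual software):

```python
from collections import deque

class SnapshotCollector:
    """Hypothetical sketch of a trigger-driven in-car data collector:
    buffer recent records in a fixed storage budget and, when on wifi,
    upload only the records matching a server-issued campaign."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.buffer = deque()   # ring buffer of (size, payload)
        self.campaigns = []     # predicates pushed by the server

    def set_campaigns(self, predicates):
        # e.g. "collect clips where autopilot was disengaged"
        self.campaigns = predicates

    def record(self, payload, size_bytes):
        # evict the oldest data when the storage budget is exceeded
        while self.used + size_bytes > self.capacity and self.buffer:
            old_size, _ = self.buffer.popleft()
            self.used -= old_size
        self.buffer.append((size_bytes, payload))
        self.used += size_bytes

    def harvest(self):
        # on wifi: return only records matching an active campaign
        return [p for _, p in self.buffer
                if any(pred(p) for pred in self.campaigns)]
```

The point of the sketch is just that nothing exotic is required: a bounded buffer plus remotely configurable triggers covers the capabilities listed.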

Tesla does not have:
1. reliable perception

So they could still collect all sorts of data at a moment's notice, and they are working on their perception. We can reliably say that Tesla is uniquely positioned to collect all sorts of data from their cars, and if they had unlimited resources, even imperfect perception would not impede them, as they would just substitute labor for it. Of course they don't have infinite resources, so they don't do it. But in theory they could!
 
@verygreen Yes, we disagree.

I do not consider Tesla realistically positioned to collect such a level of state-action data from consumer cars that they could gain a strategic advantage from training their NNs on billions of miles from those.

Your theoretical talk is unfortunately slipping into that same dark area where I positioned the @strangecosmos thesis earlier in this thread. I do not quite know how to respond to that. :) Unlimited resources? Sheesh...
 
You start from the notion that human-level ability would be sufficient for car-responsible self-driving. Maybe it will be, maybe it won't, but Tesla has a vision blind spot in this regard that others have more robust sensors for.

The thing is two-fold, of course. Regulators and acceptable corporate liability levels might require superhuman perception at the front-bumper level. But there is also simply the fact that computer "brains" are not as advanced as humans'. Whereas a human can mentally take into consideration that curb or whatever without seeing it right now, this task can be harder for a computer.

Human level sufficient? Maybe not (people do run over curbs), but as a benchmark of an acceptable safety level, it seems a useful reference.

(One could always back up first to clear the front baffle.)
 
@verygreen Yes, we disagree.

I do not consider Tesla realistically positioned to collect such a level of state-action data from consumer cars that they could gain a strategic advantage from training their NNs on billions of miles from those.

Your theoretical talk is unfortunately slipping into that same dark area where I positioned the @strangecosmos thesis earlier in this thread. I do not quite know how to respond to that. :)
there's an important difference here between me and @strangecosmos

I am just saying that the data could be collected to an almost arbitrarily fine level of detail. This could be done today if Tesla wanted.

What I am not claiming (and don't know) is how useful that data would be for any subsequent training.
 
there's an important difference here between me and @strangecosmos

I am just saying that the data could be collected to an almost arbitrarily fine level of detail. This could be done today if Tesla wanted.

What I am not claiming (and don't know) is how useful that data would be for any subsequent training.

But there is also an important difference here between you and me.

I am not claiming such data can't be collected, because obviously it can (I don't see why @strangecosmos was excited by the trivial notion that driving logs are available; did someone really assume they would not be?).

I am claiming it cannot be collected at the billions-of-miles level (in a way that would be sufficient to train NNs on) — thus denying Tesla this particular strategic advantage the @strangecosmos thesis hinges upon.

If Tesla is able to solve reliable vision at some point, they could collect more, but solving that seems a long way off and continues to be hampered by a more limited sensor selection than other autonomous players have, so even this route is not there anytime soon.

No, there is no magical stream of billions of miles of state-action pairs flowing from the consumer Tesla fleet to TPU pods rented by Tesla. It cannot be done at that level of volume.

And at a lower level of volume, Tesla has no advantage over established players with their test fleets.

Tesla trains their actual NNs on engineering car data just like everyone else.
 
If Tesla is able to solve reliable vision at some point
But they are just on the verge of it, didn't you get the memo? /s

They probably could get some data of limited usefulness with their current perception, but they don't do mass-scale collection of this yet.

9 gigs of usable internal storage means they can collect... about 25 hours of this data, perhaps somewhat more. So in theory, for a car that is on wifi every day, they could collect every single mile.
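As a quick sanity check on those numbers, 9 GB lasting about 25 hours works out to a logging rate of roughly 100 KB/s:

```python
# Back-of-envelope check of the figures above: 9 GB of usable storage
# filling in ~25 hours implies a logging rate of roughly 100 KB/s.
storage_bytes = 9e9          # "9 gigs of usable internal storage"
hours = 25
rate_bps = storage_bytes / (hours * 3600)
print(f"{rate_bps / 1e3:.0f} KB/s")  # → 100 KB/s
```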
 
But there is also an important difference here between you and me.

I am not claiming such data can't be collected, because obviously it can (I don't see why @strangecosmos was excited by the trivial notion that driving logs are available; did someone really assume they would not be?).

I am claiming it cannot be collected at the billions-of-miles level (in a way that would be sufficient to train NNs on) — thus denying Tesla this particular strategic advantage the @strangecosmos thesis hinges upon.

If Tesla is able to solve reliable vision at some point, they could collect more, but solving that seems a long way off and continues to be hampered by a more limited sensor selection than other autonomous players have, so even this route is not there anytime soon.

No, there is no magical stream of billions of miles of state-action pairs flowing from the consumer Tesla fleet to TPU pods rented by Tesla. It cannot be done at that level of volume.

And at a lower level of volume, Tesla has no advantage over established players with their test fleets.

Tesla trains their actual NNs on engineering car data just like everyone else.

But they can test NN10.0 (whatever number) on every car with HW3. If it disagrees (when inactive) or gets disabled, it could log that, so Tesla gets the failure rate of the NN over a lot of miles in a hurry. It doesn't help with training, but it does with verification. (Well... it could help with training if they used different NNs on different cars to see which was better in the real world.)
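The shadow-mode idea above can be sketched in a few lines: run the candidate NN passively, compare its proposed action against what the driver actually did, and report back only the disagreement rate plus sampled clips. A hypothetical illustration (the function name and threshold are made up, not Tesla's actual mechanism):

```python
def shadow_mode_log(candidate_nn, driver_actions, states, threshold=0.5):
    """Hypothetical sketch of shadow-mode validation: run a candidate
    NN passively alongside the human driver and log the states where
    its proposed action disagrees with what the driver actually did."""
    disagreements = []
    for state, human_action in zip(states, driver_actions):
        proposed = candidate_nn(state)
        if abs(proposed - human_action) > threshold:
            disagreements.append((state, human_action, proposed))
    # the car would upload only the disagreement rate + sampled clips
    rate = len(disagreements) / len(states)
    return rate, disagreements
```

This is exactly the verification-not-training distinction: the output is a failure rate over fleet miles, not a training set.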
 
9 gigs of usable internal storage means they can collect... about 25 hours of this data, perhaps somewhat more. So in theory, for a car that is on wifi every day, they could collect every single mile.

But of course we know they won’t. That is just not realistic and the usefulness would be questionable too.

Look, the whole thing that made AlphaGo and AlphaStar (the @strangecosmos thesis) such a hero was the continuous feedback loop of the system playing itself time and again. For anything similar to happen — even in just one direction — on a global automotive scale is quite a different task. Let alone working both ways where the NNs in-car keep improving all the time based on this feedback loop... And I’m ignoring the whole lack of simple rules in driving compared to games...

In reality Tesla will train their systems like everyone else: engineering car data and simulators. This diminishes any advantage Tesla may have to a deployment advantage and potentially a validation advantage. These are not insignificant advantages but they are not such strategic game-changers as the @strangecosmos thesis suggests. Unfortunately Tesla also faces a great many disadvantages.
 
But of course we know they won’t. That is just not realistic and the usefulness would be questionable too.

Look, the whole thing that made AlphaGo and AlphaStar (the @strangecosmos thesis) such a hero was the continuous feedback loop of the system playing itself time and again. For anything similar to happen — even in just one direction — on a global automotive scale is quite a different task. Let alone working both ways where the NNs in-car keep improving all the time based on this feedback loop... And I’m ignoring the whole lack of simple rules in driving compared to games...

In reality Tesla will train their systems like everyone else: engineering car data and simulators. This diminishes any advantage Tesla may have to a deployment advantage and potentially a validation advantage. These are not insignificant advantages. Unfortunately Tesla also faces a great many disadvantages.

What about multiple instances in a virtual environment (GTA or Twisted Metal :))?
 
Look, the whole thing that made AlphaGo and AlphaStar (the @strangecosmos thesis) such a hero was the continuous feedback loop of the system playing itself time and again
yes, I agree this part does not seem realistic.

And I’m ignoring the whole lack of simple rules in driving compared to games...
Huh? There are quite simple rules: get from point A to point B, avoid hitting anything and being hit, and also try not to get caught when you break rules from that other book called "rules of the road". ;)
 
But they can test NN10.0 (whatever number) on every car with HW3. If it disagrees (when inactive) or gets disabled, it could log that, so Tesla gets the failure rate of the NN over a lot of miles in a hurry. It doesn't help with training, but it does with verification. (Well... it could help with training if they used different NNs on different cars to see which was better in the real world.)

Yes, the Tesla way certainly can have some advantages, like validation as you say, also deployment in general (quick updates are possible), and mapping too in the future, once reliable perception is there. But these are very different from the thesis that Tesla bypasses all the other players on the strength of their consumer fleet training their NNs.
 
Yes, the Tesla way certainly can have some advantages, like validation as you say, also deployment in general (quick updates are possible), and mapping too in the future, once reliable perception is there. But these are very different from the thesis that Tesla bypasses all the other players on the strength of their consumer fleet training their NNs.

Yeah, training (in the classical sense) I agree is likely not happening on the road (at fleet scale). Validation, though, I think will be: once HW3 + NN10.0 rolls out, it will be pretty darn good, with better hooks for trouble reporting, and with approaching a million cars on the road in 2020, that is a poop ton of miles of testing. Even at 500k cars with FSD, a billion miles only takes 2 months to get (at 12k miles/year). That is a huge advantage toward regulatory approval and confidence (once you have a good enough NN, that is). And if the NN has issues, the fleet can report back the problems and Tesla can roll out a new NN that needs less than 2 months to get to a billion miles, since there would be even more cars out there...
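The fleet-miles arithmetic above checks out: at 500k cars averaging 12k miles per car per year, the fleet accumulates a billion miles in about two months:

```python
# Sanity check of the fleet-miles figures in the post above.
cars = 500_000
miles_per_car_per_year = 12_000
fleet_miles_per_year = cars * miles_per_car_per_year  # 6 billion/year
months_to_billion = 1e9 / fleet_miles_per_year * 12
print(f"{months_to_billion:.0f} months")  # → 2 months
```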
 
Whereas you can see a pet or toddler below the hoodline? What about between the wheels on the passenger side, under the car? :rolleyes:

But also simply the fact that computer ”brains” are not as advanced as human’s. Whereas a human can mentally take into consideration that curb or such without seeing it right now, this task can be harder for a computer.

@electronblue has it exactly right. Human brains are far more capable than DL algorithms -- especially with the comparatively still-pitiful amount of inference power in HW3, but even with the most powerful inference hardware available today. The reason is that humans have much better contextual awareness of the world around the vehicle and how it operates, fundamentally, than any DL system has ever been able to achieve so far. It is very, very hard to train contextual awareness into a DL model. I'm not saying it's impossible, but (a) many things the human brain does are currently impossible with known ML techniques, regardless of how many TPUs you throw at it, and (b) the amount of power in those TPU chips is still pretty paltry. They can get some damned fine bounding boxes around cars, pedestrians, cyclists, and signs with those TPUs, but knowing where the curb is when you can't see it is another thing entirely. And understanding the motivations and future actions of human beings around you when they are in many ways unconstrained in their behaviors is still another leap.

Also, a human can get out and look to see if their toddler is in front of the car, if they think the toddler might be there. And knowing whether the toddler (or dog, or curb, or whatever) might be there is an example of contextual awareness. Despite this, tragically, many parents run over their own children and pets sometimes. What do you suppose would happen to Tesla if Autopilot did this after they proclaimed it ready for unsupervised operation? (They will never proclaim it ready for unsupervised operation, and this is just one of many reasons.)

Before @strangecosmos points out that AlphaStar demonstrated contextual awareness in StarCraft -- and it certainly did, as it clearly understood the game world beyond whatever was currently shown on-screen -- contextual awareness of a game world is fundamentally different than the real world. A game world is by definition limited in what can possibly be represented in the world to what can be represented by the (hand-written, "Software 1.0") game itself, running on finite (and rather limited) computing hardware. This makes it a rather tractable problem for an AI which is given substantially more resources than the hardware actually running the game world. The real world is not so constrained in its representations, and it has vastly more power available in its, er, simulation engine.
 
That thesis fell apart from the start because it is based almost entirely on conjecture, cherry-picking, and wishful thinking.

I mean this is just a general assertion. Not something that can be specifically addressed with evidence or reasoning. ¯\_(ツ)_/¯

I like it better when people make specific claims I can respond to.

But Occam's Razor says that you should really look at the reliable information we have -- i.e., past and present performance of the vehicles themselves

If we were to take this to its logical conclusion, wouldn't we conclude that fully autonomous vehicles are impossible, since they don't exist today?

Talking about the future of technology requires conjecture about things that exist today because future technology doesn't exist today. But that doesn't mean it's just free-wheeling science fiction. We can look at an existing technique like imitation learning, for instance, and wonder what might happen if it were applied in a new domain or at a new scale.
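As a concrete illustration of the technique under discussion, imitation learning in its simplest form (behavior cloning) is just supervised regression on logged state-action pairs. A toy sketch with entirely made-up "driving" data, purely to show the mechanics:

```python
import numpy as np

# Behavior cloning in miniature: fit a policy to recorded
# state -> action pairs by least squares. The data is synthetic;
# the states and expert policy here are illustrative only.
rng = np.random.default_rng(0)

# pretend logged data: state = (lane offset, heading error),
# action = steering command the human driver actually applied
states = rng.normal(size=(1000, 2))
true_policy = np.array([-0.8, -1.5])      # hidden "expert" behavior
actions = states @ true_policy + rng.normal(scale=0.01, size=1000)

# behavior cloning = supervised regression on state-action pairs
learned, *_ = np.linalg.lstsq(states, actions, rcond=None)
print(learned)  # ≈ [-0.8, -1.5]
```

Real systems replace the linear model with a deep network and the toy states with perception output, but the data requirement is the same shape: paired observations and expert actions.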

From just a couple of posts up you sound very sceptical of every other autonomous manufacturer — ones with far better known merits than Tesla in this sphere — and then you do this with crumbs of theoretically positive Tesla news (for your thesis, that is)... go completely overboard in my books with the excitement and importance...

Let's be clear about what we're talking about here. Is Mobileye, or BMW, or any other company (besides Tesla) collecting state-action pairs from production cars? If so, I would like to know! If you're aware that they are, please provide a source. If you can't provide a source, then how do you know it's happening? Or do you even claim it's happening? This is a very specific factual question, not a general assessment of progress or capability.

Conversely, Bladerskb previously asserted that there is no evidence of Tesla collecting any state-action pair data from the customer fleet, and verygreen (as I understand it) stated that, actually, Tesla has set up collection of the NN mid-level representation (state) and driver input (action). Again, this is a specific factual claim.
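For concreteness, a state-action pair in this context would couple the perception NN's mid-level scene representation with the driver's simultaneous control input. A purely illustrative record layout (field names are hypothetical, not Tesla's actual schema):

```python
from dataclasses import dataclass

# Illustrative sketch of a "state-action pair" record: the NN's
# mid-level scene representation (state) paired with the driver's
# control input at the same instant (action). All fields hypothetical.
@dataclass
class StateActionPair:
    timestamp: float
    state: dict    # e.g. detected lanes, objects, drivable space
    action: dict   # e.g. steering angle, accelerator, brake

sap = StateActionPair(
    timestamp=1_560_000_000.0,
    state={"lane_offset_m": 0.2, "lead_car_dist_m": 35.0},
    action={"steering_deg": -1.5, "accel_pct": 12.0},
)
```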

AlphaStar was trained on 500,000 games

I may be wrong, but AFAIK, the dataset is much larger. We will know for sure when DeepMind releases its paper. But for now, there's this source:

"Blizzard will make about 500,000 more games available each month."
Secondly we don't know the amount of end to end imitation learning data needed for driving. We do know that AlphaStar wasn't trained on billions of miles of equivalent continuous driving data. ... Case in point, you don't need 'billions of miles'.

AlphaStar and autonomous driving are just an analogy. StarCraft and driving are different tasks. I think you were right the first time when you said we don't know how much data might be needed.

In my mind, the point of the analogy is just that AlphaStar shows imitation learning can handle complex, long-term, real time tasks in a 3D environment with elements of strategy, tactics, and multi-agent interaction. I don't think comparing hours of StarCraft play to hours of driving allows us to predict exactly how much data is needed for driving. It's just an analogy. For example, AFAIK, StarCraft doesn't have the long tail that driving does.

So no, Tesla siphoning ~0.1% of data isn't an AlphaStar approach and is nothing like AlphaStar at all.

As rnortman put it, the main point is that Tesla has:

a resource they can tap for training FSD when they're ready to do that

I feel like we've already been over this. Post-HW3 imitation learning is going to be much more interesting than pre-HW3 imitation learning. Not necessarily immediately post-HW3, but HW3 will enable improvements in the perception NNs, which we agree is required for effective imitation learning.

If Tesla is able to solve reliable vision at some point they could collect more but solving that seems a long way off and continues to be hampered by a more limited sensor selection than other autonomous players have so even this route is not there anytime soon.

To quote Mobileye:

"While other sensors such as radar and LiDAR may provide redundancy for object detection – the camera is the only real-time sensor for driving path geometry and other static scene semantics (such as traffic signs, on-road markings, etc.)."
The sensor modality Tesla does not have — lidar — doesn't help with things like signs, lane lines, stop lines, cross walks, the colour of traffic lights, brake lights, and turn signals.

Look, the whole thing that made AlphaGo and AlphaStar (the @strangecosmos thesis) such a hero was the continuous feedback loop of the system playing itself time and again.

You're referring to reinforcement learning via self-play. But I have already addressed this point:
  1. Per DeepMind's estimate, AlphaStar attained roughly median human performance on StarCraft using imitation learning alone.
  2. Researchers such as two at Waymo have pointed to imitation learning as a way to create "smart agents" that could enable reinforcement learning in simulation for autonomous driving.
I'm not saying that I can predict the future and that applying the same techniques to driving as to StarCraft will certainly work. I'm just saying that it's an intriguing idea — it seems promising enough to try, and I hope Tesla does indeed try it and that we can see what happens.

Particularly if the alternative is the 15-year-old approach of hand-coding cars to drive, we need to explore new frontiers in machine learning for robotics if we're going to overcome the hurdles to fully autonomous driving.
 