HW2.5 capabilities

Also, are you suggesting others are not using deep learning?

Certainly not. I'm only pointing out that if reinforcement learning turns out to be the solution for driving policy, Tesla's fleet approach has given them the ability to feed orders of magnitude more image data per unit time to their neural nets than anyone else can, in order to do the actual reinforcement learning. It may or may not be the case that a fleet as large as Tesla's is needed to refine the networks and find the corner cases. But it is hard to imagine any researcher in the field voluntarily choosing not to train their networks on a fleet of hundreds of thousands of cars if they had a choice.

Everyone else has been forced in the past (and still is in the present) to operate with orders of magnitude less training data. It's simply a structural problem - there has been no other automaker willing to iterate the hardware in their actual customer cars at the breakneck pace Tesla has. And until a year ago the hardware did not even exist - Nvidia didn't have these monster teraflop GPUs for sale to put in customer cars 24 months ago.

Look - if it turns out that this huge training data set is not necessary - then Tesla's push in 2016 to build out this AP2 network will have been for naught. That is possible. But if a training set of that scale is in fact needed - then Tesla is the only company that has built the fleet to provide it.

Maybe simulation will turn out to be the answer - in which case Tesla's fleet advantage goes away. I guess we will see.
 
@AnxietyRanger - my speculation on the Mobileye break-up, combined with the paper Amnon published in Oct 2016 - is that Amnon Shashua completely agrees with Tesla's approach. I speculate that Shashua realized Tesla would soon not need him and if he let his chips be used while Tesla was simultaneously using Nvidia boards in the same cars to do reinforcement learning on driving policy - that Tesla would kick him to the curb as soon as their own neural network matured. At that point he/Mobileye would have much less leverage and value in the marketplace because Tesla would have shown that you don't need Mobileye's expertise to solve this problem - what you need is a metric crap ton of training data.

If you buy into my speculation, then you can view Mobileye's ship-jumping as an active effort to slow Tesla down before it was too late and Tesla walked away with the self driving prize (and perhaps destroyed Mobileye's value completely as a side effect). The one lever Mobileye had to try to stop Tesla was to take away their image recognition - forcing Tesla to build it from scratch and buying time for Mobileye to find other partners that would give Mobileye much better long-term business terms. And of course Intel bought them, Tesla was slowed down, and we will see what happens next.

But one thing still has not happened: nobody else has used Tesla's delay to build out a huge customer network of cars uploading video data to central servers every night and then push out refined neural nets every few weeks in a gigantic continuous optimization reinforcement deep learning cycle. That effort is only four months old and we will see if it bears fruit.

My take is that Tesla used a small internal corporate test fleet for 8 months to get image and driving lane recognition sufficiently functional that they could then switch to using mass fleet video uploads to provide training data for driving policy to the mothership. Whether they are making rapid progress on driving policy we have no idea, because they are doing it all in shadow mode and obviously haven't turned on any features - we will see.
 
Maybe simulation will turn out to be the answer - in which case Tesla's fleet advantage goes away.
That's probably not true either, because Tesla has an enormous advantage either way. If trillions of simulation miles are required (remember - in a simulator you can speed things up a bit; time is kind of a variable parameter), Tesla is far ahead of the competition when it comes to validating their simulation results in the field. It's more likely that one of their cars will eventually encounter a simulated situation than anyone else's car, so they have this advantage, no?
 
However, at the same time nothing much is known publicly about Tesla's progress overall, just that they are having to repeat what MobilEye already did years ago... we'd have to think they can solve all that AND then somehow jumpstart ahead of what Waymo/Google, MobilEye & co. are now doing.

Waymo/Google is a different discussion. But what is Mobileye doing right now on a massive scale besides opening enough bank accounts to store 15 billion dollars? They've announced numerous partnerships that will be mass-scale crowd source efforts but AFAIK they do not have these networks operational in any customer fleets. Do they?
 
But one thing still has not happened: nobody else has used Tesla's delay to build out a huge customer network of cars uploading video data to central servers every night and then push out refined neural nets every few weeks in a gigantic continuous optimization reinforcement deep learning cycle. That effort is only four months old and we will see if it bears fruit.

Not even Tesla.

Their NN has stayed the same much of the time.

As for MobilEye not wanting Tesla to copy the performance of their chip, certainly I buy that.
 
@AnxietyRanger - my speculation on the Mobileye break-up, combined with the paper Amnon published in Oct 2016 - is that Amnon Shashua completely agrees with Tesla's approach. I speculate that Shashua realized Tesla would soon not need him and if he let his chips be used while Tesla was simultaneously using Nvidia boards in the same cars to do reinforcement learning on driving policy - that Tesla would kick him to the curb as soon as their own neural network matured. At that point he/Mobileye would have much less leverage and value in the marketplace because Tesla would have shown that you don't need Mobileye's expertise to solve this problem - what you need is a metric crap ton of training data.

If you buy into my speculation, then you can view Mobileye's ship-jumping as an active effort to slow Tesla down before it was too late and Tesla walked away with the self driving prize (and perhaps destroyed Mobileye's value completely as a side effect). The one lever Mobileye had to try to stop Tesla was to take away their image recognition - forcing Tesla to build it from scratch and buying time for Mobileye to find other partners that would give Mobileye much better long-term business terms. And of course Intel bought them, Tesla was slowed down, and we will see what happens next.

But one thing still has not happened: nobody else has used Tesla's delay to build out a huge customer network of cars uploading video data to central servers every night and then push out refined neural nets every few weeks in a gigantic continuous optimization reinforcement deep learning cycle. That effort is only four months old and we will see if it bears fruit.

My take is that Tesla used a small internal corporate test fleet for 8 months to get image and driving lane recognition sufficiently functional that they could then switch to using mass fleet video uploads to provide training data for driving policy to the mothership. Whether they are making rapid progress on driving policy we have no idea, because they are doing it all in shadow mode and obviously haven't turned on any features - we will see.
I like this train of thought. Pure, wild speculation of course, but I do agree it's likely.
 
Waymo/Google is a different discussion. But what is Mobileye doing right now on a massive scale besides opening enough bank accounts to store 15 billion dollars? They've announced numerous partnerships that will be mass-scale crowd source efforts but AFAIK they do not have these networks operational in any customer fleets. Do they?

Waymo/Google is totally a part of my discussion anyway. My main point is that others have been at this for a very long time now and have a public track record. That cannot be ignored when assessing who is leading the self-driving progress.

As for crowd-sourcing, I certainly concede Tesla is ahead of MobilEye on that. I am just not yet quite convinced there is evidence of it being enough for them to bypass MobilEye, given how far behind they are...

So it comes down to faith. It can happen, but we don't see it yet in anything that is public.
 
I guess what I'm saying is: I don't believe doing something on a massive scale is necessarily the winning recipe. I agree Tesla is doing something on a massive scale, but whether that is the type of activity that will translate into them taking the lead in an industry where publicly available progress puts them behind - that, IMO, is at this stage a leap of faith.
 
Well, that as an argument makes some sense. The question is, though, is that the kind of data that will make the difference? They probably can't use that data for deep learning as such.

You don't think the multi-gigabyte video uploads are useful for reinforcement deep learning? Why not - you think the resolution is too low? The images too compressed? We all agree they aren't uploading high res video on the fly over 4G networks. The uploading seems to be happening at night when the cars are at rest and connected to the wifi networks of their owners.

I mean, if the fleet is so great, why is EAP so crap? AP1 obviously benefited from MobilEye massively...

That's where we differ - I think EAP is the 8th wonder of the world because of the speed at which it grew from zero. Warts and all - it is a stunning testament to the current state of the art of deep learning hardware and software. What Mobileye achieved with a more labor intensive approach was also amazing - for its time. If Mobileye was starting today from zero it seems highly unlikely they would use the same labor intensive approach. (Intel, btw, is on an AI company acquisition spree as you know - and claims they are working on solutions to reduce training times by additional orders of magnitude within the next 36 months.)

Anyway yes - time will tell.
 
Waymo/Google is totally a part of my discussion anyway.

That's fine, but you can't mix them up - so let's talk about Waymo now. Waymo/Google I see as a more serious challenger because they do seem to be the current leader in the field in terms of cars that can drive themselves right now. I think their IP and expertise are much more valuable than Mobileye's.
 
You don't think the multi-gigabyte video uploads are useful for reinforcement deep learning? Why not - you think the resolution is too low? The images too compressed? We all agree they aren't uploading high res video on the fly over 4G networks. The uploading seems to be happening at night when the cars are at rest and connected to the wifi networks of their owners.

I guess time will tell, but I think it is a reasonable argument that testing via mobile video is more likely than actual learning via mobile video. Maybe they will, on some level, do both.

But I am of the opinion that the massive learning will probably not come from remote sources - and that of course is an area where the established players are probably better equipped than Tesla and have been doing it for a much longer time...

I'm not discounting the fleet, mind you. Maybe they will make much of it. But it is a leap of faith at this stage, one that I think some people are taking a bit carelessly.
 
I guess what I'm saying is: I don't believe doing something on a massive scale is necessarily the winning recipe. I agree Tesla is doing something on a massive scale, but whether that is the type of activity that will translate into them taking the lead in an industry where publicly available progress puts them behind - that, IMO, is at this stage a leap of faith.
fair enough, agreed
 
Here's one thought.

Do you guys think Tesla will let a NN drive anytime soon?

Will car makers using MobilEye?

Does Google? Anyone know?

Because to me it would seem that to really make use of deep learning, i.e. the fast track to happiness, one would have to let the NN drive.

EAP does not let the NN drive.

I guess this comes down to the question of labor intensive vs. deep learning. @calisnow is not wrong that the image recognition domain has moved on to deep learning, as has machine translation for example.

Will the first Level 4-5 cars have the NN drive? Or are the consequences of the NN getting it wrong so catastrophic that they'd still go the algorithmic route?
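
To make that contrast concrete, here's a toy lane-keeping sketch (everything in it - the gains, the "learned" weights - is made up for illustration, not anyone's actual controller). The point is just that the hand-written control law is something you can read and audit line by line, while the learned policy's behaviour lives entirely in weights you can only validate empirically:

```python
def rule_based_steering(lane_offset_m, heading_err_rad):
    """The 'algorithmic route': a hand-written, auditable control law."""
    return -0.4 * lane_offset_m - 1.2 * heading_err_rad

# Pretend these came out of fleet training; a real policy would be a deep
# network over raw camera frames, not two numbers.
LEARNED_WEIGHTS = (-0.37, -1.31)

def learned_steering(lane_offset_m, heading_err_rad):
    """'Letting the NN drive': behaviour is whatever the weights encode."""
    w_offset, w_heading = LEARNED_WEIGHTS
    return w_offset * lane_offset_m + w_heading * heading_err_rad

for offset, heading in [(0.3, 0.02), (-0.8, -0.05)]:
    print(f"offset={offset:+.1f} m, heading={heading:+.2f} rad -> "
          f"rule: {rule_based_steering(offset, heading):+.3f}, "
          f"learned: {learned_steering(offset, heading):+.3f}")
```

Either way the catastrophic-failure question stands - the difference is whether you can reason about the failure mode on paper or only measure it in testing.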
 
Here's one thought.

Do you guys think Tesla will let a NN drive anytime soon?

Will car makers using MobilEye?

Does Google? Anyone know?

Because to me it would seem that to really make use of deep learning, i.e. the fast track to happiness, one would have to let the NN drive.

EAP does not let the NN drive.

I thought that, separately from the image recognition training project, the NN is in fact driving in shadow mode and comparing its actions to the driver's to teach it. Even if Tesla hasn't admitted it - isn't it most likely the case that they're doing it? That's what reinforcement learning for driving policy is - a neural network making the decisions on how to interact with traffic. Right? Am I mistaken on this?
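
Something like this, in toy form - entirely my speculation; the function names, log format and threshold below are invented, it's just to show what "comparing its actions to the driver's" could mean mechanically:

```python
def propose_action(frame):
    """Stand-in for the NN driving policy: returns a steering command."""
    # A real policy would run a deep net on camera/radar input, not this.
    return 0.1 * frame["lane_offset"]

DISAGREE_THRESHOLD = 0.1  # arbitrary "the NN would have done something different" cutoff

def shadow_compare(trip_log):
    """Flag timesteps where the shadow NN and the human driver diverge."""
    flagged = []
    for t, frame in enumerate(trip_log):
        nn_action = propose_action(frame)
        if abs(nn_action - frame["human_steering"]) > DISAGREE_THRESHOLD:
            flagged.append((t, nn_action, frame["human_steering"]))
    return flagged

trip_log = [
    {"lane_offset": 0.5, "human_steering": 0.04},
    {"lane_offset": -1.2, "human_steering": -0.15},
    {"lane_offset": 0.1, "human_steering": 0.30},  # human swerved; the NN would not have
]

for t, nn, human in shadow_compare(trip_log):
    print(f"t={t}: NN proposed {nn:+.2f}, human did {human:+.2f} -> worth uploading")
```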

And in any case - to answer your question the answer would be that when the neural net shows sufficient reliability so as not to kill the humans would be when Tesla starts making it available to us in a very limited and "KEEP YOUR DAMN HANDS ON THE WHEEL CUSTOMERS SO YOU DON'T ALL DIE IN THE FIRST 5 MINUTES" kind of way. :D
 
I thought that, separately from the image recognition training project, the NN is in fact driving in shadow mode and comparing its actions to the driver's to teach it. Even if Tesla hasn't admitted it - isn't it most likely the case that they're doing it? That's what reinforcement learning for driving policy is - a neural network making the decisions on how to interact with traffic. Right? Am I mistaken on this?

And in any case - to answer your question the answer would be that when the neural net shows sufficient reliability so as not to kill the humans would be when Tesla starts making it available to us in a very limited and "KEEP YOUR DAMN HANDS ON THE WHEEL CUSTOMERS SO YOU DON'T ALL DIE IN THE FIRST 5 MINUTES" kind of way. :D

Sounds not impossible. I am not sure if the neural net can do actual learning without access to all the sensory inputs, which would be hard on a remote basis, but certainly as a verification process it sounds plausible.

Now, as to your second paragraph, this is an area where I think Tesla does hold another distinct advantage - schedule-wise. They have shown willingness to ship beta stuff where the driver is responsible (and of course their hardware platform approach and OTA software upgrade capability uniquely enable this), whereas the approach of other companies is much more conservative.

@verygreen any sign of that happening?

I think we are converging on some consensus on what the advantages of Tesla are. But will it be enough to overcome a lead by others?
 
There is also the question of trusting the NN on what it sees.

Others are using sensor fusion, i.e. Lidar and radar, as a sort of safety blanket or sanity check on what the vision is seeing. Tesla has no such safety blanket beyond a very narrow frontal radar.
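
For what it's worth, the "safety blanket" idea in code form can be as simple as this - my own toy illustration, not any manufacturer's actual logic - where a vision detection only gets trusted if a second sensor sees something in roughly the same place:

```python
def confirmed_by_other_sensor(vision_obj, lidar_points, max_range_err_m=2.0, max_bearing_err_deg=3.0):
    """Trust a vision detection only if some lidar return is roughly co-located."""
    for range_m, bearing_deg in lidar_points:
        if (abs(range_m - vision_obj["range_m"]) <= max_range_err_m
                and abs(bearing_deg - vision_obj["bearing_deg"]) <= max_bearing_err_deg):
            return True
    return False

vision_detections = [
    {"label": "car", "range_m": 35.0, "bearing_deg": 2.0},
    {"label": "car", "range_m": 80.0, "bearing_deg": -10.0},  # possible vision ghost
]
lidar_returns = [(34.2, 1.5), (120.0, 30.0)]  # (range in m, bearing in degrees)

for det in vision_detections:
    ok = confirmed_by_other_sensor(det, lidar_returns)
    print(det["label"], f"at {det['range_m']} m:", "confirmed" if ok else "vision only - treat with caution")
```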

Tesla of course still has time to change this for future suites. But they've been quite adamant about not needing Lidar. I wonder how much of a "sell what you have now" show that is...
 
Sounds not impossible. I am not sure if the neural net can do actual learning without access to all the sensory inputs, which would be hard on a remote basis, but certainly as a verification process it sounds plausible.

Okay well again maybe I'm wrong on this - but I thought that what is happening wrt training is as follows:

Human drives his Tesla to work in the morning.
Shadow NN driver is also "driving."
Some particular distance of this journey is completely recorded on the car's local storage.
Overnight: all sensory input from radar, sonar and the video cameras for that journey are uploaded to Tesla's servers.
Training: The shadow "driver" is rewarded/punished for its recorded actions based on automated feedback or perhaps some amount of hand curated yes/no feedback. Back propagation occurs to adjust the weighting of the variables in the neural network.
Push: The adjusted network variables are pushed out to the cars in the next build.

Rinse repeat. Of course, note that in this case we would see zero visible progress on our end because Tesla is not letting the NN drive "live" whatsoever - but shadow driving is close to being as good as live driving, because the training can compare the recorded actions of the real driver and the neural network. The difference from AP1 is that AP1 never uploaded the full sensory input to Tesla - either in real time or when the car was at rest. Whatever AP1's "fleet learning" was, we were never told, and frankly it must have been of far more limited value.

No?
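
To put the cycle I'm describing in concrete terms, here's a toy numpy sketch - pure speculation on my part; the "network" is a single linear layer over four fake features and the "training" is plain regression toward the human's steering, which is obviously nothing like the real pipeline, it's just to show the rinse-repeat shape of it:

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_fleet_batch(n=256):
    """Pretend overnight upload: sensor features plus the human's steering."""
    features = rng.normal(size=(n, 4))                            # stand-in for camera/radar features
    human_steering = features @ np.array([0.5, -0.2, 0.0, 0.1])   # hidden "correct" behaviour
    return features, human_steering

def train_one_night(weights, lr=0.05):
    """One 'night' of training: nudge the policy toward the human's behaviour."""
    x, y = collect_fleet_batch()
    pred = x @ weights                    # what the shadow driver would have done
    grad = x.T @ (pred - y) / len(y)      # gradient of the mean squared disagreement
    return weights - lr * grad            # the "back propagation" step, in miniature

weights = np.zeros(4)                     # initial policy pushed to the fleet
for night in range(50):                   # rinse, repeat
    weights = train_one_night(weights)

print("policy weights after 50 'nights':", np.round(weights, 3))
```

The updated weights would then presumably be bundled into the next firmware build and pushed over the air - which is why any progress would show up in discrete software versions rather than continuously.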
 
Okay well again maybe I'm wrong on this - but I thought that what is happening wrt training is as follows:

Human drives his Tesla to work in the morning.
Shadow NN driver is also "driving."
Some particular distance of this journey is completely recorded on the car's local storage.
Overnight: all sensory input from radar, sonar and the video cameras for that journey are uploaded to Tesla's servers.
Training: The shadow "driver" is rewarded/punished for its recorded actions based on automated feedback or perhaps some amount of hand curated yes/no feedback. Back propagation occurs to adjust the weighting of the variables in the neural network.
Push: The adjusted network variables are pushed out to the cars in the next build.

Rinse repeat.

No?

Not impossible. Other better experts can comment, but it is my understanding that most learning probably would still happen in a non-remote manner due to the amount of data required.

I find the fleet more likely to be helpful in verification than in actual deep learning, but I'm not saying the latter is impossible. I think Tesla has been sufficiently vague about this (as with everything AP2) that it is hard to say what is real and what is smoke and mirrors.

I know @Bladerskb has an opinion. ;)
 

No, that's not what is happening. Tesla gets a snapshot of your travels based on the logging protocol they download to your car whenever the mothership says. It could be extreme curves, or intersections, or user disengagements, etc. - the mothership can ask for logs based on anything.

However, it's not the entire trip - only the parts that match the log request's criteria, and then it's only, I believe, a 10-second snapshot including video, telemetry and radar data.
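
In code terms, that kind of mothership-driven snapshotting might look roughly like this toy sketch - the trigger names, frame rate and upload call are illustrative guesses on my part; only the ~10 s window and the video/telemetry/radar mix come from what I described above:

```python
from collections import deque

SNAPSHOT_SECONDS = 10      # the ~10 s window mentioned above
FRAMES_PER_SECOND = 30     # illustrative guess

# "Campaign" the mothership could push down: which events to capture.
active_triggers = {"hard_steering", "autopilot_disengaged"}

ring_buffer = deque(maxlen=SNAPSHOT_SECONDS * FRAMES_PER_SECOND)

def detect_events(frame):
    """Very crude stand-ins for 'extreme curves, disengagements, ...'."""
    events = set()
    if abs(frame.get("steering_rate", 0.0)) > 1.0:
        events.add("hard_steering")
    if frame.get("ap_disengaged"):
        events.add("autopilot_disengaged")
    return events

def queue_for_upload(trigger, snapshot):
    print(f"trigger '{trigger}': queued {len(snapshot)} frames (video + telemetry + radar) for the next upload")

def on_frame(frame):
    """Called per sensor frame; only the last ~10 s is ever kept in memory."""
    ring_buffer.append(frame)
    for trigger in detect_events(frame):
        if trigger in active_triggers:
            queue_for_upload(trigger, list(ring_buffer))

# Tiny demo: eleven boring frames, then a disengagement fires the snapshot.
for _ in range(11):
    on_frame({"steering_rate": 0.1})
on_frame({"steering_rate": 0.2, "ap_disengaged": True})
```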