HW2.5 capabilities

Again, your point is not proven until a manufacturer actually releases and sells such a car. You post a lot about development, and that is appreciated, but the point is not made until the vehicle is actually sold to a consumer. Last-minute delays do happen (for example, Super Cruise was announced in 2012, was slated for release in 2016, but was delayed until this year and is still not out yet).

No difference? I have one of those cars on reservation right now. I can't buy a Waymo Pacifica or a Cruise Bolt. Those will be public transit (no different to me than a bus or a train, which I can't own either). There have already been plenty of vehicles loaded with expensive sensors operating as FSD test beds for years. Personally, I think it's significant whether the vehicle is affordable versus something out of reach for most people to own.

I appreciate that the competition talk is better on the other thread, so a quick comment on Tesla.

It is telling that the competition announced projects and features many, many years ago, and even they may be facing delays. Tesla announced AP2 in late 2016, has faced massive delays in even its Level 2 project since, and we're discussing them being ahead of the competition at Levels 4-5?

Other than Tesla making a bold claim and shipping a hardware platform (sufficiency of which is questionable), it really isn't a very believable assumption that they'd be first, is it? At the very least, the assumption relies heavily on faith alone, not on any kind of evidence.

I guess that's one reason why discussing development matters. It puts Tesla's publicly known progress in its likely context. Feel free to elaborate on whether and why you feel Tesla is ahead of the curve on Level 4-5.
 
I don't think the video is fake either, but I do not think it is of much value. Basing anything on that alone is ignorant, IMO.

His point was implied, not explicit: he believes the fact that the video is still there 12 months later is a clue that internally they are making enough progress to believe that cameras plus deep-learning software, trained by hundreds of thousands of vehicles in the real world, is still the solution. The team members have changed, but at least publicly the technological approach to solving the problem has not.

With Google/Waymo, Mobileye and e.g. Audi, we at least have a decade or so of their work and progress in the autonomous space in public view.

The wildcard question is how much a decade of formal research matters in this field - whether the new approach of distributed fleets feeding training data to a central computer will quickly overcome years of formal research. Maybe it won't but maybe it will. There is at least some circumstantial evidence that Daimler & GM may have come to such a conclusion.

Daimler: a very proud organization with a deep budget and many, many years of research on autonomous driving. Yet the last 24 months have seen Teslas trounce Benzes on the road. And now, for some reason, Mercedes announced an Nvidia partnership in January 2017 to produce a car together.

General Motors: also many years of internal research and tens of millions of dollars invested. Yet the exec team saw something in the 12-person start-up Cruise Automation so compelling that they laid out $1B in stock and cash to purchase it, a company launched only a year prior by an MIT drop-out.

Tesla's approach to solving this problem is brand new because it was not physically possible even 5 years ago. Now that it is, Tesla is the only company doing it with a fleet of this scale; it's an experiment in massive deep learning in real time that has never been attempted before. AP1's "fleet learning" may not have been what Tesla claimed (yet AP1 got better over time), but AP2 is uploading actual video.

HD lane and position mapping: Tesla claims to already be hard at work on it, gathering data, and they claim they have been for months to a year now. Mobileye? bladerskb says they have a massive mapping project already underway, but what I have been able to find says the Road Experience Management project will not start gathering data until 2018, despite the fact that Mobileye signed up partners last year. If the cameras are in customer cars, I did not know that and haven't seen an article saying so.

Incidentally, the only academic paper by Amnon Shashua on driving policy that I can find was published in October 2016, not "many years ago." This was after the highly publicized breakup with Tesla, at a time when Tesla was implying publicly that Mobileye was basically scared of Tesla's internal development efforts and tried to stop them (Mobileye, of course, said Tesla was lying). The paper was also published before Intel bought Mobileye for $15B, and a lot of people had been saying for some time that Mobileye's curated neural-network IP was rapidly becoming less valuable in this dawning era of teraflop monster mobile computers that could possibly "brute force" learning with less human curation and far less training time than in the past. It looks to me like a transparent effort to bolster his reputed expertise in driving policy, aka the core of the self-driving problem, at the EXACT moment he was "for sale."
https://arxiv.org/pdf/1610.03295.pdf - his October 2016 paper, "Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving."

Driving policy and Tesla uploads: It does not seem reasonable to believe that Tesla is limiting the use of its video uploads to mere object recognition; it seems far more likely they are working on driving-policy reinforcement learning, just as Amnon discusses in his paper. The difference between Tesla and Amnon is that while Amnon may have 15 billion dollars from Intel now, he cannot go back in time and build a fleet of 100,000 cars (200,000 by end of 2018, 400,000 by end of 2019) and get their video data to train his network. He doesn't have a fleet. He will soon, but as far as I know there is no other fleet of hundreds of thousands of cars running monster hardware (upgradable: 11 tflops today, 20 next year if needed, or more) that continually tests these networks and feeds back video data to train and refine them on a near-continuous basis.
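
To give a flavor of the reinforcement-learning framing in that paper, here is a bare-bones Q-learning loop for a toy one-dimensional "stay in your lane" problem. The states, actions, rewards, and hyperparameters are all invented for illustration and have nothing to do with any real driving-policy system:

```python
import random

# Toy Q-learning for a 1-D lane-keeping problem (illustrative only).
# States: lane offset in {-1, 0, 1}; actions: steer {-1, 0, +1}.
# Reward +1 when centered after the move, -1 otherwise.
random.seed(0)
states, actions = [-1, 0, 1], [-1, 0, 1]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Deterministic toy dynamics: steer, clamp to the road, reward center."""
    next_state = max(-1, min(1, state + action))
    reward = 1.0 if next_state == 0 else -1.0
    return next_state, reward

state = 1
for _ in range(2000):
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The learned policy steers back toward center from either side.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```

The point of the sketch is only the shape of the loop: a policy proposes actions, the environment scores them, and the value estimates improve from experience. Scaling that idea to real driving is exactly the open question being debated here.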

If mobileye has a fleet anywhere near the scale of Tesla's then correct me but I thought they did not yet - that the cameras to do the uploading are coming in 2018 to their newly signed partners.

With others we have actual evidence of their progress over a number of years. There are believable roadmaps, there is history, there is evidence of other things than just recognition etc.

You like history, tradition, stability, organization, written plans, civilized discussions with regulatory agencies lol - you are very European. I mean that in the best way possible.

If Tesla is already doing the same things, then we lack the data that they are. Hence it is more a question of faith at this stage.

Yes, and I have it lol
 
Other than Tesla making a bold claim and shipping a hardware platform (sufficiency of which is questionable), it really isn't a very believable assumption that they'd be first, is it? At the very least, the assumption relies heavily on faith alone, not on any kind of evidence...Feel free to elaborate on whether and why you feel Tesla is ahead of the curve on Level 4-5.

Making a claim about who is "ahead" on Level 4/5 would be foolish. Sufficiency of the hardware platform is not even slightly questionable if we restrict (as I always do) the FSD "claim" to mean that the car gets the data it needs to see the world. Redundancy, dirt-proofing lenses, etc.: that's not an interesting problem. That's fixed on the next hardware update cycle. So maybe my 2017 car won't get regulatory approval; who cares. This is the "dumb" problem. Let's talk about the intellectually "interesting" problem: does research history matter on the path to solving driving policy robustly (i.e., in difficult driving conditions of faded road markers, missing road signs, unpredictable drivers, etc.)? The people who claim to work in the field, from what I have seen, simply say "nobody knows." This is brand new. Nobody has ever had a fleet of vehicles this large (and now growing at hundreds of cars per day) feeding video imagery back into a deep-learning network. I think it's a landmark experiment in the history of computer science, and it will be looked back on as a watershed moment whether it fails or succeeds.

But if it turns out that what is needed to bring this to fruition, in as wide a variety of driving conditions as possible, is simply massive amounts of training data and a giant fleet with computing hardware powerful enough to run very deep neural networks, then Tesla is the only player that has the hardware in place at scale. They have 100K cars on the road now, all built to anticipate the possible need for even faster computation, and are adding more at 25K per quarter (and soon 50K per quarter). The massive training project has only been live for 120 days, and the question is whether this ocean of video data will make obsolete years of past research (we will see over the next year or so of updates). Good God, people, have a little patience.
 
It is telling that the competition announced projects and features many, many years ago, and even they may be facing delays. Tesla announced AP2 in late 2016, has faced massive delays in even its Level 2 project since, and we're discussing them being ahead of the competition at Levels 4-5?

Your comment reminds me of Norvig vs. Chomsky. Peter Norvig wrote a blog post in the last couple of years commenting on how grouchy and dismissive Chomsky was on a recent panel looking at computer progress in language translation and voice recognition.

Chomsky is/was critical of the deep-learning approach to language problems. Chomsky is a living legend to be respected, and his theories of language are (were) the foundation of the field. But decades of his research yielded zero progress toward actually finding a working model of his universal-grammar theory. Along come the very recent deep-learning upstarts, and they start making actual progress at doing something useful in the real world, which angers Chomsky, because of course a lifetime of his work has been thrown aside by a new method (one that does not stand on the shoulders of his work; it is not a continuation of progress). His response is to sputter on about how machine translation is not *really* understanding language.

I'll find the paper and summarize tomorrow - time for bed now.
 
If mobileye has a fleet anywhere near the scale of Tesla's then correct me but I thought they did not yet - that the cameras to do the uploading are coming in 2018 to their newly signed partners.

Unless they have changed their approach, ME works independently with each OEM (or system integrator). Which other EyeQ3 cars have benefited from their work with Tesla? None that I am aware of.

So I find it hard to believe that they will be able to build a huge multi-vendor fleet that actively sends data back from customers' cars in the near future. For starters, who will bear the cost of that data? It won't be the OEM.
 
Your comment reminds me of Norvig vs. Chomsky. Peter Norvig wrote a blog post in the last couple of years commenting on how grouchy and dismissive Chomsky was on a recent panel looking at computer progress in language translation and voice recognition.

Chomsky is/was critical of the deep-learning approach to language problems. Chomsky is a living legend to be respected, and his theories of language are (were) the foundation of the field. But decades of his research yielded zero progress toward actually finding a working model of his universal-grammar theory. Along come the very recent deep-learning upstarts, and they start making actual progress at doing something useful in the real world, which angers Chomsky, because of course a lifetime of his work has been thrown aside by a new method (one that does not stand on the shoulders of his work; it is not a continuation of progress). His response is to sputter on about how machine translation is not *really* understanding language.

I'll find the paper and summarize tomorrow - time for bed now.

You don't have to convince me on neural nets, nor on being anti-Chomsky, who has hurt enough people I know with his theories, yours truly included. ;)

Not every "Steve Jobs" actually turns out to be Steve Jobs. Not even Steve Jobs was always "Steve Jobs".

What you say is all well and good. At this stage, though, it takes faith, not evidence, to believe Tesla can get there faster than others who have been working on this for years.

For example, nobody (?) is actually driving on neural nets yet. EAP certainly is not. Solving driving policy with neural nets, then, would be a massive leap. Maybe they can do it.

Also, are you suggesting others are not using deep learning?
 
Redundancy, dirt-proofing lenses, etc.: that's not an interesting problem. That's fixed on the next hardware update cycle. So maybe my 2017 car won't get regulatory approval; who cares.

Well, I would say quite a few people care.

I agree that hardware is not as interesting a problem, given that great hardware solutions do exist and can be upgraded in future products. But since ALL that Tesla has, as an advantage over the competition, is dumb hardware, we cannot dismiss this angle.

If we dismiss the hardware that exists, Tesla has nothing publicly known against the competition.

Not to mention Tesla has some commitments based on current hardware...
 
So I find it hard to believe that they will be able to build a huge multi-vendor fleet which actively sends data back from customers cars in the near future. For starters, who will bear the cost of that data? It wont be the OEM.

You may be in for a nasty surprise if you think running shadow mode over 4G on the existing fleet will be sufficient for deep learning. Certainly they can verify some things on the fleet, I am not disputing that, but this is a very hard problem toward which others are known to have much more methodical, established processes. Is Tesla's potential remote-learning fleet sufficient to overcome a likely lead of others?

It takes quite a bit of faith to believe Tesla can lead here. I don't think it is impossible, I just think it hinges on faith.
 
You may be in for a nasty surprise if you think running shadow mode over 4G on the existing fleet will be sufficient for deep learning. Certainly they can verify some things on the fleet, I am not disputing that, but this is a very hard problem toward which others are known to have much more methodical, established processes. Is Tesla's potential remote-learning fleet sufficient to overcome a likely lead of others?

It takes quite a bit of faith to believe Tesla can lead here. I don't think it is impossible, I just think it hinges on faith.

I have to disagree with this. I'm not an NN engineer, but I am in software, and the approach makes a ton of sense to me, if the rumors and tea leaves can be believed. Tesla is not doing ANY learning in the cars, no learning at all; they are doing testing in the car: the driver did X, I would have done Y; do they agree? The learning is being done in a massive datacenter at the mothership, where they crunch all the data, try to solve a particular edge case, and then rinse and repeat for the next edge case. Once they have that edge case working, they'll push that fix into a release branch, etc.

The Tesla fleet is like having 200k test beds for a neural network, giving feedback into the network about the quality of the data and its understanding of the world.

The problem is the software isn't up to par yet, and that's where Karpathy comes in. I suspect he's redoing all the deficient models and re-engineering them with his vision approach. This is why we haven't seen anything new and interesting for a while. I suspect in a few more months we'll know whether the Karpathy model is working or not.
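
A rough sketch of that shadow-mode loop as described above. This is speculation about the mechanism, not Tesla's actual code; the function names, threshold, and record format are all invented:

```python
from typing import Optional

# Toy "shadow mode": the in-car network proposes an action, the human
# driver's actual action is observed, and only disagreements are logged
# for later training at the datacenter. No learning happens in the car.

def shadow_mode_step(model_action: float, driver_action: float,
                     threshold: float = 0.2) -> Optional[dict]:
    """Compare the model's proposed steering command with the driver's.

    Returns a log record when they disagree by more than `threshold`,
    otherwise None (nothing worth uploading).
    """
    disagreement = abs(model_action - driver_action)
    if disagreement > threshold:
        return {"model": model_action,
                "driver": driver_action,
                "delta": disagreement}
    return None

# Records like these would be uploaded and become training targets back
# at the "mothership"; close agreement is simply discarded.
logs = [r for r in (
    shadow_mode_step(0.10, 0.12),   # close agreement -> not logged
    shadow_mode_step(0.50, -0.30),  # large disagreement -> logged
) if r is not None]
```

The design point is that the expensive part (gradient updates over huge datasets) stays centralized, while the fleet only runs cheap inference and filtering.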
 
 
The Tesla fleet is like having 200k test beds for a neural network, giving feedback into the network about the quality of the data and its understanding of the world.

Sure. However, what they are testing at the moment is object recognition. Others had this solved already, a long time ago...

The test fleet as well as the aggressive software upgrade ability on an existing hardware platform are certainly some kind of advantages. I doubt anyone is disputing that.

However, at the same time, not much is known publicly about Tesla's overall progress, just that they are having to repeat what Mobileye already did years ago... we'd have to think they can solve all that AND then somehow leapfrog what Waymo/Google, Mobileye & co. are doing now.

If we want to believe Tesla is in a leadership position on autonomous, that is.

Tall order. Takes some faith to believe that.
 
Sure. However, what they are testing at the moment is object recognition. Others had this solved already, a long time ago...

The test fleet as well as the aggressive software upgrade ability on an existing hardware platform are certainly some kind of advantages. I doubt anyone is disputing that.

However, at the same time, not much is known publicly about Tesla's overall progress, just that they are having to repeat what Mobileye already did years ago... we'd have to think they can solve all that AND then somehow leapfrog what Waymo/Google, Mobileye & co. are doing now.

If we want to believe Tesla is in a leadership position on autonomous, that is.

Tall order. Takes some faith to believe that.

I don't believe they lead at all. Frankly, I think Google has this cracked and is proving it now. They are just not in the car business, nor do they want the exposure. But Google is the leader, IMHO. Until I see more, I'm not giving Tesla second place either.

That being said, Mobileye took a completely different approach to learning. They do object recognition, yes, but they did it with a curated training set: basically having a person label pictures as a stop sign, a car, etc., and then using that labeled data to learn from. That is very human-resource-intensive, since all the data must be labeled and marked by hand. Tesla is not doing that at all, as far as we know.
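
For contrast, the hand-labeled approach attributed to Mobileye here can be illustrated with a minimal supervised classifier. This toy nearest-centroid model, with invented feature vectors and labels, just shows that every training example needs a human-assigned label up front:

```python
# Toy supervised learning from hand-labeled data: the labor-intensive
# part is that each (features, label) pair requires human annotation.

def train_centroids(examples):
    """Average the feature vectors for each human-assigned label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is nearest (squared distance)."""
    return min(centroids, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(centroids[label], features)))

# Hand-labeled data: (feature vector, label) pairs, invented for the demo.
labeled = [
    ([1.0, 0.9], "stop_sign"),
    ([0.9, 1.0], "stop_sign"),
    ([0.1, 0.2], "car"),
    ([0.2, 0.1], "car"),
]
centroids = train_centroids(labeled)
prediction = classify(centroids, [0.95, 0.95])  # near the stop-sign cluster
```

Whatever model is actually used, the bottleneck in this paradigm is the labeling step, not the training loop, which is the contrast being drawn with fleet-collected data.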
 
The driver did X, I would have done Y; do they agree? The learning is being done in a massive datacenter at the mothership, where they crunch all the data, try to solve a particular edge case, and then rinse and repeat for the next edge case. Once they have that edge case working, they'll push that fix into a release branch, etc.

^ This.
 
"Tesla screwed up!"
"It's just a software issue!"
"SCAM!"
"Tesla never promised any timeline!"
"They're asking for a lawsuit!'
"There's no regulatory approval so it doesn't even matter!"

This discussion has become a never ending story and has been rolling for so long and with basically the same arguments thrown back and forth that my mindset has gone from "passionate" to "tired" to now "i really actually don't care anymore".

What a quagmire

Sad thing is, several of those are now considered falsehoods, or truths that were overlooked.

Introduction of HW3
SCAM...well that sums that one up
Many timelines (give or take a year, maybe) have come and gone
They are most likely asking for a lawsuit given their current course of no-course
There IS regulatory approval IN MY STATE so IT DOES MATTER

This is one post I hate to have to agree with but I do....
 
I don't believe they lead at all. Frankly, I think Google has this cracked and is proving it now. They are just not in the car business, nor do they want the exposure. But Google is the leader, IMHO. Until I see more, I'm not giving Tesla second place either.

That being said, Mobileye took a completely different approach to learning. They do object recognition, yes, but they did it with a curated training set: basically having a person label pictures as a stop sign, a car, etc., and then using that labeled data to learn from. That is very human-resource-intensive, since all the data must be labeled and marked by hand. Tesla is not doing that at all, as far as we know.

Sure, but the point is, MobilEye did all that already.

Even if Tesla has a more advanced way of reaching the same point now, they are still behind Mobileye on such a basic thing.

Can they catch up there AND solve the driving-policy question better than Mobileye... and better than Google, the definitive deep-learning pioneer in the world...

I get it that you personally don't think they are leading, but a lot of people on TMC do. And that's putting A LOT of faith in Tesla's fleet on the road making the difference...
 
Sure, but the point is, Mobileye did all that already.

Even if Tesla has a more advanced way of reaching the same point now, they are still behind Mobileye on such a basic thing.

Can they catch up there AND solve the driving-policy question better than Mobileye... and better than Google, the definitive deep-learning pioneer in the world...

I get it that you personally don't think they are leading, but a lot of people on TMC do. And that's putting A LOT of faith in Tesla's fleet on the road making the difference...

Agreed. I think the one thing everyone agrees on for deep learning / NNs is that real-world data is key, and the fact that Tesla can gather more real-world data than anyone is, I think, why people are having their love fest.
 
Agreed. I think the one thing everyone agrees on for deep learning / NNs is that real-world data is key, and the fact that Tesla can gather more real-world data than anyone is, I think, why people are having their love fest.

Well, that argument makes some sense. The question, though, is whether that is the kind of data that will make the difference. They probably can't use that data for deep learning as such.

I mean, it probably would, if all other things were equal. But at the same time none of the other things are equal, when comparing Tesla to some of the main rivals/pioneers in this area.

I mean, if the fleet is so great, why is EAP so crap? AP1 obviously benefited from MobilEye massively...

Time will tell. I understand the fleet as of now is still underutilized, and in any case, whatever Tesla has in the labs, we are not seeing it yet. The competition, meanwhile, has a lot of publicly visible evidence...
 
I'm pretty sure "real-world data" isn't going to solve fully autonomous driving with respect to corner cases and driving policy (which, let's face it, is FSD's main challenge now).

Simulation probably will. I don't think people fathom the sheer quantity of possible scenarios an AV could encounter and will have to deal with when put in traffic. A hundred thousand, or five hundred thousand, Teslas with AP hardware won't cut it. Yes, they will perform admirably on highways and in a bunch of operational domains, but camera snapshots, radar, and IMU logs will not be enough for building fully driverless-car software.

They're going to have to automate this experience through enormous amounts of simulation.
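
The scale problem is easy to quantify: scenario parameters multiply combinatorially, which is the usual argument for simulated variation. A toy generator with five invented parameters already yields hundreds of distinct scenarios:

```python
import itertools

# Toy driving-scenario generator. Each parameter varies independently,
# so the scenario count is the product of the option counts.
# All parameter names and values are invented for illustration.
scenario_space = {
    "weather": ["clear", "rain", "snow", "fog"],
    "lighting": ["day", "dusk", "night"],
    "road_marking": ["fresh", "faded", "missing"],
    "lead_vehicle": ["none", "braking", "cutting_in", "stopped"],
    "pedestrian": ["none", "crossing", "jaywalking"],
}

keys = list(scenario_space)
scenarios = [dict(zip(keys, combo))
             for combo in itertools.product(*scenario_space.values())]

total = len(scenarios)  # 4 * 3 * 3 * 4 * 3 = 432 scenarios from 5 parameters
```

Add a few more dimensions (speeds, occlusions, sensor faults) and the space dwarfs what any physical fleet could cover by chance encounters, which is the poster's argument for simulation.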
 