Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Buyer Beware: AP2 Extremely Dangerous

That's right @calisnow, Nvidia is all about "end-to-end", but MobilEye doesn't believe in that. It's a true two-schools-of-thought situation. Check out this MobilEye YT vid from 3:00:
The end-to-end architecture talks about the situation in which we as developers don't need to put any domain expertise - we don't need to put any smartness - into understanding how to enable autonomous cars: Everything will be done automatically through data.

Semantic abstraction is really the hard work of breaking a problem into its components and working on each component carefully in order to build a complete system.

So let's have a look at these two schools of thought...
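To make the two schools concrete, here's a toy sketch (pure illustration, not anything Tesla, Nvidia, or MobilEye actually ships; every function name here is invented). The end-to-end camp maps raw pixels straight to a control output with one learned function; the semantic-abstraction camp hand-decomposes the problem into perception and planning modules:

```python
# Toy contrast of the two architectures discussed in the video.
# All names and logic are hypothetical stand-ins, not real AP code.
import numpy as np

def end_to_end_policy(frame, weights):
    """One monolithic mapping: raw pixels -> steering angle.
    All the 'smartness' lives in the learned weights."""
    features = frame.flatten()
    return float(np.tanh(features @ weights))  # steering in (-1, 1)

# --- Semantic abstraction: hand-designed decomposition ---
def detect_lane_offset(frame):
    """Stand-in for a lane-detection module: estimate lateral offset
    from the brightness centroid of the frame (pure illustration)."""
    cols = frame.mean(axis=0)
    centroid = (cols * np.arange(len(cols))).sum() / (cols.sum() + 1e-9)
    return (centroid - (len(cols) - 1) / 2) / len(cols)

def plan_steering(lane_offset, gain=2.0):
    """Separate planning module: steer back toward lane center."""
    return float(np.clip(-gain * lane_offset, -1.0, 1.0))

def modular_policy(frame):
    return plan_steering(detect_lane_offset(frame))

rng = np.random.default_rng(0)
frame = rng.random((4, 8))          # tiny fake camera frame
weights = rng.standard_normal(32)   # fake "learned" weights
print(end_to_end_policy(frame, weights), modular_policy(frame))
```

The point of the decomposed version is that each piece can be built, debugged, and validated on its own, which is exactly the "hard work" Shashua's quote describes.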

 
Very interesting @lunitiks. On an end-to-end system, how would the system be able to break out lane lines, speed limit signs, etc.? The car wouldn't need to vend that info in order to control the car - it's just baked into the DNN. But if Tesla wants to display it on the IC, for instance, it needs to get it from somewhere. And the videos (for whatever they're worth) of Tesla Vision seem to show some object recognition. It seems like Tesla might be pursuing some hybrid approach? Like letting the BIG DNN do the driving and using another layer to override it if it "sees" something in the way. I'm so ashamed of myself for even trying to geek on this! o_O

Or maybe Tesla just pays a bunch of people to sit in car simulators and they pilot our cars remotely like drone operators and the whole AP thing is a fake! :eek:
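Joking aside, the hybrid idea floated above is easy to sketch: let a big end-to-end network propose the control, and let a separate perception layer (the same one that could feed the IC display) veto it. Everything below is invented for illustration:

```python
# Hypothetical "hybrid" arbitration sketch -- not any real AP design.
# A stand-in end-to-end net proposes a command; a stand-in object
# detector (which could also feed the instrument cluster) can veto it.

def dnn_propose(frame):
    # stand-in for the end-to-end network's raw output
    return {"throttle": 0.4, "steer": 0.1, "brake": 0.0}

def detect_obstacle(frame):
    # stand-in for a perception module that reports discrete objects
    # (lane lines, signs, cars) instead of hiding them inside weights
    return frame.get("object_ahead", False)

def drive(frame):
    cmd = dnn_propose(frame)
    if detect_obstacle(frame):   # override layer wins when it "sees" something
        cmd = {"throttle": 0.0, "steer": cmd["steer"], "brake": 1.0}
    return cmd

print(drive({"object_ahead": False}))  # DNN command passes through
print(drive({"object_ahead": True}))   # override layer commands a stop
```

The appeal of this split is that the override path stays inspectable even if the driving net is a black box.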
 
I'm so ashamed of myself for even trying to geek on this! o_O
 
  • Funny
Reactions: _jal_
That's right @calisnow, Nvidia is all about "end-to-end", but MobilEye doesn't believe in that. It's a true two-schools-of-thought situation. Check out this MobilEye YT vid from 3:00:

@lunitiks - good video - I've watched a number of Shashua's talks and they are always quite informative and illuminating. He is clearly brilliant (as are Musk and Nvidia's CEO) - so the question for me as a layman is twofold:

1 - Which school of thought will get the job done with the limits of computation we have available today?
2 - How much of Shashua's position is his real belief as a scientist on what works best vs. him wearing his CEO hat defending his company's technology now that competition is on the scene?
 
For what it's worth, Mobileye and Nvidia claim to be taking very different approaches to learning. Mobileye did/does a lot of curated, human-annotated image training. Nvidia's approach is to brute force unsupervised learning with massive computational power running more complex neural networks. I guess time will tell if Nvidia's approach can equal and/or surpass Mobileye's.
No, this is not 100% accurate. Nvidia's CEO has mentioned several times that networks can be trained using their DIGITS software for image recognition (signs etc.), which relies on labeled data. The unsupervised-learning aspect is the driving part, and MobilEye does the same for that part. To be clear, these are multiple networks working together.
 
No, this is not 100% accurate. Nvidia's CEO has mentioned several times that networks can be trained using their DIGITS software for image recognition (signs etc.), which relies on labeled data. The unsupervised-learning aspect is the driving part, and MobilEye does the same for that part. To be clear, these are multiple networks working together.
Yes, this makes more sense. Why should they not use an existing sign/text recognition system if that is available, esp. given the whole idea of the network is that they can be linked to take in all kinds of systems (for example, the radar software probably does processing before it reaches the Nvidia chip; it's unlikely that it's taking in all the raw data).
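For the labeled-data half of that picture, the DIGITS-style workflow boils down to ordinary supervised training: feed the network feature/label pairs and fit a classifier. Here's a minimal toy version using plain logistic regression on synthetic data (illustrative numpy only, not any vendor's API):

```python
# Toy supervised "sign classifier" training loop -- illustrates the
# labeled-data workflow described above with synthetic data, nothing more.
import numpy as np

rng = np.random.default_rng(1)

# labeled dataset: feature vectors with labels (0 = no sign, 1 = sign)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic ground-truth labels

w = np.zeros(5)
for _ in range(500):                         # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))             # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)        # logistic-loss gradient step

def classify_sign(x):
    return int((1 / (1 + np.exp(-x @ w))) > 0.5)

acc = np.mean([classify_sign(x) == t for x, t in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

The driving part would be a different network trained a different way; the composed system wires several such networks together, which is the "multiple networks working together" point.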
 
  • Like
  • Funny
Reactions: JeffK and Deans
I don't see anything in that article which gives any insight into either what portions of Autopilot functions were being performed by Mobileye or how difficult it is to replicate - and certainly nothing to indicate that Tesla knew it would be very hard and deliberately lied to customers as you're suggesting.

The biggest problem with that comparison is that things that took 18 years to develop in yesterday's world can be done in a short amount of time in today's world. It's not just that the computers you do the training on are a LOT faster, but you have a TON more data. 2016 was an absolutely incredible year in the world of deep learning.

I think the delay has less to do with implementation of features, and more about the need to prove those features out before enabling them. That's where I feel like Tesla wasn't completely honest about it.

The other aspect is that some of these features weren't all that great with MobilEye, despite how much we liked them. Was AEB really all that great? Just reducing the speed by 25 mph? Was TACC really that great when it couldn't see a stopped car halfway into the road?

Some of the early reports of how TACC works on an AP2 car have me thinking that they're trying to do more with it than what AP1 did. I have 25,000 miles on an AP1 car and I can count the unintended decelerations by TACC with all the digits I have (fingers, toes, and that other one). So it's not too bad, and is definitely not doing anything too fancy.

For whatever reason I'm not concerned about the EAB timeline lie. It's all there, and they just fudged the delivery estimates by 3-6 months. Might upset others, but it's kinda like speeding. If everyone does it, it's normalized. It's part of the tech culture.

The FSD will come back to bite them. I know we live in the post truth era, but the magnitude of that lie is only exceeded by one man.
 
Are we sure all these AP2 reports and horror stories are even accurate?

Big Oil once again may interfere with Tesla's lead in autonomous vehicles and promote bogus reports.

Tesla clearly said their competitors only have demoware - therefore Tesla is years ahead. Electrek reported this, they are always accurate (they also have an accurate TSLA stock ticker on their page because some of their contributors own TSLA shares, very useful):

“In their zeal to play catch-up, traditional automakers have created a get-rich-quick environment. Small teams of programmers with little more than demoware have been bought for as much as a billion dollars.”

Tesla says GM and Uber bought ‘little more than demoware’ w/ $1 billion acquisitions of Cruise and Otto

What if these legacy competitors badmouth Tesla because Tesla so clearly is in the lead?

There may even be moles within Tesla or the DMV to make the numbers look worse, as reported here.

CA DMV Report Sheds New Light On Misleading Tesla Autonomous Drive Video - DailyKanban

Rain? No problem.

Since Elon Musk has already figured out how to deal with the climate on Mars,* he will also figure out how to stop nasty raindrops interfering with AP2 on planet Earth (more sunshine of course also equals more revenue for subsidiary SCTY - another win-win!).

Make AP2 Great Again!

Go Elon!

_______
* Nuke it: Elon Musk elaborates on his proposal to nuke Mars
 
  • Funny
Reactions: mmd and MP3Mike
If this article is true, I'm a little concerned.
Meh, all they were showing is what it's capable of with enough training. They never claimed it was 100% done.

They didn't promise a LA to NY demo until the end of 2017 at the earliest meaning they are going to spend the year collecting data from AP2 cars for training.

Also note the author is citing Paulo Santos, a slimy looking character from SeekingAlpha
 
  • Like
Reactions: MP3Mike
Meh, all they were showing is what it's capable of with enough training. They never claimed it was 100% done.

They didn't promise a LA to NY demo until the end of 2017 at the earliest meaning they are going to spend the year collecting data from AP2 cars for training.

Also note the author is citing Paulo Santos, a slimy looking character from SeekingAlpha
I like your positive spin on it. SeekingAlpha is pretty ridiculous most of the time. I gave up reading their articles about six months ago.
 
So far we have one data point for AP2. I am very interested to see what the next update brings and see if / how things have gotten better. This thing is going to get a little more difficult to soft-soap if after two weeks we just get better auto headlight improvement or something. I had friends in my car yesterday and there was a lot of WTF about the dancing lane lines.
 
  • Like
Reactions: croman and disagree
If this article is true, I'm a little concerned.
I believe the article to be accurate. The only way for Tesla Vision to replace (surpass?) Mobileye is full throttle (a massive fleet of vehicles with new hardware testing real-world algorithms). It's the quickest and most efficient way to catch up. Unfortunately, that approach involves some fanciful marketing techniques and very unrealistic sales promises.
 
The CA figures only account for public road usage. Who cares if they only took the liability risk to demonstrate their vehicles on public roads to shoot some marketing film? Undoubtedly Tesla is using private land to really run their "fleet" of Level 4 or 5 vehicles. I am actually more bullish on the FSD timeline than I am on AP1 parity (unfortunately). I think Tesla Vision will grow quickly, but not quickly enough to deliver EAP on a reasonable timeframe. FSD will continue, and it seems Tesla has a great start on it. Our data will continue this process in a much faster fashion. I've logged nearly 2k miles so far, but we haven't gotten an update since 17.3.15 to determine progress.