
Unedited Mobileye’s Autonomous Vehicle & other CES 2020 Self Driving ride videos

To me this statement was the most interesting one.

Shashua expects the hardware for these initial self-driving taxis to cost $10,000 to $15,000 per vehicle. By 2025, Shashua is aiming to "reduce the cost of a self-driving system below $5,000."

At $10k-15k, I don't think FSD will be mainstream on consumer cars. It will start as a luxury car option. But I do think that when it drops to $5000, FSD will go mainstream on consumer cars.
 
  • Funny
Reactions: Daniel in SD
I'm not a fan of their math: there are 6 different vision algorithms for analyzing objects in camera frames, and we're going to pretend they're all statistically independent of each other, which adds up to an MTBF of 10^4 hours. Then you just tack on LIDAR, pretend it isn't going to fail in the exact same places that the cameras do, and you're at 10^7.
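To make the objection concrete, here's the independence arithmetic as a minimal Python sketch; the standalone lidar+radar MTBF below is my own assumed number, chosen so the product lands on the quoted 10^7:

```python
# Back-of-envelope version of the independence assumption being criticized.
# The 10^4 and 10^7 targets are from the talk; the standalone lidar+radar
# MTBF below is my assumed filler, chosen so the product lands on 10^7.
camera_mtbf_hours = 1e4        # claimed MTBF of the camera-only subsystem
lidar_radar_mtbf_hours = 1e3   # assumed MTBF of the lidar+radar subsystem

# Approximate per-hour failure probabilities (rare-event approximation)
p_cam = 1 / camera_mtbf_hours
p_lidar = 1 / lidar_radar_mtbf_hours

# The probabilities only multiply if the failures are statistically independent:
p_both = p_cam * p_lidar
print(f"Combined MTBF under independence: {1 / p_both:.0e} hours")  # 1e+07

# The objection: cameras and lidar tend to fail on the SAME hard scenes,
# so in practice p_both >> p_cam * p_lidar and the real MTBF is far lower.
```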
 
  • Like
Reactions: PhaseWhite
Having redundancy in the perception system makes a lot of sense since you can resolve conflicts by erring on the side of caution. It seems like driving policy is the most difficult part though.
Anyway, Mobileye is clearly better than Tesla since they don't need RADAR or ultrasonics. Just kidding. I think worrying about the cost of self-driving systems is silly; the first working prototype of any tech product is ridiculously expensive, and I see no reason why self-driving cars will be any different.
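To be fair, the redundancy point above is real; here's a toy sketch (mine, not Mobileye's actual policy) of what resolving conflicts by erring on the side of caution can look like:

```python
# Toy sketch of conservative conflict resolution between redundant channels:
# when camera and lidar disagree, assume the more dangerous reading is true.
def fused_obstacle(camera_sees_obstacle: bool, lidar_sees_obstacle: bool) -> bool:
    # OR-fusion: brake/yield if EITHER channel reports an obstacle.
    return camera_sees_obstacle or lidar_sees_obstacle

# A camera/lidar conflict resolves to "obstacle present":
assert fused_obstacle(camera_sees_obstacle=True, lidar_sees_obstacle=False)
```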
 
  • Like
Reactions: diplomat33
At $10k-15k, I don't think FSD will be mainstream on consumer cars. It will start as a luxury car option. But I do think that when it drops to $5000, FSD will go mainstream on consumer cars.
The Ludicrous option on the Model S used to cost $20k. People paid $15k to drop their Model 3's 0-60 by a second and get 20" wheels with big brakes. I believe Mobileye could sell millions of units a year at $15k.
 
At $10k-15k, I don't think FSD will be mainstream on consumer cars. It will start as a luxury car option. But I do think that when it drops to $5000, FSD will go mainstream on consumer cars.

There is so much pent-up demand for FSD right now from lots of subgroups that any Level 4 system capable of handling geofenced regions in moderate to good weather would easily be worth $10K to $15K over the price of the vehicle.

We already see this with the Model 3's $7,500 FSD option, which people buy mostly out of hope that at some point it will do something useful.

I would have no issue getting a Model 3 (or equivalent Mobileye vehicle) for my mom. I'd then borrow it for trips to cities I simply don't like driving around in (SF is one of them), and I don't particularly like driving in Portland either (too many cyclists without lights on rainy nights).

I don't have much confidence in a vision-only system. I've been involved a lot with neural-network-based vision systems, and I haven't seen any of them get things right all the time, or even close to it. I myself failed badly the other day: I swerved out of my lane thinking I saw something on the road. It was nothing, just the way the road appeared with light reflecting off the water.

Now that's not to say I don't have a lot of hope for Mobileye and Tesla. They're really the only ones that can save us from a robotaxi-only future.
 
  • Like
Reactions: diplomat33
I wonder how dependent Mobileye is on their HD maps, what happens when they're wrong or don't exist? Does it break?
It'll probably be like Caddy Supercruise: if a road isn't mapped, you won't be allowed to take your hands off the wheel.

The strength of their mapping approach is that the data is continuously crowdsourced from millions of vehicles with ADAS systems that constantly refresh the maps.
I hope they have something like Tesla's RNNs that infer parking lot and intersection topologies from the downstream vision network output; that's pretty critical IMO: https://www.youtu.be/oBklltKXtDE?t=301
For now, Tesla seems to rely on maps to navigate parking lots:

Tesla Owners Can Edit Maps to Improve Summon Routes - Tesla Motors Club
 
It'll probably be like Caddy Supercruise: if a road isn't mapped, you won't be allowed to take your hands off the wheel.

The strength of their mapping approach is that the data is continuously crowdsourced from millions of vehicles with ADAS systems that constantly refresh the maps.

Mobileye's map data is human-annotated though, at least for city streets and for now; they said it'll become fully automated in the future. If you've solved fully automated map labeling from vision, then why not just run the same process locally on the car? You'd get the same results. Maps should just be the fallback for when vision is occluded, there's snow on the road, etc.

For now, Tesla seems to rely on maps to navigate parking lots:

Tesla Owners Can Edit Maps to Improve Summon Routes - Tesla Motors Club

It still works without maps, but the performance is degraded and it ignores markings and stalls. Hopefully they ship the new parking lot layout inference stuff in V2 or whatever.
 
Last edited:
  • Informative
Reactions: Eugene Ash
Mobileye's map data is human-annotated though, at least for city streets; they said it'll become fully automated in the future.
In his presentation Shashua said it's fully automatic. He didn't say anything about human annotation. I have no insider knowledge that would enable me to judge the veracity of his claim.
If you've solved fully automated map labeling from vision, then why not just run the same process locally on the car?
If they can do it automatically in the cloud, that doesn't mean they can also do it automatically on a little in-car computer (which is heavily constrained in terms of power consumption and cost). You also lose "the wisdom of the crowd", i.e. statistical methods that improve the accuracy of the results by combining multiple samples.
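A quick sketch of that statistical point, with illustrative numbers of my own:

```python
# Sketch of the "wisdom of the crowd" effect: averaging many noisy drive-by
# measurements of a landmark beats any single pass. Numbers are illustrative.
import random
import statistics

true_lane_edge_m = 3.50   # hypothetical true lateral position of a lane edge
noise_std_m = 0.30        # assumed per-drive measurement noise (std dev)

passes = [random.gauss(true_lane_edge_m, noise_std_m) for _ in range(1000)]

single_pass_error = abs(passes[0] - true_lane_edge_m)          # ~0.3 m typical
crowd_error = abs(statistics.mean(passes) - true_lane_edge_m)  # ~0.01 m

# The error of the mean shrinks like noise_std / sqrt(n) -- precisely the
# statistical benefit a single real-time pass in the car can never have.
print(f"single pass: {single_pass_error:.3f} m, 1000-car crowd: {crowd_error:.3f} m")
```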
 
Last edited:
In his presentation Shashua said it's fully automatic. He didn't say anything about human annotation. I have no insider knowledge that would enable me to judge the veracity of his claim.
If they can do it automatically in the cloud, that doesn't mean they can also do it automatically on a little in-car computer (which is heavily constrained in terms of power consumption and cost). You also lose "the wisdom of the crowd", i.e. statistical methods that improve the accuracy of the results by combining multiple samples.

He clearly shows in the slide deck that the mapping is "semi-automated" below 45 mph: https://youtu.be/HPWGFzqd7pI?t=2430

I don't understand what their "45 mph" cut-off is about though; I'd expect the boundary to be based on the class of road or something, i.e. divided highway vs. secondary road. Even city streets can be 45 mph, as he says. It's bizarre.
 
Mobileye's map data is human-annotated though, at least for city streets and for now; they said it'll become fully automated in the future. If you've solved fully automated map labeling from vision, then why not just run the same process locally on the car? You'd get the same results. Maps should just be the fallback for when vision is occluded, there's snow on the road, etc.
You can generate maps from thousands of cars, thousands of angles, thousands of lighting conditions, etc. That's going to be way more reliable than a real-time map.
 
Maps are for "tips and tricks", for seeing beyond the horizon; you really want them, but you can't rely on them. There are so many situations where you'll have zero or very sparse mapping data: driveways and tertiary roads, private lots, neighborhoods without BMWs and Teslas. The Whole Foods lot will get mapped 1,000 times a day, but the Dollar Tree lot will be mapped 0 times. The same goes for places where localizer error gets too high (e.g. very long tunnels with no GPS fix) or there are too few landmarks. If you can complete your trips with no HD maps, just inference from vision, and then add the maps in for hints and tricks, you are in very good shape.
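In code terms, the policy I'm describing is roughly this; a minimal sketch where the names and the 0.2 m threshold are made up for illustration:

```python
from typing import List, Optional

MAX_LOCALIZER_ERROR_M = 0.2  # assumed trust threshold, purely for illustration

def lane_estimate(vision_lanes: List[float],
                  hd_map_lanes: Optional[List[float]],
                  localizer_error_m: float) -> List[float]:
    """Maps as hints: use the HD map prior only when it exists and
    localization is tight; otherwise drive on live vision alone."""
    if hd_map_lanes is None or localizer_error_m > MAX_LOCALIZER_ERROR_M:
        # The Dollar Tree lot / long-tunnel case: no usable map, vision only.
        return vision_lanes
    # The Whole Foods case: blend the crowdsourced prior into the live estimate.
    return [(v + m) / 2 for v, m in zip(vision_lanes, hd_map_lanes)]
```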
 
Last edited:
  • Like
Reactions: Eugene Ash
Maps are for "tips and tricks", for seeing beyond the horizon; you really want them, but you can't rely on them. There are so many situations where you'll have zero or very sparse mapping data: driveways and tertiary roads, private lots, neighborhoods without BMWs and Teslas. The Whole Foods lot will get mapped 1,000 times a day, but the Dollar Tree lot will be mapped 0 times. The same goes for places where localizer error gets too high (e.g. very long tunnels with no GPS fix) or there are too few landmarks. If you can complete your trips with no HD maps, just inference from vision, and then add the maps in for hints and tricks, you are in very good shape.
The bottom line is you can make an autonomous vehicle safer by using maps. That's why everyone in the field, including Tesla, uses maps.
The vehicle has to deal with cars, bikes, and pedestrians going every which way, all completely unmapped. It doesn't seem like an insurmountable challenge to deal with situations where the road has changed or there aren't enough landmarks to localize perfectly. Obviously in those situations there is a higher probability of error, but that's not an argument against using maps. If you didn't use maps, you'd have that diminished level of safety 100% of the time.
I'm going to make the bold prediction that if Tesla ever releases stop sign and stop light response, they will use maps (or they're going to have an absurdly high error rate).
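Rough numbers to make that point; every probability below is an illustrative assumption, not a measurement:

```python
# If the map prior helps where it's correct and you fall back to vision-only
# performance where it's stale, maps are a net win.
p_err_vision_only = 1e-4  # assumed per-decision error rate with no maps
p_map_stale = 0.01        # assumed fraction of decisions with a wrong/missing map
p_err_with_map = 1e-5     # assumed error rate when a correct map is available

p_err_overall = (1 - p_map_stale) * p_err_with_map + p_map_stale * p_err_vision_only
print(f"{p_err_overall:.2e} with maps vs {p_err_vision_only:.0e} without")
# ~1.09e-05 with maps vs 1e-04 without: roughly an order of magnitude safer,
# assuming stale-map situations degrade gracefully to vision-only driving.
```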
 
I'm not a fan of their math: there are 6 different vision algorithms for analyzing objects in camera frames, and we're going to pretend they're all statistically independent of each other, which adds up to an MTBF of 10^4 hours.

They are not pretending; they are showing you their validation process and the raw numbers they are aiming for. This is a look behind the veil. As the author said, this has never been done before. Everyone else just says "just trust us".


Then you just tack on LIDAR, pretend it isn't going to fail in the exact same places that the cameras do, and you're at 10^7.

A 360° lidar and radar system WON'T fail in the same way as a camera, as they have different strengths and weaknesses.

[Attached image: camlidar.jpg — camera vs. lidar comparison]


For example, a camera will have a higher probability of failure in a scenario like this one (night): [embedded media]
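A tiny simulation of the complementary-failure-modes point, with made-up probabilities rather than vendor data:

```python
# If cameras fail mostly in darkness and lidar mostly in fog, the conditions
# under which BOTH fail at once barely overlap.
import random

def camera_fails(scene: str) -> bool:
    return random.random() < (0.10 if scene == "night" else 0.001)

def lidar_fails(scene: str) -> bool:
    return random.random() < (0.10 if scene == "fog" else 0.001)

trials = 1_000_000
joint_failures = 0
for _ in range(trials):
    scene = random.choice(["day", "night", "fog"])
    if camera_fails(scene) and lidar_fails(scene):
        joint_failures += 1

# Joint failure rate comes out orders of magnitude below either sensor alone
# (~7e-5 here vs ~3.4e-2 for the camera by itself).
print(joint_failures / trials)
```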

 
  • Informative
Reactions: diplomat33
It’s been over three years since AP2.0 was unleashed upon the world, and every day since then people have speculated about the existence of a super secret code branch for autopilot that can actually achieve level 4 (or even 5!) autonomous driving.

Elon wants money, and what better way to get it than to release the first true autonomous vehicle? Smart Summon is proof that he doesn't hold things back until they're ready to be released. So let's drop the notion of this truly advanced capability being locked up somewhere in Fremont. It doesn't exist.

Except it does exist. It was shown to investors at Autonomy Day. It's not a magical superhuman level of Autopilot, but it does everything that's required for getting from your house to work, most of the time.

The key here is that the demo we saw from Mobileye is a cherry-picked run with a human-annotated map. Even if, say, their solution is 98% reliable on US surface streets, that's still thousands of collisions a year at Tesla's scale. Getting from 98% to 99.99% can only be achieved through exponentially larger training sets, something neither Tesla nor Mobileye has yet.
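To put rough numbers on that (all of these figures are illustrative assumptions on my part):

```python
# Rough fleet-scale arithmetic; every figure is an illustrative assumption.
fleet_size = 500_000            # assumed FSD-capable cars
trips_per_car_per_year = 500    # assumed
p_failure_per_trip = 0.02       # "98% reliable"
p_failure_is_collision = 0.001  # assumed: most failures are benign or caught

failures = fleet_size * trips_per_car_per_year * p_failure_per_trip
collisions = failures * p_failure_is_collision
print(f"{failures:,.0f} failures/yr -> {collisions:,.0f} collisions/yr")
# 5,000,000 failures/yr -> 5,000 collisions/yr

# Note that 98% -> 99.99% is a 200x cut in failure rate, not "two more percent":
print(0.02 / 0.0001)  # 200.0
```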

Tesla also have the HW2 problem. If they started providing material improvements only to HW3 owners, those who paid for FSD and are still on HW2 would be pissed and demand HW3 be installed ASAP. The service headache of retrofits is only just now beginning in earnest.

What I suspect is going on behind the scenes is that Tesla do indeed have Mobileye-level autonomy on their latest version of Navigate on Autopilot, but it's nowhere near reliable enough. Once they can ingest vast amounts of edge cases into Dojo, they can train the NNs with self-supervision and eventually get to a point where Autopilot disagrees with the driver's input only 0.01% of the time. This is what Operation Vacation really is: tooling to cluster valuable edge cases where there is a statistical difference between driver input and inference, all to drive reliability up enough for release to the public.
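If that's right, the core trigger could be as simple as this purely hypothetical sketch (Tesla's real logic isn't public):

```python
# Hypothetical disagreement trigger (my sketch, not Tesla's actual code):
# flag moments where the human's input diverges sharply from the net's proposal.
def should_upload_clip(driver_steering_deg: float,
                       model_steering_deg: float,
                       threshold_deg: float = 10.0) -> bool:
    """Disagreement above the (assumed) threshold marks a statistically
    interesting edge case worth clustering and mining for training."""
    return abs(driver_steering_deg - model_steering_deg) > threshold_deg

# Example: the driver steers 15 degrees left while the net wanted straight ahead.
assert should_upload_clip(driver_steering_deg=-15.0, model_steering_deg=0.0)
```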
 
The key here is that the demo we saw from Mobileye is a cherry-picked run with a human-annotated map. Even if, say, their solution is 98% reliable on US surface streets, that's still thousands of collisions a year at Tesla's scale.

First, I am not sure why some Tesla fans dismiss the use of HD maps as if somehow they make your FSD illegitimate. HD maps don't mean that your vision is not good enough. HD maps are merely a tool that supplements vision to make the system more reliable. And if HD maps can help make your system more reliable, you'd be a fool not to use them.

And I think you might be downplaying the Mobileye demo a bit. Sure, it is just one drive, but it demonstrated that the system can handle some pretty important and often difficult driving cases: road blocks, unprotected left turns in busy traffic, and getting around a stopped car where you have to temporarily drive in the oncoming traffic lane. We've yet to see that capability in any Tesla demo.

Also, the purpose of demos is not to prove a certain number of 9's of reliability. Obviously, a demo of only a few minutes could never do that. Rather, the purpose of a demo is simply to showcase a general feature or capability. You then provide other data to show what the reliability of those features or capabilities are.

Getting from 98% to 99.99% can only be achieved through exponentially larger training sets.

I don't believe this is accurate. Neural nets also require the right data. If you just have a very large data set, you are likely to get too much data that you don't need and not enough of the data that you do need. For example, if Tesla collects millions of images from the entire Tesla fleet, they might get a huge data set, but odds are it will contain hundreds of thousands of images of the same common case and not enough images of a particular edge case. Remember that an NN only needs on the order of 1,000 images of a case to be trained; anything more than that is overkill.
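A quick simulation of why uniform fleet sampling starves the long tail (the 1-in-a-million rate is an assumed illustration):

```python
# Uniformly sampled fleet data is almost all common cases; a rare edge case
# barely shows up at all. The 1-in-a-million rate is an assumed illustration.
import random

def random_fleet_frame() -> str:
    return "edge_case" if random.random() < 1e-6 else "common_case"

sample = [random_fleet_frame() for _ in range(1_000_000)]
print(sample.count("edge_case"))  # expect ~1 edge-case frame per million

# If ~1,000 examples are needed to train a case well (as argued above), uniform
# sampling would need on the order of a billion frames; targeted mining of the
# edge case is far cheaper.
```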

Also, FSD is more than just perception. Planning and driving policy is a big part of FSD. You can train neural nets to solve perception but you still need to write the planning and driving policy software that will dictate the rules for how the car will respond to what it sees. So you can have a huge data set but that alone won't solve FSD for you. In fact, I believe that planning and driving policy is what will really differentiate the FSD systems of different companies. Pretty much everybody has solved perception at this point. So what will separate the different systems is how they handle driving policy.

Lastly, remember that edge cases are by definition very rare; most drivers deal with common driving situations most of the time. So training your system on a large set of what drivers do will help solve the common cases but won't help you as much with those 1-in-a-million edge cases. That's why solving the last 9's is so difficult.

What I suspect is going on behind the scenes is that Tesla do indeed have Mobileye-level autonomy on their latest version of Navigate on Autopilot, but it's nowhere near reliable enough.

Honestly, this is pure speculation. There is no evidence at all that Tesla has the same level of FSD that Mobileye has. In fact, the evidence suggests the opposite. Tesla has FSD that works "most of the time" but only in simple cases, definitely behind what Mobileye has.

Once they can ingest vast amounts of edge cases into Dojo, they can train the NNs with self-supervision and eventually get to a point where Autopilot disagrees with the driver's input only 0.01% of the time. This is what Operation Vacation really is: tooling to cluster valuable edge cases where there is a statistical difference between driver input and inference, all to drive reliability up enough for release to the public.

This is what I call the "secret weapon" argument: Tesla has a secret weapon (in this case, Dojo and "Operation Vacation") that, as soon as they deploy it, will let Tesla win FSD. The reality is that Dojo and "Operation Vacation" are useful tools that will absolutely help Tesla. But I don't think they will suddenly win FSD.

Don't get me wrong: I want Tesla to make progress with FSD and give us good stuff. And I think they will do that. I just think we need to be careful not to use the "secret weapon" argument. There is no magic bullet to solving FSD. It takes the right approach and a lot of hard work.
 
Last edited:
  • Like
Reactions: emmz0r