Mobileye is developing full autonomy with only cameras

Mobileye’s approach is to build a fully autonomous system using only cameras as sensors, and to add radar and lidar afterwards. This is significant because it makes Mobileye the second major player, after Tesla, to pursue a no-lidar approach. AutoX, Comma.ai, and AImotive are startups also taking a no-lidar approach, but they are small players.

Mobileye will still add lidar for redundancy after its camera-only self-driving car is complete. However, Mobileye is much more conservative than Tesla. Amnon Shashua argues that self-driving cars have to be no less than 1,000x safer than humans to be accepted by society. Tesla, by contrast, is aiming for only 2x safer to start. It therefore stands to reason that Tesla needs less sensor redundancy than Mobileye.

I don’t know whether Mobileye has explicitly said that they believe a camera-only self-driving car can achieve superhuman safety, but that seems to be the implication of what they have said.

Something equally interesting: Roadshow reports that Mobileye is planning to use Valeo Scala lidar units, which are low-resolution. Whereas the classic Velodyne HDL-64E has 64 laser beams, the Valeo Scala has only 4. The Scala’s angular resolution is also twice as coarse — 0.8 degrees vs. 0.4 degrees for the HDL-64E. The new Velodyne VLS-128 has 128 beams and an angular resolution of 0.1 degrees.
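
To make those numbers concrete, here is a rough back-of-envelope sketch. It assumes the quoted figures are the vertical spacing between beams, and the 1.7 m pedestrian at 50 m is just an illustrative target I picked, so treat it as an order-of-magnitude estimate only:

import math

# Back-of-envelope: treat each unit's quoted angular resolution as the
# vertical spacing between beams and count how many would land on a
# 1.7 m tall pedestrian at 50 m. Target size and range are arbitrary picks.
def beams_on_target(target_height_m, range_m, vertical_res_deg, total_beams):
    angular_extent_deg = math.degrees(math.atan(target_height_m / range_m))
    return min(total_beams, int(angular_extent_deg / vertical_res_deg) + 1)

for name, res_deg, beams in [("Valeo Scala", 0.8, 4),
                             ("Velodyne HDL-64E", 0.4, 64),
                             ("Velodyne VLS-128", 0.1, 128)]:
    n = beams_on_target(1.7, 50.0, res_deg, beams)
    print(name, "->", n, "beams on a pedestrian at 50 m")

On that crude estimate the Scala resolves a pedestrian at 50 m as only about 3 scan lines, versus roughly 20 for the VLS-128, which fits the reading that the lidar is there mostly for redundancy.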

Correct me if I’m wrong, but isn’t Mobileye’s choice of a relatively crummy lidar unit a sign that Mobileye doesn’t think lidar is that important?
 
It’s also quite striking that Mobileye has 8 driving cameras + 4 parking cameras, including 3 forward-facing cameras behind the centre of the windshield — a main, narrow, and wide. Exactly like Tesla! (Except for the parking cameras — Amnon Shashua concedes that ultrasonics can be used for parking.)

6djkW3rr.jpg
 
It’s also quite striking that Mobileye has 8 driving cameras + 4 parking cameras, including 3 forward-facing cameras behind the centre of the windshield — a main, narrow, and wide. Exactly like Tesla! (Except for the parking cameras — Amnon Shashua concedes that ultrasonics can be used for parking.)

6djkW3rr.jpg

IMO, it’s not really striking because Tesla essentially copied Mobileye. This camera setup came from Mobileye originally, which was mentioned during one of their earnings reports (Mobileye). Tesla’s strategy is essentially taken from Mobileye and accelerated.
 
IMO, it’s not really striking because Tesla essentially copied Mobileye. This camera setup came from Mobileye originally, which was mentioned during one of their earnings reports (Mobileye). Tesla’s strategy is essentially taken from Mobileye and accelerated.

Do you happen to remember which quarter’s earnings call that was? Or do you know of any other source for this info?
 
Really exciting to hear what Mobileye has been able to achieve so far, especially in the CVPR 2016 talk. It gives me more confidence that Tesla can do full autonomy, since it’s using a lot of the same technology as Mobileye — the same sensors (mostly), probably similar neural network architectures, and HD maps. Mobileye has a lot more HD mapping data, but Tesla has a lot more sensor data and other driving data.
 
...Mobileye has been able to achieve so far...

I am not sure how much progress it has made in private, but for consumers, an example is Mobileye in the GM Cadillac CT6 Super Cruise:

It's heavily geofenced and it can't even work in a construction zone.

In the meantime, Tesla Autopilot can work in a construction zone, as in the picture below, taken on the Interstate 5 freeway north of Los Angeles. The car was driving on what is physically the wrong side of the road, a direction normally not allowed but temporarily permitted due to construction:

WKVt0wN.jpg


The car was driving northbound, but the lane it was in physically belongs to the southbound side, as evidenced by the permanent steel guardrail separating the two directions. The yellow lane markings had been painted over and new white lane markings temporarily painted on.

The Interstate 5 freeway has been under construction for years, and a different stretch of it keeps changing every day!

If an HD map is not updated in real time for a case like this, it is useless: it would think the car is driving on the "wrong side" of the freeway when that is actually permitted during the construction.
 
IMO, it’s not really striking because Tesla essentially copied Mobileye. This camera setup came from Mobileye originally, which was mentioned during one of their earnings reports (Mobileye). Tesla’s strategy is essentially taken from Mobileye and accelerated.

The two companies worked very closely together for a while. When they parted, the Mobileye "state of the art" system that was commercially available (i.e. installed in other brands) was a dual cam + radar setup. This very nearly appeared in MX.

IIRC the ME future roadmap included an 8-cam setup like AP2, but I am sure ME worked with all their partners to develop that roadmap - they certainly didn't do it in isolation.
 
I recently drove a new Mercedes with driver assist and was pleasantly surprised by how good it is. Definitely not as confident as AP, but very good for what it is. I had it on for most of my trip on a highway and a local road. I believe Mercedes uses Mobileye too, and it was very smooth when it came to accelerating and braking. The interesting thing is that there is not as much nagging, even when there really should be. On sharper turns it will start drifting toward the divider before it corrects itself smoothly, whereas with AP I barely see that happen since it will always try to stay centered, though a bit more jerkily. Lane changing was smooth and more confident than with AP. I guess it helps that owners probably don't think it is supposed to be an autopilot, since it is just called lane assist.

Not sure how software updates work for it, but it will be interesting to see how it improves over time compared to Tesla. If it were electric I'd probably drive it more often than my Tesla.
 
I am not sure how much progress it has made in private, but for consumers, an example is Mobileye in the GM Cadillac CT6 Super Cruise:

It's heavily geofenced and it can't even work in a construction zone.

What I found impressive is Mobileye’s localization accuracy of under 10 cm (~4 inches) using only cameras and camera-based HD maps. I had read an academic paper that achieved similar accuracy, but only in a parking garage at low speeds, so it was uncertain that the results would extrapolate to driving more broadly. Qualitatively, Mobileye’s HD maps look extremely accurate, and the test cars were able to navigate winding, unmarked streets using them. Some folks argue that localization using cameras is not a solved problem, but it actually seems solved to me.

I was also impressed with Amnon Shashua’s cited figure of a false positive for pedestrian detection only once every 500,000 km or 310,000 miles. And “near zero” (whatever that means) false negatives. That was back in mid-2016, so it might have improved since then.

The more recent demo of the test car assertively merging on the highway in Israel is also impressive. It shows that self-driving cars are adaptable to all kinds of conditions. They won’t be left paralyzed by more aggressive or chaotic driving conditions (by polite Canadian standards) if they are trained to handle them.

I can’t find the moment where Amnon Shashua says this, but in one of the talks he mentioned that the car manufacturers Mobileye partners with have a 3-year update cycle on their driver assistance hardware. Whereas Tesla has a 1-year update cycle. I think none or almost none of Mobileye’s partners are using over-the-air performance updates. As far as I know, the only driver assistance system besides Autopilot that uses OTA updates is whatever’s in the Jaguar I-Pace — not sure whether that’s a Mobileye system. In almost all cases, the software only gets updated when the hardware gets updated.

So, there will be a significant lag in Mobileye getting its latest technology (hardware and software) into production. This may be why Enhanced Autopilot has better lane keeping than Mobileye systems currently in production. Another potential reason is that Mobileye is much more risk-averse than Tesla, aiming for no less than 1,000x safety, whereas Tesla is aiming for only 2x safety. A third potential reason is that Tesla has access to sensor data and other driving data from production cars, and with OTA performance updates has a tight update-test-update loop.

The stuff that is impressive about Mobileye’s technology is generic stuff that other companies, including Tesla, are also using. For the three reasons mentioned, Tesla will probably be able to get this stuff into production before Mobileye. So the Mobileye presentations leave me feeling more optimistic about Tesla’s technology.
 
I can’t find the moment where Amnon Shashua says this, but in one of the talks he mentioned that the car manufacturers Mobileye partners with have a 3-year update cycle on their driver assistance hardware. Whereas Tesla has a 1-year update cycle.

I finally tracked down the quote. It was in a print interview with Roadshow. This is what Amnon said:

"Companies like Tesla or NIO, they skip the use of a Tier 1. They become their own Tier 1. I think this is something that is possible to do when you have small scale... That allows them to move faster."

"With Tesla, the first Autopilot was introduced in November 2014. It took about a year of development. With regular OEMs, it took closer to three years. That there are less players in the loop, in the chain, accelerates things."
 
...navigate winding, unmarked streets

This is nothing new.

Two years ago, Nvidia was able to train its car with cameras only, using about 3,000 miles of driving collected over one month with its DAVE-2 end-to-end deep-learning network, and the car could drive on a dirt road without lane markings.


...“near zero” (whatever that means) false negatives...

Very impressive non-consumer results (results achieved by the company's own workers, not by layperson YouTubers) in 2016, but that is also the year of the first documented Florida AP1 fatal accident!

I share your enthusiasm, but until this progress is in the hands of laypeople so they can show it off on YouTube (layperson terminology for validation), I would still be somewhat skeptical.
 
This is the difficulty with trying to understand the true status of self-driving car development, and the rate of progress. Research and development is a pipeline that ends with deployment in production systems. If our approach is simply to wait until full autonomy reaches production, then we will have no idea how close or far away it is until suddenly one day it arrives. So, to anticipate the future, we have to look at the earlier stages of the R&D pipeline. But there are many reasons why something earlier on in the pipeline might never make it to a production system, or why it might take a long time to do so.

For example, Nvidia's end-to-end neural network might never be workable for a fully autonomous production car because of combinatorial explosion. If you have multiple modular neural networks, you can train each one independently. For example, you can have a perception network that is trained to deal with bright, direct sunlight shining into the car's front-facing cameras — something that tends to happen when the sun is low in the sky, after sunrise or before sunset. You can also have a motion planning/control network that is trained to deal with a wet road, and adjust for the reduced traction.

With the modular approach, you can train the perception network on sunrises and sunsets, and you can train the motion planning/control network on wet roads. If you encounter a situation where it just finished raining and now there is blinding sunlight, the car will be able to deal with this situation because the networks will deal with those two factors independently. But with an end-to-end neural network, the car won't know how to handle the situation unless it has already been trained to deal with wet roads and bright, direct sunlight simultaneously. This is what leads to combinatorial explosion. To train an end-to-end neural network, you have to multiply every conceivable variable by every other conceivable variable and generate a list of scenarios to train on.
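
To put rough numbers on that intuition, here is a toy count. The factors and their values are invented for illustration, not taken from any real training setup:

# Toy illustration of the combinatorial-explosion argument.
lighting  = ["day", "night", "low sun", "tunnel"]
weather   = ["dry", "rain", "snow", "fog"]
road_type = ["highway", "urban", "rural", "construction"]
traffic   = ["light", "heavy", "chaotic"]

factors = [lighting, weather, road_type, traffic]

# Modular networks: each module only needs coverage of its own factor,
# so the scenarios to cover grow roughly additively.
modular = sum(len(f) for f in factors)    # 4 + 4 + 4 + 3 = 15

# End-to-end network: it has to have seen the combinations,
# so coverage grows multiplicatively.
end_to_end = 1
for f in factors:
    end_to_end *= len(f)                  # 4 * 4 * 4 * 3 = 192

print(modular, "modular scenarios vs", end_to_end, "end-to-end combinations")

Add a few more factors and the end-to-end number explodes while the modular number barely moves.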

This is why I'm inclined to write off Nvidia's demo as a cool science project, and not a sign of technology that is moving down the pipeline toward production systems. There is a good reason it might never make it out of an experimental setting to a working commercial product. Tesla isn't using an end-to-end neural network, and is simply using Nvidia's GPUs, not any of their software. Wayve is the only company I'm aware of that is taking the end-to-end approach.

The big question with regard to self-driving car technologies generally is whether there is a reason what's working decently well in prototype can't make it to commercialization, or at least can't do so for a long time. The nightmare scenario is that some capability like pedestrian or vehicle detection, or semantic segmentation for driveable roadways, hits a ceiling that engineers can't find a way beyond. If the ceiling falls below human performance, then self-driving car development is just stuck.

The stuff Mobileye presented on localization is encouraging because now both a commercial R&D project and an academic experiment have converged on the same result: localization to within 10 cm with just cameras and camera-based HD maps (no lidar) is possible for self-driving cars. The only caveat I can think of is that camera-based localization might break down under certain conditions. Maybe in tunnels, where there are long stretches of flat, featureless concrete? But as long as you continue to do lane keeping and object detection in the tunnel, it doesn't seem like you need HD maps to tell you what to anticipate next...
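
For anyone curious what map-relative localization looks like at its simplest, here is a toy sketch. It is just generic least-squares landmark fitting, not Mobileye's actual method, and the landmark coordinates, the measurement noise, and the starting guess are all invented:

import numpy as np

# Estimate a 2D vehicle position by least-squares from noisy range
# measurements to landmarks whose positions come from a map.
map_landmarks = np.array([[10.0, 5.0],    # e.g. a sign
                          [12.0, -4.0],   # e.g. a pole
                          [25.0, 2.0]])   # e.g. a lane-marking corner
true_pos = np.array([0.3, -0.1])          # ground-truth vehicle position

rng = np.random.default_rng(0)
ranges = np.linalg.norm(map_landmarks - true_pos, axis=1) + rng.normal(0, 0.2, 3)

# A few Gauss-Newton steps on the nonlinear least-squares problem.
est = np.zeros(2)                          # starting guess
for _ in range(10):
    diff = est - map_landmarks             # vectors from each landmark to the estimate
    pred = np.linalg.norm(diff, axis=1)    # predicted ranges at the current estimate
    J = diff / pred[:, None]               # Jacobian of range w.r.t. position
    est += np.linalg.lstsq(J, ranges - pred, rcond=None)[0]

print("estimated position:", est, "error (m):", np.linalg.norm(est - true_pos))

In a real system the measurements would be things like lane markings, signs, and poles detected by the cameras and fused over time, but the basic idea of snapping the vehicle's position to surveyed landmarks is the same.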

Assuming the error rate Mobileye gave for pedestrian detection is true, that's genuinely encouraging. If there's a false positive every 500,000 km/310,000 miles leading to unnecessary sudden braking (and possible rear-ending), and a false negative (leading to potentially hitting a pedestrian) even less often, then it's within 10x of beating human performance — if it isn't there already.

Consumers will never be able to test these claims by themselves because that would require millions of miles of driving. A consumer could drive 20,000 miles (more than an average year's worth of driving) and never encounter an error, and falsely conclude that errors never occur. The best evidence is large-scale statistical validation.

A self-driving car could kill someone on average every 50,000 miles (compared to roughly 92 million miles for human drivers), and lull you into a false sense of security by driving safely for thousands of miles first. So production deployment isn't the final arbiter of whether a fully autonomous car actually has superhuman performance.
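
To make that concrete, here is a quick sanity check. It models errors as a Poisson process, which is my simplification, and plugs in the rates quoted above:

import math

# Probability of seeing zero events in a given number of miles, assuming
# events arrive as a Poisson process with a fixed mean rate.
def p_no_events(miles_driven, miles_per_event):
    return math.exp(-miles_driven / miles_per_event)

# Chance one owner sees zero pedestrian-detection errors in 20,000 miles,
# even if the true rate is one error per 310,000 miles:
print(p_no_events(20_000, 310_000))   # about 0.94

# Chance a hypothetical system that kills someone every 50,000 miles on
# average still gives you 5,000 clean miles first:
print(p_no_events(5_000, 50_000))     # about 0.90

So a single driver's clean experience tells you almost nothing about rates this rare; only fleet-scale statistics do.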
 
@Bladerskb Since you seem to be a fan of Mobileye, is there anything you see as incorrect in this thread so far, or is there anything you disagree with?

In one talk (from 2015 I believe), Amnon Shashua said that Mobileye had 10 million miles of data, although I’m not sure if he was referring to simply HD mapping data or raw sensor data that could be used to train and test perception neural networks. Do you know anything about this?