Unedited Mobileye’s Autonomous Vehicle & other CES 2020 Self Driving ride videos

Mobileye's map data is human-annotated, though, at least for city streets and for now; they said it will become fully automated in the future. If you've solved fully automated map labeling from vision, why not just run the same process locally on the car? You'd get the same results. Maps should just be the fallback for when vision is occluded, there's snow on the road, etc.



It still works without maps, but the performance is degraded and it ignores markings and stalls. Hopefully they ship the new parking lot layout inference stuff in V2 or whatever.

I wouldn't call "semi-automated mapping for roads under 45 mph" human-annotated; partially human-annotated, yes. That's most probably because side roads and side streets don't get as many car runs, whereas almost everyone uses the main roads, so those have a lot of runs. So they use a conservative cutoff (45 mph) so they don't catch any side roads/streets. Again, this is just my honest opinion.

If you want to know more about Mobileye's mapping, watch this: 2019 Mobileye Investor Summit: Tal Babaioff

If you've solved fully automated map labeling from vision, why not just run the same process locally on the car? You'd get the same results. Maps should just be the fallback for when vision is occluded, there's snow on the road, etc.

A lot of people misunderstand this; I'm not sure why it's hard to understand.

The question is: is the probability of a false negative/positive higher or lower with Picture 1 than with all the other pictures and videos?

Picture #1 (random clear-condition picture)

Picture #2 (random fog picture)

Picture #3 (from a Tesla cam)

Video #1

Picture #4
 
First, I am not sure why some Tesla fans dismiss the use of HD maps, as if they somehow make your FSD illegitimate. Using HD maps doesn't mean your vision is not good enough. HD maps are merely a tool that supplements vision to make the system more reliable. And if HD maps can help make your system more reliable, you'd be a fool not to use them.

And I think you might be downplaying the Mobileye demo a bit. Sure, it is just one drive, but it demonstrated that the system can handle some pretty important and often difficult driving cases: road blocks, unprotected left turns in busy traffic, and getting around a stopped car where you have to temporarily drive in the oncoming traffic lane. We've yet to see this capability in any Tesla demo.

Also, the purpose of demos is not to prove a certain number of 9's of reliability; obviously, a demo of only a few minutes could never do that. Rather, the purpose of a demo is simply to showcase a general feature or capability. You then provide other data to show what the reliability of those features or capabilities is.



I don't believe this is accurate. Neural nets also require the right data. If you just have a very large data set, you are likely to get too much of the data you don't need and not enough of the data you do need. For example, if Tesla collects millions of images from the entire Tesla fleet, they might get a huge data set, but odds are it will contain hundreds of thousands of copies of the same common case and not enough images of a particular edge case. Remember that a NN only needs something like 1,000 images of a case to be trained; anything more than that is wasted overkill.

Also, FSD is more than just perception. Planning and driving policy is a big part of FSD. You can train neural nets to solve perception but you still need to write the planning and driving policy software that will dictate the rules for how the car will respond to what it sees. So you can have a huge data set but that alone won't solve FSD for you. In fact, I believe that planning and driving policy is what will really differentiate the FSD systems of different companies. Pretty much everybody has solved perception at this point. So what will separate the different systems is how they handle driving policy.

Lastly, remember that edge cases are by definition very rare. Most drivers don't deal with edge cases very often; they deal with common driving cases most of the time. So training your system on a large set of what drivers do will help solve the common cases, but won't help you as much with those one-in-a-million edge cases. That's why solving the last 9's is so difficult.
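To put rough numbers on that imbalance, here is a small sketch (my own illustration, not anything from Tesla's actual pipeline; the scenario labels and the one-in-a-million rate are made up for the example) of why naive fleet sampling buries the rare cases, and why capping the common ones is only the easy half of curation:

```python
import random
from collections import Counter, defaultdict

# Made-up scenario mix: common cases dominate, the edge case is ~1 in a million.
scenario_probs = {
    "clear_highway_cruise": 0.70,
    "urban_stop_and_go":    0.25,
    "rain_at_night":        0.049999,
    "overturned_truck":     0.000001,   # the kind of edge case you actually need
}

def sample_fleet_frames(n):
    """Naive fleet collection: frames arrive in proportion to how common they are."""
    scenarios, weights = zip(*scenario_probs.items())
    return random.choices(scenarios, weights=weights, k=n)

frames = sample_fleet_frames(1_000_000)
print(Counter(frames))   # ~700k highway frames, most likely 0 or 1 overturned trucks

# Curation: cap each scenario at ~1,000 kept examples so the common cases stop
# crowding out the training set. Capping cannot conjure up the rare cases, though.
CAP = 1_000
kept = defaultdict(int)
curated = []
for scenario in frames:
    if kept[scenario] < CAP:
        kept[scenario] += 1
        curated.append(scenario)
print(Counter(curated))  # at most 1,000 of each common case, still ~0 edge cases
```

Even after capping, the rare scenario still only shows up about once per million frames sampled, which is why you need targeted triggers or simulation rather than simply collecting more of the same.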



Honestly, this is pure speculation. There is no evidence at all that Tesla has the same level of FSD that Mobileye has. In fact, the evidence suggests the opposite. Tesla has FSD that works "most of the time" but only in simple cases, definitely behind what Mobileye has.



This is what I call the "secret weapon" argument: Tesla has a secret weapon (in this case, Dojo and "operation vacation") and, as soon as they deploy it, Tesla will win FSD. The reality is that Dojo and "operation vacation" are useful tools that will absolutely help Tesla. But I don't think these tools will suddenly win FSD.

Don't get me wrong: I want Tesla to make progress with FSD and give us good stuff. And I think they will do that. I just think we need to be careful not to use the "secret weapon" argument. There is no magic bullet to solving FSD. It takes the right approach and a lot of hard work.

You have clearly spent a lot of time on this topic and make excellent points. I respect that, and I don't necessarily disagree with what you are saying. I simply believe that the expectation of what "FSD" actually is, versus what people want it to be, is totally out of whack. FSD will basically be Navigate on Autopilot on pre-defined main roads, much like what Mobileye demonstrated but probably worse on average (erratic behaviour, frequent disengagements, etc.).

This won't be perfect for quite some time, far from it. Think about Smart Summon and Navigate on Autopilot currently: they only work most of the time. However, the foundations are now in place. Once they have "feature complete FSD" they can use the Dojo / Operation Vacation "tools" to rapidly increase the reliability of these features and deploy new models via OTA updates every other week. Once they stop supporting HW2, this quite literally opens the floodgates of what's possible from a perception and planning perspective.

This is the important distinction between the Mobileye approach and Tesla's. It's not important how well the self-driving cars perform upon initial release or who is further ahead in research and development. The most important thing is who ships first, and then the agility to improve post-release. This requires immense vertical integration and incredibly streamlined, lean development teams. Once Mobileye ships this to, say, Volvo, Subaru, Kia, etc., how bloated do you think their development cycle will be trying to support these traditional automakers? The very first Level 4 system they deploy had better be practically perfect, lest they be in support hell forever more.

Really Tesla’s only secret weapon is their culture and to be honest that’s fundamentally the reason they are so successful and will continue to be.
 
Once Mobileye ships this to, say, Volvo, Subaru, Kia, etc., how bloated do you think their development cycle will be trying to support these traditional automakers? The very first Level 4 system they deploy had better be practically perfect, lest they be in support hell forever more.

I don't know about the other auto makers but Lucid is going with Mobileye's FSD and they will support OTA updates. So at least with Lucid, Mobileye will be able to add new features and fixes with OTA updates just like Tesla does. So that should help with support.
 
The very first Level 4 system they deploy had better be practically perfect, lest they be in support hell forever more.
There is basically zero chance Mobileye ever ships a beta Level 4 system. You’ve got to comply with all the testing rules in all the different states. So yes it will have to be practically perfect before they sell it. There’s also zero chance they sell the promise of upgradability to Level 4, Tesla doesn’t even do that anymore.
It seems like Tesla’s approach is going to be support and legal hell. They’ve sold a product to customers that hasn’t even been invented yet. I’m anxious to see what happens!
 
There is basically zero chance Mobileye ever ships a beta Level 4 system. You’ve got to comply with all the testing rules in all the different states. So yes it will have to be practically perfect before they sell it. There’s also zero chance they sell the promise of upgradability to Level 4, Tesla doesn’t even do that anymore.
It seems like Tesla’s approach is going to be support and legal hell. They’ve sold a product to customers that hasn’t even been invented yet. I’m anxious to see what happens!

This paragraph on the Lucid Air page seems to suggest that Mobileye will upgrade to L4 through software updates:

"In the future, Lucid's assistive technology will help you and your family get things done. Over-the-air software upgrades will allow the Lucid Air to transition through progressive levels of autonomy. In the future, your car will be able to retrieve your groceries, pick up your kids from practice, or provide you a moment to sit back and relax as you are safely driven home. This time is yours."

Although, I imagine that Mobileye will fully validate and test features before releasing them. So Mobileye will not release a beta L4. They will wait until it is ready and then push it out via a software update.
 
This paragraph on the Lucid Air page seems to suggest that Mobileye will upgrade to L4 through software updates:

"In the future, Lucid's assistive technology will help you and your family get things done. Over-the-air software upgrades will allow the Lucid Air to transition through progressive levels of autonomy. In the future, your car will be able to retrieve your groceries, pick up your kids from practice, or provide you a moment to sit back and relax as you are safely driven home. This time is yours."

Although, I imagine that Mobileye will fully validate and test features before releasing them. So Mobileye will not release a beta L4. They will wait until it is ready and then push it out via a software update.
I stand corrected. Seems crazy to promise that. I doubt Mobileye is on the hook for that but who knows...
 
I stand corrected. Seems crazy to promise that. I doubt Mobileye is on the hook for that but who knows...

Since the car is responsible under L4, I would assume that Mobileye would be responsible for any incidents while the car was driving autonomously. However, the Lucid Air has a full suite of cameras, radar and lidar sensors and 2 EyeQ4/5 chips. Presumably, Mobileye worked with Lucid to design the sensor configuration to be optimized for their FSD solution. So when Mobileye does finish the L4 software, it will go on their chips with sensors that they set up. My point being that Mobileye is obviously making sure that everything works from hardware to software the way they want. It would make sense that Mobileye would only deploy their software on cars that have the hardware that they believe works best.
 
I don't know about the other auto makers but Lucid is going with Mobileye's FSD and they will support OTA updates. So at least with Lucid, Mobileye will be able to add new features and fixes with OTA updates just like Tesla does. So that should help with support.

This paragraph on the Lucid Air page seems to suggest that Mobileye will upgrade to L4 through software updates:

"In the future, Lucid's assistive technology will help you and your family get things done. Over-the-air software upgrades will allow the Lucid Air to transition through progressive levels of autonomy. In the future, your car will be able to retrieve your groceries, pick up your kids from practice, or provide you a moment to sit back and relax as you are safely driven home. This time is yours."

Although, I imagine that Mobileye will fully validate and test features before releasing them. So Mobileye will not release a beta L4. They will wait until it is ready and then push it out via a software update.

So I don't believe Mobileye will be supplying Lucid with control algorithms (aka driving policy). The EyeQ5 will just be like any of their other chips, like the EyeQ4, etc. Its vision software is of course fully validated and tested and will be capable of providing L4+, but it's up to the OEMs and Tier 1s to write the control algorithms for it.

So even with the amazing networks and engines you saw Amnon demonstrate, a traditional OEM/Tier 1 would decide to use only one of them, the lane network, to implement and sell a ping-pong ACC with a simplistic control algorithm. That's just how it is and how it has unfortunately played out so far.

For example, ZF demoed a system they made with the EyeQ4, and you could tell it used only the lane network with a simplistic control algorithm; it was complete trash. Yet the EyeQ4 gives access to dozens of highly accurate networks, which they avoid using.
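For what it's worth, here is a toy sketch of what "ping-pong" lane keeping means in practice (purely my own illustration, not ZF's or anyone else's actual controller): a proportional-only steering law on lateral offset keeps overshooting the lane center, while adding a damping term on the lateral rate settles it.

```python
# Toy lateral-control sketch -- not any vendor's code, just an illustration of
# why a proportional-only controller "ping-pongs" between the lane lines.

def simulate(kp, kd, steps=200, dt=0.1):
    offset, rate = 1.0, 0.0                 # start 1 m off the lane center
    trace = []
    for _ in range(steps):
        steer = -kp * offset - kd * rate    # steering command from lane-network offset
        rate += steer * dt                  # crude point-mass lateral dynamics
        offset += rate * dt
        trace.append(offset)
    return trace

ping_pong = simulate(kp=2.0, kd=0.0)   # proportional only: keeps overshooting
damped    = simulate(kp=2.0, kd=2.5)   # add damping: settles near the center

print(round(max(abs(x) for x in ping_pong[100:]), 2))  # still ~1 m swings after 10 s
print(round(max(abs(x) for x in damped[100:]), 4))     # effectively zero
```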

As Amnon previously said, it takes three years for traditional OEMs and Tier 1s to integrate, but a startup needs less than one year.

The problem has always been the Tier 1s and the traditional OEMs. There have been only three in-house systems based on EyeQ chips so far, and they are all by far still the best: AP1, Super Cruise (most people think it uses lidar to localize and actuate, but it doesn't; it uses the lidar map only for curvature-based speed changes, and the control algorithm simply uses the outputs from the EyeQ3) and NIO Pilot. The reason those systems are good is that the control systems are all in-house.

So ZF will release a trash system later this year that's worse than the 2016 Super Cruise or the 2016 AP1, which ran on the inferior 2014 EyeQ3, even though ZF is using the superior EyeQ4, the chip that powered Mobileye's L4 fleet until last month (and also powers Aptiv's L4 fleet and others).

But they will market it as L2+ and as more advanced than a regular L2, while actually being trash compared to the "L2" 2016 Super Cruise. It's embarrassing and frustrating.

The only company Mobileye has partnered with to supply driving policy (the control algorithm) in the future is NIO, in 2022 and in China. It will be Mobileye's complete L4 AV kit.

NIO will engineer and manufacture a self-driving system designed by Mobileye (building on Mobileye’s level 4 AV kit) and will mass-produce it and integrate it with its consumer vehicle lines. Additionally, NIO will develop a specially configured variant of vehicles that Mobileye will use as robo-taxis, deployed for ride-hailing services in unspecified “global markets.”

NIO - NIO Inc. Announces Strategic Collaboration with Mobileye to Bring Level 4 Autonomous Driving Vehicles to Consumers in China and Beyond

Mobileye partners with NIO to develop driverless cars

Hopefully when Lucid has their launch later this year they will reveal a similar partnership with Mobileye, but it's a long shot. Even if they don't, their control algorithm is going to be developed in-house and won't be crappy like the traditional OEMs' and Tier 1s'.
 
My hope is that Mobileye takes a note from Microsoft's playbook, bypasses the Tier 1s they work with (Aptiv, ZF, etc.), and starts providing L2+ driving policy (control algorithms) directly. Aptiv, ZF, etc. will be pissed, just like the OEMs were pissed when Microsoft started the Surface brand and began selling the Surface Pro, Surface Book and Surface Studio. But when MS did that, the OEMs finally started to compete on hardware, make good-looking PCs, and follow MS reference designs.

It's time for Mobileye to do the same: sell their driving policy to provide actual, real L2+ and essentially become the Tier 1 to the OEMs, using it both to generate revenue and to wake the Tier 1s up so they start competing on control algorithms, because their gross incompetence has gone on long enough.

If Mobileye doesn't do that, I don't see things changing. You will have L4 in cities from several startups, including Mobileye, but consumer cars will be filled with trash control algorithms even though they are using the same vision software.

Surface spawned over timid OEMs tip ex-Microsofties
 
Yeah, it's a bummer. I love my Model 3 and I love Autopilot, but I think the Mobileye demo is definitely proof that Tesla would be further along with FSD if they had continued working with Mobileye. The demo is essentially a taste of the FSD we probably would have today if Tesla had stayed with Mobileye.

I suspect that Tesla and Mobileye broke up because Mobileye knew that the HW2 hardware was not going to be good enough and did not want to partner with Tesla if Tesla was going to insist on insufficient hardware. I say this because Mobileye's demo uses 12 high-res cameras instead of the 8 low-res cameras that HW2 uses. And if we look at the Lucid Air, which is also partnering with Mobileye, it uses 8 cameras but adds 6 radars and 4 lidars. So clearly, Mobileye's approach uses more sensors and better hardware than what Tesla wants to use.

I also suspect that the break up happened because Elon wanted to go with his vertical integration approach of doing everything in-house. So Elon wanted Tesla to develop their own perception NN and develop their own computer chip and not rely on an external company. Of course, the downside of this approach is that Tesla has had to basically start from scratch and redo a lot of what other companies like Mobileye have already done.

The end result is that Tesla is way behind the competition in terms of FSD. Tesla may eventually still get there, but they took an unnecessarily harder path and it took longer than it needed to.

And I don't think it would have required that much more: go with HD maps, 12 high-res cameras, 4 radars (one in each corner for added redundancy for cross traffic and blind-spot detection), and 2 EyeQ4 chips, and based on this video, you would probably have something good enough for FSD.
You make a very good point that this could have gone a lot better, but I think the "harder path" is a small price to pay if, in the end, Tesla proves that it is possible. A small startup pouring all this hardware investment into demo PR stunts, hoping an automaker will either provide funding or buy in to support the expansion, is a good route for a small company looking for that special aha! moment. But for Tesla it's better in the long run because, like he said, once they crack the code they can flip a switch and immediately every car on the road is a money maker. If they relied on Nvidia for chips, let's say two years from now when FSD is universally ready, do you think Tesla wants to be price-gouged and limited by another company's manufacturing rate? Two years from now there may be 2 million+ Teslas a year being put on the road; who is manufacturing 2 million FSD chips out of nowhere?

So now they have AP3 and just need to finish the software and then prove everyone wrong...

If they fail, welp, that's gonna be hilarious.

I think the long and hard choice sucked in the beginning, when AP1 was clearly smarter than AP2... but those days are gone, thank god.
 
So now they have AP3 and just need to finish the software and then prove everyone wrong...

You make it sound so easy. The experts, the laws of physics and common sense all agree that "sleep in the back" L5 autonomy on the current sensors and the AP3 chip is impossible.

But maybe Tesla does not need to achieve that type of autonomy to actually "win"? For example, maybe autonomous driving that only works in good weather would still be a big win for Tesla? In other words, in the cost-benefit analysis maybe Tesla still comes out ahead if they can only achieve a lesser form of autonomous driving but on much cheaper hardware?
 
I wouldn't call "semi-automated mapping for roads under 45 mph" human-annotated; partially human-annotated, yes. That's most probably because side roads and side streets don't get as many car runs, whereas almost everyone uses the main roads, so those have a lot of runs. So they use a conservative cutoff (45 mph) so they don't catch any side roads/streets. Again, this is just my honest opinion.

If you want to know more about Mobileye's mapping, watch this: 2019 Mobileye Investor Summit: Tal Babaioff



A lot of people misunderstand this; I'm not sure why it's hard to understand.

The question is: is the probability of a false negative/positive higher or lower with Picture 1 than with all the other pictures and videos?

Picture #1 (random clear-condition picture)

Picture #2 (random fog picture)

Picture #3 (from a Tesla cam)

Video #1

Picture #4

The problem is that HD maps require very precise localization (beyond GPS, IMU or wheel-tick precision), so what they do is extract local feature points from the environment (infrastructure like power poles, signs, bollards) and then localize the car by triangulating its position against the landmarks stored in the map data. If your vision/sensors are impaired, as in the examples above, then localization also becomes unhealthy and you can't place yourself on the map accurately enough to rely on it.
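For anyone curious what "triangulating against landmarks stored in the map" amounts to, here is a minimal sketch (my simplification, not Mobileye's actual REM implementation; the landmark coordinates and ranges are made up): given the surveyed positions of a few landmarks and the ranges the car measures to them, a small least-squares solve recovers the car's position, and with too few visible landmarks the fix is no longer usable, which is exactly the degradation described above.

```python
import numpy as np

# Minimal landmark-localization sketch (an illustration, not Mobileye's REM):
# the map stores surveyed landmark positions; the car measures the range to each
# landmark it can currently see and solves for its own 2D position.

landmarks = np.array([[0.0,  0.0],    # e.g. a sign pole
                      [50.0, 5.0],    # a light pole
                      [20.0, 40.0]])  # a bollard
true_pos = np.array([22.0, 13.0])
ranges = np.linalg.norm(landmarks - true_pos, axis=1)   # what the car "measures"

def localize(landmarks, ranges, guess=(10.0, 10.0), iters=20):
    """Gauss-Newton fit of the car position to the measured ranges (guess ~ GPS prior)."""
    p = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = p - landmarks
        dists = np.linalg.norm(diffs, axis=1)
        residuals = dists - ranges
        J = diffs / dists[:, None]          # d(range)/d(position)
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        p -= step
    return p

print(localize(landmarks, ranges))           # ~[22, 13]: all landmarks visible
print(localize(landmarks[:1], ranges[:1]))   # only one landmark visible:
                                             # under-constrained, result is unusable
```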
 
My Tesla has an inaccurate GPS signal very often, so relying on HD maps to make decisions would require retrofitting a better GPS receiver assembly...
Improving localization accuracy is actually one of the main reasons why Mobileye uses HD maps. Essentially, the car recognizes landmarks from the map in the camera feed and can calculate its own location from their relative positions with sub-10 cm precision.
 
They are not pretending; they are showing you their validation process and the raw numbers they are aiming for. This is a look behind the veil. As the author said, this has never been done before. Everyone else just says "just trust us."




A 360° lidar and radar system WON'T fail in the same way as a camera, as they have different strengths and weaknesses.

For example, a camera will have a higher probability of failure in this scenario (night):


Of course they'll fail in the same place: if something occludes a sensor (e.g. a huge bus blocks the view) or weather reduces visibility, it doesn't matter whether your sensor is an active-photon lidar or a passive camera; it's still a failure. At best, radar might get some signature via an indirect return.

Also, night is an easy case; cameras are weakest at twilight, when you need both high contrast and very wide dynamic range.
 
Unedited Ride in Mobileye’s Camera-Driven Autonomous Vehicle

In the first video, is Mobileye using data transmitted from the stoplights directly to the vehicle?

At 3:27 a red stoplight appears in the upper right-hand corner of the screen, quite a few turns before the stoplight would come into view.


At 3:51 the stoplight is just visible, visualized at the distant intersection on their UI, before they turn the corner and could actually see it.


They don't come to a stop at the stoplight until 4:05 or so.


How did they know the stoplight was red a full 2 turns and a full city block before arriving at it?
 
At 3:27 a red stoplight appears in the upper right-hand corner of the screen, quite a few turns before the stoplight would come into view.

Did anyone have a chance to rewatch this minute or so of footage? There are only two explanations I can think of for what we're seeing.

Either Mobileye is relying on "smart stoplights" that can transmit their red/yellow/green status remotely, or, maybe more likely, as soon as their camera system knows the next turn is at a light, the stoplight icon appears and defaults to red. The system then continues to assume the light is red until it actually sees a green light.
 
Did anyone have a chance to rewatch this minute or so of footage? There are only two explanations I can think of for what we're seeing.

Either Mobileye is relying on "smart stoplights" that can transmit their red/yellow/green status remotely, or, maybe more likely, as soon as their camera system knows the next turn is at a light, the stoplight icon appears and defaults to red. The system then continues to assume the light is red until it actually sees a green light.

I remember a video a couple years back about a Mobileye self-driving car running a red light because of radio interference with the "smart stop light". So at least back then, Mobileye was relying on "smart stop lights" to send a signal to the car. Obviously, now, their system is probably more sophisticated.

To answer your question, the car does seem to anticipate the stop light. I would not be surprised if Mobileye is relying on all three methods combined: smart stop lights that send a signal to the car, camera vision, and HD maps, for triple redundancy. This would give Mobileye very high reliability for traffic-light response, not just to correctly identify a red or green light but also to anticipate a stop light in advance.
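Just to illustrate the kind of conservative fusion being described (my own sketch of the idea, not Mobileye's or Tesla's actual logic): treat the light as red unless some trusted source positively reports green. This would also explain why a light the map knows about but the camera can't see yet shows up as red in the visualization.

```python
# Illustrative traffic-light fusion (not any vendor's actual policy):
# the conservative rule is "assume red unless positively told otherwise".

def fused_light_state(map_says_signal_here, camera_state, v2i_state):
    """
    map_says_signal_here: bool from the HD map (this intersection is signal-controlled)
    camera_state: "red" | "yellow" | "green" | None  (None = light not visible yet)
    v2i_state:    "red" | "yellow" | "green" | None  (None = no smart-light broadcast)
    """
    if not map_says_signal_here and camera_state is None and v2i_state is None:
        return "no_signal"                  # nothing suggests a light is here
    states = {s for s in (camera_state, v2i_state) if s is not None}
    if "red" in states or "yellow" in states:
        return "stop"                       # any source reporting red/yellow wins
    if states == {"green"}:
        return "go"                         # proceed only on a positive green
    return "stop"                           # a light exists but nothing confirms green

# Two corners away: the map knows the light exists, the camera can't see it yet,
# so the system treats (and displays) it as red -- matching the demo footage.
print(fused_light_state(True, None, None))        # stop
print(fused_light_state(True, "green", None))     # go
print(fused_light_state(True, "green", "red"))    # stop (disagreement -> conservative)
```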

But it does raise a good question: is Tesla's camera only method for traffic light response good enough? Even if Tesla gets super reliable camera vision that correctly identifies a red or green light, there will be instances where the car can't see a stop light or when the stop light is broken. So I think an argument could be made that camera only is not good enough and you need smart stop lights that communicate with cars and HD maps for added reliability to handle those edge cases.
 
I remember a video a couple years back about a Mobileye self-driving car running a red light because of radio interference with the "smart stop light". So at least back then, Mobileye was relying on "smart stop lights" to send a signal to the car. Obviously, now, their system is probably more sophisticated.

To answer your question, the car does seem to anticipate the stop light. I would not be surprised if Mobileye is relying on all three methods combined: smart stop lights that send a signal to the car, camera vision, and HD maps, for triple redundancy. This would give Mobileye very high reliability for traffic-light response, not just to correctly identify a red or green light but also to anticipate a stop light in advance.

But it does raise a good question: is Tesla's camera only method for traffic light response good enough? Even if Tesla gets super reliable camera vision that correctly identifies a red or green light, there will be instances where the car can't see a stop light or when the stop light is broken. So I think an argument could be made that camera only is not good enough and you need smart stop lights that communicate with cars and HD maps for added reliability to handle those edge cases.
I liken smart stop lights to vehicle-to-vehicle communication: both are highly susceptible to false positives and interference, not to mention the bad-actor dilemma (someone is waving you on to pull out into traffic; would you just trust them?). While I can certainly appreciate the immediate benefits of HD maps and the near-perfect localization that comes with lidar, the fallback/failure cases of these require a very high level of confidence in perception, planning and prediction based solely on, say, camera vision alone. This would seem to make HD maps and perhaps even lidar short-term interim solutions. Debating who, and with what technology, will first reach the nebulous "FSD" or the more succinct "Level 5" seems irrelevant to the conversation about consumer-level FSD/Level 3/4 in production cars, especially the current production cars that presumably we all own.
 
While I can certainly appreciate the immediate benefits of HD maps and the near-perfect localization that comes with lidar, the fallback/failure cases of these require a very high level of confidence in perception, planning and prediction based solely on, say, camera vision alone. This would seem to make HD maps and perhaps even lidar short-term interim solutions.

They are only short-term solutions if you view them as crutches, i.e. if you think camera-only will solve full self-driving and you just need something to temporarily help you out until the camera vision is finished. But that is incorrect. Camera vision alone will not work for L5 autonomy. You need lidar and HD maps, not as a crutch, but as an integral part of the whole full self-driving system.

A relevant quote from "Safety First for Automated Driving":

"As of today, a single sensor is not capable of simultaneously providing reliable and precise detection, classifications, measurements, and robustness to adverse conditions. Therefore, a multimodal approach is required to cover the detectability of relevant entities."

In other words, a single sensor is NOT good enough to do full self-driving reliably. So a camera-only, lidar-only, radar-only or HD-map-only approach will NOT work for L5 autonomy. It is not a matter of finding one sensor, like just cameras or just lidar, to do full self-driving; no one sensor alone will work. You need cameras, radar, lidar and HD maps working together to make full self-driving really work. Lidar and HD maps are not crutches or short-term solutions; they are integral components of the whole system!
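As a concrete (and heavily simplified, entirely my own) illustration of what that multimodal requirement means in practice: each modality contributes detections, and an object is only trusted when at least two independent modalities confirm it, so no single degraded sensor can silently add or drop it.

```python
from dataclasses import dataclass

# Toy multimodal cross-check (my own illustration of the quote above, not a real
# perception stack): an object is only trusted when >= 2 modalities confirm it.

@dataclass
class Detection:
    modality: str        # "camera", "radar" or "lidar"
    obj_id: str          # hypothetical track ID after association
    confidence: float

def confirmed_objects(detections, min_modalities=2, min_conf=0.5):
    seen_by = {}
    for d in detections:
        if d.confidence >= min_conf:
            seen_by.setdefault(d.obj_id, set()).add(d.modality)
    return {obj for obj, mods in seen_by.items() if len(mods) >= min_modalities}

frame = [
    Detection("camera", "ped_1", 0.92),
    Detection("lidar",  "ped_1", 0.88),   # camera + lidar agree -> trusted
    Detection("radar",  "car_7", 0.75),   # radar-only ghost -> not trusted alone
    Detection("camera", "car_3", 0.30),   # low-confidence blob in fog -> ignored
]
print(confirmed_objects(frame))           # {'ped_1'}
```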