
Autonomous Car Progress

[Diagram that implies equal-vote participation of the Camera World Model and the Radar/Lidar World Model]

...In the ME system, I think the idea is that the system only fails if both parts fail at the same time. So it is not dependent on the weakest link. The car can still drive if the cameras fail. The car can still drive if the radar or lidar fails. ...

This is certainly interesting, but I believe it's an overstatement (by Mobileye) of their redundancy approach. I find it somewhat questionable that they would continue to drive the car for any period of time (beyond a safety pull-over or road-exit action) without all sensors functioning, and highly questionable with a camera-vision failure in particular.

First, Radar: there are indeed credible arguments for the benefits of Radar as an adjunct to vision, but the uniquely helpful velocity information it provides is very poorly tied to specific objects unless fused with a much higher-resolution sensor. This is certainly the case with standard low-cost radar equipment such as Tesla employ(ed), but it remains true even of more advanced radar arrays - higher resolution, yet not nearly sufficient to "see" well enough to drive. When successfully fused with Camera or high-resolution Lidar output, though, it makes sense.
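To make the fusion point concrete, here's a minimal sketch (entirely my own illustration, not anyone's production code - the numbers, names, and nearest-bearing matching are all assumptions) of how a sparse radar return, which carries velocity but little identity, gets tied to a specific object only by associating it with a camera detection:

```python
# Hypothetical illustration: radar returns carry great velocity data but
# poor object identity; associating each return with the nearest camera
# bounding box (by bearing) is what ties that velocity to a real object.

# Radar returns: (bearing deg, range m, radial velocity m/s)
radar_returns = [(-2.0, 80.0, -25.0), (15.0, 40.0, 0.5)]

# Camera detections: (label, bearing deg of bounding-box center)
camera_detections = [("truck", -1.5), ("sign", 14.0)]

def associate(returns, detections, max_bearing_gap=3.0):
    """Greedy nearest-bearing association of radar returns to camera objects."""
    fused = []
    for bearing, rng, vel in returns:
        label, det_bearing = min(detections, key=lambda d: abs(d[1] - bearing))
        if abs(det_bearing - bearing) <= max_bearing_gap:
            fused.append((label, rng, vel))  # velocity now has an identity
    return fused

print(associate(radar_returns, camera_detections))
# [('truck', 80.0, -25.0), ('sign', 40.0, 0.5)]
```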

So next we consider the all-important Lidar foundation of the right-side World View. Lidar, in good weather, can produce those famously impressive 3D point clouds, and I'd grant that they'll be accurate with far higher precision and confidence than the emerging video-lidar proxy method (how necessary that is, is a key point of contention). But I think those false-color, photo-like representations we see in some Lidar-promoting literature are misleading. Lidar is getting the correct 3D XYZ values for each surface point in its World View, but it knows little else about the surfaces. Imagine shrink-wrapping everything around you in matte grey plastic film: you don't see colors or even a "Black & White TV" version of colors, you don't see lights, and you can't read signs (except in special conditions too unreliable to count on). It's better than nothing if the cameras fail, but you don't have a chance of driving safely for long this way - unlike the reverse, where cameras alone with the right software have an extremely good chance of success (how good, whether good enough in theory, whether Tesla's cameras are sufficient in practice - this becomes the debate). Even Mobileye's website says
"the camera subsystem is the backbone of the AV, while the radar-LiDAR subsystem is added to provide enhanced safety and a significantly higher mean time between failures (MTBF)."

IMO this is a critical footnote that belies the equal-capability implication of the diagram and of the claim
"An AV that can drive on radar/LiDAR alone"
(noting that, just above, they say "a development AV").

The fair conclusion is that a system (whether Tesla's, Wayve's, or Mobileye's own AV) might well become good enough on Camera vision alone, and might be better, albeit more costly, with Lidar/Radar added, but it cannot operate solely on the latter equipment set as it exists today.
 
  • Helpful
Reactions: diplomat33
This is certainly interesting, but I believe it's an overstatement (by Mobileye) of their redundancy approach. I find it somewhat questionable that they would continue to drive the car for any period of time (beyond a safety pull-over or road-exit action) without all sensors functioning, and highly questionable with a camera-vision failure in particular.
Obviously they wouldn't. This isn't that type of redundancy. In this context, the "Failure" in MTBF is hitting something.
Let's say you have two sensors with equal but uncorrelated performance at recognizing semi-truck trailers crossing the highway. They both recognize the trailer 99% of the time but every 100,000 miles they detect a trailer when there isn't one. If you combine the outputs you now have a 99.99% chance of detecting a semi-truck crossing the road, a huge improvement in safety. Of course you'll now have phantom braking events every 50k miles vs. 100k miles but overall you've improved the safety of the system.
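A quick back-of-envelope check of those numbers (purely illustrative):

```python
# Two uncorrelated sensors, each detecting the trailer 99% of the time,
# OR-fused so that either sensor's detection counts.
p_miss_each = 0.01
p_miss_both = p_miss_each ** 2            # both must miss at the same time
print(f"combined detection rate: {1 - p_miss_both:.4%}")  # 99.9900%

# False positives roughly double: one per 100,000 miles per sensor.
fp_per_mile_each = 1 / 100_000
fp_per_mile_fused = 2 * fp_per_mile_each  # either sensor can trigger braking
print(f"miles between phantom brakes: {1 / fp_per_mile_fused:,.0f}")  # 50,000
```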
Collisions caused by hardware failures seem very unlikely in general. The vast majority of the time it is perfectly safe to just stop while following your current trajectory. Cars break down all the time, what percentage of collisions are caused by hardware failures? There is no redundancy except a very small amount in the brake system.
 
  • Like
Reactions: Matias
Obviously they wouldn't. This isn't that type of redundancy. In this context, the "Failure" in MTBF is hitting something.
Let's say you have two sensors with equal but uncorrelated performance at recognizing semi-truck trailers crossing the highway. They both recognize the trailer 99% of the time but every 100,000 miles they detect a trailer when there isn't one. If you combine the outputs you now have a 99.99% chance of detecting a semi-truck crossing the road, a huge improvement in safety. Of course you'll now have phantom braking events every 50k miles vs. 100k miles but overall you've improved the safety of the system.
Collisions caused by hardware failures seem very unlikely in general. The vast majority of the time it is perfectly safe to just stop while following your current trajectory. Cars break down all the time, what percentage of collisions are caused by hardware failures? There is no redundancy except a very small amount in the brake system.
I agree with all points except that it's "obvious" to everyone.
I was addressing two things:
  • The written claim that the system as described is "an AV that can drive on radar/LiDAR alone", as published by Mobileye and argued by @diplomat33,
  • The clear but misleading implication that Lidar+Radar thus plays a co-equal role in the resulting redundantly-equipped AV
These things are claimed and repeated, but IMO they are not true and require a response - clearly not an obvious point. Otherwise we're on the same page.
 
  • Like
Reactions: Daniel in SD
I agree with all points except that it's "obvious" to everyone.
I was addressing two things:
  • The written claim that the system as described is "an AV that can drive on radar/LiDAR alone", as published by Mobileye and argued by @diplomat33,
  • The clear but misleading implication that Lidar+Radar thus plays a co-equal role in the resulting redundantly-equipped AV
These things are claimed and repeated, but IMO they are not true and require a response - clearly not an obvious point. Otherwise we're on the same page.
Are they truly independent; what about common-cause failure? These are statistical events - can they not happen at the same time or in close proximity? What is the likelihood of no detection, or of misleading results? Without knowing the causal chain or seeing the matrix, how can one know?
 
Are they truly independent; what about common-cause failure? These are statistical events - can they not happen at the same time or in close proximity? What is the likelihood of no detection, or of misleading results? Without knowing the causal chain or seeing the matrix, how can one know?
Independence:
If you read the fine print, they're independent perceptually, but clearly not fully independent as successful, trustworthy stand-alone AV operators in normal operation. Right after claiming they are, Mobileye clarifies that the Vision side is the "backbone", consistent with my argument that if one side can operate alone, that can only be the Vision side.

Common-cause failure:
This seems very general; any vehicle or system can suffer it. True redundancy against common-cause failure would require dual or secondary everything: battery module, sensors, electronics, wiring, electromechanical control. As @Daniel in SD said, this isn't that kind of redundancy. Thinking about it, I'd say it's a complementary architecture that draws increased confidence from each sensor set's strengths, per below:
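As a rough illustration of the common-cause worry (my own toy model, not anything from Mobileye), here's what correlation between the two sensors' misses does to the 99.99% figure from the earlier two-sensor example:

```python
# For two Bernoulli miss events with miss probability p and correlation rho,
# P(both miss) = p^2 + rho * p * (1 - p). rho > 0 models shared causes,
# e.g. fog or glare degrading both sensor sets at once.
p = 0.01                                   # each sensor's miss probability
for rho in (0.0, 0.1, 0.5):
    p_both_miss = p * p + rho * p * (1 - p)
    print(f"rho={rho}: combined detection rate {1 - p_both_miss:.3%}")
# rho=0.0 -> 99.990%, rho=0.1 -> ~99.891%, rho=0.5 -> ~99.495%
# Even modest correlation erases most of the independence benefit.
```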

Their real point of differentiation, according to the flow diagram and per the summary by @diplomat33, is that the fusion of the two sides comes after the Perception modules (though there is fusion of Radar+Lidar in the Perception module of that side). So not Sensor Fusion overall, but Policy and Planning fusion of the two World Views. Although as I keep saying, the Radar+Lidar cannot actually be used to safely pilot the AV in the real world, nonetheless the system architecture asks for its World View as if it could, and this is perhaps a good way to solve well-known problems of normal Sensor Fusion.
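In code-shaped terms, my reading of the flow diagram is something like the sketch below (my reconstruction of the architecture, not Mobileye's actual code; every name here is invented): each side runs its own full perception stack and emits a complete World View, and only those World Views are fused, at the policy/planning level.

```python
# Sketch of "late" (policy-level) fusion as I read the diagram.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                 # e.g. "vehicle", "pedestrian", "drivable"
    position: tuple            # (x, y) meters in the ego frame
    confidence: float          # assigned by that side's perception module

def camera_perception(frames) -> list[Detection]:
    ...                        # vision-only stack: full semantic World View

def radar_lidar_perception(radar_returns, lidar_points) -> list[Detection]:
    ...                        # radar+lidar fused internally, on this side only

def plan(vision_view: list[Detection], radar_lidar_view: list[Detection]):
    ...                        # consumes two complete World Views and resolves
                               # disagreements object-by-object (sketch below)
```

The point of the shape: there is no cross-sensor fusion upstream of plan(), which is exactly what sidesteps the usual Sensor Fusion headaches.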

I'm guessing that what is really going on here is a dominantly Vision-driven car, not unlike the Tesla Vision FSD approach, but using the Radar+Lidar in a scout / backseat-driver role.
"I saw something that you missed or misjudged, better not go there."
Or, "you saw a confusing shadow or puddle across the road, but I can tell you it's clearoy not a solid object or a ditch".
Or, (famous edge case) "I'm very confident that image you see of a (painted-on) clear underpass is really a solid wall!"

In other words, it goes a long way towards solving the minority (but critically important) set of cases where Vision is non-confident or falsely confident. And it does so without introducing the kind of difficult Sensor Fusion challenges that people commonly talk about, including the noisy, pulsing radar-estimation examples given by Andrej Karpathy in his recent CVPR talk.

Vision: I'm approaching an overpass. Looks like a normal overpass to me, a dark shadow underneath but that's typical.

Planning: OK let's keep going.

RadarLidar: BlippyBlip, I don't know, kind of noisy data...

Planning: You're always doing that at the overpasses, let's keep going.

Vision: Though that shadow does look slightly unusual, really hard to tell...

RadarLidar: Hey, that's no shadow - that's a thing - a person under there! High confidence now!

Planning: Got it thanks - slowing, moving over...

Vision: Hey I think I see a person now, watch out!

Planning: Already covered. Thanks RadarLidar. We'll pick up speed on the other side. Keep pinging, we may need you again.

Vision: That's definitely a person! But safe, not near our path right now. Don't worry.

Planning: Yeah thanks, (Duh)...


Hope you enjoyed that 🙂. I like the approach, but I see it not as real redundancy - more like recon for the squad. I don't think they should be describing it as a self-sufficient AV using Radar+Lidar; that's not real, but it informs the architectural flow diagram and is a way to resolve some known conflicts, like the one posed in Elon's tweet:
"When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion."
Mobileye is avoiding sensor fusion, avoiding Karpathy's radar-noise example, adding Lidar (which trumps the precision argument), and letting the Planning module choose, in the moment, which divergent World View item to trust. How to choose? Each Perception module's output World View data set has already assigned confidence values to its detected objects and drivable-space regions, so generally pick the more confident one - but, quite importantly, also weight-adjusted for factors of downside risk. Most often they will agree.
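Here's a minimal sketch of that "more confident, risk-weighted" arbitration rule as I imagine it (my guess at the logic, with made-up names and weighting - not Mobileye's actual policy code):

```python
def arbitrate(conf_vision: float, conf_radar_lidar: float,
              downside_risk: float) -> str:
    """Pick which World View to trust for one disputed object.

    downside_risk in [0, 1]: how costly it is if the object is real and
    we miss it. High risk biases toward the side reporting an obstacle
    (here assumed to be the radar/lidar side, as in the examples above).
    """
    score_vision = conf_vision
    score_radar_lidar = conf_radar_lidar * (1.0 + downside_risk)  # safety bias
    return "vision" if score_vision >= score_radar_lidar else "radar_lidar"

# The painted-wall edge case: vision is fairly confident the way is clear,
# radar/lidar is somewhat confident of a solid surface, and being wrong is
# catastrophic - so the planner defers to radar/lidar.
print(arbitrate(conf_vision=0.7, conf_radar_lidar=0.6, downside_risk=0.9))
# -> radar_lidar
```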
 
Interesting article about a different approach to solving the autonomous-vehicle problem: not doing it with enormous quantities of data, but by mimicking other cars. They have had some great initial success.

 
If I were Tesla, I would just stay at Level 2 or 3 for the next 5-10 years and be proud of it; higher levels just open them up to liability. There's so much money to be made in an advanced driver-assistance subscription/add-on (especially paired with their own insurance product) versus getting into the whole cutthroat, race-to-the-bottom ridesharing/robotaxi business. No one in that field is ever going to make their investment back.
I would be very disappointed by that.
A Level 2 system fails too frequently, it fails so quickly that it can be hard to save, and it fails unpredictably and with severe consequences.

A Level 2 system that never fails would be Level 4.

A Level 2 system that fails predictably and with plenty of time (10 seconds) to take control is Level 3.

Higher level autonomy would be a huge selling point and price lever.
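A toy restatement of those distinctions (the framing above, not the formal SAE J3016 definitions - the 10-second figure is from the post, everything else is illustrative):

```python
def sae_level(fails_in_domain: bool, warns_before_failing: bool,
              takeover_notice_s: float) -> int:
    """Map the post's failure-behavior framing onto a level number."""
    if not fails_in_domain:
        return 4   # never hands off within its design domain
    if warns_before_failing and takeover_notice_s >= 10:
        return 3   # fails predictably, with time to take control
    return 2       # may fail at any moment; human must supervise constantly

print(sae_level(True, False, 0))    # 2: unpredictable, instant failure
print(sae_level(True, True, 10))    # 3: predictable, 10 s to take over
print(sae_level(False, True, 0))    # 4: never fails in its domain
```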
 
I would be very disappointed by that.
A Level 2 system fails too frequently, it fails so quickly that it can be hard to save, and it fails unpredictably and with severe consequences.

A Level 2 system that never fails would be Level 4.

A Level 2 system that fails predictably and with plenty of time (10 seconds) to take control is Level 3.

Higher level autonomy would be a huge selling point and price lever.
That's not necessarily true, though: you could have a really great driving system that almost never fails and handles all maneuvers on all roadways, and it's still Level 2 or 3 because the company doesn't want to be held liable for accidents, so the owner is kept in the driving loop. The meaningful difference between levels is who's liable - the driver, the manufacturer, or the fleet operator? The only reason you'd do Level 4/5 is that you're running a robotaxi fleet and have no choice but to take liability.
 
Here are the prices for Mobileye's SuperVision package on the 2021 Zeekr 001:

All hardware on the car is standard (2x EyeQ5H, 12x cameras).

ZAD Basic (ACC, LK, AEB, AP..) is standard.
ZAD Advanced (Automatic Lane Change, Unmanned Scene Automatic Parking, Remote Parking, Steering Collision Avoidance Assist, etc.) is $2,500
ZAD Ultimate (Highway Automation and Urban Automation) is $5,500

Edited: made changes to the features each package has, as I had it wrong. Also, I think Steering Collision Avoidance Assist is the RSS Vision Zero implementation.

 
Here are the prices for Mobileye's SuperVision package on the 2021 Zeekr 001:

All hardware on the car is standard (2x EyeQ5H, 12x cameras).

Basic (ACC, LK, AEB..) is standard.
ZAD Advanced (Highway Automation) is $2,500
ZAD Ultimate (Full Urban Automation) is $5,500

Is the ZAD Ultimate separate from the ZAD Advanced or does it already include the ZAD Advanced?

IMO, it does seem very competitive with Tesla's pricing.
 
Is the ZAD Ultimate separate from the ZAD Advanced or does it already include the ZAD Advanced?

IMO, it does seem very competitive with Tesla's pricing.

Actually I messed up; I've fixed the feature list of each package. Ultimate has both highway and urban automation, and Advanced doesn't have highway automation.
I think each package includes the features of the lower package, so you don't need to buy both Ultimate and Advanced.

Another thing I noticed is that Steering Collision Avoidance Assist is the RSS Vision Zero implementation that Mobileye talked about in the past. Also, with highway and urban automation being one package, it proves that Mobileye's SuperVision is one system and isn't a bunch of separate systems frankensteined together.

And yes, it's very competitive: $10k vs. $5k.
 
  • Like
Reactions: diplomat33
Actually I messed up; I've fixed the feature list of each package. Ultimate has both highway and urban automation, and Advanced doesn't have highway automation.
I think each package includes the features of the lower package, so you don't need to buy both Ultimate and Advanced.

Another thing I noticed is that Steering Collision Avoidance Assist is the RSS Vision Zero implementation that Mobileye talked about in the past. Also, with highway and urban automation being one package, it proves that Mobileye's SuperVision is one system and isn't a bunch of separate systems frankensteined together.

And yes, it's very competitive: $10k vs. $5k.

Thanks. Yeah, ZAD Ultimate including everything is even better. It is super competitive.

Honestly, I would pick the ZAD Ultimate package over Tesla's FSD package, if I could. Mobileye needs to bring SuperVision to a nice EV in the US!
 
I'm not familiar with the issue, and my experience is with avionics. But I would say no, only because the redundant sensors need not be different, just redundant. To me, one reason to use different sensors is as a backup, albeit with a potential decrease in fidelity. My guess is it's more of a data-fusion issue, although I can't rule out cost-benefit.

@rxlawdude @powertoold @JHCCAZ

Redundancy is the backbone of any safety-critical system; when it's not implemented, this happens.
I will go more in depth about this in another post.

"Boeing has long embraced the power of redundancy to protect its jets and their passengers from a range of potential disruptions, from electrical faults to lightning strikes.​
The company typically uses two or even three separate components as fail-safes for crucial tasks to reduce the possibility of a disastrous failure. Its most advanced planes, for instance, have three flight computers that function independently, with each computer containing three different processors manufactured by different companies.​
So even some of the people who have worked on Boeing’s new 737 MAX airplane were baffled to learn that the company had designed an automated safety system that abandoned the principles of component redundancy, ultimately entrusting the automated decision-making to just one sensor — a type of sensor that was known to fail. Boeing’s rival, Airbus, has typically depended on three such sensors.
“A single point of failure is an absolute no-no,” said one former Boeing engineer who worked on the MAX, who requested anonymity to speak frankly about the program in an interview with The Seattle Times. “That is just a huge system engineering oversight. To just have missed it, I can’t imagine how.”​
Boeing’s design made the flight crew the fail-safe backup to the safety system known as the Maneuvering Characteristics Augmentation System, or MCAS.​
A faulty reading from an angle-of-attack sensor (AOA) — used to assess whether the plane is angled up so much that it is at risk of stalling — is now suspected in the October crash of a 737 MAX in Indonesia, with data suggesting that MCAS pushed the aircraft’s nose toward Earth to avoid a stall that wasn’t happening. Investigators have said another crash in Ethiopia this month has parallels to the first.​
Boeing has been working to rejigger its MAX software in recent months, and that includes a plan to have MCAS consider input from both of the plane’s angle-of-attack sensors, according to officials familiar with the new design. The MAX cockpit will now include a warning light that will illuminate when the two angle-of-attack sensors disagree. "​
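To connect this back to the sensor-redundancy discussion: the fix described in that last paragraph is essentially a two-sensor cross-check. A minimal sketch of the idea (illustrative only - the threshold and function names are my inventions, not Boeing's actual logic):

```python
DISAGREE_THRESHOLD_DEG = 5.5   # assumed value, not Boeing's actual figure

def aoa_input(left_deg: float, right_deg: float):
    """Return (usable_aoa, disagree_warning) from two redundant AOA sensors."""
    if abs(left_deg - right_deg) > DISAGREE_THRESHOLD_DEG:
        return None, True      # don't act automatically; light the warning
    return (left_deg + right_deg) / 2.0, False

print(aoa_input(4.0, 4.6))     # (4.3, False): sensors agree, value usable
print(aoa_input(4.0, 22.0))    # (None, True): the single-sensor failure mode
```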

 
That's not necessarily true, though: you could have a really great driving system that almost never fails and handles all maneuvers on all roadways, and it's still Level 2 or 3 because the company doesn't want to be held liable for accidents, so the owner is kept in the driving loop. The meaningful difference between levels is who's liable - the driver, the manufacturer, or the fleet operator? The only reason you'd do Level 4/5 is that you're running a robotaxi fleet and have no choice but to take liability.
Yes, agreed. But IMO, if the system is safe enough to be classified higher, the liability expenses for the company would be so small, and the upside so huge, that it would be a no-brainer to classify it higher.

You also have the issue of user complacency, and the more serious issue of general company trustworthiness/reputation if the company speaks with two tongues about what safety level the system has.
 
That's not necessarily true, though: you could have a really great driving system that almost never fails and handles all maneuvers on all roadways, and it's still Level 2 or 3 because the company doesn't want to be held liable for accidents, so the owner is kept in the driving loop. The meaningful difference between levels is who's liable - the driver, the manufacturer, or the fleet operator? The only reason you'd do Level 4/5 is that you're running a robotaxi fleet and have no choice but to take liability.
The manufacturer of an L3 vehicle is almost certain to be liable. Regardless, the big difference is that in an L2 system the driver is criminally liable if they are not monitoring the system, whereas in an L3-L5 system they are not. Not facing criminal charges is a huge selling point for many consumers.
 
Argo AI has now received a permit from regulators in CA to give free rides to the public in their AVs with safety drivers:

Argo AI, the autonomous vehicle technology startup backed by Ford and VW, has landed a permit in California that will allow the company to give people free rides in its self-driving vehicles on the state’s public roads. The California Public Utilities Commission issued the so-called Drivered AV pilot permit earlier this month, according to the approved application.

8 other companies have received the permit to date:

Aurora, AutoX, Cruise, Deeproute, Pony.ai, Voyage, Zoox and Waymo have all received permits to participate in the CPUC’s Drivered Autonomous Vehicle Passenger Service Pilot program, which requires a human safety operator to be behind the wheel. Companies with this permit cannot charge for rides.

Cruise is the only company to have secured a driverless permit from the CPUC, which allows it to shuttle passengers in its test vehicles without a human safety operator behind the wheel.

CA has a lot of regulatory hoops to jump through before you can actually deploy a driverless commercial robotaxi service:

Snagging the CPUC’s Drivered permit is just part of the journey to commercialization in California. The state requires companies to navigate a series of regulatory hurdles from the CPUC and the California Department of Motor Vehicles — each agency with its own tiered system of permits — before it can charge for rides in robotaxis without a human safety operator behind the wheel.

The DMV regulates and issues permits for testing autonomous vehicles on public roads. There are three levels of permits issued by the DMV, starting with one that allows companies to test AVs on public roads with a safety operator behind the wheel. More than 60 companies have this basic testing permit.

The next permit allows for driverless testing, followed by a deployment permit for commercial operations. Driverless testing permits, in which a human operator is not behind the wheel, have become the new milestone and a required step for companies that want to launch a commercial robotaxi or delivery service in the state. AutoX, Baidu, Cruise, Nuro, Pony.ai, Waymo, WeRide and Zoox have driverless permits with the DMV.

The final step with the DMV, which only Nuro has achieved, is a deployment permit. This permit allows Nuro to deploy at a commercial scale. Nuro’s vehicles can’t hold passengers, just cargo, which allows the company to bypass the CPUC permitting process.

 
It's incredible how the tables have turned.

Just a few months ago, Tesla was considered to be last among the major FSD developers. Now, we have

Tesla, Waymo, and everyone else that's catching up to Waymo

The problem for everyone else is that Waymo has been stagnant, their leadership has left, and they've yet to expand their service, not to mention that Waymo still avoids highways. Even worse, JJRicks is no longer making Waymo videos for the time being! Big thanks to JJRicks by the way.

Meanwhile, Tesla has been making significant progress with every release. Two weeks baby!

I think those who thought that everyone else was ahead of Tesla a year ago need to reassess their rationale for who's really leading the FSD race, because the leadership chart is being turned upside down, as Elon previously mentioned.
 