Autonomous Car Progress

A good post, but this is only 25% of the whole deal.
Independence:
If you read the fine print, they're independent perceptually, but clearly not fully independent as successful, trustworthy stand-alone AV operators in normal operation. Right after claiming they are, Mobileye clarifies that the Vision side is the "backbone", consistent with my argument that if one side can operate alone, it can only be the Vision side.
I should note that Amnon has always made clear in his presentations that they are talking about perception independence, not driving policy independence, and that the driving policy uses two independent ways of sensing the environment.
Common-cause failure:
This seems very general; any vehicle or system can suffer it. True redundancy against this would require dual or secondary everything: battery module, sensors, electronics, wiring, electromechanical controls. As @Daniel in SD said, this isn't that kind of redundancy. Thinking about it, I'd say it's a complementary architecture that draws increased confidence from each sensor set's strengths, per below:
Yes, that kind of redundancy is hardware and system redundancy, which is different from what true redundancy is about. Here is what Mobileye is doing for system redundancy, if you are wondering. Obviously the vehicle they are using, like Waymo's, has steering/braking and power redundancy (hardware redundancy).

They also have a fail-operational board with 3x EyeQ5H that acts as a backup if the primary system fails. The fail-operational board also handles perception, mapping, and planning independently. Altogether you get 9x EyeQ5; in the future it will be 3x EyeQ6.
[Image: Mobileye AV kit architecture diagram]
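Just to illustrate the fail-operational idea, here is a minimal sketch of a heartbeat watchdog that hands control to the backup compute when the primary goes silent. This is my own illustration, not Mobileye's actual design; the board abstraction and the timeout value are hypothetical:

```python
import time

HEARTBEAT_TIMEOUT_S = 0.2  # hypothetical deadline, not a Mobileye spec

class ComputeBoard:
    """Stand-in for a driving computer (primary or fail-operational backup)."""
    def __init__(self, name: str):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self) -> None:
        """Called periodically by the board's own software while healthy."""
        self.last_heartbeat = time.monotonic()

    def is_alive(self) -> bool:
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT_S

def active_board(primary: ComputeBoard, backup: ComputeBoard) -> ComputeBoard:
    """Drive from the primary while it is healthy; otherwise hand control to
    the backup, which runs its own perception/mapping/planning stack."""
    return primary if primary.is_alive() else backup
```

The point of the backup running its own independent perception and planning is that the handoff can happen mid-drive rather than requiring an immediate stop.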

Their real point of differentiation, according to the flow diagram and per the summary by @diplomat33, is that the fusion of the two sides comes after the Perception modules (though there is fusion of Radar+Lidar within that side's Perception module). So it is not overall Sensor Fusion (this is referred to as late fusion), but Policy and Planning fusion of the two World Views. Although, as I keep saying, the Radar+Lidar cannot actually be used to safely pilot the AV in the real world, the system architecture nonetheless asks for its World View as if it could, and this is perhaps a good way to solve well-known problems of normal Sensor Fusion.
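Roughly, late fusion means each chain runs its own full perception stack and the merge happens on the resulting object lists ("World Views") rather than on raw sensor data. A minimal sketch of the merge step, purely as my own illustration (not Mobileye's code; the detection fields and matching radius are made up):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # position ahead of the car, meters
    y: float           # lateral offset, meters
    confidence: float  # 0..1

def fuse_late(vision_view: list[Detection],
              radar_lidar_view: list[Detection],
              match_radius_m: float = 1.0) -> list[Detection]:
    """Late fusion: merge two independently produced object lists.
    Anything either chain reports is kept (OR-fusion for safety);
    detections that land close together count as the same object."""
    fused = list(vision_view)
    for rl in radar_lidar_view:
        duplicate = any((rl.x - v.x) ** 2 + (rl.y - v.y) ** 2 <= match_radius_m ** 2
                        for v in vision_view)
        if not duplicate:
            fused.append(rl)
    return fused
```

Early fusion would instead combine raw radar returns and camera pixels before any object detection, which is where the hard alignment and noise problems live.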
The radar/lidar with a front camera can absolutely be used to drive the car on routes that involve traffic-light control. It's crucial to have a system that can drive independently (in most cases), because you can use it to bring the car to a safe parking spot if a failure were to happen. Especially if you have passengers, you don't want to just stop in the middle of the highway; you want to lane-change to the shoulder and then park. If there is no shoulder, then take the next exit and park in a public parking lot.
I'm guessing that what is really going on here is a dominantly Vision-driven car, not unlike the Tesla Vision FSD approach, but using the Radar+Lidar in a scout / backseat-driver role.
"I saw something that you missed or misjudged, better not go there."
Or, "you saw a confusing shadow or puddle across the road, but I can tell you it's clearoy not a solid object or a ditch".
Or, (famous edge case) "I'm very confident that the image you see of a (painted-on) clear underpass is really a solid wall!"
It will do all of that and then more.
Think of it as two independent ways to detect pedestrians and vehicles.

The way I put it is that you have different sensor modalities (camera, lidar, radar) that have different pros and cons and fail for different reasons. So it's not like having two systems of cameras, which would have correlated failures. Radar can see through rain, snow, fog, and dust; cameras can't. Lidar can see in direct sunlight and in low-light situations including pitch darkness; cameras struggle with those. I could keep going.

If you have two systems with uncorrelated failures, you get two benefits: the amount of data required to validate the perception system is massively lower, and if one of the independent systems fails, the vehicle can continue operating safely, in contrast to a low-level fused system that must cease driving immediately.

In essence, if you have two completely independent ways to sense a person, you assume they won't fail in a correlated way. If either sensor sees a person, you act as if a person is there. And because of independence, you assume that both sensor systems missing a real person is so improbable that it will essentially never happen. Sure, it's not 100% independent, but there are hundreds or thousands of independent failure modes. Complete independence would let you simply multiply the two numbers: if your MTBF was 10,000 hours for each system, 10,000 hours × 10,000 hours = 100 million hours. Since it's not completely independent, it's lower. If you end up somewhere around 10 million hours, that's a lot better than 10,000 hours. Even 1 million is a lot better than 10k.
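Here's that arithmetic in toy form, assuming failures can be treated as random events counted per hour. The linear correlation blend is my own crude stand-in, just to show the orders of magnitude involved:

```python
def combined_mtbf_hours(mtbf_a: float, mtbf_b: float,
                        correlation: float = 0.0) -> float:
    """Combined MTBF of a two-channel system that fails only when
    BOTH channels fail in the same hour.

    correlation = 0.0 -> fully independent: failure rates multiply.
    correlation = 1.0 -> fully correlated: no better than one channel.
    Blending the two rates is a rough model of partial correlation."""
    independent = mtbf_a * mtbf_b      # e.g. 1e4 * 1e4 = 1e8 hours
    worst_case = min(mtbf_a, mtbf_b)   # correlated failures dominate
    # interpolate on failure *rate*, not MTBF, so the blend behaves sanely
    rate = correlation / worst_case + (1 - correlation) / independent
    return 1.0 / rate

print(combined_mtbf_hours(10_000, 10_000, correlation=0.0))    # 100,000,000 h
print(combined_mtbf_hours(10_000, 10_000, correlation=0.001))  # ~9.1 million h
```

Even a tiny residual correlation (0.1% here) drags the combined figure from 100 million hours down to around 10 million, which is exactly the "lower, but still vastly better than one channel" point above.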
In other words, it goes a long way towards solving the minority set (but a critically important set) of cases where Vision is non-confident or falsely confident. And it does so without introducing the kind of difficult Sensor Fusion challenges that people commonly talk about, including the noisy, pulsing radar-estimation examples given by Andrej Karpathy in his recent CVPR talk.
Remember, the problem Tesla is having is that they are using radars from 2011 and 2014.
Waymo already uses 4D imaging radars in its 5th-generation cars, and Mobileye is planning to.
Mobileye's radar, for example, has over 500k points per second, while Tesla's radar has 900 points per second. That's a huge difference, and it matters for sensor fusion.
RadarLidar: BlippyBlip, I don't know, kind of noisy data...
Good analogy, but it makes the radar/lidar system look weak.
Take this video, for example. It takes only seconds to go from good weather to zero-visibility fog, and you can't stop in the middle of the road because you become a target to be hit. A vision-only system would be screwed.

www.youtube.com/watch?v=uu-OLV3x0E0
Hope you enjoyed that 🙂. I like the approach, but I see it not as real redundancy, more like recon for the squad. I don't think they should be describing it as a self-sufficient AV using Radar+Lidar; that's not real, but it informs the architectural flow diagram and is a way to resolve some known conflicts, as posed in Elon's tweet:
"When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion."
Mobileye is avoiding sensor fusion, avoiding Karpathy's radar-noise example, adding Lidar (which trumps the precision argument), and letting the Planning module choose, in the moment, which divergent World View item to trust. How to choose? Each Perception output's World View data set has already assigned confidence values to its detected objects and drivable-space regions, so generally pick the more confident one - but, quite importantly, weight-adjusted for factors of downside risk. Most often they will agree.
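Something like this, purely as an illustration; the labels and the risk weight are hypothetical, since the source doesn't say how Mobileye actually weights downside risk:

```python
def choose_world_view_item(vision_item, radar_lidar_item,
                           downside_risk_weight: float = 2.0):
    """Pick between two divergent detections of the same region.

    Each item is (label, confidence). Confidence is adjusted so that
    the riskier interpretation (e.g. 'obstacle' vs 'shadow') needs
    less raw confidence to win - a crude downside-risk weighting.
    """
    def score(item):
        label, confidence = item
        risky = label in {"obstacle", "pedestrian", "ditch"}  # hypothetical taxonomy
        return confidence * (downside_risk_weight if risky else 1.0)

    return max((vision_item, radar_lidar_item), key=score)

# The puddle-vs-ditch case: vision is fairly sure it's just a shadow,
# radar/lidar is less confident but reports a solid obstacle.
print(choose_world_view_item(("shadow", 0.7), ("obstacle", 0.4)))
# -> ('obstacle', 0.4): 0.4 * 2.0 = 0.8 beats 0.7, so the planner avoids it.
```

The asymmetry is the point: when the two World Views disagree, the cost of wrongly trusting "all clear" is far higher than the cost of wrongly braking for a shadow.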
Mobileye will release an unedited video showcasing their radar/lidar-only system later in the year. Admittedly, that system is behind their vision-only perception system.
 
Don't miss the point that the stakes in aviation are higher. Every airliner accident makes the news and is painstakingly analyzed.

Conversely, road traffic accidents are a way of life. We tolerate many thousands of deaths, not to mention serious injuries, from road traffic, and hardly anybody cares.

Therefore it is conceivable that cars don't need anything like triple sensor redundancy. They merely need to show that they reduce accident deaths by an order of magnitude.
 
  • Like
Reactions: Daniel in SD
Very entertaining and bold talk about approaching FSD. I like Hotz's candidness and his choice of high-level concepts.
Yeah, the hardware seems pretty sweet; it might indicate something about HW4. But Tesla has a big advantage in having much more vertical integration.

My take: Hotz believes in end-to-end deep learning. Tesla is doing feature-based deep learning. Imo Tesla has an advantage in that they can switch to end-to-end whenever they want, but with the feature-based approach they can specify a lot more relevant triggers to get a more diverse, more interesting data set to train on. As long as compute is a limiting factor (which it will be for the foreseeable future), this should be a major advantage.
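For the curious, a fleet-side "trigger" in this sense might look something like the sketch below. The signals and thresholds are entirely hypothetical; Tesla hasn't published its trigger set:

```python
def should_upload_clip(frame: dict) -> bool:
    """Hypothetical shadow-mode trigger: flag frames where the network's
    plan disagrees with the human driver, or where perception is unsure.
    Triggers like this let you mine rare, interesting cases from the
    fleet instead of uploading everything."""
    disagreement = abs(frame["planned_steering"] - frame["driver_steering"]) > 0.2
    low_confidence = frame["lead_vehicle_confidence"] < 0.5
    hard_brake = frame["driver_brake_g"] > 0.3
    return disagreement or low_confidence or hard_brake

frame = {"planned_steering": 0.5, "driver_steering": 0.1,
         "lead_vehicle_confidence": 0.9, "driver_brake_g": 0.1}
print(should_upload_clip(frame))  # True: planner and driver disagree
```

The feature-based advantage is that each intermediate output (lanes, lead vehicle, signs) gives you another signal you can write a trigger against; a pure end-to-end system exposes far fewer hooks.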

Regarding Panda 3, it will lack complete 360° vision around the car and will probably not be enough for gathering data for lvl5. It will improve lvl2, but I had hoped it would take a bigger step towards lvl5. Maybe Panda 4 will come with several cameras connected through a network.
 
  • Like
Reactions: powertoold
Are we talking about the Level 2 system? I think it is obvious that the driver is responsible, and to my understanding you have stated the same?
My objection is to the claim about criminal liability. Yes, in cases of harm to others or vehicular manslaughter, but running a light with no harm to others is an infraction, not a criminal misdemeanor or felony.

Saying a driver is liable is accurate, just not necessarily liable for a crime.
 
I also like how Hotz restated my observation above in a succinct way, i.e. there's no limit to how good a Level 2 system can be; it can still be a super-human driver. The difference is liability, not capability.
True, unless automation complacency becomes a safety issue and your system gets banned. Europe already has legal limits on how good Level 2 systems can be.
I do like the "make driving chill" philosophy, which seems to be the antithesis of FSD beta. He argues that there's no reason for an L2 system to make right-hand turns because it's so awkward to monitor and the time you spend turning right is so small. What's a little nonsensical is that he claims comma.ai will win self-driving and then says he doesn't like the idea of giving up human control.
 
I do like the "make driving chill" philosophy, which seems to be the antithesis of FSD beta. He argues that there's no reason for an L2 system to make right-hand turns because it's so awkward to monitor and the time you spend turning right is so small. What's a little nonsensical is that he claims comma.ai will win self-driving and then says he doesn't like the idea of giving up human control.

I think his point there is that comma is focused on making the system completely reliable and "chill" for one-lane driving first, before focusing on fancy features (like turning right/left, etc.) the way Tesla is. Comma wants their system to reduce the stress of driving rather than add to it (like when Hotz tried FSD beta).
 
  • Like
Reactions: mark95476
Yup. It appears that Tesla going vision-only, plus their new training supercomputer, is making it quicker to get releases out. Biweekly releases are more or less unheard of in the automotive world, even for Tesla, which is already known for updating its firmware regularly. AI Day in a few weeks should be interesting.

I'd imagine that Waymo made more money from YouTube views than from actually driving paying customers around. Has any auto manufacturer offered to buy their system so they can integrate it into their vehicles? I bet they would all blanch when they hear how much the hardware alone costs, before installation.


It's incredible how the tables have turned.

Just a few months ago, Tesla was considered to be last among the major fsd developers. Now, we have

Tesla, Waymo, and everyone else that's catching up to Waymo

The problem for everyone else is that Waymo has been stagnant, their leadership has left, and they've yet to expand their service, not to mention that Waymo still avoids highways. Even worse, JJRicks is no longer making Waymo videos for the time being! Big thanks to JJRicks by the way.

Meanwhile, Tesla has been making significant progress with every release. Two weeks baby!

I think those who thought that everyone else was ahead of Tesla a year ago need to reassess their rationale for who's really leading the fsd race, because the leadership chart is being turned upside down, as Elon previously mentioned.
 
  • Disagree
Reactions: Doggydogworld
@rxlawdude @powertoold @JHCCAZ

Redundancy is the backbone of any safety-critical system; when it's not implemented, this happens.
I will go more in depth about this in another post.

"Boeing has long embraced the power of redundancy to protect its jets and their passengers from a range of potential disruptions, from electrical faults to lightning strikes.​
The company typically uses two or even three separate components as fail-safes for crucial tasks to reduce the possibility of a disastrous failure. Its most advanced planes, for instance, have three flight computers that function independently, with each computer containing three different processors manufactured by different companies.​
So even some of the people who have worked on Boeing’s new 737 MAX airplane were baffled to learn that the company had designed an automated safety system that abandoned the principles of component redundancy, ultimately entrusting the automated decision-making to just one sensor — a type of sensor that was known to fail. Boeing’s rival, Airbus, has typically depended on three such sensors.
“A single point of failure is an absolute no-no,” said one former Boeing engineer who worked on the MAX, who requested anonymity to speak frankly about the program in an interview with The Seattle Times. “That is just a huge system engineering oversight. To just have missed it, I can’t imagine how.”​
Boeing’s design made the flight crew the fail-safe backup to the safety system known as the Maneuvering Characteristics Augmentation System, or MCAS.​
A faulty reading from an angle-of-attack sensor (AOA) — used to assess whether the plane is angled up so much that it is at risk of stalling — is now suspected in the October crash of a 737 MAX in Indonesia, with data suggesting that MCAS pushed the aircraft’s nose toward Earth to avoid a stall that wasn’t happening. Investigators have said another crash in Ethiopia this month has parallels to the first.​
Boeing has been working to rejigger its MAX software in recent months, and that includes a plan to have MCAS consider input from both of the plane’s angle-of-attack sensors, according to officials familiar with the new design. The MAX cockpit will now include a warning light that will illuminate when the two angle-of-attack sensors disagree. "​
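For contrast, the standard fixes are simple once the extra sensors exist. Here is a sketch of a two-sensor disagreement check and the 2-of-3 vote that triple redundancy allows; the threshold is illustrative, not a certified value:

```python
AOA_DISAGREE_DEG = 5.5  # illustrative threshold; real values are type-specific

def aoa_disagree(left_deg: float, right_deg: float) -> bool:
    """Two-sensor check: it can detect a fault but cannot tell which
    sensor is bad, so the safe response is to inhibit the automation
    and warn the crew (the 'AOA DISAGREE' light in the quote above)."""
    return abs(left_deg - right_deg) > AOA_DISAGREE_DEG

def vote_2_of_3(a: float, b: float, c: float) -> float:
    """Triple redundancy: the median automatically out-votes one
    arbitrarily wrong sensor, so the automation can keep running."""
    return sorted([a, b, c])[1]
```

Two sensors buy you fault detection; three buy you fault tolerance. MCAS as originally shipped had neither.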

The objective that drove Boeing's decision on MCAS was born out of the need to have an answer to Airbus's NEO, a more fuel-efficient aircraft. For Boeing to respond, it could not do a clean-sheet design, because that would have required a lengthy certification process and extensive pilot training; by the time Boeing had an answer to the NEO it would have been too late. Therefore Boeing decided to design a derivative model of the NG and submitted an Amended Type Certificate for the MAX.

The problem wasn't software per se; it was using software to adjust for aerodynamic differences (stall behavior) between the NG and the MAX. Essentially, algorithms were used to make a MAX 'seem' to fly under certain stall conditions 'like' an NG. At first MCAS was added to correct the tendency for the nose to pitch up in an extreme maneuver (one that is often only performed during test flights). The decision to use a single AoA sensor may not have been initially catastrophic, but Boeing expanded the use of the MCAS control law to other aspects of flight and treated MCAS as an addition to the MAX's speed-trim system. Each time the MCAS control laws were expanded, Boeing viewed the changes as minimal. This was the wrong assumption.

Using software to address stability issues is not new to commercial aircraft, and it is common on advanced military aircraft. The main reason for any 'automation' in an aircraft is to make it safe. Operationally, air travel has changed significantly: to reduce delays, the flight deck and air traffic management have evolved and are still evolving, and some decision-making has been augmented or automated. On the ground and in the air, aircraft are managed by the pilot, ATC, and airline operations, and they are also scheduled. The goal is to avoid serious mishaps through resilience, an operational attribute. Another attribute is robustness, which is a design attribute. Redundancy, as part of the design, contributes to an airplane's resiliency, which includes redundant systems; but resiliency also includes pilot training, flight deck management, and other air and ground systems. An aircraft is designed to meet its mission in a degraded mode, enabling it to perform under adverse conditions. Cockpit and ATM automation may or may not replicate current human behavior - that is not the objective. Automation is added to make air travel safe.

My opinion is that I can't say the same about automobile automation. It seems the algorithms and heuristics used to make an automobile autonomous are designed to behave like humans and to respond to random acts of human stupidity, not to be first and foremost safe. I don't know if the current state of AI is up to the task. If the goal is zero injuries or fatalities and far fewer accidents, how long will it take for these systems to mature? This is but a nascent technology.
 
This is the state of the autonomous vehicle industry right now...

Bjorn tests virtually all EVs with Autopilot or ADAS equivalents. There are many automobile manufacturers, and they generally buy their systems from vendors like Mobileye. Only a Tesla can drive down this non-split road, and this is Tesla's base AP system. Tesla has moved on and has been focusing on FSD with higher-level functionality.

See 15:00 in this video:

So when you see marketing spiel from various auto OEMs and vendors talking about their ADAS systems, know that they don't have anything that can match Tesla's base Autopilot. In baseball terms, they're still working on getting to first base whereas Tesla is on second or third. It's pretty amazing how little Mobileye has progressed since they split with Tesla. This is the ONLY thing Mobileye works on.
 
This is the state of the autonomous vehicle industry right now...

Bjorn tests virtually all EVs with Autopilot or ADAS equivalents. There are many automobile manufacturers, and they generally buy their systems from vendors like Mobileye. Only a Tesla can drive down this non-split road, and this is Tesla's base AP system. Tesla has moved on and has been focusing on FSD with higher-level functionality.

See 15:00 in this video:

No, it is not the state of the autonomous vehicle industry right now. You are only looking at the state of the ADAS industry. None of those vehicles are autonomous. Teslas are not autonomous yet. They are ADAS. They are not autonomous.

And did Bjorn fully test Highway Teammate, Drive Pilot, Travel Assist, Super Cruise, or Blue Cruise? There are plenty of competent ADAS that are on par with Tesla's basic AP and are even hands-free, which AP is not.

And you are leaving out all the real autonomous vehicles that exist from Waymo, Cruise, Zoox, Aurora, Pony AI, Argo AI, Baidu, etc. They represent the true state of autonomous vehicles today!

It's pretty amazing how little Mobileye has progressed since they split with Tesla. This is the ONLY thing Mobileye works on.

Mobileye has made a lot of progress since the split. Mobileye has SuperVision, which is equal to or better than Tesla's FSD. Just look at the Munich, Jerusalem, and NYC demos; they represent what Mobileye's vision-only stack is capable of. And they are working on true L4.
 
No, it is not the state of the autonomous vehicle industry right now. You are only looking at the state of the ADAS industry. None of those vehicles are autonomous. Teslas are not autonomous yet. They are ADAS. They are not autonomous.

And did Bjorn fully test Highway Teammate, Drive Pilot, Travel Assist, Super Cruise, or Blue Cruise? There are plenty of competent ADAS that are on par with Tesla's basic AP and are even hands-free, which AP is not.
The problem is that the only one that exists in any volume is Super Cruise. The others are coming in upcoming vehicles (and older versions with similar names are mostly inferior to Tesla's basic AP). There are definitely examples ahead of Tesla in driver monitoring (more a function of Tesla not having an IR camera, something that may be changing with the new Model S/X), but in terms of functionality they are mostly playing catch-up; for example, Super Cruise only recently added auto lane change, a feature Tesla had six years ago, even on the old Mobileye-based AP1.
 
The problem is that the only one that exists in any volume is Super Cruise. The others are coming in upcoming vehicles (and older versions with similar names are mostly inferior to Tesla's basic AP). There are definitely examples ahead of Tesla in driver monitoring (more a function of Tesla not having an IR camera, something that may be changing with the new Model S/X), but in terms of functionality they are mostly playing catch-up; for example, Super Cruise only recently added auto lane change, a feature Tesla had six years ago, even on the old Mobileye-based AP1.
Hands free is not catch up.

Hands free is leap frogging.

We Tesla owners are stuck with "hold the steering wheel, hold the steering wheel" even when we're holding the steering wheel.

The lane change feature was neat with AP1, but you had to be really careful with it, as it didn't have side sensors aside from the ultrasonics. The EAP/FSD auto lane change is often cited as having more failed lane changes than AP1, but that's because it has side cameras. It does fail occasionally when a truck is in the far lane, getting confused into thinking the truck is in the next lane over. I imagine the enhanced Super Cruise lane change will be superior unless Tesla Vision auto lane change corrects the perception issues.
 
  • Like
Reactions: rxlawdude
I think those who thought that everyone else was ahead of Tesla a year ago need to reassess their rationale for who's really leading the fsd race, because the leadership chart is being turned upside down, as Elon previously mentioned.
Because Tesla has software that can drive around for a couple of minutes without disengagement? But at the same time, when others do the same thing, it's meaningless. I will never understand this logic.
Yup. It appears that Tesla going vision-only, plus their new training supercomputer, is making it quicker to get releases out. Biweekly releases are more or less unheard of in the automotive world, even for Tesla, which is already known for updating its firmware regularly. AI Day in a few weeks should be interesting.
Lmaooo. Almost all SDC companies update their entire fleet with a new software build every week.
 
The problem is that the only one that exists in any volume is Super Cruise. The others are coming in upcoming vehicles (and older versions with similar names are mostly inferior to Tesla's basic AP). There are definitely examples ahead of Tesla in driver monitoring (more a function of Tesla not having an IR camera, something that may be changing with the new Model S/X), but in terms of functionality they are mostly playing catch-up; for example, Super Cruise only recently added auto lane change, a feature Tesla had six years ago, even on the old Mobileye-based AP1.

There is ProPilot 2.0 and a couple of others.
 
Most of the fsd development industry is a joke now compared to Tesla.

Mobileye says scalability is critical, for financial and practical reasons. But then they see Tesla's approach (which is essentially the ideal scalable approach right now) and illogically criticize aspects of it, like using shadow mode and triggers to get more data / practice / etc.

Then there are others who don't seem to be scalable (Waymo, Cruise, etc.). They say they're scalable, but they've been incredibly slow at it, and in Waymo's case, they currently can't even cover their small area (Chandler) well.

So you have to wonder: who's right? If you believe in Mobileye, then you can't have much faith in the Waymo approach. If you believe in Waymo, then why do they still suck in Chandler? Waymo released their driverless promo video 3+ years ago, and the UI literally looks the same as it does today. There's no way Waymo can deploy driverless taxis in SF using the software performance they use in Chandler. We can clearly see why they've yet to expand the Chandler service area.

My guess is that all these fsd developers have to lie / be deceptive about their own approaches because they need funding to continue development. The moment they admit Tesla is ahead and progressing fast, they'll likely miss out on some funding.

The bright side is that once Tesla widely releases fsd beta to the US fleet, our questions will be answered.

 