
Autonomous Car Progress

I think you are missing my point. Mobileye uses AV maps, which help with reliability no matter what the road is. So even on completely different roads, since Mobileye maps them first, the car's performance will be similar. And Mobileye uses RSS, which guarantees consistent safety behavior regardless of the road. So even in different traffic scenarios, the car will behave similarly since it is guided by the same policy. Plus, Mobileye's vision has an MTBF above 1000 hours of driving. So yes, I think we can expect the videos to be generally representative of the vehicle's overall performance. Of course, I am sure there are routes that Mobileye handles worse than others. But what I am saying is that it is highly unlikely that the videos are total outliers, that Mobileye's FSD is much worse than FSD Beta everywhere, and that the videos represent the only miles Mobileye's FSD is able to do competently. That is very unlikely for the reasons I mention (MTBF of 1000 hours of driving, AV maps, RSS).
That sounds like a press release.
 
Mobileye's vision-only system has an MTBF above 1000 hours of driving, per Prof. Shashua's interview. Mobileye uses AV maps and RSS to increase reliability and safety. Mobileye also has added redundancy from a radar/lidar subsystem, which improves reliability. And Mobileye is testing L4 robotaxis.
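
For anyone wondering what RSS actually computes, here is a rough sketch of the published longitudinal safe-distance rule (the parameter values below are my own made-up examples, not Mobileye's tuning):

```python
def rss_safe_longitudinal_gap(v_rear, v_front, rho=0.5,
                              a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe gap (m) between a rear car at v_rear (m/s) and a front car
    at v_front (m/s), per the RSS longitudinal rule. Parameters are illustrative:
    rho = response time (s), a_accel_max = rear car's max acceleration during rho,
    a_brake_min = rear car's guaranteed braking, a_brake_max = front car's max braking."""
    v_rear_after = v_rear + rho * a_accel_max          # worst case: rear car accelerates during response time
    gap = (v_rear * rho
           + 0.5 * a_accel_max * rho ** 2              # distance covered while still responding
           + v_rear_after ** 2 / (2 * a_brake_min)     # rear car's braking distance afterwards
           - v_front ** 2 / (2 * a_brake_max))         # front car brakes as hard as physically possible
    return max(gap, 0.0)

# Example: both cars at 20 m/s (~45 mph) -> roughly a 43 m minimum gap with these numbers
print(rss_safe_longitudinal_gap(20.0, 20.0))
```

The point is that the rule is a closed-form check, so the same safety envelope applies on any road, which is what "consistent safety behavior regardless of the road" means here.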

Tesla does not have camera vision with an MTBF of 1000 hours of driving. Tesla has a disengagement rate of about 1 per 5 miles (your experience and mine). FSD Beta makes frequent mistakes that we don't see in Mobileye videos. Tesla lacks sensor redundancy. Tesla lacks AV maps or RSS. Tesla is not testing L4 robotaxis.

Based on that information, I think Mobileye has better FSD than Tesla. But at the end of the day, we can only make a personal judgment based on the information available. Nobody has all the driving data from both Tesla and Mobileye to do an objective analysis.
But again, you are comparing Mobileye claims against Tesla FSD observed driver experiences. I'm sorry, but I don't think that warrants your conclusion. I don't know if Mobileye is better or worse than FSD, and I don't think you do either, if you base it only on this information.
 
So at 20 mph, it would be 20,000 miles per disengagement. That is 2 times better than Waymo in CA.

Why do they even need a LiDAR-based system?

You are so naive.

No, I am not naive. You keep using these silly strawman arguments.

Cars don't drive at just 20 mph all the time. In fact, the car would likely need to drive faster than 20 mph, and faster driving is harder driving, so the failure rate per hour would likely rise. At speeds higher than 20 mph, the miles per disengagement could well be less than 20k. So if you only want Mobileye's robotaxis to drive at 20 mph, you might have a point, but they would not drive that slow all the time.

Also, Waymo's miles per disengagement in CA, in the last month of testing, was 53,000 miles, not 10,000 miles as you say. So Mobileye is not 2x better than Waymo.

Also, an MTBF of 1000 hours of driving is not good enough to remove driver supervision. That is why they "need" a lidar-based system for their L4 robotaxis. An MTBF of 1000 hours of driving is good enough for L2. That is why Mobileye is deploying L2 "FSD with driver supervision" that is vision-only.
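
For reference, here is the back-of-the-envelope conversion behind the 20,000-mile figure (my own sketch; it assumes the hourly failure rate is constant, which is exactly the assumption in dispute above):

```python
def miles_per_failure(mtbf_hours, avg_speed_mph):
    """Convert an MTBF stated in hours of driving into miles between failures,
    assuming the failure rate per hour stays the same at every speed."""
    return mtbf_hours * avg_speed_mph

print(miles_per_failure(1000, 20))  # 20,000 miles at a 20 mph average speed
print(miles_per_failure(1000, 40))  # 40,000 -- but only if failures stay equally rare per hour at 40 mph
```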
 
But again, you are comparing Mobileye claims against Tesla FSD observed driver experiences. I'm sorry, but I don't think that warrants your conclusion. I don't know if Mobileye is better or worse than FSD, and I don't think you do either, if you base it only on this information.

My only claim was the MTBF of 1000 hours of driving. That is a claim by Prof. Shashua. The unedited videos by Mobileye are not claims. They are also observed driver experiences, since they show the real-world driving of the car with no edits or driver interventions.

And just dismissing everything from Mobileye and assuming that Mobileye's FSD is worse than Tesla's FSD is also an unwarranted conclusion. Where is the evidence for that?
 
My only claim was the MTBF of 1000 hours of driving. That is a claim by Prof. Shashua. The unedited videos by Mobileye are not claims. They are also observed driver experiences, since they show the real-world driving of the car with no edits or driver interventions.

And just dismissing everything from Mobileye and assuming that Mobileye's FSD is worse than Tesla's FSD is also an unwarranted conclusion. Where is the evidence for that?
I didn't dismiss Mobileye at all .. read the last sentence of my post.
 
I might have missed this news, but last month Motional launched a pilot program with Uber Eats in Santa Monica to use their Ioniq 5 AVs for autonomous food deliveries:

Motional and Uber launched autonomous deliveries for Uber Eats customers in Santa Monica, California. Motional’s all-electric IONIQ 5 vehicles, operating autonomously, are now conducting end-to-end food deliveries.

The Motional IONIQ 5 vehicles used in the service have been adapted to enable autonomous deliveries. While Motional has extensive experience moving passengers, this is the company’s first time transporting commercial goods. To prepare, its teams have spent months studying every touchpoint between the restaurant and end-customer, and conducted extensive testing in the Los Angeles area.

Participating merchants will receive a notification when the AV arrives, meet the vehicle at the designated pick-up location, and place the order in a specially-designed compartment in the backseat. Upon arrival at the drop-off location, the customer will receive an alert, securely unlock the vehicle door via the Uber Eats app, and collect their order from the backseat.

 
Can AI vision processing someday exceed human vision capabilities? Maybe. However, a technological area in which I know for certain that AI and computer systems can already outperform humans is retrieving and processing vast amounts of data from a variety of sources. So it is just common sense to me that an approach like Mobileye's, once fully developed, will lead to a better autonomous driving system. Tesla, on the other hand, seems to be eschewing other data sources to rely purely on vision, which means their success could hinge entirely on the prospect of AI vision processing someday exceeding human vision capabilities.
 
So at 20 mph, it would be 20,000 miles per disengagement. That is 2 times better than Waymo in CA.

Why do they even need a LiDAR-based system?

You are so naive.
A disengagement and a failure are not the same thing. Disengagements occur when the safety driver is not sure whether or not the system will be able to respond to a situation. Waymo claims that 99.9% of their disengagements would not have resulted in a failure (collision).
 
Can AI vision processing someday exceed human vision capabilities? Maybe. However, a technological area in which I know for certain that AI and computer systems can already outperform humans is retrieving and processing vast amounts of data from a variety of sources. So it is just common sense to me that an approach like Mobileye's, once fully developed, will lead to a better autonomous driving system. Tesla, on the other hand, seems to be eschewing other data sources to rely purely on vision, which means their success could hinge entirely on the prospect of AI vision processing someday exceeding human vision capabilities.
I'm not sure I buy this argument. Humans basically use three senses when driving: sight (primary), the inner-ear accelerometer (car movement), and touch (steering feedback, etc.). The car has vision and an accelerometer, with possibly radar and/or lidar.

Your argument is we can't (yet) make car vision as good as human vision, so we need to augment with radar etc. But where has anyone shown that these additional systems can make up for the deficiencies (if any) in car vision? We KNOW, at least in theory, that vision+acceleration+touch is adequate for driving, since humans do it every day. On what do we base the assertion that vision+lidar+radar (or some other combination) can substitute for this? There have been MANY people shouting how we need radar/lidar/porridge etc etc for a car to self-drive, but based on what evidence? Other than "common sense" that more sensors are better (and "common sense" is rarely common and only infrequently sense in my experience).

And yes, we can easily make vision better than humans, even with current camera technology:
-- 360 degrees all the time .. no need to "turn your head"
-- Equal acuity. Humans see well only in a small central area (the fovea), meaning our eyes/heads have to shift position to see something in detail. Cameras can see equally well across the entire field of vision.
-- Dynamic range adjustment time. Both cameras and eyes have very good dynamic range, but it takes eyes a LONG time to adjust from very dark to very light, during which the eye is blind. Cameras are MUCH faster.
-- Simultaneous alternate views. The eye can see either up close or long distance, but not both at the same time. Camera arrays can be set up to see both up close and long distance at once.
 
But where has anyone shown that these additional systems can make up for the deficiencies (if any) in car vision? We KNOW, at least in theory, that vision+acceleration+touch is adequate for driving, since humans do it every day. On what do we base the assertion that vision+lidar+radar (or some other combination) can substitute for this? There have been MANY people shouting how we need radar/lidar/porridge etc etc for a car to self-drive, but based on what evidence? Other than "common sense" that more sensors are better (and "common sense" is rarely common and only infrequently sense in my experience).

Camera vision has lots of deficiencies. It can misclassify objects. Camera vision also works less reliably in certain conditions like heavy rain, dense fog or total darkness. In fact, we see these same deficiencies in human vision: many human-caused accidents happen in rain, fog or darkness where visibility is reduced. Lidar and radar make up for these deficiencies. If your camera vision misclassifies an object, lidar will still detect the presence of the object to avoid a collision. And lidar can also classify objects, so you will get more reliable classification of objects with both vision and lidar than with vision-only. Lidar also works very reliably in total darkness when vision will be less reliable. HD radar is very reliable in dense fog and heavy rain when vision will be less reliable. So yes, vision+lidar+radar will absolutely be more reliable than vision-only.
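
To make the "lidar still detects what the camera misclassifies" point concrete, here is a toy late-fusion sketch (purely illustrative, not any company's actual stack; the detection fields and threshold are made up):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # position ahead of the car, metres
    y: float          # lateral offset, metres
    label: str        # classifier output, e.g. "car", "pedestrian", "unknown"
    confidence: float

def fuse(camera_dets, lidar_dets, match_radius=2.0):
    """Toy late fusion: keep every lidar return as an obstacle, and attach the
    camera's label when a camera detection lies within match_radius of it.
    A lidar point with no trustworthy camera match is still a real obstacle."""
    fused = []
    for l in lidar_dets:
        label = "unknown_obstacle"
        for c in camera_dets:
            if (c.x - l.x) ** 2 + (c.y - l.y) ** 2 <= match_radius ** 2:
                if c.confidence > 0.5:
                    label = c.label
                break
        fused.append(Detection(l.x, l.y, label, max(l.confidence, 0.9)))
    return fused

# Camera misclassifies (low confidence) something 30 m ahead; lidar still reports
# geometry there, so the planner still sees an obstacle to avoid.
camera = [Detection(30.0, 0.0, "plastic_bag", 0.4)]
lidar  = [Detection(30.2, 0.1, "unknown", 0.95)]
print(fuse(camera, lidar))
```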

I know one argument that Waymo has given for why they use vision+lidar+radar is that the prediction and planning stacks rely on the perception data to work. If your prediction and planning get incomplete or bad perception data, they will make more mistakes. By using vision+lidar+radar, Waymo wants to give their AV the best, most complete perception data possible, in order to give prediction and planning the best chance at making the right decisions. With vision-only, your prediction and planning stacks are entirely dependent on the vision data. If it is not good enough, your prediction and planning stacks will be handicapped.

I think the debate between vision-only and sensor fusion is basically a debate about the march of 9's. Nobody denies that vision-only can drive a car. The question is can vision-only drive with 99.99999% reliability and can we solve it in a timely manner? Remember that most driving is actually relatively easy; it's the last bits that are hard. As you alluded to, the proponents of vision-only are counting on the theory that vision-only should be adequate. But what if vision-only is not quite good enough and it solves, say, 99% of FSD and then gets stuck? The proponents of vision+lidar+radar don't want to take that chance; they want the best chance at achieving 99.99999% reliability. And how many 9's are "good enough"? For consumer cars, 99% FSD might be perfectly adequate. For robotaxis, 99% FSD is not good enough. So for consumer cars, vision-only makes a lot of sense IMO. For driverless robotaxis, vision-only is a non-starter IMO.
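
Rough numbers on the "march of 9's" (my own back-of-the-envelope framing; I'm reading "reliability" loosely as the chance of getting through a mile without a safety-critical failure, which the post doesn't actually define):

```python
# Each extra 9 of per-mile reliability multiplies the expected miles between failures by 10.
for nines in [2, 3, 5, 7]:
    reliability = 1 - 10 ** (-nines)           # e.g. 2 nines -> 0.99
    miles_between_failures = 1 / (1 - reliability)
    print(f"{reliability:.7f} per mile -> ~{miles_between_failures:,.0f} miles between failures")
```

A consumer L2 system at the hundreds-of-miles end of that table is still usable with a driver watching; a driverless robotaxi needs the millions-of-miles end.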

I see two possibilities:
1) Vision-lidar-radar is safer than vision-only but is too costly and remains limited to geofenced areas. Vision-only works everywhere and is 1.5x safer than humans. All things considered, vision-only is deemed "safe enough", so it wins.
2) Vision-lidar-radar is 50x safer than humans and the costs come down enough. Vision-lidar-radar wins out because society prefers 50x safer to just 1.5x safer.
 
I think the debate between vision-only and sensor fusion is basically a debate about the march of 9's. Nobody denies that vision-only can drive a car. The question is can vision-only drive with 99.99999% reliability and can we solve it in a timely manner?
Which was my point. We know cars CAN be driven well with vision-only; the only reason we have these extra sensors is because right now we can't figure it out (though Tesla are doing their best). Sure, having radar helps with fog, but that's a post-event rationalization, not the reason radar has been used by Waymo et al in the first place. If someone really cracked vision-only self-driving, do you think they would still insist on radar "just for fog"? Of course not, it would be an optional extra (and sold as such!). The same goes for lidar.

And my point remains .. you are again assuming that adding lidar/radar WILL allow self-driving cars "in a timely manner", but based on what? Waymo hasn't solved it yet .. they still have geofencing and HD maps, so who has solved it? And if no one has, on what basis do you claim that lidar and radar augmentation will help get us there? You keep implicitly saying that having these will get us to the goal faster, but I don't see any actual evidence to justify that assumption.
 
Your argument is we can't (yet) make car vision as good as human vision, so we need to augment with radar etc. ...
-- 360 degrees... Equal acuity... Dynamic range... Simultaneous alternate views...
No, I think you are missing my argument altogether. This is not about visual acuity, it's about perception. I am not comparing Tesla's cameras, lenses, and CCD sensors to human eyes, corneas, and retinas. I am comparing the NNs that classify and interpret the world around the car from the processed video to the basic human ability to process visual stimuli in the visual cortex and frontal lobes. Humans have object permanence and intuition. As I have posted many times from my accounts of testing FSD Beta, currently FSD has neither. Humans also have long-term memory, and each time they drive a route, the memory of the vagaries of street markings, lane weirdness, and navigation of intersections. The first time a human approaches a particular weirdness, they may drive with timidity and uncertainty, but after one or two times through, it's old hat. FSD will always approach it with timidity, uncertainty, and confusion (unless it happens to be one that makes it into the training data).

This is where Mobileye's approach of integrating AV maps (fed with updated road data) and RSS makes a lot of sense to me. Good bet that another car has been through that very weirdness -- maybe even in the last day or so -- and successfully navigated it. Let's put that learning up in the cloud and make it available to all cars (like human long-term memory). Maybe you don't need access to all the AV maps around the globe - just those in your regional area or along your common routes.

Also, AFAIK there is still no recursion in FSD Beta's NN, so while there may be some point where Tesla appreciates the full context of a situation, if something is occluded in the next frame(s), that context is lost, and now FSD Beta will begin to behave differently. Beta testers are experiencing this all the time (as previously evidenced). Utilizing the AV map data from Mobileye should greatly improve these types of situations as the full context is available before the car even approaches the situation. And these are the types of integrations that computers can do that humans really can't, thus taking advantage of the computer system's strength to "level the playing field" as it were for perception and "vision" in driving.
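
A minimal sketch of what "put that learning up in the cloud" could look like at the consuming end (hypothetical data layout and function names; Mobileye's actual REM/AV map format is not public):

```python
# Hypothetical: the car pulls a small map tile for the road segment ahead and uses the
# stored annotations as a prior, so an oddly marked intersection is "old hat" before
# the cameras ever see it. Live perception still overrides the prior when they disagree.
av_map_tiles = {
    ("main_st", "5th_ave"): {
        "lane_count": 3,
        "notes": ["left lane becomes turn-only 40 m before the stop line",
                  "faded markings on the right lane"],
        "last_confirmed_by_fleet": "2022-05-17",
    },
}

def plan_approach(segment_id, live_perception):
    prior = av_map_tiles.get(segment_id)
    if prior is None:
        return {"mode": "cautious", "reason": "no map prior, rely on live perception only"}
    if live_perception.get("lane_count") not in (None, prior["lane_count"]):
        return {"mode": "cautious", "reason": "live view disagrees with map, trust what we see"}
    return {"mode": "confident", "prior": prior}

print(plan_approach(("main_st", "5th_ave"), {"lane_count": 3}))
```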
 
And my point remains .. you are again assuming that adding lidar/radar WILL allow self-driving cars "in a timely manner", but based on what? Waymo hasn't solved it yet .. they still have geofencing and HD maps, so who has solved it? And if no one has, on what basis do you claim that lidar and radar augmentation will help get us there? You keep implicitly saying that having these will get us to the goal faster, but I don't see any actual evidence to justify that assumption.
You're assuming that the only goal is to make self-driving cars that can drive everywhere that humans drive with no remote assistance. Personally I think that will require artificial general intelligence. If HD maps can be made cheaply enough, then you can have a viable product very soon. I don't see any reason to believe that making HD maps can't be automated (seems way easier than automating driving!).
Tesla is working on "Vidar" (a depth map derived from camera data), but it clearly doesn't work well enough yet. So there's an example of a problem that can be solved today by LIDAR. It just helps you move on to the next problem; later you can go back and replace it with the "Vidar" NN once you get it working well enough. Everything that other self-driving car companies are doing is intended to narrow the scope of the problem as much as possible. Tesla has a viable product in FSD Beta well before it achieves driverless reliability; they don't.
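
For the "Vidar" idea, the generic version of turning a predicted depth map into lidar-like 3D points is just pinhole-camera back-projection (Tesla's actual implementation isn't public; the intrinsics below are placeholders):

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (metres) into a 3D point cloud in the
    camera frame -- the same kind of geometry a lidar would hand you directly."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Placeholder intrinsics and a flat 10 m depth map, just to show the shape of the output.
points = depth_map_to_points(np.full((4, 6), 10.0), fx=500.0, fy=500.0, cx=3.0, cy=2.0)
print(points.shape)   # (24, 3)
```

The hard part is making the predicted depth accurate enough, which is exactly the gap being described above.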
 
You're assuming that the only goal is to make self-driving cars that can drive everywhere that humans drive with no remote assistance. Personally I think that will require artificial general intelligence. If HD maps can be made cheaply enough, then you can have a viable product very soon. I don't see any reason to believe that making HD maps can't be automated (seems way easier than automating driving!).
Nope, wasn't assuming anything, I was merely pointing out that there was an unvoiced assumption that adding lidar/radar would yield a fast(er) track to some level or other of self-driving.

If you want my opinion, it's that the Tesla and Waymo approaches will converge .. Tesla will improve vision, making the need for auxiliary sensors less important (and thus more like an extra safety system rather than a necessity), and will probably then use the results of the car's AI to send back dynamically built mapping, which, in effect, means the entire fleet is creating maps of every junction (and updating them every time a Tesla drives through). And that is where your scale comes from that lowers the mapping costs by orders of magnitude.
 
Nope, wasn't assuming anything, I was merely pointing out that there was an unvoiced assumption that adding lidar/radar would yield a fast(er) track to some level or other of self-driving.

It is not an assumption. It has yielded a faster track to some level of self-driving. We have real self-driving (at the L4 level) in several areas with the vision+lidar+radar approach. To date, we have no real self-driving with vision-only. By "real self-driving", I mean driverless or L4/L5.