
Unedited Mobileye’s Autonomous Vehicle & other CES 2020 Self Driving ride videos

I think you might be minimizing the importance of safety. Marginally safer or 25% safer is not good enough because it would still result in hundreds or thousands of deaths. And yes, I know that robotaxi deaths are inevitable. But when a company deploys robotaxis in large numbers with no drivers inside, they will be liable for every single at-fault accident. Too many accidents could bankrupt the company, especially if people die in the accidents. Not to mention the bad PR would scare customers away. And if investigations show that the accidents were avoidable with LIDAR or HD Maps but the company chose not to include them, that would be very bad. So no, I don't think you would have a successful product if you had a robotaxi that was cost effective and only marginally safer than human drivers. You might be first to market but you would also go out of business pretty quickly.

That's why companies like Waymo and Cruise who have L4 autonomous cars now are still waiting before deploying them in large quantities. They know the safety has to be much much better than human drivers before they can deploy them. It's why Mobileye is setting a target of 10 million hours of driving per accident. Just saying the safety is marginally better is not good enough.

Actually, there is no real debate. Virtually all experts agree now that LIDAR and HD Maps are required for safe autonomous cars. The only disagreement comes from Tesla because they think cameras can achieve autonomous driving that is "safe enough".
I'm not sure I am minimizing anything, just being a realist who is also considering how the free market works. Marginally safer would, by definition, save lives and prevent accidents. That is not good enough for you? Especially when you extend this out by the scale of the number of vehicles/robotaxis deployed. So yes, marginally safer absolutely is good enough. Would I hail a robotaxi that has a safety record 25% better than the average Uber driver, while also being cheaper? Of course, as would most people... I'm not sure why there is even a debate here. Further, insurance exists and is required by law here for a reason; every taxi or Uber driver currently has insurance to protect against at-fault accidents. Why would we assume a robotaxi would be any different? And if there's a verifiable improvement in safety compared to the average Uber driver, how could you argue a robotaxi in this scenario would incur greater insurance costs? It obviously would not, as insurance rates are based on cold hard math.

Again, chasing absolute safety is a fool's errand - if you can produce a functional and safer product, in comparison to a human driver, 2 years before Waymo because you are not relying on costly Lidar and HD maps, you have a successful product. There's no reason to evangelize a technological approach; what matters is the end result.

I have absolutely heard people in Tesla's sphere, at Comma.ai, Cruise, and Waymo, from Chris Urmson to the original DARPA Grand Challenge winners, who understand there are no absolute technical limitations making Lidar and/or HD maps a requirement. Certainly many are of the opinion that they could not build a functional system without them currently, but that's irrelevant in terms of who is first to market at scale in the future (short- or long-term).
 
I'm not sure I am minimizing anything, just being a realist who is also considering how the free market works. Marginally safer would, by definition, save lives and prevent accidents. That is not good enough for you?

Depends how many lives it saves. Marginally means slightly, so let's say 5%. A robotaxi that only saves 5% more lives would not be good enough, no.

Especially when you extend this out by the scale of the number of vehicles/robotaxis deployed. So yes, marginally safer absolutely is good enough. Would I hail a robotaxi that has a safety record 25% better than the average Uber driver, while also being cheaper? Of course, as would most people... I'm not sure why there is even a debate here.

I would not hail a robotaxi that is only 25% safer. No way.

Again, chasing absolute safety is a fool's errand - if you can produce a functional and safer product, in comparison to a human driver, 2 years before Waymo because you are not relying on costly Lidar and HD maps, you have a successful product. There's no reason to evangelize a technological approach; what matters is the end result.

Nobody is chasing absolute safety. But you need safety to be at a certain level before you can deploy. We are not there yet.

Again, it won't be a successful product if it is not safe and a lot of people die.

And by the way, considering that Tesla is far behind Waymo, you are making a pretty big assumption that it is even possible to achieve autonomous driving 25% safer than the average driver, and 2 years before Waymo, with only cameras.
 
So yes, marginally safer absolutely is good enough. Would I hail a robotaxi that has a safety record 25% better than the average Uber driver, while also being cheaper? Of course, as would most people...
How would you even ascertain that a robotaxi is 25% better than an average Uber driver?

This is not a trivial question. The fact that it can't easily be determined without running billions of miles is why I won't hail such a taxi.
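The "billions of miles" point can be made concrete with a zero-failure statistical bound (the rule of three, generalized to any confidence level). A rough sketch, assuming the commonly cited US figure of about 1 fatality per 100 million vehicle miles; the function name and the numbers are illustrative, not from this thread:

```python
import math

# Assumed benchmark, not from the thread: roughly 1 fatality per
# 100 million vehicle miles for human drivers in the US.
HUMAN_FATALITY_RATE = 1 / 100_000_000  # fatalities per mile

def miles_for_confidence(target_rate, confidence=0.95):
    """Miles of fatality-free driving needed to show the true rate is
    below target_rate with the given confidence (zero-failure
    Poisson bound; at 95% this reduces to the 'rule of three')."""
    return -math.log(1 - confidence) / target_rate

# To demonstrate "25% safer than human" (rate <= 0.75x the human rate):
miles = miles_for_confidence(0.75 * HUMAN_FATALITY_RATE)
print(f"{miles / 1e9:.1f} billion fatality-free miles needed")
```

And that is the optimistic case of zero fatalities during the demonstration; distinguishing a 25% improvement from parity, rather than just bounding the rate, takes far more miles still.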
 
This is also a great uncut video with awesome commentary.


Finally had time to watch this. I'm really impressed by how many higher-level driving rules they've programmed into their system, e.g. watching for open doors on parked cars, or watching reverse lights or hazards to determine intention. If I recall correctly, Waymo said they were struggling with determining the intention of parked vehicles and were using their human monitors to give the vehicles that context.

I also know this particular video was focused on the vision system, but it does seem like the vision is doing a lot of the heavy lifting for all of the context of those higher-level functions. Lidar is pretty good for telling you where something is, but this video really shows that vision tells you what it is and what it is doing.
 
I think Tesla saw the writing on the wall, which prompted their Autopilot rewrite. In a nutshell, as I understand it, they're also going to use a 3D approach to mapping the environment which, according to them, should greatly improve the speed and accuracy of object recognition. We shall see.
As much as I admire and respect Musk, sometimes you have to put the ego aside and accept other points of view. Competition is needed in this space to keep everyone on their toes.
 
I really wish Tesla gave us a video like Zoox to see where their progress is.

I am losing faith in the company's ability to deliver.

FSD from Tesla is looking more and more like snake oil, especially after the latest EAP release with "city driving", which is just stopping at every intersection.
 
I really wish Tesla gave us a video like Zoox to see where their progress is.

Well, Tesla gave us an FSD video last year during Autonomy Investor Day and it was far inferior to the Zoox video. So we know their progress as of last year. It was shorter, and it was just a couple of simple turns, getting on the interstate and then coming back. I doubt the current development software could handle the scenarios in the Zoox video.
 
Well, Tesla gave us an FSD video last year during Autonomy Investor Day and it was far inferior to the Zoox video. So we know their progress as of last year. It was shorter, and it was just a couple of simple turns, getting on the interstate and then coming back. I doubt the current development software could handle the scenarios in the Zoox video.

I think the closest we have from Tesla is this recruitment clip here: Tesla’s New Autopilot AI Video Shows Laser-Focus Ramp Towards Autonomy

And the debug videos that Green produces: green on Twitter

And actually I see a lot of commonality even in those short clips with what Zoox is doing. At one point in the recruitment clip you can see a "hill_crest" variable tracked, which is a problem Zoox said they were also working on.
 
I think the closest we have from Tesla is this recruitment clip here: Tesla’s New Autopilot AI Video Shows Laser-Focus Ramp Towards Autonomy

And the debug videos that Green produces: green on Twitter

And actually I see a lot of commonality even in those short clips with what Zoox is doing. At one point in the recruitment clip you can see a "hill_crest" variable tracked, which is a problem Zoox said they were also working on.

Personally, I think the Zoox perception looks better than the Tesla perception. But the Tesla recruitment video is nice. It does show us a little bit of what the internal development software can do. And we can see that Tesla has done a lot with camera vision. Also, we can look at the Tesla recruitment video and the Zoox video and see a lot of similarities in the vision. That should not be surprising since they both use camera vision as part of their perception stack.

Which really leaves us with 2 questions:
1) How accurate, complete, and reliable is the camera vision? Specifically, are there things that Zoox can see that Tesla can't, or vice versa? Are there aspects of camera vision that are more reliable with Zoox or with Tesla? Zoox also uses radar and lidar as part of their perception stack. Do radar and lidar improve Zoox's perception over Tesla's?
2) How is the driving policy? For example, can Tesla handle the driving cases we see in the Zoox video, and if so, how safely?

It is worth noting that since all autonomous driving systems have camera vision in common, and eventually all of them have to solve perception, it is the driving policy that will ultimately set every FSD system apart. So, if we assume that Tesla does eventually get perception good enough for autonomous driving, what will really separate Tesla from Zoox will be the driving policy, i.e. what driving situations the car can handle and how the car handles them.
 
1) How accurate, complete, and reliable is the camera vision? Specifically, are there things that Zoox can see that Tesla can't, or vice versa? Are there aspects of camera vision that are more reliable with Zoox or with Tesla? Zoox also uses radar and lidar as part of their perception stack. Do radar and lidar improve Zoox's perception over Tesla's?
2) How is the driving policy? For example, can Tesla handle the driving cases we see in the Zoox video, and if so, how safely?

A question I have for Zoox and Tesla is how well the systems can handle novel situations. The Zoox developers were really clear in their video that it was from a route that they had practiced. It can handle novel situations within a pre-driven route (vehicles and pedestrians in different locations), but they have not yet proven that their system can handle a brand new route.

I think in some regard Tesla learned that they can make flashy demonstration videos rather easily on practiced routes, but the community gets upset when they learn it's not possible with their personal car. So they hold off on demonstrations until they actually have features to push wide.

Meanwhile Zoox doesn't have any customers in the same way Tesla does, so they can make these videos of practiced routes to show to potential investors and show off their latest developments.
 
A question I have for Zoox and Tesla is how well the systems can handle novel situations. The Zoox developers were really clear in their video that it was from a route that they had practiced. It can handle novel situations within a pre-driven route (vehicles and pedestrians in different locations), but they have not yet proven that their system can handle a brand new route.

Good question. I think that would depend on how developed the perception is and how developed the driving policy is.

Perception
For Tesla, since they rely on camera vision only, it would depend on whether the new route has any new features that the camera vision has not been trained for yet. For Zoox, they use HD maps, so they would need to load their car with the HD map of the new route before attempting the drive.

Driving policy
Assuming the perception can handle the new route, there is also the question of whether the new route has any cases that the driving policy can't handle. More advanced driving policy will mean a greater chance that the car can handle new situations it encounters on a new route.
 
Good question. I think that would depend on how developed the perception is and how developed the driving policy is.

This does bring me back to the "Turing Test" point someone brought up in the Feature Complete thread. The driving systems obviously don't need to solve the Turing Test, but it is sounding like they will need a degree of creativity in order to be truly L5.

No matter how well you train perception, or how fine-grained your driving policy is, there will always be a situation that the system has never encountered before. Maybe part of this can be avoided by teaching a system what's normal and letting it default to routing around something if it's deemed abnormal ("X is blocking the road, I have no idea what X is but it looks like I can't drive around it and I shouldn't drive over it, so I'll make a U-turn and find an alternative route.") But it would make for an awfully inefficient route if an L5 vehicle ran scared every time it got confused...
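The "route around the abnormal" default described above can be sketched as a toy decision rule. Every name here is hypothetical and this is nothing like a real planner; it just shows the shape of the fallback:

```python
from enum import Enum, auto

class Action(Enum):
    PROCEED = auto()       # keep driving on the planned route
    DRIVE_AROUND = auto()  # pass the obstacle within the lane/road
    REROUTE = auto()       # U-turn and find an alternative route

def handle_obstacle(recognized: bool, can_pass: bool,
                    can_drive_over: bool) -> Action:
    """Toy fallback for an unclassified object blocking the lane:
    if we know what it is or can safely drive over it, proceed;
    if we can pass it, do so; otherwise bail out and reroute."""
    if recognized or can_drive_over:
        return Action.PROCEED
    if can_pass:
        return Action.DRIVE_AROUND
    return Action.REROUTE

# "X is blocking the road, I have no idea what X is, I can't drive
# around it and I shouldn't drive over it" -> find an alternative route.
print(handle_obstacle(recognized=False, can_pass=False,
                      can_drive_over=False))
```

The interesting engineering question is exactly the one raised above: how often the bottom branch fires in practice.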
 
This does bring me back to the "Turing Test" point someone brought up in the Feature Complete thread. The driving systems obviously don't need to solve the Turing Test, but it is sounding like they will need a degree of creativity in order to be truly L5.

No matter how well you train perception, or how fine-grained your driving policy is, there will always be a situation that the system has never encountered before. Maybe part of this can be avoided by teaching a system what's normal and letting it default to routing around something if it's deemed abnormal ("X is blocking the road, I have no idea what X is but it looks like I can't drive around it and I shouldn't drive over it, so I'll make a U-turn and find an alternative route.") But it would make for an awfully inefficient route if an L5 vehicle ran scared every time it got confused...

The key is how often the autonomous car gets confused. I agree that if it gets confused and takes inefficient routes frequently, that would be bad. But if it happens extremely rarely, that would be acceptable. So I think the driving policy just needs to be sufficiently advanced that it gets confused as infrequently as possible. Also, the important thing is that the autonomous car never just stops and freezes in the middle of the road, but that the driving policy is able to figure something out even if it means maybe not taking the best route. Again, if it only happens very, very rarely, that would be acceptable.

In fact, that is precisely what companies like Waymo, Zoox, Mobileye, and Cruise are working on, because they need to develop a driving policy that can handle as many cases as possible. IMO, that is precisely the great challenge of autonomous driving today. Companies doing autonomous driving already have autonomous cars with excellent perception that can handle most driving routes. Solving those remaining edge cases, so that the autonomous car can actually be as useful as possible in virtually every situation, is the goal.

And based on the videos I have seen, they appear to be working on quite good driving policy. For example, recognizing double parked cars, parked delivery trucks, recognizing if they have their emergency blinkers on or a door open to determine how quickly they might move, recognizing boxes that have fallen off of delivery trucks, etc... and also monitoring incoming traffic to safely determine if and when they can pass around the obstacle. So I think we can build driving policy that will be quite good. Will it be able to handle every situation? No. But IMO, it just needs to be able to handle like 99.99999% of situations.
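As a rough sketch of how the cues listed above might feed a heuristic, with all field names and the rule itself invented for illustration (a real system would score probabilities, not booleans):

```python
from dataclasses import dataclass

@dataclass
class ParkedVehicle:
    hazards_on: bool         # emergency blinkers active
    door_open: bool          # any door observed open
    reverse_lights: bool     # reverse lights lit
    is_delivery_truck: bool  # classified as a delivery truck

def likely_to_stay(v: ParkedVehicle) -> bool:
    """Crude heuristic: reverse lights suggest imminent movement
    (wait rather than pass); hazards, an open door, or a delivery
    truck suggest a long stop (treat as a static obstacle and,
    traffic permitting, pass around it)."""
    if v.reverse_lights:
        return False
    return v.hazards_on or v.door_open or v.is_delivery_truck
```

For example, a truck with hazards on and a door open would be treated as a static obstacle, while anything showing reverse lights would not.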
 
The Zoox video is impressive. It drives home the point that the difference between "feature complete" and FSD is substantial.

As we know, R&D and production are different things. Zoox is doing a great job, with a formidable sensor suite. They've done a lot of homework on the perception and policy side. It's a shame there isn't a way to blend that with Tesla's vast fleet data. Or, as @diplomat33 said, it would be neat if Tesla would show us more of the current state of their development.
 
New 40 min unedited video with drone view. 12 cameras only, as before of course.


Thanks. You beat me to it.

I think Mobileye uses HD maps as well, right? I think that is important because the HD map would supplement the camera vision and increase reliability.

Obviously, Mobileye has great camera vision to be able to do this demo. But I am also impressed by the driving policy. This demo really highlights to me how complex city driving can be. There are so many things that can happen that you don't expect. Mobileye has really good driving policy to be able to react to these unexpected, random events.
 
It seems Mobileye has a bit of a "dancing cars" effect on their screen like with Tesla. Not sure if it is just a visualization glitch or a consequence of camera vision not being quite good enough in some cases.
 
It seems Mobileye has a bit of a "dancing cars" effect on their screen like with Tesla. Not sure if it is just a visualization glitch or a consequence of camera vision not being quite good enough in some cases.

Good point. I chuckled a bit at 20:38 when a parked truck on the right became a GIANT sedan.


I think Mobileye uses HD maps as well, right? I think that is important because the HD map would supplement the camera vision and increase reliability.
I brought this up in the other thread, but I'm curious how they're doing localization to their HD maps without lidar. I'm guessing they're using Pseudo Lidar. If I recall correctly, they're using Pseudo Lidar (or something similar) to generate HD maps from Mobileye-equipped consumer cars. Pretty sure it's not real-time in the consumer cars, but it could be in the AV.