Autonomous Car Progress

Waymo has a level 4 service available to the public in Chandler. Tesla and Mobileye only have level 2, so they must be way behind!
It's way more nuanced than that, and someone whose barometer leads them to believe Tesla is 5-10 years ahead because Tesla makes grandiose claims is never going to follow the logic. That is literally your response to the Mobileye video: that they didn't overlay "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself." the way Tesla did in their 2016 video.

When you ask other Tesla fans, it's the same story: Tesla is using 8 low-resolution cameras, which is superior to loldars, and Tesla has 'billions of miles of data', blah blah blah.

Everyone could have L5 today and Tesla fans would still be claiming Tesla is ahead because they have 'billions of miles of data'. After 6 years, you would think this data advantage would show up in actual substance that you can point to.

Then they try to say that Tesla works everywhere while Waymo is only L4 in Chandler, and that that's the data advantage.
No... SuperVision not only works anywhere in the US, but also in the EU, China, Japan, etc.
Huawei Autopilot works anywhere in China. So if the claim of a data advantage is 'working anywhere in a country', this isn't it.

Rather than basing things on grandiose claims or fantasy ('data advantage', exponential nonsense), you base things on current technology, current deployments, and an actual, realistic development timeline (again, not grandiose claims by the CEO).

I would say Mobileye is at least 1-2 years behind Waymo, depending on what happens next year. If they can actually launch a REAL robotaxi service like what Waymo has now in Chandler (no safety drivers, no NDA, no fixed routes, but an actual square-mile service area), then they would be a year behind Waymo; and if they can launch in another urban city in 2023 like they plan, they would leapfrog Waymo.

But Waymo has its own roadmap to counter with. If they launch the same service they have in Chandler in SF next year, then even if Mobileye launches in Jerusalem and Tel Aviv, Waymo keeps a comfortable one-year lead. If Mobileye fails to launch next year, they fall to two years behind and Waymo has a comfortable lead.

If Waymo doesn't launch next year and all goes well for Mobileye (unlikely), then Mobileye takes the reins.

It's funny how people can watch the same thing and come to different conclusions. People seem to miss how sloppy and unpolished Mobileye is. They'll never get there. They're like the Nikola of FSD (although probably not fraud). They use the most smoke and mirrors and gibberish presentations.
Funny, because you won't find a 40-minute FSD Beta drive in a dense urban environment like NYC with traffic.
Watch and be critical of their technical presentations, particularly the parts about separate lidar/radar subsystems, vision-only, true redundancy, REM mapping at 10kb a mile, needing to practice, their roadmap, etc.
You mean the Autonomy Day roadmap where Elon said he would have a million L5 robotaxis in 2020, and that Autopilot wouldn't need supervision and you would be able to look out the window by Q2 2020?

Maybe it's the part where Elon brushes off the chip not being fail-operational and not meeting any of the safety standards.
Or the part where a Tesla employee says they will use the driver-monitoring cam to see behind the car, and then we find out it's absolutely trash.


Or the barebones, simplistic Deep Learning 101 presentation they gave that wowed the entire TSLA community because they are completely clueless. Maybe it's the tech presentation where Andrej said they don't do much simulation and would rather focus on their bread and butter. I could go on and on.

Have you ever tried to find out the size of a vector map of a mile?
Do you even know what true redundancy is? Do you know what early fusion and late fusion are? Why is the head of Waymo Research backing up Mobileye's true redundancy here?
Do you know how long Mobileye has been doing vision-only? Have you even attempted to find out?
Do you understand the logistics, warehousing, repair, fleet response, routing, pushing software updates to an autonomous fleet without introducing bugs, teleops, and the customer-service difficulties of a robotaxi?
Do you realize that the main goal of Waymo in Chandler, as they have already said multiple times, is to practice running a robotaxi service?

All you care about is cameras, cameras, cameras and data, data, data.
I'm sorry to break it to you, but there's a lot more to SDCs than just cameras and data. Robotaxis are a logistics nightmare.
 
Good article about AV deployment:

What nations and cities will be the first to deploy self-driving car technology at scale? Only time will tell. But when autonomous vehicle experts consider that question, the same places are repeatedly cited as likely hotbeds for driverless cars. On the international level, it’s the United States, Germany, and China. And, within the U.S., industry observers point to the San Francisco Bay Area, Pittsburgh, the Phoenix area, Miami, Austin and Detroit.

 
This is my thesis, and it is speculative.

Vision is "solved" enough that it is a practical platform from which to build/expand driving.

Limited access highway driving is not perfect but delivering results that demonstrate value.

Surface street driving is a much larger challenge, and vision alone does not solve it. The solution DOJO is focused on seems to be labeling and processing video data much faster, and possibly much more than that.

The next level solution to surface street driving is applying the mothership NN to mapping. Each intersection gets a probability of successful navigation based on vast processing of past experience with each intersection. The NN determines a route based on the highest probability of intersection success. This would primarily be based on approach. Which approach to an intersection has the highest probability of a safe transition?

This would be route based optimization of navigation on autopilot or something like it. This would deliver surface street safety performance equal to highway driving.

This could be compared to geo-fencing but it is far more NN based as I see it.

Map routing analysis solves for approaching each intersection the "best" way given the dynamic conditions of traffic, weather, visibility, activity, time of day, etc. What this means is that the goal of handling any driving situation is surrendered to the idea that there is a best way through each intersection. As an extreme example, it could map routes in a way that maximizes right turns, minimizes left turns, and avoids traffic circles and uncontrolled intersections.

Tesla has already separated tasks such as "summon" and "highway driving" into their own categories. I am proposing that one (not the only) way forward is to generate dynamic maps for each destination that maximize intersection success based on the vast experience of the fleet. This is a practical way to significantly advance the march of 9s in terms of safety.

One possible way this could work (not the only way) would be for the driver to have a mapping setting under FSD that lets them choose a minimum zero-intervention success probability; a rough sketch of the idea follows below. The choice the driver is making is time vs. convenience vs. safety. This would provide the maximum likelihood of a safe trip to the destination. Human driving would have a lower probability of success, for example.
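As a toy illustration of that kind of routing (my sketch, not anything Tesla has described): if each intersection approach has an estimated zero-intervention success probability, then maximizing the product of probabilities along a route is equivalent to a shortest-path search over -log(p) edge weights, and the driver's setting becomes a threshold on the route's overall probability.

```python
import heapq
import math

# Hypothetical per-approach success probabilities, as if estimated from fleet data.
# Graph: node -> list of (neighbor, P(zero-intervention success) for that approach).
graph = {
    "home": [("A", 0.9999), ("B", 0.9990)],
    "A":    [("dest", 0.9998)],
    "B":    [("dest", 0.99999)],
}

def safest_route(graph, start, goal):
    """Dijkstra over -log(p) weights: minimizing the sum of -log(p)
    maximizes the product of success probabilities along the route."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return math.exp(-cost), path   # overall success probability, route
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, p in graph.get(node, []):
            heapq.heappush(pq, (cost - math.log(p), nxt, path + [nxt]))
    return 0.0, None

prob, route = safest_route(graph, "home", "dest")
print(route, f"P(no intervention) = {prob:.5f}")

# The driver-facing setting would just reject routes below a chosen threshold:
THRESHOLD = 0.9995   # hypothetical user choice
print("acceptable" if prob >= THRESHOLD else "re-route or require supervision")
```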

As NN solution probabilities improve the time to safe destination would improve.

Every day FSD waits to reach a near-perfect solution (.999999...) for every driving situation, there are deaths and injuries due to accidents. At some point the NN technology will be good enough that some navigational maps/routes could deliver the desired endpoint along a particular route. We should seek this solution in the near term and make it available. Tesla is about to come to this decision point, IMO.
How a problem is solved depends on how it is formulated. There are different approaches, each with its own set of constraints and its own way of satisfying them. My take is that it's a problem of bounded optimality and satisficing rationality, and it must be solved in real time. Given the uncertainty in human driving, the historical data is too dirty.

 
Someone break down the Mobileye approach logic for me.

Here's what Mobileye says about their approach:

1) Use the current Mobileye ADAS fleet to crowdsource maps with REM technology (based on vision). No images or video are collected, only 10kb per mile, due to bandwidth limitations. (See the rough size arithmetic after this list.)

2) Have a vision-based subsystem and a separate lidar/radar-based subsystem.

3) Both the vision and lidar/radar subsystems localize themselves in the REM map.

4) Their final system's MTBF is the product of the MTBFs of the vision and lidar/radar subsystems: if the vision system fails every 1,000 miles and the lidar system fails every 1,000 miles, then their true redundancy (the combination of the two systems in one car) gives an MTBF of 1,000 x 1,000 = 1,000,000 miles between failures.
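On the 10kb-per-mile figure in point 1, some rough arithmetic shows a sparse vector map can plausibly fit in that budget (every density and byte count below is my assumption, not Mobileye's):

```python
# Rough arithmetic on what ~10 kB/mile of vector map could hold.
# All densities and byte counts are assumptions for illustration.

POINT_BYTES = 12          # one 3D point as three 32-bit floats
POINT_SPACING_M = 5       # assume a sample every 5 meters along a boundary
MILE_M = 1609

points_per_boundary = MILE_M // POINT_SPACING_M        # ~321 points
lane_boundaries = 2                                    # one lane, both edges
geometry = points_per_boundary * POINT_BYTES * lane_boundaries

LANDMARK_BYTES = 20       # assume type + position + size per sign/pole
landmarks_per_mile = 40   # assumed landmark density

total = geometry + LANDMARK_BYTES * landmarks_per_mile
print(f"geometry ~{geometry / 1024:.1f} kB, landmarks ~{LANDMARK_BYTES * landmarks_per_mile / 1024:.1f} kB")
print(f"total ~{total / 1024:.1f} kB per mile")   # same ballpark as the quoted 10 kB
```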

If someone can explain the following in a logical way, that'd be great:

1) How would the system know whether vision failed vs. lidar? When there's a disagreement between vision and lidar, how can it know which one failed? The true redundancy calculation makes zero sense to me. The MTBF is limited by the weakest link in the system, not the product of the two systems.

2) If the lidar and vision subsystems localize themselves on a vision-generated REM map, wouldn't most failures be caused by incorrect REM map data? Also, how do they know the REM map is reliable when the ADAS fleet doesn't receive the NN/software updates that would improve recognition of road semantics, signs, etc.?
 
4) Their final system's MTBF is the product of the MTBFs of the vision and lidar/radar subsystems: if the vision system fails every 1,000 miles and the lidar system fails every 1,000 miles, then their true redundancy (the combination of the two systems in one car) gives an MTBF of 1,000 x 1,000 = 1,000,000 miles between failures.
Wouldn't this be 1000/2?
 
All you care about is cameras, cameras, cameras and data, data, data. I'm sorry to break it to you, but there's a lot more to SDCs than just cameras and data. Robotaxis are a logistics nightmare.

I always thought the Tesla robo-taxi thing was an astonishingly bad idea. Even if the technology was there, I have 0 interest in renting out my $60k personal vehicle for others to ride in/puke in/tear up/have it drive to sketchy places/etc. I have 0 interest in, essentially, running a business. Even if Tesla's network supposedly takes on liability, taxes, paperwork, and everything else, I have 0 interest in my personal vehicle being involved in any of it.

I don't have anything against robo-taxi services though, and do think they have a place in the future for non-personal-vehicle transport.

Anyway, my short-term interest with Tesla, along with many others', is the L2 driver assist maturing to the point where operating the vehicle becomes a high-level task, like a captain on a bridge instructing a helmsman. That would add a huge amount of value to me personally.

My long-term interest with Tesla is having a personal car whose back seat I can sleep in while it drives me places.

All of this to say... while Elon deserves criticism for his (IMO) poorly thought-out robo-taxi spiel, I don't really care how Tesla cars compete in that space, and their success or failure as a robo-taxi doesn't change my opinion of their cars much. I'll happily drive my L2 Tesla as a personal vehicle, and take a Waymo to the airport.
 
Wouldn't this be 1000/2?
It depends on whether you can drive correctly with only one set of sensors or the other, and on whether the two sensors' failures are independent.

If the answer is no, then depending on whether the failures are fully independent or not, it could be anywhere between 1000/2 and 1000.

If the answer is yes, then depending on whether failures are fully independent or not, the answer could be anywhere between 1000 and (1000^2).

My gut says that the answer is no, and that the failures are largely independent, so my guess would be 1000/2 or thereabouts.
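A minimal Monte Carlo sketch of the two cases, assuming per-mile failures that are fully independent (the in-between values above come from correlated failures, which this toy model ignores):

```python
import random

MTBF = 1000          # miles between failures for each subsystem
p = 1 / MTBF         # per-mile failure probability, assumed independent
MILES = 10_000_000   # simulated miles

series_failures = 0    # "no": system fails if EITHER subsystem fails
parallel_failures = 0  # "yes": system fails only if BOTH fail in the same mile

for _ in range(MILES):
    vision_fail = random.random() < p
    lidar_fail = random.random() < p
    if vision_fail or lidar_fail:
        series_failures += 1
    if vision_fail and lidar_fail:
        parallel_failures += 1

print(f"series MTBF   ~ {MILES / series_failures:,.0f} miles (expect ~{MTBF / 2:,.0f})")
if parallel_failures:
    print(f"parallel MTBF ~ {MILES / parallel_failures:,.0f} miles (expect ~{MTBF**2:,})")
```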
 
1) How would the system know if the vision failed vs lidar? When there's a disagreement between vision vs lidar, how can it know which failed?

Here is a diagram of the "true redundancy" approach:

[Diagram: modelling-diagram2.svg, Mobileye's "true redundancy" architecture: the camera subsystem and the radar/lidar subsystem each build their own world model, and each world model feeds the RSS driving policy]


Note that Mobileye is not doing traditional sensor fusion. They are not trying to fuse cameras, radar, and lidar into one single coherent perception model to drive the car. Instead, as you can see in the diagram, the cameras feed into their own world model, which feeds into the RSS driving policy to determine what to do. The radar/lidar also feed into their own separate world model, which feeds into the RSS driving policy to determine what to do. The system also fuses the camera/radar/lidar into a third world model that feeds into the RSS driving policy. Then the car compares the driving policy decisions from all 3 independent models to determine which one is the safest. So ME is only comparing disagreements at the planning level, not the perception level.

There could be small disagreements that don't affect the final planning, where all 3 planning models still agree. Any disagreement between camera and radar/lidar that does not produce a planning difference can be ignored. ME only has to reconcile disagreements in the final planning that affect safety. For example, if the camera world model says to drive straight at the current speed while the combined world model and the radar/lidar world model both say to brake hard to avoid a collision, then the RSS driving policy has to decide which one to trust. In this case, since 2 out of the 3 models say to brake hard, I am guessing the car would brake hard to avoid a collision. But part of RSS is that it has rules to make sure the car drives safely.
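Here is a toy sketch of what that plan-level arbitration could look like (my illustration; Mobileye hasn't published their arbitration code, and the maneuver names are invented):

```python
from collections import Counter

# Toy plan-level arbitration between three independent world models.
# Higher rank = more conservative maneuver, used as the safety tie-break.
CAUTION_RANK = {"proceed": 0, "slow": 1, "brake_hard": 2}

def arbitrate(camera_plan, fused_plan, lidar_radar_plan):
    plans = [camera_plan, fused_plan, lidar_radar_plan]
    plan, count = Counter(plans).most_common(1)[0]
    if count >= 2:
        return plan                          # 2-of-3 agreement wins
    # Three-way disagreement: fall back to the most conservative maneuver.
    return max(plans, key=CAUTION_RANK.get)

# The example from the post: the camera model says keep going, while the
# combined model and the radar/lidar model both call for hard braking.
print(arbitrate("proceed", "brake_hard", "brake_hard"))  # -> brake_hard
```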

The true redundancy calculation makes zero sense to me. The MTBF is limited by the weakest link in the system, not the product of the two systems.

In the ME system, I think the idea is that the system only fails if both parts fail at the same time. So it is not dependent on the weakest link. The car can still drive if the cameras fail. The car can still drive if the radar or lidar fails. If the failures are independent, then you can multiply the odds of failure of the two systems to get the odds of failure of the whole system.

Keep in mind that Amnon is not saying that the product of the two systems is an exact calculation. But he feels the systems are close enough to independent that the product is a good enough estimate. And he feels there is enough buffer that even if the estimate is off, the car is still safe enough. For example, say the true figure is not 10M miles per failure but only 4M miles per failure. Even 4M miles per failure would still be a very safe FSD.

The point is not the specific number. The point is that having two independent FSD systems will be significantly safer than having a single FSD system. So if one FSD system is safe by itself, then combining both makes your vehicles even safer for deployment.
 
Then the car compares the driving policy decisions from all 3 independent models to determine which one is the safest. So ME is only comparing disagreements at the planning level, not the perception level.

Still sounds like mumbo jumbo. I’m not sure even you believe what you’re saying lol.

Having three world models and choosing “the safest”? Mobileye is just over-complicating their approach so people stop thinking about their BS.
 
Still sounds like mumbo jumbo. I’m not sure even you believe what you’re saying lol.

Having three world models and choosing “the safest”? Mobileye is just over-complicating their approach so people stop thinking about their BS.

It is not mumbo jumbo; you are just not understanding how it works. Other engineered systems like airplanes and rockets already use the same principle. They might have two computers calculate something independently and compare the results, and they have back-up systems so that the system can still function even when one part fails. That is why airplanes and rockets have such high safety: they are much safer than they would be without the extra back-ups. ME is applying the same principle to autonomous driving.

The Tesla approach might be much simpler, but the safety of FSD will be entirely dependent on the reliability of the vision system. The entire safety case rests on one system: if vision fails, FSD fails. IMO, it will be harder for Tesla to achieve the necessary safety with vision-only.
 
It is not mumbo jumbo; you are just not understanding how it works. Other engineered systems like airplanes and rockets already use the same principle. They might have two computers calculate something independently and compare the results, and they have back-up systems so that the system can still function even when one part fails. That is why airplanes and rockets have such high safety: they are much safer than they would be without the extra back-ups. ME is applying the same principle to autonomous driving.

The Tesla approach might be much simpler, but the safety of FSD will be entirely dependent on the reliability of the vision system. The entire safety case rests on one system: if vision fails, FSD fails. IMO, it will be harder for Tesla to achieve the necessary safety with vision-only.
Except "eye" doesn't imply non-visual LIDAR or radar. ME's spiel was autonomy with their EYE (vision) system. Adding redundancy with multiple sensors is the direction you advocate, while emulating human vision is the direction EM and company are taking. We'll see which system does better in real life, non-gerrymandered areas.

And thanks for indicating "IMO." Respect.
 
Except "eye" doesn't imply non-visual LIDAR or radar. ME's spiel was autonomy with their EYE (vision) system. Adding redundancy with multiple sensors is the direction you advocate, while emulating human vision is the direction EM and company are taking. We'll see which system does better in real life, non-geofenced lab areas.

They are not mutually exclusive, though. You can emulate human vision AND also add redundancy. That's what Waymo, Mobileye, and co. are doing. I advocate doing both: emulating human vision AND adding redundancy.

And thanks for indicating "IMO." Respect.

I don't want to incur your wrath again. LOL.
 
They have back-up systems so that the system can still function even when one part fails. That is why airplanes and rockets have such high safety: they are much safer than they would be without the extra back-ups. ME is applying the same principle to autonomous driving.

No, the same principle does not apply to Mobileye's approach. I'm not going to explain it in detail, because you describe it in your own post.

Airplane and rocket redundancy is about having back-up systems to improve safety. Airplane and rocket redundancy systems are not there to improve navigation performance, and airplane and rocket systems aren't constantly choosing which sensor is "the safest" and making decisions based on that "world view." Once again, I don't think you actually believe in the Mobileye logic.
 
No, the same principle does not apply to Mobileye's approach. I'm not going to explain it in detail, because you describe it in your own post.

Airplane and rocket redundancy is about having back-up systems to improve safety. Airplane and rocket redundancy systems are not there to improve navigation performance, and airplane and rocket systems aren't constantly choosing which sensor is "the safest" and making decisions based on that "world view." Once again, I don't think you actually believe in the Mobileye logic.
They actually are redundant. In simple terms, the results are compared and a voting scheme is employed to determine which result to accept and which may need to be rejected. If results from a particular source keep getting rejected beyond some range or frequency, that source would be put into a failed state.
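A minimal sketch of that style of voting with fault latching (illustrative only; real avionics voters are far more involved, and every threshold here is made up):

```python
# Toy avionics-style voter: mid-value select across redundant channels,
# and latch a channel as failed if its readings keep getting rejected.

FAIL_LIMIT = 5       # consecutive rejections before a channel is failed
TOLERANCE = 2.0      # max allowed deviation from the voted value

class Channel:
    def __init__(self, name):
        self.name = name
        self.rejections = 0
        self.failed = False

def vote(channels, readings):
    healthy = [(c, r) for c, r in zip(channels, readings) if not c.failed]
    values = sorted(r for _, r in healthy)
    selected = values[len(values) // 2]      # mid-value select
    for c, r in healthy:
        if abs(r - selected) > TOLERANCE:
            c.rejections += 1
            if c.rejections >= FAIL_LIMIT:
                c.failed = True              # latch the channel as failed
        else:
            c.rejections = 0
    return selected

chans = [Channel("A"), Channel("B"), Channel("C")]
for step in range(6):
    # Channel C drifts away from the others and is eventually failed.
    print(vote(chans, [100.0, 100.5, 100.0 + 10 * step]))
print([c.name for c in chans if c.failed])   # -> ['C']
```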
 
They actually are redundant. In simple terms, the results are compared and a voting scheme is employed to determine which result to accept and which may need to be rejected. If results from a particular source keep getting rejected beyond some range or frequency, that source would be put into a failed state.
Isn't this (viewed at a macro level) exactly the reason for Tesla going to vision-only, due to instances of phantom braking (or of not braking for stationary obstructions)?
 
I always thought the Tesla robo-taxi thing was an astonishingly bad idea. Even if the technology was there, I have 0 interest in renting out my $60k personal vehicle for others to ride in/puke in/tear up/have it drive to sketchy places/etc.

If I were Tesla, I would just stay at Level 2 or 3 for the next 5-10 years and be proud of it; higher levels just open them up to liability. There's so much money to be made on an advanced driver-assistance subscription/add-on (especially paired with their own insurance product) versus getting into the whole cutthroat, race-to-the-bottom ridesharing/robotaxi business. No one in that field is ever going to make their investment back.
 
Aurora just shared their blueprint for launching a robotaxi service in 2024 with Uber and Toyota.

Here are the highlights:

1. Autonomous ride-sharing will be a massive market that outgrows that of trucking.

2. Our 10-year agreement to receive Uber’s data gives us a unique competitive advantage.

For example, we’ve already leveraged Uber’s detailed marketplace data to:
  • Select the city locations of our first launch;
  • Prioritize our development roadmap—we know what trips are most popular and where they take place (for example, rides to the airport), and what roadways are most frequently traversed; and
  • Develop tools to optimize our fleet positioning, including pick-up and drop-off zones.
3. The farther you can see, the faster you can go. The Aurora Driver’s “common core” allows all of our vehicles to leverage FirstLight Lidar to operate at highway speeds. Since a significant number of ride-hailing trips use a freeway or hit at least 50 mph, our ride-hailing product will benefit from our trucks’ freeway-focused experiences and we’ll be able to target lucrative trips to the airport.

4. Launching the Aurora Driver with partners like Uber will make every ride cheaper. A hybrid model of human-driven and autonomous rides will reduce trip costs to as cheap as $1 a mile (prices are currently 3-4x that), making ride-hailing more affordable than owning a car.

5. With the support of the largest passenger car manufacturer in the world, our fleet of autonomous Toyota Siennas will offer riders a personalized customer experience, accelerate electrification, and, most importantly, save lives.

Read more: Our blueprint for launching our ride-hailing business in 2024
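Point 3 above ("the farther you can see, the faster you can go") is basically stopping-distance arithmetic. A rough sketch, with assumed reaction time and deceleration (both numbers are mine, not Aurora's):

```python
# Rough sensing-range arithmetic: reaction distance plus braking distance.

def required_sensing_range_m(speed_mph, reaction_s=0.5, decel_ms2=4.0):
    v = speed_mph * 0.44704                    # mph -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

for mph in (25, 50, 70):
    print(f"{mph} mph -> ~{required_sensing_range_m(mph):.0f} m of sensing range")
# 25 mph needs ~21 m, but 70 mph needs ~140 m, hence long-range
# lidar for highway-speed driving.
```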

It is an interesting blueprint. Aurora certainly has the FSD tech to make robotaxis possible and partnering with Uber and Toyota will help them deploy. But 2024 for their first service, really? So 3 years from now, if all goes well, just to deploy their first service. That seems late to the game to me.

The robotaxi race definitely looks to be heating up in the next couple of years, with more and more AV companies looking to deploy their services.

It was inevitable since start-up AV tech companies don't have the manufacturing to deploy and maintain a fleet of commercial robotaxis, but it is interesting to see the partnerships forming. Ford with Argo and Lyft. Aurora with Toyota and Uber. Cruise with GM. Waymo with FCA. It's like in school when you got picked for the team. Each AV company is picking their automaker to partner with to deploy their tech. I find it fascinating.
 
Isn't this (viewed at a macro level) exactly the reason for Tesla going to vision-only, due to instances of phantom braking (or of not braking for stationary obstructions)?
I'm not familiar with the issue, and my experience is with avionics. But I would say no, if only because the redundant sensors need not be different, just redundant. To me, one reason to use different sensor types is as a backup, albeit with a potential decrease in fidelity. My guess is it's more of a data-fusion issue, although I can't rule out cost-benefit.
 
I'm not familiar with the issue, and my experience is with avionics. But I would say no, if only because the redundant sensors need not be different, just redundant. To me, one reason to use different sensor types is as a backup, albeit with a potential decrease in fidelity. My guess is it's more of a data-fusion issue, although I can't rule out cost-benefit.
Right, redundant cameras and CPUs, not trying to meld redundant sensors of different types.
 
Right, redundant cameras and CPUs, not trying to meld redundant sensors of different types.
One can't overlook the fact that the development, verification, and validation of safety-critical systems can cost an order of magnitude more, and takes a lot more time. Usually the life expectancy of such systems is decades. I don't think any of these technologies are following this practice; they're just stuffed full of A.I.