Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

What I want to figure out is how everyone else is doing the planning; they are not as transparent as Tesla.

Put simply, Cruise and Waymo have said that they use ML to predict the paths of other road users and then, based on those predictions, plot a safe path accordingly. Waymo uses a deep neural net called MultiPath++ that maps out possible paths of other road users, with probabilities, up to 8 seconds into the future. Those predictions then feed into their planner, which maps out a safe path for the car. Anguelov has also explained that prediction and planning are interrelated: your actions affect the actions of other road users, and their actions in turn affect yours. He says Waymo is still working out the best way to combine prediction and planning.
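As a toy illustration of that prediction-then-planning split (this is not Waymo's actual MultiPath++ interface, just a sketch with made-up numbers): predicted trajectories carry probabilities, and the planner scores candidate ego paths by expected collision risk.

```python
# Hypothetical predicted trajectories for one other road user: each entry is
# (probability, list of (x, y) positions at 1 s intervals). All values are
# illustrative, not any real model's output format.
predictions = [
    (0.7, [(0, 0), (5, 0), (10, 0)]),   # most likely: continues straight
    (0.3, [(0, 0), (5, 2), (10, 4)]),   # less likely: drifts toward our lane
]

def collision_risk(ego_path, predictions, radius=2.0):
    """Expected collision probability: sum of P(trajectory) over every
    predicted trajectory the ego path ever comes within `radius` of."""
    risk = 0.0
    for prob, other_path in predictions:
        for (ex, ey), (ox, oy) in zip(ego_path, other_path):
            if (ex - ox) ** 2 + (ey - oy) ** 2 <= radius ** 2:
                risk += prob
                break
    return risk

# Candidate ego plans in our lane (y = 4): keep speed, or slow down.
keep_speed = [(0, 4), (5, 4), (10, 4)]
slow_down  = [(0, 4), (3, 4), (6, 4)]

# The planner picks the candidate with the lowest expected risk.
best = min([keep_speed, slow_down], key=lambda p: collision_risk(p, predictions))
```

With these numbers, keeping speed passes within 2 m of the 0.3-probability drift trajectory, so the planner chooses to slow down instead.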

IMO, the "Lean Driving Policy" video explains Mobileye's planning pretty well:

Mobileye rejects the traditional approaches of Tesla, Waymo, and Cruise of using Monte Carlo or some other search algorithm to predict paths. They also reject end-to-end learning for planning. The video explains why they think these approaches are not the best: simply put, Mobileye believes they don't guarantee good enough results, require you to build models of other road users that may not match reality well enough, and can be very computationally expensive. The video goes into more detail.
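For context, the kind of sampling-based ("Monte Carlo") rollout planner being rejected can be sketched in a few lines: sample many random action sequences, simulate each through a simple motion model, and keep the lowest-cost one. Everything here (the 1-D car-following scenario, the cost terms, the numbers) is illustrative, not any company's actual planner.

```python
import random

def rollout_planner(v0, gap0, lead_v, n_samples=200, horizon=5, dt=1.0, seed=0):
    """Toy Monte Carlo planner: sample random acceleration sequences,
    simulate a 1-D following scenario, and return the lowest-cost sequence.
    Cost penalizes closing the gap below 10 m and deviating from 25 m/s."""
    rng = random.Random(seed)
    best_cost, best_plan = float("inf"), None
    for _ in range(n_samples):
        plan = [rng.uniform(-3.0, 2.0) for _ in range(horizon)]  # accels, m/s^2
        v, gap, cost = v0, gap0, 0.0
        for a in plan:
            v = max(0.0, v + a * dt)
            gap += (lead_v - v) * dt      # gap shrinks when we outrun the lead car
            cost += (v - 25.0) ** 2       # prefer staying near the desired speed
            if gap < 10.0:
                cost += 1e6               # heavy penalty: unsafe following gap
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan, best_cost
```

The sketch also shows the criticisms from the video: the result depends on how many samples you draw (no guarantee), on the motion model matching reality, and the compute cost grows with samples × horizon.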

Instead, Mobileye uses deep learning to predict the intent, not the path, of other road users. So Mobileye's NN predicts behavior like making a U-turn, changing lanes, cutting in, yielding, etc., instead of trying to map out the entire path of each road user. Then they use intent, coupled with their RSS (Responsibility-Sensitive Safety) model, which sets the boundaries of safe driving, to plot a safe path.

The video gives some examples. Say the Mobileye AV is trying to merge onto a highway. Perception detects the other vehicles. If the NN predicts the other car will not yield, the Mobileye car slows down and plots a safe path to merge smoothly after the other car has passed. If the NN predicts the other car is yielding to you, it might instead plot a path where it speeds up to merge safely in front of the vehicle. Likewise with pedestrians: if the NN predicts the pedestrian intends to cross the street, it will plot a safe path that yields to the pedestrian. If, on the other hand, the NN predicts the pedestrian does not intend to cross, it will plot a safe path to keep going, maybe slowing down a bit just in case.

The RSS uses mathematical equations to govern what a safe distance from other road users is. The car uses RSS to make sure it is plotting a safe path.
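For the curious, the RSS longitudinal safe-distance rule from Mobileye's published work can be written out directly. The parameter values below (response time, acceleration and braking limits) are illustrative defaults, not Mobileye's production settings.

```python
def rss_min_gap(v_rear, v_front, rho=0.5, a_accel=2.0, b_min=4.0, b_max=8.0):
    """Minimum safe longitudinal gap (m) per RSS: assume the rear car
    accelerates at a_accel during its response time rho, then brakes at its
    minimum rate b_min, while the front car brakes at its maximum rate b_max.
    Velocities in m/s, accelerations in m/s^2."""
    d = (v_rear * rho
         + 0.5 * a_accel * rho ** 2
         + (v_rear + rho * a_accel) ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(0.0, d)
```

Under these parameters, two cars both traveling at 20 m/s (~72 km/h) need a gap of about 40 m; if the rear car is stopped, any gap is safe and the formula clamps to zero.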
 
About Shashua's Under the Hood CES 2022 presentation, I thought it was interesting that he is promising L4 consumer AVs that will go everywhere, thanks to Mobileye's scalable maps. "L4 everywhere" sounds like an oxymoron. I can see a few possibilities:
1) He is just trying to underpromise. If he promises L5 then it could look bad if the consumer AVs don't actually work everywhere. But if he promises L4 and they do work everywhere or almost everywhere, it will look really good.
2) The AVs will work everywhere but will have some other ODD limitation like speed or weather restriction and thus will be L4.
3) Shashua mentions that they will have a very wide ODD. Maybe the ODD will be very big but not actually be unlimited and thus not L5. The scalable maps may cover say 99% of roads which will be great for most users but that would not be L5. So it could be almost L5 but not quite, and thus still L4, just with a very wide ODD.
4) He is just lying and they won't go everywhere.

I think it is smart to only promise L4 at this point since L5 is probably unrealistic. And really it is silly to promise L5. Consumer AVs with a very wide ODD would already be an incredible achievement.
 
Autonomous progress is still quite a challenge, as shown by another report of an autonomous vehicle accident in Whitby, near Toronto, Canada, that landed the backup driver in the hospital in critical condition:

It's an L3+ Olli autonomous shuttle. The website says it relies on "Computer Vision and Analytics", but the picture seems to show a solid-state (non-mechanically-rotating) LIDAR on the front top and another above the front grille. It's operated by the Whitby Autonomous Vehicle Electric Shuttle Project.


The police report has some additional information about this crash:

"The resulting investigation has found that the vehicle was being operated in manual mode prior to, and at the time the vehicle left the roadway. Therefore, the hazard mitigation safety systems designed for the vehicle while in autonomous mode were disabled at the time of the collision."


So another case of either human error or poor design for the human controls.
 
What if you decide to have only full autonomy outside the city, out on the open road?
And Level 2-3 or manual operation within the built environment?...

Can ‘outer markers’ make self-driving cars happen?

 
I am not sure what the author is trying to say.

L5 on freeways and L2-3/manual within the city built-environment?

The article compares driving in the city's built environment to an airplane using "autoland". The problem is that "autoland" is very complex, and most pilots don't use the automatic landing procedure unless they have to, such as in poor visibility.

Nevertheless, using transmitters/sensors for intersections, traffic lights, road markers, obstacles, even pedestrians, and other cars (V2V), Vehicle-to-Everything (V2X)... is a great idea until the car's collision avoidance system is perfected. The problem is that it's an infrastructure project that costs money. And since the current Build Back Better bill still has a hard time passing, who would have the courage to bring this issue to the 50 Republican plus 1 Democratic senators?
 
It says: "Since AVs operate in the ‘2D pane’, the outer markers only serve to indicate zones."
 
L5 on freeways is L4.
As for V2X, are we going to require cyclists to have Neuralink implants to broadcast their intended path?
It seems that Waymo has already demonstrated that autonomous driving works without V2X. Look at their safety report of all the collisions that occurred over 6 million miles: how would V2X have helped their system perform better? It also sounds a lot more expensive than HD mapping...
 
I haven't seen any AV developers maneuver through complicated inner-city traffic in, let's say, SF at adequate speed.
The simulations I have seen so far ooze cherry-picked footage.
Which reminds me of the old Silicon Valley motto: fake it till you make it.
Or: simulate it till you can make it truly stick.
 
Cruise has a bunch of unedited videos. Obviously the real proof will be when they open to the public, which is supposed to happen this year.
 
There are even some FSD Beta videos of that.

I think both Waymo and Cruise can probably do completely driverless operation in SF now with low risk.