Autonomous Car Progress

The need for sensor fusion comes at some cost once you consider the non-linear relationships involved. This isn't simple Kalman-filter-type sensor fusion.
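
For anyone unfamiliar, here is a toy sketch of what Kalman-style linear fusion looks like (my own illustrative example, not anyone's actual pipeline): two noisy estimates of the same quantity are combined by weighting each with its inverse variance, which is only optimal when the noise is roughly Gaussian and the relationships are linear.

Code:
# Toy 1-D example of Kalman-style linear sensor fusion (illustrative only).
def fuse(x1, var1, x2, var2):
    """Fuse two independent noisy estimates of the same state."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)  # inverse-variance weighting
    var_fused = 1.0 / (w1 + w2)                # fused estimate is more certain
    return x_fused, var_fused

# e.g. radar puts the lead car at 50.0 m (variance 4.0), camera at 47.0 m (variance 1.0)
print(fuse(50.0, 4.0, 47.0, 1.0))  # -> (47.6, 0.8)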

There are many devices that use a limited set of sensors because those sensors alone achieve the needed accuracy.

Obviously, Tesla's claims about not needing lidar / radar will be proven or disproven out in the open for all of us to see. They are betting on the vision system / deep learning algos getting good enough to avoid most of these failure modes. I personally have a hard time seeing that, but it will be interesting to watch.
 
Yes this describes sensor fusion.

I'm just trying to understand Shashua's attempt to distinguish Mobileye's approach from Waymo's and Cruise's approach to sensor fusion.

Here is a graphic that illustrates Waymo's sensor fusion approach. You can see that they use the map data for localization and routing (navigation), and they also fuse the map data and all the sensor data (camera, lidar, radar) into a single perception model. That model feeds the planner, which then determines the vehicle's controls (steering and braking).

[Diagram: Waymo's sensor fusion approach]


Mobileye's sensor fusion is more complicated. They feed the camera data into one world model, and the radar and lidar data into a separate world model. They generate a plan independently from each, and then fuse the two plans into a single driving policy and, finally, the controls.

[Diagram: Mobileye's dual world models with late fusion]


So the big difference is when the fusion happens. Waymo fuses all the perception data up front and then runs a single planner. Mobileye runs a separate planner per sensor suite and then fuses the plans at the end.
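
To make the early-vs-late distinction concrete, here is a rough, runnable toy sketch (every structure and threshold here is made up by me for illustration; neither company's real software looks like this):

Code:
# Toy sketch of early vs. late fusion (all numbers and structures made up).

def plan(world):
    """Stand-in planner: slow down if anything in the world model is within 30 m."""
    nearest = min(world.values(), default=float("inf"))
    return {"speed": 10.0 if nearest < 30.0 else 30.0}

def early_fusion(camera, lidar, radar, hd_map):
    """Waymo-style: fuse everything into one world model, then plan once."""
    world = {**camera, **lidar, **radar, **hd_map}   # single perception model
    return plan(world)

def late_fusion(camera, lidar, radar):
    """Mobileye-style: independent world models and plans, fused at the end."""
    plan_cam = plan(camera)                 # camera-only world model -> plan
    plan_rl = plan({**lidar, **radar})      # radar/lidar world model -> plan
    # Fuse at the policy level: keep the more conservative of the two plans.
    return min(plan_cam, plan_rl, key=lambda p: p["speed"])

# Distances (m) to detected objects, per sensor:
camera = {"pedestrian": 25.0}
lidar = {"car_ahead": 60.0}
radar = {"car_ahead": 58.0}
hd_map = {"stop_line": 80.0}

print(early_fusion(camera, lidar, radar, hd_map))  # one plan from one fused model
print(late_fusion(camera, lidar, radar))           # two plans fused into one policy

The toy only shows where the merge happens, not how either planner actually works.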
 
Yes, not like Tesla's names, AutoPilot and Full Self Driving, which are perfectly unambiguous 😉
However, I think the general public just knows Tesla's assisted-driving system as 'AutoPilot' ... it has been in the news so much that it has become almost the generic industry name. Actually, I think it turned out to be a brilliant move, with everything else being compared to 'Tesla AutoPilot'.

FSD is mostly a thing around these detailed threads (TMC, reddit, etc).

Manufacturers were already using the 'AutoPilot' name back in 1958: https://www.google.com/search?q=autopilot+1958

And in 1956: GM Auto Pilot -- 1956: A future vision of driverless cars
 
There are a lot of things you "don't see", but just because you are incapable of seeing the end goal doesn't mean others have to be (or could be) as "blind"!

My opinion is not blind. It is not about me not being able to see the end goal. We know cameras are suboptimal in low visibility conditions and we know Tesla is pursuing "camera-only". So, if you just have cameras and cameras don't work well in certain conditions, then it is logical to conclude that Tesla will have issues in certain conditions. And if Tesla has issues in certain conditions, then it is logical to conclude that they probably won't be able to remove driver supervision for all conditions.
 
We know cameras are suboptimal in low visibility conditions
I mean, we have video proof that Tesla cameras see better at night than humans. So you need to start qualifying your "suboptimal" and/or "low visibility" claims.

Do you really want to be barreling through fog at 70mph just because "radar can see"?
One more note: just because the NNs on our non-beta-FSD cars report low visibility in rain or direct sunlight does not mean that the updated FSD NNs are not trained specifically for those scenarios.
 
Walmart invests in Cruise:
  • Walmart is investing in Cruise, GM’s majority-owned self-driving vehicle subsidiary, as part of a new $2.75 billion funding round for the company.
  • The decision to invest comes about five months after the companies started working on a pilot program to use Cruise self-driving vehicles for deliveries in Scottsdale, Arizona.
  • The investment round by Cruise was initially announced in January at $2 billion.

 
Here is a graphic that illustrates Waymo's sensor fusion approach. You can see that they use the map data for localization and routing (navigation), and they also fuse the map data and all the sensor data (camera, lidar, radar) into a single perception model. That model feeds the planner, which then determines the vehicle's controls (steering and braking).

[Diagram: Waymo's sensor fusion approach]


Mobileye's sensor fusion is more complicated. They feed the camera data into one world model, and the radar and lidar data into a separate world model. They generate a plan independently from each, and then fuse the two plans into a single driving policy and, finally, the controls.

[Diagram: Mobileye's dual world models with late fusion]


So the big difference is when the fusion happens. Waymo fuses all the perception data up front and then runs a single planner. Mobileye runs a separate planner per sensor suite and then fuses the plans at the end.
I presume "True Redundancy" is just their marketing blurb. There's only redundancy if the vehicle can operate autonomously if one of the two system has failed.
 
I mean, we have video proof that Tesla cameras see better at night than humans. So you need to start qualifying your "suboptimal" and/or "low visibility" claims.

Do you really want to be barreling through fog at 70mph just because "radar can see"?
One more note: just because the NNs on our non-beta-FSD cars report low visibility in rain or direct sunlight does not mean that the updated FSD NNs are not trained specifically for those scenarios.

I would not want to barrel through fog at 70 mph regardless. But radar sees better through fog than cameras. So I would want radar, yes. I would definitely not barrel through fog at 70 mph with just cameras.
 
I presume "True Redundancy" is just their marketing blurb. There's only redundancy if the vehicle can operate autonomously if one of the two system has failed.

Yes, "true redundancy" is a bit of a marketing blurb. But except for reading traffic lights, the car can operate autonomously if one of the two systems has failed. So basically for all driving that does not require reading traffic lights, the Mobileye car will have true redundancy.
 
Ford's BlueCruise - Level 2

I'm having a tough time finding out the details of the Ford autonomous hardware. Does anyone have a link to the actual system component specifications?

What I've found is that it has a short-range front camera in the grille, a single front-facing camera in the rear-view mirror for self-driving, a rear camera, and side cameras in the side mirrors. They call it 360-degree, but I don't think it looks very far beyond the car to the sides. There appear to be 12 ultrasonic sensors around the car, a forward radar (range?), and a rear radar (15 yards). Radar in the rear quarter panels is also mentioned (I wonder if they are confusing ultrasonics with radar?). And there's the driver-facing IR camera for driver monitoring.

Cameras are stitched for the 360 view close to the car for parking. I don't think that is used for the self-driving.

There does not seem to be any forward-looking side view camera. This would be a problem for detecting traffic from the sides, so I don't think they could do Autosteer for City Streets.

It is probably a Level 2 highway system only. Not sure how good it is at detecting fast-moving traffic to the rear for safe lane changes.
Mobileye seems to be the partner for their system, with EyeQ doing the sensing. I'm not sure whether Ford or Mobileye is ultimately responsible for the programming of the system, or for its capabilities.
 
Yes, "true redundancy" is a bit of a marketing blurb. But except for reading traffic lights, the car can operate autonomously if one of the two systems has failed. So basically for all driving that does not require reading traffic lights, the Mobileye car will have true redundancy.

There's no real redundancy if it can't operate autonomously with lidar and radar alone. And if it can operate with cameras alone, they've already solved vision. Hoorah!
 
There's no real redundancy if it can't operate autonomously with lidar and radar alone. And if it can operate with cameras alone, they've already solved vision. Hoorah!

Radar/lidar can operate the car autonomously for everything except reading traffic lights. So radar/lidar do offer real redundancy for like 99% of driving.

Also, Mobileye's point is that "solving vision" only gets you an MTBF of approximately 10,000 hours, and Mobileye is not happy removing driver supervision at an MTBF of 10,000 hours. They want a higher MTBF before they remove driver supervision, which is why they add a redundant lidar/radar system. With a radar/lidar system that can operate the car autonomously for something like 99% of driving, they are able to push the MTBF much higher, making autonomous driving much safer.
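
Just to illustrate the reliability argument with a rough back-of-envelope (my own numbers, and it assumes the two subsystems fail independently, which is the debatable part):

Code:
# Back-of-envelope on why a redundant subsystem raises MTBF (illustrative only).
mtbf_vision = 10_000.0       # hours between failures, camera-only subsystem
mtbf_radar_lidar = 10_000.0  # hours between failures, radar/lidar subsystem

p_vision = 1.0 / mtbf_vision        # chance of the vision stack failing in a given hour
p_rl = 1.0 / mtbf_radar_lidar       # same for the radar/lidar stack

p_both = p_vision * p_rl            # both failing in the same hour (assuming independence)
print(f"Combined MTBF ~ {1.0 / p_both:,.0f} hours")  # ~100,000,000 hours

The exact numbers don't matter; the point is that two independent, individually imperfect systems fail together far less often than either fails alone.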
 
Radar/lidar can operate the car autonomously for everything except reading traffic lights. So radar/lidar do offer real redundancy for like 99% of driving.

Also, Mobileye's point is that "solving vision" only gets you an MTBF of approximately 10,000 hours, and Mobileye is not happy removing driver supervision at an MTBF of 10,000 hours. They want a higher MTBF before they remove driver supervision, which is why they add a redundant lidar/radar system. With a radar/lidar system that can operate the car autonomously for something like 99% of driving, they are able to push the MTBF much higher, making autonomous driving much safer.

If it can't handle traffic lights, it can't handle most driving. Most driving has traffic lights. I have 8 sets of traffic lights on my commute, for example.

It's not redundancy, it's dependency.
 
If it can't handle traffic lights, it can't handle most driving. Most driving has traffic lights. I have 8 sets of traffic lights on my commute, for example.

It's not redundancy, it's dependency.

Highway driving does not have traffic lights.

But you are missing the point: the radar/lidar system offers redundancy for all the driving tasks other than traffic lights.

By having a redundant system for all the other driving tasks, it still improves the overall reliability of the FSD. For example, the radar/lidar offers redundancy for making unprotected left turns, for detecting pedestrians and other vehicles, etc. So the radar/lidar increases the reliability of those tasks, which improves the reliability of the system as a whole.

Basically, adding radar/lidar is not about doing all the driving without cameras; it can handle a lot of driving tasks on its own, and by offering redundancy for those tasks it improves the overall reliability!
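
As a simple worked example of that last point (again my own made-up numbers): even if the radar/lidar stack only backs up, say, 99% of the camera stack's failure modes (everything except traffic lights and the like), the overall failure rate still drops by roughly two orders of magnitude.

Code:
# Illustrative arithmetic for partial redundancy (made-up numbers).
cam_failure_rate = 1.0 / 10_000  # camera-stack failures per hour of driving
coverage = 0.99                  # fraction of those failure modes radar/lidar can back up
rl_failure_prob = 1.0 / 10_000   # chance radar/lidar also fails on a covered task

covered = cam_failure_rate * coverage * rl_failure_prob  # needs both stacks to fail
uncovered = cam_failure_rate * (1 - coverage)            # cameras are on their own here
overall = covered + uncovered

print(f"Overall failure rate: {overall:.2e} per hour")  # ~1.01e-06
print(f"Effective MTBF: {1.0 / overall:,.0f} hours")    # ~990,000 hours vs 10,000 without redundancy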