Cruise

I doubt that. Tesla may be able to have more data, but Waymo has shown, in my opinion, that this is as much or more of an engineering problem than anything else...

Agreed. Tesla was so overwhelmed with data that it had to drop radar and ultrasonic sensors.

What counts is the software: what you do with the data.

The article clearly described that even when Cruise successfully detected the dummy, it still decided to collide with it at 28 mph.
 

What I don't understand is how they could fail to detect any obstacle at all. Unless their LIDAR's resolution is so poor that the object/VRU literally fits between the points of the point cloud?

So many people have argued that LIDAR is the silver bullet for autonomy, but given Cruise's issues, it certainly doesn't seem like LIDAR alone can deliver that reliability.
 

Nobody argues that lidar alone is a silver bullet. The argument is that lidar, combined with other sensors, can make your perception more reliable. Used properly, lidar and radar data complement the camera data, especially in conditions where cameras are less reliable, like rain, fog, snow and glare. For example, if you are driving in fog and your camera data is less reliable, radar data that is unaffected by fog will be helpful. If you are driving in the dark, or there is glare in the cameras from the sun or oncoming headlights, lidar data that is unaffected by darkness or glare will be helpful. It is really in those cases where cameras are less reliable that people argue for lidar and/or radar. In normal conditions, camera data will be sufficient, but you probably don't want the AV to drive only in normal conditions; you want it to drive safely under a wide range of conditions. So proponents of lidar and radar argue that it makes sense to have more complete data from a variety of sensors to ensure reliability under different driving conditions.
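To make that concrete, here is a rough sketch (purely illustrative numbers, weights and function names, not any company's actual stack) of confidence-weighted late fusion, where the camera's vote is down-weighted in fog and the radar/lidar returns carry the detection:

```python
# Minimal sketch of confidence-weighted late fusion (illustrative only).
# Each sensor reports a detection confidence for the same object; per-sensor
# weights drop when that sensor's modality is degraded (e.g. camera in fog).

def fuse_confidences(detections, weights):
    """Weighted average of per-sensor confidences for one object."""
    total_w = sum(weights[s] for s in detections)
    return sum(weights[s] * c for s, c in detections.items()) / total_w

# Clear weather: camera is trusted most.
clear_weights = {"camera": 1.0, "radar": 0.6, "lidar": 0.8}
# Fog: camera weight reduced, radar unaffected, lidar partially degraded.
fog_weights = {"camera": 0.3, "radar": 0.6, "lidar": 0.5}

# Hypothetical confidences for a pedestrian ahead, with the camera struggling.
detections = {"camera": 0.25, "radar": 0.85, "lidar": 0.7}

print(fuse_confidences(detections, clear_weights))  # ~0.55, camera drags it down
print(fuse_confidences(detections, fog_weights))    # ~0.67, radar/lidar keep it up
```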

But as we've discussed before, sensors just provide data; it's what you do with that data that matters. The challenge is prediction and planning, not perception itself. You can detect objects fairly easily; the real challenge is to understand what the object is, how it will behave, and how to respond to it. The article mentions that Cruise detected a toddler-size dummy but still hit it, so perception worked but prediction/planning failed. We also saw that with the articulated-bus collision: the Cruise AV detected the bus but failed to predict how it would move and collided with it. My point is that Cruise seemed to have prediction/planning issues beyond just perception. The issue was not really that Cruise could not detect the objects, but that it was not responding properly to what perception was seeing.
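As a toy illustration of "perception worked but planning failed" (hypothetical logic, not Cruise's actual code): if the planner only has a response for object classes it expects, a detected but unhandled object falls straight through:

```python
# Toy pipeline sketch: detection succeeds, but the planner only reacts to
# object classes it has an explicit rule for (hypothetical logic).

DETECTIONS = [{"cls": "unknown_small_object", "distance_m": 12.0}]

RESPONSES = {
    "pedestrian": "brake",
    "vehicle": "yield",
    "cyclist": "brake",
}

def plan(detections, speed_mph=28):
    for det in detections:
        action = RESPONSES.get(det["cls"])
        if action is None:
            # Perception saw the object, but planning has no rule for it,
            # so the car keeps its speed: the failure mode described above.
            continue
        return action
    return f"maintain {speed_mph} mph"

print(plan(DETECTIONS))  # -> "maintain 28 mph" despite a detection at 12 m
```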
 
LiDARs are great for retrieving high-resolution depth information, which can aid in sensor/data fusion, but I don't think there is yet a common standard across automotive LiDAR that addresses LiDAR's breaking points; most LiDAR companies establish and publish their own. In fog, light scattering still occurs and occludes background objects, so vision through fog is not as penetrating as we'd like to think, especially with the types of LiDAR used in automotive. Another case that has been largely untested is cross-interference between LiDARs once they are used on many vehicles; since LiDAR is a send/receive sensor, cross-talk may become more apparent in the future.

The final thing that may be of use in this discussion is the case of pairing low-reflectance targets (children) with high-reflectance targets (cars/signs) in LiDAR. The low-reflectance target's returns (all the LiDAR points on the child) may actually disappear because of the high-reflectance target. If you are doing sensor fusion and the LiDAR points are gone, you may not have a detection there.
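A crude way to picture that (made-up numbers, not any real sensor's processing): if a filtering step drops returns below an intensity floor, or keeps only the strongest return, the child's weak points can silently vanish next to a retroreflective sign:

```python
# Crude sketch: returns with intensities; a simple noise-floor filter can
# delete the low-reflectance target entirely (made-up numbers).

beam_returns = [
    {"target": "road_sign", "intensity": 0.95, "range_m": 14.0},
    {"target": "child",     "intensity": 0.08, "range_m": 13.5},
]

INTENSITY_FLOOR = 0.10  # hypothetical noise filter

kept = [r for r in beam_returns if r["intensity"] >= INTENSITY_FLOOR]
print([r["target"] for r in kept])  # ['road_sign'], the child's points are gone
```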
 
The man to listen to is this guy:
 
My guess is the article is poorly written and they actually mean fail to classify rather than fail to detect. That's consistent with detecting the dummy but still hitting it. Uber's system detected Elaine Herzberg well before it hit and killed her, but the code kept re-classifying her, which screwed up their predictor and persistence logic (among other problems).
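A simplified sketch of why re-classification hurts (my own toy logic, not Uber's code): if the tracker resets an object's motion history whenever its class changes, the predictor never accumulates enough history to project a path:

```python
# Toy tracker sketch: class flips reset the motion history, so the predictor
# never gets enough points to extrapolate a crossing path (illustrative only).

class Track:
    def __init__(self):
        self.cls = None
        self.history = []  # past positions

    def update(self, cls, position):
        if cls != self.cls:
            # Naive persistence logic: treat a class change as a new object.
            self.cls = cls
            self.history = []
        self.history.append(position)

    def can_predict(self, min_points=3):
        return len(self.history) >= min_points

track = Track()
frames = [("vehicle", (30, 5)), ("unknown", (28, 4)), ("bicycle", (26, 3)),
          ("unknown", (24, 2)), ("pedestrian", (22, 1))]
for cls, pos in frames:
    track.update(cls, pos)
    print(cls, "predictable:", track.can_predict())
# Every frame flips the class, so the history never reaches three points.
```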
 

Most likely, when they say "not detected," they mean the code simply wasn't written to handle what to do with that data.

For example, in the 2016 fatal Autopilot accident, both the camera and the radar "failed to detect" the gigantic tractor-trailer. But actually, the data showed they detected something; the software was just written to deal with returns from the front or rear of a vehicle, not from the side.
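To illustrate that "the data was there, the code path wasn't" (a made-up sketch, not the actual Autopilot code): imagine a handler with branches only for the vehicle aspects it was written to expect:

```python
# Made-up sketch: the sensor returns an object, but the handler only has
# branches for the aspects it expects (front/rear), so a broadside crossing
# vehicle produces no response even though the detection exists in the data.

def handle_vehicle_return(aspect, range_m):
    if aspect == "rear":      # following a lead car
        return "adjust_following_distance"
    if aspect == "front":     # oncoming vehicle
        return "monitor"
    # No branch for "side": the detection is present but ignored.
    return None

print(handle_vehicle_return("side", 40.0))  # -> None, i.e. no response
```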
 
The man to listen to is this guy:
Drago is a 3D perception guru with tons of deep learning expertise, for example on Inception Net. His expertise is more mapping and modeling using deep learning, not sensors. And nobody on a podcast will showcase the pain points of their own system. Of course data is great, more data is good, nothing is wrong with more data. If you know the architecture of the Inception Network (a paper he contributed to), it applies filters at different scales in multiple parallel blocks to capture more information at those scales, which is very analogous to Waymo's approach of using more data.

For deep learning, we usually don't care what data we have, as long as it is labeled and comes from a wide array of sources (multimodal, like LiDAR and RGB). And this becomes the downfall of data-driven analysis: we care about the scores and metrics we aim to achieve (99% accurate), not so much about addressing the remaining problems, like hitting children in simulation and testing (the 1% failure case).

I don't disagree that having LiDAR can make your models better, as shown in many publications. Data-driven models depend on how much data you have, and your model will work best within that distribution of data. Whenever you hit fringe cases, your model will not work well. This is due to data imbalance: your model works well on the stuff it has seen and does not generalize to other cases. It is those special cases for which you will not have enough data to train your system, and where it fails.

The specific case I'm highlighting is how object reflectance and surrounding reflectance play a heavy role in making objects disappear in LiDAR; it is a real case where LiDAR has missing data. How many training instances of a child in front of a road sign or a car do you have, compared to the vast amount of ordinary autonomous driving data? So, do people know the failure points of LiDAR? Only when you know how your sensors fail will you know how best to train the models, or at least gather the data necessary to rectify the problem.
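To put rough (entirely hypothetical) numbers on the imbalance point: a model with 99% overall accuracy can still have terrible recall on a scenario that almost never appears in the training set:

```python
# Hypothetical counts showing why an aggregate metric hides rare-case failures.

total_frames = 1_000_000
child_in_front_of_sign = 50          # vanishingly rare scenario in the data
detected_in_rare_case = 10           # model only catches 10 of those 50

overall_accuracy = 0.99              # looks great on the aggregate metric
rare_case_recall = detected_in_rare_case / child_in_front_of_sign

print(f"overall accuracy: {overall_accuracy:.0%}")   # 99%
print(f"rare-case recall: {rare_case_recall:.0%}")   # 20%
```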

Prediction and planning face the same issue. Unless you understand the pain points in your detection models, you may not be able to address route planning; you just propagate error down the pipeline. How is the object being detected? Detected vs. classified? Is there some suppression algorithm to remove false positives across multiple frames? Was the detection too late? How does your prediction/planning respond to detections? Does it include responding to misdetections (phantom braking)? I'm just trying to figure out why this problem could exist in their architecture, rather than assuming everything in the system works perfectly.
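One of those questions in sketch form (my own toy example): a suppression rule that requires N consecutive confirmed frames before the planner may react trades false positives against reaction time:

```python
# Toy sketch: requiring N consecutive frames to confirm a detection reduces
# false positives (less phantom braking) but delays the first reaction.

CONFIRM_FRAMES = 5
FRAME_RATE_HZ = 10
SPEED_MPS = 12.5          # roughly 28 mph

# Frames where the detector fired on the object (True) vs missed/flickered.
frames = [True, False, True, True, True, True, True]

consecutive = 0
react_frame = None
for i, hit in enumerate(frames):
    consecutive = consecutive + 1 if hit else 0
    if consecutive >= CONFIRM_FRAMES:
        react_frame = i
        break

if react_frame is not None:
    delay_s = (react_frame + 1) / FRAME_RATE_HZ
    print(f"reacted after {delay_s:.1f} s, {delay_s * SPEED_MPS:.1f} m travelled")
else:
    print("never confirmed, no reaction at all")
```

The same knob that suppresses phantom braking is the one that can make a real detection arrive too late.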
 
I am convinced other modalities such as radar and lidar add tons of value, both in optimal conditions and especially at night and in fog/smoke/rain. AFAIK these sensors have much lower latency than ML computer vision on RGB cameras, and of course much better precision.
 
Update from Cruise:
  • Issued a voluntary software recall to address the issue of the car pulling over after a collision when that is not the desired response.
  • Announced a Chief Safety Officer (CSO) Role
  • Retained Third-Party Law Firm to Review October Incident
  • Appointed Exponent to Conduct Technical Root Cause Analysis
  • Safety Governance: We are taking a deep look at our overall safety approach and risk management structures to ensure we are built and positioned to enable continuous improvement.
  • Safety and Engineering Processes: We have advanced tools and processes in place and are committed to further upgrades wherever warranted. We are comprehensively reviewing all of our safety, testing, and validation processes and will add or modify processes where there is room to improve.
  • Internal & External Transparency: We understand that transparency is key to trust, especially in an emerging industry like ours. We are committed to improving how we communicate with the public, our customers, regulators, the media, and Cruise employees.
  • Community Engagement: We also understand the importance of collaborative partnerships. To realize the community benefits of autonomous driving, we need to do a better job engaging with our stakeholders and soliciting their feedback.

 

It is relatively new. It confirms what we already knew. Thanks for sharing.

I do think it highlights how misleading the CA DMV disengagement reports are, because Cruise reported to the CA DMV that they were only disengaging about every 80,000 miles, yet their driverless rides relied on human assistance every 4-5 miles.

It makes me think that "miles per assistance event" would be a better metric than disengagements. Just because you don't disengage a lot does not mean your AV is driving safely or reliably. A better metric would be how often a human needs to assist the AV. It would be more inclusive, since it would count everything from a safety-driver intervention to a remote assistance session.
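A back-of-the-envelope version of that, using the figures above (roughly one disengagement per 80,000 miles, assistance every 4-5 miles):

```python
# Back-of-the-envelope comparison of the two metrics, using the figures above.

total_driverless_miles = 80_000                      # illustrative mileage
disengagements = 1                                   # ~1 per 80,000 miles
remote_assistance_events = total_driverless_miles / 4.5   # every 4-5 miles

miles_per_disengagement = total_driverless_miles / disengagements
miles_per_assistance = total_driverless_miles / (disengagements + remote_assistance_events)

print(f"miles per disengagement:    {miles_per_disengagement:,.0f}")   # 80,000
print(f"miles per assistance event: {miles_per_assistance:,.1f}")      # ~4.5
```

Same fleet, same miles, and the two metrics differ by four orders of magnitude.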
 
During an employee meeting, Kyle confirmed there will be layoffs:

During the hour-long meeting, executives outlined damage control operations ranging from internal “listening sessions” to proposed public-facing websites that would detail collisions involving Cruise cars or allow people to post comments describing their interactions with the vehicles. And a humbled CEO Kyle Vogt confirmed to employees that the company will need to do layoffs.

Source: Crisis At Cruise: Robotaxi CEO Confirms Coming Layoffs Amid Scramble To Rebuild Public Trust
 
It's starting to look like Cruise's problems are systemic. Sadly, the employees least responsible will bear the brunt of the consequences.

Definitely. Cruise's problems were deep in the corporate culture. They were not transparent. They did not employ robust safety processes. They ignored safety concerns, etc. They tried to ignore the red flags (all the stalls, the protests, the accidents) and just kept scaling as if nothing was wrong. The accident with the woman dragged under the car was the final straw. And now they are trying to radically change the company before it is too late. And yes, the little guy will get hurt the most: Cruise will fire the little guys, but the big guys on top like Kyle will be safe in their jobs.
 
Ah, the golden parachute. 😒
 
It is relatively new. It confirms what we already knew. Thanks for sharing.

I do think it highlights how misleading the CA DMV disengagement reports are, because Cruise reported to the CA DMV that they were only disengaging about every 80,000 miles, yet their driverless rides relied on human assistance every 4-5 miles.

It makes me think that "miles per assistance event" would be a better metric than disengagements. Just because you don't disengage a lot does not mean your AV is driving safely or reliably. A better metric would be how often a human needs to assist the AV. It would be more inclusive, since it would count everything from a safety-driver intervention to a remote assistance session.
I read more after I posted. I was thinking that a lot of the time even FSD(b) on 11.4.7.3 can do better than a disengagement every 4-5 miles, and it does pause while it figures things out, but not for 3-5 seconds. In city driving it does relatively well.
Of course, many of those FSD disengagements are less "what should I do?" and more the human saying "hell no" ;)
I thought Cruise was further along than this.
 
The challenge is to build a system that knows when it needs to call home, rather than just beeping "oh *sugar*, I don't know what's happening, take over NOW!" as it drives into oncoming traffic, or not beeping at all and running a red light, like FSDb does.
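In sketch form (purely hypothetical thresholds and labels), the difference is escalating to remote assistance early while confidence is merely degraded, versus only alarming once confidence has already collapsed:

```python
# Hypothetical escalation policy: request remote assistance while there is
# still time, instead of demanding a takeover only after confidence collapses.

def escalation(confidence):
    if confidence >= 0.90:
        return "continue autonomously"
    if confidence >= 0.60:
        return "slow down and request remote assistance"   # call home early
    return "emergency stop / demand immediate takeover"     # the 'beep NOW' case

for c in (0.95, 0.72, 0.40):
    print(c, "->", escalation(c))
```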