Autonomous Car Progress

I like the Cruise point on how lidar has low range in rain or fog; they've found the silver lining...
While fog and rain lessen lidar range, we’ve found lidar to be a valuable tool to make localized measurements of fog density near each AV in real time.

So if robotaxis don't work out they can sell the fog density data to meteorologists.
 
An empirical fog density measure also tells them how reliable optical sensors are at a given distance. This type of data might be used to adjust the driving speed during poor visibility.

Extracting that information from purely optical sensors should be possible, but I'm having a hard time figuring out how to calculate a visibility threshold without first seeing a known object at various distances (another vehicle or a person), which means the estimate is delayed, creating a potential period of unsafe driving. Using the roadway surface wouldn't be sufficient unless you happened to know exactly what the surface looked like on a clear day, which goes well beyond the information even HD Maps carry.
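For what it's worth, the classical way to turn a contrast observation into a visibility number is Koschmieder's law: apparent contrast decays exponentially with distance, C(d) = C0·e^(-βd). Here's a minimal sketch (function names are made up); note it still assumes a target of known intrinsic contrast at a known range, which is exactly the chicken-and-egg problem above, and exactly what a lidar return sidesteps:

```python
import math

def extinction_coefficient(intrinsic_contrast: float,
                           apparent_contrast: float,
                           distance_m: float) -> float:
    """Invert Koschmieder's law C(d) = C0 * exp(-beta * d) for beta,
    given a target of known intrinsic contrast seen at a known range."""
    return math.log(intrinsic_contrast / apparent_contrast) / distance_m

def visibility_m(beta: float, threshold: float = 0.05) -> float:
    """Meteorological visibility: the distance at which contrast falls
    to the threshold (5% by convention, giving V ~= 3.0 / beta)."""
    return -math.log(threshold) / beta

# A target with intrinsic contrast 0.9, measured at 0.3 from 100 m out:
beta = extinction_coefficient(0.9, 0.3, 100.0)   # ~0.011 per metre
print(visibility_m(beta))                        # ~270 m of visibility
```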
 
Luckily you don't need to know fog density to drive a car. A coarse estimation of the weather state is enough.
 
Lack of comment from Cruise on what they are doing to address the problem is equally troubling... instead they put out something about how great Cruise is in bad weather.

Nothing released yet on why they hit the bus either.

Cruise just posted their findings from the accident. They say the root cause was a "unique error related to predicting the movement of articulated vehicles":

We quickly determined the bus’s behavior was reasonable and predictable. It pulled out into a lane of traffic from a bus stop and then came to a stop. Although our car did brake in response, it applied the brakes too late and rear-ended the bus at about 10 mph. We identified the root cause, which was a unique error related to predicting the movement of articulated vehicles (i.e. vehicles with two sections connected by a flexible joint, allowing them to bend in the middle) like the bus involved in this incident.

In this case, the AV’s view of the bus’s front section became fully blocked as the bus pulled out in front of the AV. Since the AV had previously seen the front section and recognized that the bus could bend, it predicted that the bus would move as connected sections with the rear section following the predicted path of the front section. This caused an error where the AV reacted based on the predicted actions of the front end of the bus (which it could no longer see), rather than the actual actions of the rear section of the bus. That is why the AV was slow to brake.

They are issuing a voluntary recall of the old software and have already rolled out a software update that fixes this issue. Here is some info on what they are doing to prevent this kind of accident from happening again:

Once we understood the root cause, our engineering teams immediately started creating a software update that would significantly improve performance near articulated vehicles. Once that work was completed, tested, and validated, our operations team rolled the change out to the fleet. This work was completed within two days of the incident occurring. The results from our testing indicated that this specific issue would not recur after the update.

Although we resolved the root cause in this particular incident, our teams continued to investigate the full extent to which this kind of issue occurred in the past, might occur under a variety of conditions in the future, and might be identified sooner. Our vehicles encounter buses like this one every day, but we’d never caused this kind of collision before. We needed to understand if it was more widespread or isolated to a very unique and rare set of initial conditions.

Our data and simulations showed that it was exceptionally rare. At the time of the incident, our AVs had driven over 1 million miles in fully driverless mode. We had no other collisions related to this issue, and extensive simulation showed that similar incidents were extremely unlikely to occur at all, even under very similar conditions. The collision occurred due to a unique combination of specific parameters such as the specific position of the vehicles when the AV approached the bus (with both sections of the bus visible initially, and then only one section), the AV’s speed, and the timing of the bus’s deceleration (within only a few seconds of the front section becoming occluded).

 
I'm still at a loss as to why there isn't a base "prime directive", if you will, that says: If all the lidar and radar is screaming there is a solid object directly in front of the vehicle, stop.
 
Because it is not that simple. Obviously you don't want to hit the object, but you need to know how the object is moving. If it is starting to accelerate away from you, then you don't want to stay stopped; you can start accelerating too, just not too fast. If it is slowing down, you need to brake enough to avoid a collision, but not so hard that it's uncomfortable for riders. So you need to predict what the object is doing so you can accelerate and brake as smoothly as possible. Robotaxis need to be safe but also need to avoid the sudden stops and hard acceleration or braking that would be uncomfortable for riders.
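For intuition, here's a toy version of that tradeoff (all numbers invented; nothing here is Cruise's planner). It assumes, simplistically, that the lead vehicle holds its current speed, and only escalates past a comfort limit when the kinematics leave no choice:

```python
COMFORT_DECEL = 2.0   # m/s^2 -- invented comfort ceiling for riders
MAX_DECEL = 8.0       # m/s^2 -- hard emergency braking

def choose_decel(ego_speed: float, lead_speed: float, gap_m: float,
                 standoff_m: float = 2.0) -> float:
    """Gentlest deceleration that still avoids the lead object, assuming
    the lead holds its current speed (a big simplification)."""
    closing = ego_speed - lead_speed
    if closing <= 0.0:
        return 0.0                       # lead is pulling away: no braking
    usable_gap = max(gap_m - standoff_m, 0.1)
    required = closing ** 2 / (2.0 * usable_gap)  # constant-decel kinematics
    if required <= COMFORT_DECEL:
        return required                  # smooth braking is enough
    return min(required, MAX_DECEL)      # comfort yields to safety
```

The interesting failures live in the `lead_speed` term: that's the prediction, and it's what Cruise says went wrong here.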
 
There is a point where it is exactly that simple. The vehicle should know its minimum stopping distance and begin braking immediately outside that threshold. It should always stop short of the obstruction, even if that's optimized to be within an inch of it but not yet touching.

Outside of that emergency-stop range there is an abundance of freedom to create the smoother ride you describe.

Edit: The core of train automation enabled by moving-block signalling is: 1) calculating the required distance to come to a full stop (speed, slope, traction concerns like leaves, etc.) and 2) knowing the distance to the train in front of you. Software ensures that the gap between trains is always larger than the stopping distance, in various ways (gradually slowing well in advance of an emergency stop). It's kinda astonishing GM missed this 40-year-old bit of well-defined knowledge on vehicle automation.
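As a rough sketch of that moving-block rule (the reaction time, deceleration, and margin below are illustrative placeholders, not figures from any rail standard or from Cruise):

```python
G = 9.81  # m/s^2

def stopping_distance_m(speed_mps: float, decel_mps2: float,
                        reaction_s: float = 0.5, grade: float = 0.0) -> float:
    """Distance covered during the reaction delay plus the braking phase,
    with a crude grade correction (positive grade = downhill, which
    weakens effective braking)."""
    effective_decel = decel_mps2 - G * grade
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * effective_decel)

def must_brake_now(gap_m: float, speed_mps: float,
                   decel_mps2: float = 3.0, margin_m: float = 5.0) -> bool:
    """Moving-block style invariant: the gap ahead must never be allowed
    to shrink below the stopping distance plus a margin."""
    return gap_m <= stopping_distance_m(speed_mps, decel_mps2) + margin_m
```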
 
Except you and the subject vehicle can turn left or right, which makes such a simple thing not work so well in the real world with cars, and thus the system still needs to rely on path prediction (which is what failed here) for smooth operation. It's a problem that has plagued AEB systems too. For trains, it's much easier given there is essentially only one possible path.

That's why people who think more sensors will solve this are way oversimplifying the problem; much of the issue is often in the software, not in perception.
 
Cruise just posted their findings from the accident. They say the root cause was a "unique error related to predicting the movement of articulated vehicles":
Now, the age-old problem of Cruise vehicles getting stuck (apparently at intersections, but I guess other places too). Have they ever commented on that?

I guess they commented & resolved quickly on this because it was a crash.
 
I think they've been more cagey on the stalls. The fact that this was an actual collision probably pushed them to address it.
 
I can see why, given that the comments in those Twitter threads have plenty of people suggesting approval from the locals before testing is allowed. This is never done for most vehicle testing (I see manufacturers testing non-autonomous preproduction vehicles here in SF all the time, and they certainly don't notify the neighborhood; same with all the AV vehicles). I don't think they want to entertain that possibility at all, or set up expectations that there should be further consequences for stalls.
 
Except you and the subject vehicle can turn left or right, which makes such a simple thing not work so well in the real world with cars, and thus the system still needs to rely on path prediction (which is what failed here) for smooth operation. It's a problem that has plagued AEB systems too. For trains, it's much easier given there is essentially only one possible path.
Correct. It is also not just that: you might have a vehicle behind you, and you can't just stop without considering whether the vehicle behind will hit you. It is more complex than simply braking as hard as possible based on minimum stopping distance.
That's why people who think more sensors will solve this are way oversimplifying the problem; much of the issue is often in the software, not in perception.
That is not the argument for multimodal sensor fusion. Sensors just provide data; what you do with the data is what ultimately matters, which is where robust software comes in. The argument is that a range-finding sensor provides a true depth and velocity measurement, not a guess; the modalities complement each other's shortcomings while also serving as redundancy, reducing the likelihood of an accident caused by a false positive/negative from a single sensor type. Nobody thinks more sensors will prevent all collisions, but they reduce the likelihood of one happening as a result of false sensor data.
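That redundancy argument is essentially textbook inverse-variance fusion. A minimal sketch (numbers made up; a real stack would run a full Kalman filter per track, not a one-shot average):

```python
def fuse_range(cam_m: float, cam_var: float,
               lidar_m: float, lidar_var: float) -> tuple[float, float]:
    """Combine two noisy estimates of the same range by inverse-variance
    weighting; the fused variance is smaller than either input's."""
    w_cam, w_lidar = 1.0 / cam_var, 1.0 / lidar_var
    fused = (w_cam * cam_m + w_lidar * lidar_m) / (w_cam + w_lidar)
    return fused, 1.0 / (w_cam + w_lidar)

# Camera depth guess: 42 m with sigma ~4 m. Lidar return: 39.5 m with
# sigma ~0.1 m. The fused estimate lands essentially on the lidar value,
# while a wild false reading from either sensor gets damped by the other.
print(fuse_range(42.0, 16.0, 39.5, 0.01))   # ~ (39.5, 0.01)
```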
 
How good is the path prediction at higher speeds? Say the Cruise is waiting to turn onto a high-speed road. A car is traveling down the road with its turn signal on. Will the Cruise wait to pull out until the car with the turn signal actually turns? Or will it just pull out, since the Cruise thinks the other car is going to turn?
 
Nobody here knows the answer to that question unless they work for Cruise.
Logically you would reason about the scene before you proceed.

1. Where is the car located on the road?
2. Is there a turn up ahead?
3. Is the car in the turning lane?
4. Are there traffic lights?
5. Who has the right of way?
6. How far is the car from the turn?
7. How fast is the car driving?
8. Is the car actually turning or is it just an erroneous use of a turning signal?
9. What is the likelihood of a collision if you cross?

You take all that into account before you proceed, just like how we as humans intuitively make these decisions (sometimes). A toy sketch of that kind of gating check is below.
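Here's what that checklist might look like as a gating function (every name and threshold below is invented for illustration; no claim this resembles Cruise's planner):

```python
from dataclasses import dataclass

@dataclass
class ObservedCar:
    """Invented track state for the oncoming car."""
    blinker_on: bool
    in_turn_lane: bool
    decelerating: bool
    time_to_conflict_s: float   # time until it reaches our crossing point

def safe_to_pull_out(car: ObservedCar, time_we_need_s: float) -> bool:
    # Plenty of time no matter what the car does: go.
    if car.time_to_conflict_s > 1.5 * time_we_need_s:
        return True
    # A blinker alone is weak evidence (item 8 above); only credit the
    # turn when lane position and braking corroborate it.
    return car.blinker_on and car.in_turn_lane and car.decelerating
```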
 
So the Cruise vehicle thought the bus was moving and it wasn't?

No. Cruise knew the bus was moving. But the bus was an articulated vehicle, meaning it was made of two sections connected by a joint that lets them move differently. The Cruise AV could see the rear section but not the front, so it predicted the rear section's movement from the front section's predicted path; it got that a bit wrong and was not able to brake in time before hitting the rear.
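For the curious, the standard kinematic trailer model captures that "rear follows the front" assumption: the rear section's heading relaxes toward the front section's. A one-step Euler sketch (variable names invented; Cruise hasn't published their actual model):

```python
import math

def step_rear_heading(front_heading: float, rear_heading: float,
                      speed_mps: float, hitch_to_axle_m: float,
                      dt: float) -> float:
    """One Euler step of the kinematic trailer model: the articulation
    angle (front heading minus rear heading) decays at a rate set by
    speed over the hitch-to-axle length."""
    articulation = front_heading - rear_heading
    return rear_heading + (speed_mps / hitch_to_axle_m) * math.sin(articulation) * dt
```

The failure mode Cruise describes falls out naturally: once the front section is occluded, `front_heading` has to come from a prediction, and the rear section's estimate inherits that error.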
 
Saw a driverless Cruise car get stuck earlier; I didn’t catch the start of it, but by the time I pulled out my camera it pulled off to the side maybe 15 seconds later.

It was stopped a couple car lengths behind the line at what was a green light when I first noticed it. I saw a few human drivers stuck behind it, honking; it used the left blinker for a bit (no one in that lane), then the right blinker (some cars moving there), then all hazard lights flashing, before finally moving.
 
I wonder how long it will take before motorists learn that honking at an AV does no good.

I wonder how long it will take before AVs learn that honking means that it needs to do something different.
 
AV safety expert Brad Templeton has a good article on the Cruise recall to fix the bus accident. Here is his summary of what Cruise did well and not so well:

Summary

What Cruise did well:
  1. Immediate acknowledgement the crash had taken place
  2. Quick shutdown of operations (which were non-public at the time)
  3. Quickly convened action team and engaged with regulators
  4. Determined problem unlikely to recur before allowing riders
  5. Deployed fix within two days
  6. Decent transparency after two weeks — it takes a lot to admit a bug as embarrassing as this
What they could improve:
  1. Slow to acknowledge fault and to state public was not at risk due to fault and why
  2. Not detecting a problem of this nature in testing
  3. Inadequate sanity checks to prevent crash, though some checks reduced the severity of the crash
  4. No transparency yet on sanity checks to prevent different problems of this magnitude
  5. While the immediate cause is fixed, it’s not yet clear if the broader problem is fixed. Ideally, their car should be updated so that even if it still made the mistake with the bus, it would not hit it any more.
Source: GM’s Cruise Robotaxi vs Bus Crash Caused By Confusion Over Articulated Bus; They Say It’s Fixed