
Autonomous Car Progress

The Cruise saw the rear end but could not see the front end; it made a prediction of how the rear end would move but got it a bit wrong, so it was not able to brake in time before hitting the rear.
Not what happened, at all. It was not seeing (effectively - meaning it presumably sensed it but ignored it) the rear end, at least not until too late. And it was not predicting movement of the rear of the bus.

“In this case, the AV’s view of the bus’s front section became fully blocked as the bus pulled out in front of the AV. Since the AV had previously seen the front section and recognized that the bus could bend, it predicted that the bus would move as connected sections with the rear section following the predicted path of the front section. This caused an error where the AV reacted based on the predicted actions of the front end of the bus (which it could no longer see), rather than the actual actions of the rear section of the bus.”


I would simply not have run into the back of the bus. It does seem odd that they don't have a failsafe "must emergency stop" triggered by solid sensor detections of objects within the current stopping distance - this would have zero impact on ride smoothness. They probably do, but it sounds like in this case all of these detections were masked and ignored (thus the result). At some point they were unmasked - annoyingly, they don't explain why - resulting in some braking. It sounds like it could have been time-based: after x seconds of not seeing the front of the bus, they would start using the position of the rear of the bus for planning (rather than ignoring it and discarding all sensor detections associated with it).

Basically it could see the back of the bus perfectly and just did not use it (pretended it was not there). I guess you could argue it could see it but I interpret that as not seeing it (consistent with humans, who often see objects but then ignore them and hit them - and this is usually described as not seeing them).
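For what it's worth, here is a minimal sketch (purely hypothetical, not Cruise's actual code) of the kind of failsafe described above: a check that fires off raw detections and the current stopping distance alone, so nothing masked by the prediction layer can suppress it. The class, function names and thresholds are all illustrative assumptions.

```python
# Hypothetical sketch only: a last-resort "must emergency stop" check that looks
# at raw sensor detections and the current stopping distance, independent of the
# prediction/planning layer. All names, units and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float          # range to the object along our path, in metres
    closing_speed_mps: float   # positive = we are closing on the object

def stopping_distance(ego_speed_mps: float,
                      max_decel_mps2: float = 6.0,
                      latency_s: float = 0.3) -> float:
    """Distance needed to stop from the current speed, with an assumed
    system latency and maximum emergency deceleration."""
    return ego_speed_mps * latency_s + ego_speed_mps ** 2 / (2.0 * max_decel_mps2)

def must_emergency_stop(ego_speed_mps: float,
                        raw_detections: list[Detection],
                        margin_m: float = 2.0) -> bool:
    """True if ANY raw detection we are closing on (even one the tracker has
    masked or re-associated) lies inside the current stopping distance."""
    limit = stopping_distance(ego_speed_mps) + margin_m
    return any(d.closing_speed_mps > 0.0 and d.distance_m <= limit
               for d in raw_detections)

# Example: ego at ~11 m/s (about 25 mph), an "ignored" bus tail 12 m ahead.
if must_emergency_stop(11.0, [Detection(distance_m=12.0, closing_speed_mps=4.0)]):
    print("Emergency brake: object inside stopping distance")
```

The point of keeping such a check this dumb is that it ignores the tracker's object associations entirely; whatever masked the rear of the bus upstream would not matter here.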
 
  • Helpful
Reactions: diplomat33
Not what happened, at all. It was not seeing (effectively - meaning it presumably sensed it but ignored it) the rear end, at least not until too late. And it was not predicting movement of the rear of the bus.

Thanks for the correction. I paraphrased some details wrong, sorry. Yes, it made a prediction based on the front, not rear. It saw the rear but ignored it. And yes, Brad Templeton agrees that Cruise should have a "sanity check" where if perception says there is something there, the car should stop.
 
  • Like
Reactions: AlanSubie4Life
Thanks for the correction. I paraphrased some details wrong, sorry. Yes, it made a prediction based on the front, not rear. It saw the rear but ignored it. And yes, Brad Templeton agrees that Cruise should have a "sanity check" where if perception says there is something there, the car should stop.
Reminds me of the Uber accident. They just ignored sensor data as "noisy" (IIRC).
 
AV safety expert, Brad Templeton, has a good article on the Cruise recall to fix the bus accident. Here is his summary of what Cruise did well and not well:

Summary

What Cruise did well:
  1. Immediate acknowledgement the crash had taken place
  2. Quick shutdown of operations (which were non-public at the time)
  3. Quickly convened action team and engaged with regulators
  4. Determined problem unlikely to recur before allowing riders
  5. Deployed fix within two days
  6. Decent transparency after two weeks — it takes a lot to admit a bug as embarrassing as this
What they could improve:
  1. Slow to acknowledge fault and to state public was not at risk due to fault and why
  2. Not detecting a problem of this nature in testing
  3. Inadequate sanity checks to prevent crash, though some checks reduced the severity of the crash
  4. No transparency yet on sanity checks to prevent different problems of this magnitude
  5. While the immediate cause is fixed, it’s not yet clear if the broader problem is fixed. Ideally, their car should be updated so that even if it still made the mistake with the bus, it would not hit it any more.
Source: GM’s Cruise Robotaxi vs Bus Crash Caused By Confusion Over Articulated Bus; They Say It’s Fixed
"For reasons not explained, the system disregarded the sensor measurements of the back of the bus which showed it was slowing."

"Clearly it was a significant error for their system to attempt to predict the path of the bus based on the front segment that it couldn’t see, ignoring the abundant data from the rear of the bus which clearly showed it slowing. This is a very “AI” sort of mistake since it makes no sense to a human."
 
Driverless bus in China.


00:00 -- Start
00:35 -- Riding the driverless bus (Shengwu Dao Diecuiyuan Station)
01:26 -- Experience of riding the driverless bus
03:48 -- Driverless bus route 2 (Canton Tower Station)
04:50 -- Driverless bus visualization dashboard
07:10 -- Yuejiang West Road headquarters building
09:59 -- Business center in the building.
 
  • Informative
Reactions: scottf200
Sunday morning, no traffic/pedestrian interaction, looks like it doesn't exceed 25 MPH, fixed lane. Looks like a Disney parking lot shuttle. And walking would perhaps be faster.

Not sure there's anything to see here or get excited about. I would not pay for that.
I disagree. City buses rarely average 30 mph and commonly they average even less. This type of multi-passenger people mover makes more sense to me from an economic perspective than does a robotaxi.
 
I disagree. City buses rarely average 30 mph and commonly they average even less. This type of multi-passenger people mover makes more sense to me from an economic perspective than does a robotaxi.
And it makes sense from a congestion point of view. A robotaxi takes a lot of space per passenger compared to a city bus.

In my city, we have an abundance of bus lines. The problem is a 7-minute waiting time in rush hours, 15 minutes at other times. In rush hours, the buses (big articulated ones) are full, and the later bus often catches up to the one in front because the number of passengers exiting and entering slows down one but not the other.

So smaller autonomous buses at 5-minute intervals could be a great thing: cheaper but more convenient, with less waiting time.

BTW I usually cycle faster than the bus in the city because of the stops, so the average speed of the bus including stops is less than 25 km/h (the e-bike limit). A 5 km bike ride is around 17 minutes, walking is 1+ hours, and the bus is 25 minutes.
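As a quick back-of-the-envelope check of those figures (taking "walking is 1+ hours" as roughly an hour, which is an assumption), the implied average speeds are:

```python
# Rough average speeds for the 5 km trip times quoted above.
# Walking time is assumed to be ~60 minutes ("1+ hours" in the post).
trip_km = 5.0
times_min = {"e-bike": 17, "bus incl. stops": 25, "walking": 60}
for mode, minutes in times_min.items():
    print(f"{mode}: {trip_km / (minutes / 60):.1f} km/h")
# e-bike ~17.6 km/h, bus ~12.0 km/h, walking ~5.0 km/h
```

So the bus, including stops, averages roughly half the e-bike speed cap.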
 
Good article by Brad Templeton about the stalls that we've seen from Waymo and Cruise.

Some key points:

The stalls are relatively rare and minor:

Considering the volume of traffic, having just 12 incidents in 6 months is an astonishingly good record, much better than many were led to believe by the anecdotal reports. Data are not provided on how many such events were caused by human drivers, double parkers and car breakdowns, but it seems likely the autonomous vehicles are causing trivial problems in comparison, as might be expected because while they probably drove a million miles in that period between the two companies, humans drive much more.

The stalls are a small price to pay for the long term safety benefits:

Because robotaxis offer the promise — indeed all the companies literally promise it — of being safer in the long term, reducing road risk significantly once they are at scale, it makes sense to tolerate some traffic disruption in order to work for that remarkable goal. As long as humans do most of the driving, there will continue to be carnage on our streets, and the chance to reduce that is worth a great deal of cost, both in money and traffic disruption.
From a standpoint of reducing risk, crashes and traffic disruption, it would be foolish to the extreme to slow down the deployment of this technology because of small issues during its pilot phases. We would rob our future — and literally rob some people of their futures — to avoid minor issues today.

AVs have the benefit of fleet learning:

Society already makes that decision when it allows student drivers on the road. Students and freshly minted licensed drivers are more dangerous and cause more traffic disruption. We accept them because it has been the only way to get them to learn and improve and turn into safer, better and more mature drivers. With robots the benefit is much stronger. Letting a teen student on the road helps make that one driver better. The entire robot fleet is improved from anything learned from one robot. If a robotaxi blocks a street or has a fender-bender, the entire fleet will not make that mistake again.

Waymo and Cruise should still do better with reducing stalls:

This does not mean that Cruise and Waymo should not do better. Cruise has had many more reports than Waymo and needs more improvement. Cruise has also had more crash events, including at least one with injuries, one with wires, and the one 2 weeks ago with a bus.

Waymo and Cruise should consider using "remote driving" in order to clear up stalls faster and reduce disruption:

The companies need to get better at that equation. Both decided they will not use “remote driving” where the remote operator has a console with a wheel and pedals and drives the car like a radio controlled car. That depends on very good quality data connections and still has risks that a rescue driver does not. Nonetheless, it has the advantage of something that can be done without delay. The companies have to consider it, or get more reliable with their remote assist. Even if these incidents are, as Muni reports, extremely rare, they still get a lot of attention that is magnified and reduces public confidence.

 
Good article by Brad Templeton about the stalls that we've seen from Waymo and Cruise.
I think it's a bit misleading to suggest this is low, given that the 12 covers only a subset of incidents captured directly on transit cameras, and there is no accompanying mileage figure to compare against non-AV cases. That seems like a very positive spin on the stats.

There are way more incidents that either go unreported or that we only find out about from social media (not Muni transit cameras). For example, in the multiple halting incidents on 19th Ave, the resulting delay can impact the 28 bus line that runs through there, but that bus would not necessarily be able to determine that an AV blocking the way was the cause, given that the traffic may be backed up much further. The source article the Forbes article got its numbers from actually gave a total of 92 incidents, including incidents reported from other sources like social media.
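Just to put the two counts next to each other (a rough back-of-the-envelope only, using the "probably a million miles between the two companies" figure Templeton himself cites for the period):

```python
# Back-of-the-envelope stall rates over the 6-month period.
# Mileage is the rough ~1M combined driverless miles cited in the article;
# neither the mileage nor the counts are official statistics.
miles = 1_000_000
counts = {"transit-camera reports (Forbes figure)": 12,
          "all sources incl. social media (source article)": 92}
for label, n in counts.items():
    print(f"{label}: {n} stalls ~ {n / miles * 100_000:.1f} per 100k miles")
# 12 -> ~1.2 per 100k miles; 92 -> ~9.2 per 100k miles
```

Which count and which baseline you accept makes nearly an order-of-magnitude difference, which is exactly the dispute here.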

 
  • Like
Reactions: Doggydogworld
The stalls are relatively rare and minor:
That is an extremely bad interpretation. As usual Brad is extremely biased. You call independent YT testers "shills" but people like Brad are apparently "independent".

There should be zero instances of such stuck vehicles given the low number of miles in SF. It is a bad failure - not because the vehicles won't find some situations difficult - but because L4 calls for safe handling of such situations by parking to the side of the road safely. So, it is an egregious failure. Neither rare, nor minor.
 
  • Disagree
Reactions: diplomat33
That is an extremely bad interpretation. As usual Brad is extremely biased. You call independent YT testers "shills" but people like Brad are apparently "independent".

There should be zero instances of such stuck vehicles given the low number of miles in SF. It is a bad failure - not because the vehicles won't find some situations difficult - but because L4 calls for safe handling of such situations by parking to the side of the road safely. So, it is an egregious failure. Neither rare, nor minor.
To be fair, in a bunch of these incidents, the car eventually does pull to the side of the road. It just takes its merry time to do so.
 
That is an extremely bad interpretation. As usual Brad is extremely biased. You call independent YT testers "shills" but people like Brad are apparently "independent".

I call YT testers shills because they are. For example, Whole Mars is a Tesla shareholder who puts out tweets and videos just to promote Tesla and pump TSLA stock. So yes, they are shills. Brad is a bona fide AV expert who has worked in the AV field for years.

There should be zero instances of such stuck vehicles given the low number of miles in SF.

Cruise has done over 1M driverless miles in SF. That is not exactly a "low number of miles".

It is a bad failure - not because the vehicles won't find some situations difficult - but because L4 calls for safe handling of such situations by parking to the side of the road safely. So, it is an egregious failure. Neither rare, nor minor.

I don't disagree that some stalls are very bad. But L4 requires achieving a minimal risk condition, not necessarily pulling over. If it can pull over, it should, but there will be instances where the car cannot pull over before the issue happens.
 
  • Helpful
  • Disagree
Reactions: EVNow and scottf200
I call YT testers shills because they are. For example, Whole Mars is a Tesla shareholder who puts out tweets and videos just to promote Tesla and pump TSLA stock. So yes, they are shills.
I don't particularly follow him, but in the few videos I've seen he has been pretty critical of FSD in some situations.

TMC logic is that if a person says positive things about Tesla they must be a shill, and if they put their money where their mouth is by buying stock they’re a double shill (see also: Sandy Munro).

Nobody has articulated any theory that explains how many hours of FSD videos a random guy in a car (with 23.5k followers - just looked it up) has to produce to shift the needle on their holding in a company with 3B shares in circulation. I imagine working at McDonald's would have a more reliable effect.
 
  • Like
Reactions: eli_
I call YT testers shills because they are.
You're focusing on Omar, who is definitely a shill. None of the others are anything but dedicated videographers. I challenge you to call Chuck Cook a "shill" to his face. That's really an insult, given how much work he has put into both his test drives and his videos.

As to Brad Templeton, he believes that Waymo and Cruise have the correct approach to autonomous driving and Tesla does not. I value his opinion at the same level as the opinions of other people who post about AD, both here and elsewhere. At the rate that the three companies are going, they will never achieve Level 5 even for NA, let alone the entire world. Tesla is limited by its hardware. Cruise and Waymo are limited by the economics required to eliminate geofences.
 
You're focusing on Omar, who is definitely a shill. None of the others are anything but dedicated videographers. I challenge you to call Chuck Cook a "shill" to his face. That's really an insult, given how much work he has put into both his test drives and his videos.

My apologies. Let me clarify. I do not think that all YT testers are shills. No, I would never call Chuck Cook a shill. Kim Paquette is also not a shill. I am specifically talking about Omar. He is a shill. And there are a couple of others, like "ValueAnalyst1" and James Douma, who are shills. It's really the TSLA bulls on Twitter who are shills. The vast majority of YT testers are not shills.
 
As to Brad Templeton, he believes that Waymo and Cruise have the correct approach to autonomous driving and Tesla does not.
I hate these comparisons because they're not even building the same thing. If Tesla was trying to sell an actual robotaxi they'd look a lot more like Waymo and have geofences like everyone else to guarantee reliability. Likewise if Waymo had to function as a driver-assistance system in a mass market car, you couldn't sell it with hard dependencies on HD Maps and geofences, no one would buy it. They're totally different products with different goals, and the technical limitations just seem like obvious side-effects based on what they're trying to do and the time frames involved. That said, Waymo's tech stack is way more mature, as you'd expect.
 
Samsung Electronics will produce EyeQ chips for Mobileye:

Samsung Electronics' Foundry Division has decided to produce some quantities of the EyeQ product group, the flagship semiconductor of Mobileye, according to industry sources on April 2. EyeQ is installed in a car in system-on-chip (SoC) form and supports ADAS and autonomous driving technologies. Mobileye currently sells the EyeQ 4, 5, and 6 series and Ultra models. Samsung Electronics is known to have won orders for the EyeQ 5 series and lower, produced on 7- to 28-nanometer processes.

 
  • Informative
Reactions: Bitdepth