Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Autonomous Car Progress

Cruise again....


We've seen this happen many times now. The Cruise AV enters the intersection and then, if it is unsure what to do, simply stops. Cruise seems to enter intersections without checking whether the exit path is clear, so if the path happens to be blocked, the car is already committed and stops in the middle of the intersection.

I would think Cruise's perception should be able to detect a blocked path sooner, so Cruise should change the behavior to not enter the intersection in the first place if the path is blocked. Stopping in the middle of an intersection every time the path is even partially blocked is not good driving, IMO.

Also, Cruise's responses are terrible. They are standard boilerplate PR: "thanks for letting us know, we are monitoring the situation and are working to resolve it." And yet the incidents keep happening!
 
Boilerplate PR? I'd love for them to use English with proper sentence structure and, you know, grammar?
 
Over the weekend I spoke to a former colleague of mine who works adjacent to Cruise. This was right around when they were "negotiating" the reduction in fleet, which came right after the EXPANSION in fleet and locations that passed two weeks ago.

What he said was very interesting, to me at least. He said what they had seen in the past two weeks was an increase in what they were calling a "flash mob request" (actually, he said attack): at one intersection, often one that is somewhat compromised by parklets or a one-way street, around some events there will sometimes be 15-20 (or more) requests for pickup, all at essentially the SAME time and all at the same street address, like a restaurant or bar. Now, this sort of thing CAN happen: a concert at a bar lets out (we did have Outside Lands here in SF two weekends ago, and people were all trying to go to the same place or come from the same clubs), a jazz club show ends, or a lot of dinner patrons finish up at once. But for Cruise, at least, it's normally 1 or 2 cars, not 10 requests all at the same time. And there is a waiting period: a car will wait a few minutes for the requester, then move on, move down the street, etc. But all the requests get made, they all show up close together in time, and then the cars all bunch.

The second one, which was even MORE interesting, is a different take on a similar theme: too many requests at the same location or street address, or at addresses ACROSS from one another, all at the same time.

But the kicker is that these multiple requests all occur within a few minutes of an open 911 or fire call that comes over an app (I guess some apps broadcast 911, police, and fire dispatches in major metros), requesting a pickup right about where the 911 call is coming from. So, essentially the same issue as above (too many cars in the same area), but with the added challenge of a number of emergency vehicles with lights and sirens, often stacked, double parked, parked at an angle, blocking the road entirely, or at least one side of it, etc.

You can imagine the challenge this could create. For me at least, I can imagine this would be a pretty reliable way, at least with the current driverless algorithms in place, for someone to create quite the congestion, dare I say CLFK, pretty easily.

I imagine that now, knowing this, it's probably something Cruise is at least TRYING to incorporate into their response, deployment, and delivery algorithms going forward.
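The pattern described above (a burst of pickup requests at one address inside a tight time window) is simple to flag at the dispatch level. Here is a toy sketch of such a detector; the request fields and thresholds are hypothetical, not anything Cruise actually uses:

```python
from collections import defaultdict
from datetime import timedelta

def flag_flash_mobs(requests, max_per_window=5, window=timedelta(minutes=3)):
    """Flag pickup addresses that receive an unusual burst of requests.

    `requests` is a list of (timestamp, address) tuples -- hypothetical
    fields standing in for whatever a real dispatch system records.
    Returns the set of addresses whose request count inside any sliding
    `window` exceeds `max_per_window`.
    """
    by_address = defaultdict(list)
    for ts, addr in requests:
        by_address[addr].append(ts)

    flagged = set()
    for addr, stamps in by_address.items():
        stamps.sort()
        start = 0
        for end in range(len(stamps)):
            # Shrink the window from the left until it spans <= `window`.
            while stamps[end] - stamps[start] > window:
                start += 1
            if end - start + 1 > max_per_window:
                flagged.add(addr)
                break
    return flagged
```

A dispatcher could then rate-limit flagged addresses, or cross-reference them against recent 911/fire dispatches before sending the whole fleet to one block.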
 
Do you have to use a smart device to hail a driverless Cruise? If so, do they know who did it? Can it be done anonymously?

If identifiable, I'd block those numbers for a year, and say so loudly. It might not stop this, but it could help, and it would cost hardly anything to do (or undo). Maybe offer a means of penance if challenged, including additional identification checks. Not sure; it could get Bud Lighted too. 🤔
 
Indeed, but anyone could create an account, then or beforehand, with a prepaid debit card or a single-use credit card, and burn the account after use. If thousands of bots can impair Twitter/X, or whatever it's called, someone could easily spoof interested riders for short-term pain.

I SEEM to recall something like this happening over a decade ago as Uber started to enter various markets: users, fake or otherwise, would call a bunch of Ubers to one location when the baseball game, basketball game, or concert was getting out ACROSS TOWN, so there was NO WAY the Ubers could get to the first destination, register it as a no-show pickup, and then wait for calls where the actual high volume (and would-be SURGE pricing) was going to be. In the end, I think it was nefarious actors from the taxi commission (not the TLC) who did the spoofing.
 
You are going to have to have a plan to transport thousands of people to and from an area at the same time, as well as transporting others around that area. Imagine Walmart the weekend before Christmas: hundreds of people going to and leaving Walmart at once, while the fleet also transports everyone else where they need to go. Or sports stadiums that seat thousands, with everyone entering and leaving at the same time, while there are still enough cars for people not at the game.
 
We already have these types of plans in many parts of the country, just for ride-share services: either dedicated areas in parking lots or garages with stalls ("your car is in stall 7"), or dedicated areas for things like rental car buses, offsite parking buses, etc.

I would put something like full autonomy, for SOME types of areas, in the same category. Or, IF they can see apparent congestion in a specific area at the network level, they could shift all pickups to a side street or something like that. A human, seeing that much congestion in front of a restaurant or club, would just pull forward, turn at the first street, and tell their party to meet them just down the street. Or "I'll go around and come back," or something like that.

That sort of situational adjustment, I imagine, is NOT built into the pickup/arrival algorithms for these types of autonomous vehicles yet.
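That kind of network-level adjustment could be a fairly small dispatch-side heuristic. A toy sketch, where every name and threshold is hypothetical:

```python
def choose_pickup_point(requested_point, active_cars_nearby, alternates,
                        max_cars_per_block=3):
    """Toy dispatch heuristic: if too many fleet vehicles are already
    converging on the requested block, move the pickup to the first
    alternate point (e.g. around the corner) that is below the limit.

    `active_cars_nearby` maps a point to the count of fleet vehicles
    currently headed there -- all of these names are hypothetical,
    not any actual Cruise or Waymo API.
    """
    candidates = [requested_point] + list(alternates)
    for point in candidates:
        if active_cars_nearby.get(point, 0) < max_cars_per_block:
            return point
    # Everything is saturated: fall back to the original request.
    return requested_point
```

The rider's app would then be told "meet your car just down the street," which is roughly what a human driver does on their own.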
 
And Tesla fans think that Tesla will actually divulge real data on FSD if it launches.
No, all you will ever get is a two-sentence "safety report."


It's funny, because some in this very thread were claiming that Waymo was hiding information and was against safety.
I wonder if they will make the same claim here? Of course not, as they believe Tesla releases way more safety data than Waymo.
 

I dug through the whole link chain. It's another game of telephone by journalists who are clueless about what they are reporting on. Here's the whole chain:

"Tesla directed the National Highway Traffic Safety Administration to redact information about whether driver-assistance software was being used by vehicles involved in crashes, The New Yorker reported as part of investigation into Elon Musk's relationship to the US government. "Tesla requested redaction of fields of the crash report based on a claim that those fields contained confidential business information," an NHTSA spokesperson told Insider in a statement. "
Tesla reportedly asked highway safety officials to redact information about whether driver-assistance software was in use during crashes

"In recent months, new crash numbers from the N.H.T.S.A., which were first reported by the Washington Post, have shown an uptick in accidents—and fatalities—involving Autopilot and Full Self-Driving. Tesla has been secretive about the specifics. A person at the N.H.T.S.A. told me that the company instructed the agency to redact specifics about whether driver-assistance software was in use during crashes."
Elon Musk’s Shadow Rule

"In the section of the NHTSA data specifying the software version, Tesla’s incidents read — in all capital letters — “redacted, may contain confidential business information.”"
https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/

They are talking about the NHTSA standing order, under which some fields are redacted, and the field the Washington Post report was talking about was the software version. So Business Insider's and the New Yorker's characterization was completely wrong! The redaction was not "about whether driver-assistance software was being used": if a crash is in the standing order data, by definition driver-assistance software was in use! These reporters are just clueless about what they are reading.

As a sanity check, I also looked through the data to see what was redacted. While it is true that Tesla redacted the software version, it is also true that Tesla is the only ADAS company that actually reported it to the NHTSA at all. All the other companies simply left that field blank!

For the other system-version field they redacted: aside from cases where it was left blank or filled with very generic names like L2, Distronic, or ProPilot (from which you can't even tell which specific version it is), plenty of companies also redacted the system version. For the ADAS data set (which Tesla was in), this includes vehicles from Aptiv, BMW, Daimler, Ford, and Subaru.
Standing General Order on Crash Reporting | NHTSA

For the ADS set, the system version was redacted by Apple, Argo AI, Aurora, Cruise, Daimler, EasyMile, Ford, Hyundai, Kodiak Robotics, Lucid, Mercedes, Motional, NVIDIA, Paccar, Robotic Research, and TORC Robotics.
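For anyone who wants to reproduce this kind of tally, the standing general order data is published as CSV files on the NHTSA site. A rough sketch of the check; note that the column names here are guesses and should be verified against the actual file headers, which also differ between the ADAS and ADS releases:

```python
import pandas as pd

# Hypothetical column names -- check the real NHTSA SGO CSV header.
REPORTER = "Reporting Entity"
VERSION = "ADAS/ADS Version"

def redaction_summary(csv_source):
    """Per-company counts of reports whose version field was redacted,
    left blank, or actually reported."""
    df = pd.read_csv(csv_source, dtype=str)

    def classify(value):
        if pd.isna(value) or not value.strip():
            return "blank"
        if "REDACTED" in value.upper():
            return "redacted"
        return "reported"

    df["version_status"] = df[VERSION].map(classify)
    return (df.groupby([REPORTER, "version_status"])
              .size()
              .unstack(fill_value=0))
```

Running this over the ADAS and ADS files separately would reproduce the blank-vs-redacted-vs-reported comparison described above.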

The NHTSA standing order data the article was talking about proves nothing about Tesla transparency vs. Waymo. For system version, Waymo only put 4th gen or 5th gen (which you can tell anyway from the vehicle model, which they can't redact). For the software version number they only put the major number (Version 5 for 4th gen; Version 7 or 8 for 5th gen). I doubt most manufacturers want to post specific version numbers when it is actually of significance (especially if it can give competitors clues about progress).

Zoox was the only major player that posted very specific system versions like 86.23.05.15, 90.23.01.22, or 144.23.05.17 (there were some smaller players with similar numbers, but they didn't have enough incident reports to tell whether it was a significant identifier). So they are the only ones who may actually deserve some kudos for transparency, given they posted the equivalent of what Tesla likely redacted, which I suspect was something specific like Beta 11.4.6 or 2023.7.26. Although it could simply be a side effect of Zoox not having a good legal team rather than a deliberate decision.
 
Redacting the software version number entirely is almost as bad as redacting whether the software was in use or not.
This has nothing to do with other ADAS companies. They are not claiming to have self-driving cars while telling the world and their customers that their software is "full self driving," or that it won't require human attention "this year," as they have for several years running.

If Tesla weren't making all of these grandiose claims and assertions, no one would have an issue.
So yes, if you are claiming your system is full self-driving, then you need to provide adequate reports.
The theory being debunked here is the narrative Tesla fans push that "Tesla is more safety-transparent than Waymo" and "Tesla will release all their data once FSD goes live."

Yet here is Tesla refusing to give important data. It's important because, say I had a system with issues around emergency vehicles, and I claimed I fixed the issue in version X. If we then see a crop of the same accidents in version X or a later version, we know the issue was actually not fixed. That's just one example of why the version number matters.


So you just excused it away entirely. It literally DOES prove something about the relative transparency of Tesla and Waymo.

Tesla redacts the data entirely, while Waymo at least posts the major version numbers. That's probably because, while they do weekly or bi-weekly development builds, they only release major versions to their driverless fleet.

So you just made an excuse for Tesla. It's the same excuse people make when it's pointed out that Tesla's safety report is literally two sentences and absolutely worthless, while Waymo has released hundreds of pages of safety reports covering likely every single accident. They also release disengagement reports with the cause of the disengagement, the road type (interstate, freeway, highway, rural road, street, or parking facility), etc., and finally collision reports with extreme detail. https://www.dmv.ca.gov/portal/file/waymo_07292023-pdf/

If you think Tesla is more transparent than Waymo, then should Waymo stop doing all the stuff it's doing, retract all its released documents, and only release a two-sentence "safety report"?

Since a two-sentence safety report is supposedly more transparent, should Waymo do just that?

Again, if you don't agree that this is all Waymo needs to release, then you are contradicting yourself.
Tesla-FSD-Safety-2.jpg
 
You really think that Waymo has only put 8 different versions into production in all the years they have been operating? o_O
I don't know if that number increments from Gen 4 or starts over at 1 for Gen 5. I also don't know what the current version number is right now, or when that Version 8 incident occurred.

But if Waymo only counts driverless release updates as a "major release," then yes, I do.

I remember JJRick saying that it seemed to him Waymo stopped updating Gen 4 for a while, and that he stopped seeing them testing the Gen 4 vehicles.
So it seems like they froze the release for Gen 4.

You can also surmise that from JJRick's drive in the Gen 5 launch video. Their engineer sort of confirms it by saying you can see the clear difference in how Gen 5 drives versus Gen 4.

With driverless SF launching in March 2022 and downtown Phoenix in August 2022, I actually believe it.

Plus they have made some remarks about certain releases going to the driverless fleet.


 
Redacting the software version number entirely is almost as bad as redacting whether the software was in use or not.
This has nothing to do with other ADAS companies.


Everyone can save time and stop reading here.

IOW, it's only bad when Tesla does it, even if every other car company also does it.

And we should ignore that what they're doing isn't even the thing he originally claimed they did (which was to redact whether any driving software was in use or not, something that is clearly impossible to redact, since the entire report is about incidents using that software).


Still I made the mistake of reading further and of course there's this:

So yes, if you are claiming your system is full self driving

Tesla makes no such claims. In fact, the sales page explicitly tells you it is not:

fsdwarn.png


And so does the car when you first turn on the system.

And every single time you turn it on afterward.

As you've been called out on multiple times for falsely claiming, but keep saying anyway.
 

Looks like the same problem we've seen before.

Cruise is going straight but is stuck behind the white convertible making a left turn. The white convertible creeps far out into the intersection, and the Cruise creeps slightly into the intersection directly behind it. From about 3 seconds onward, the Cruise doesn't move while it waits for the convertible to complete the turn (which presumably happened right before the light turned red). Then, by the time the oncoming traffic gets the green, the Cruise still thinks it's in the intersection and tries to finish traversing it.

There are a couple of failures here, and both are probably caused by coded behavior that is intended to make the drive smoother and safer but ultimately leads to undesirable results.

1. "Do not change lanes within an intersection." Most human drivers wouldn't have waited patiently behind a vehicle turning left in the intersection; they would have driven around it to the right, but maybe Cruise forbids this.
2. "Always vacate an intersection if you're already within it, regardless of the controlling light." With the scrutiny on Cruise freezing and creating obstacles, they may have written this behavior into the algorithm.
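The interaction of those two hypothesized rules can be sketched as a toy priority list; this is purely illustrative guesswork, not Cruise's actual logic:

```python
def next_action(in_intersection, path_clear, light):
    """Toy priority-ordered rule list for intersection handling.

    Illustrates how two individually reasonable rules combine badly:
    the "always vacate" rule only works if there is also a guard that
    refuses to enter when the exit path is blocked. Without it, the car
    commits early, freezes behind the blockage, then tries to vacate
    against cross traffic once the light changes.
    """
    if not in_intersection:
        # The hypothesized missing guard: check the exit path BEFORE entering.
        if light == "green" and path_clear:
            return "proceed"
        return "wait at stop line"
    # Rule 1: no lane changes inside the intersection, so no going around.
    if not path_clear:
        return "stop in intersection"   # the observed failure mode
    # Rule 2: once inside, vacate regardless of the controlling light.
    return "vacate intersection"
```

With the guard present, the blocked-path case resolves at the stop line instead of mid-intersection; remove the guard and the "stop in intersection" branch becomes reachable exactly as described in the video.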

Honestly, I think failures like these bolster the argument for more end-to-end planning and control. There are so many different combinations of scenarios that trying to write specific behavior to handle each of them will inevitably fail. The planner needs to be much more dynamic, flexible, and creative to handle real city scenarios.
 

Absolutely! I am all in favor of more ML in planning and control. I know Cruise uses ML in their stack; how much of their planner is ML, I don't know. Certainly, I agree about the undesirable behavior. If it is coming from ML, then Cruise needs to do better ML training. If Cruise is relying too much on hard-coded rules in their planner, then they need to transition to all-ML planning like Waymo has done. Waymo saw much better, more dynamic planning when they rewrote their planner and went to all-ML planning. It's why we see the 5th gen able to navigate more complex situations and be more "human-like" where the 4th gen used to get stuck and stall.

What puzzles me is that Cruise wanted to scale big when their software was in this immature state.
 
Everyone can save time and stop reading here.

IOW, it's only bad when Tesla does it, even if every other car company also does it.
"they believe Tesla releases way more safety data than Waymo."
And we should ignore that what they're doing isn't even the thing he originally claimed they did (which was to redact whether any driving software was in use or not, something that is clearly impossible to redact, since the entire report is about incidents using that software).
I literally addressed that in my reply.

Still I made the mistake of reading further and of course there's this:

Tesla makes no such claims. In fact the sales page explicitly tells you it is not

View attachment 967556

And so does the car when you first turn on the system.

And every single time you turn it on afterward.

As you've been called out on multiple times for falsely claiming but keep saying it anyway.
Dude, do you ever get tired of making false claims about people making false claims? You literally do it all the time.

Elon Musk is exactly who I'm referring to, and exactly who everyone here, on Twitter, on Reddit, and most Tesla fans and the general public think of when it comes to FSD.
He has made dozens of FSD claims and continues to do so.

I haven't met a lay person in the general public who didn't believe that Teslas were fully autonomous and didn't require a human driver. Why? Because of Elon's statements. In fact, they are shocked when I tell them it's not so. Tesla fans' constant FSD narrative is built directly on Elon's statements.
No one is referring to fine print, statements on order pages, or manuals. At this point, as the saying goes, it seems like you just want to hear yourself talk.