Welcome to Tesla Motors Club

Waymo

I was driving around yesterday on FSD Beta when a plastic shopping bag blew into the road and crossed my lane. As expected, my Tesla ignored it and just ran through it (it passed on the driver's side). But it made me wonder how Waymo and other AVs would handle a shopping bag floating in front of the car. The bag was fully inflated by the wind, so it was a decent-sized object floating just above the ground (headlight level). Would LIDAR see it as a solid object? How about RADAR? I'm guessing RADAR waves may pass through the material since it's quite thin, but light waves would reflect back, so I'm guessing LIDAR would indicate an obstacle.
 

Yes, lidar would see it. It's a question of what the prediction/planning would decide to do about it.
 
I think it's unlikely that FSD Beta recognized it as a plastic bag; it probably just didn't recognize it as any of the objects it's trained on. I'm skeptical that the approach of not responding to unrecognized objects can achieve the performance necessary for a robotaxi. I don't think radar would get a reflection from a plastic bag, so it does seem like fusing radar with cameras and/or lidar could recognize low-density objects without having to train the perception NN on the infinite variety of plastic bags.
 

Correct.

Warren Craddock from Waymo has a good thread on this and why using lidar is important for driverless-level reliability:

[embedded Twitter thread]
 
That's a good point - the cameras and LIDAR would see a solid object in the path, but RADAR would possibly see nothing there. The fusion would then realize there isn't a threat to the vehicle.
 
Recognition and deciding on the right action is all about the NN. When I see an object on the road, I assess its weight by observing its movement. A stationary plastic bag might contain a football-sized rock, but one floating in the air is likely harmless.

IMO a self-driving car needs to be able to make this deduction without sensor fusion to be safe. Radar fused with vision could certainly help the NN, but it probably shouldn't be required in this case.
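The deduction being discussed in these posts can be sketched as a toy decision rule. This is purely illustrative, with made-up function names and thresholds; no real AV stack reduces sensor fusion to three inputs like this:

```python
# Toy illustration (not any real AV stack): combine hypothetical lidar,
# radar, and motion cues to guess whether an object in the lane is a threat.

def assess_threat(lidar_hit: bool, radar_hit: bool, lateral_speed_mps: float) -> str:
    """Classify an in-lane object using three hedged heuristics:
    - lidar reflects off almost any surface, so a lidar hit proves presence;
    - radar returns need some mass/density, so lidar-without-radar hints
      at a light object (bag, leaf, steam);
    - something drifting quickly sideways in light wind is probably light.
    """
    if not lidar_hit:
        return "no object"
    if radar_hit:
        return "solid object - brake or avoid"
    if lateral_speed_mps > 2.0:  # blowing across the lane
        return "likely debris - proceed"
    return "unknown - slow down"

print(assess_threat(True, False, 3.5))  # the floating bag scenario
print(assess_threat(True, True, 0.0))   # the rock-in-a-bag scenario
```

The point of the sketch is just that the disagreement between sensors, not any single sensor, carries the information about density.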
 
I think 500k driverless miles in metro Phoenix is significant because driverless requires a very low safety-critical error rate. Also, metro Phoenix is more complicated driving than Chandler. So this stat implies to me that Waymo has achieved a very low safety-critical error rate in city driving.
 
I did not see any major issues. It seemed to handle a variety of city scenarios really well IMO.
And before anyone comments "I did see FSD do a similar drive - they must be similar", please remember:
  • This is L4: the car is trusted to do this with no one in the driver's seat. The stakes could not be higher: any mistake would have significant financial implications for Alphabet.
  • Doing one flawless drive is not that special any more. Doing it _every_ time, with no exceptions, is the magic.
I wonder how many years ago Waymo was where FSD is today?
 
With that line of 4 or 5 bumper-to-bumper, low-speed lead vehicles, it's surprising the Waymo carried so much closing speed and had to brake so hard @0:32. The lead vehicles didn't stop short, either. 8x-speed videos are good at glossing over details.
 

Thanks. Warren Craddock, from Waymo, says that a driverless car has to be able to drive "multiple human lifetimes" without a single safety-critical mistake! So driverless implies very high reliability. When you see a driverless ride, the company is basically saying "we trust the car to completely handle this route safely every time". So yes, driverless is very different from "zero interventions".
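To put a rough number on "multiple human lifetimes": the per-lifetime mileage and the lifetime count below are my assumptions for illustration, not figures from Waymo or Craddock.

```python
# Back-of-envelope only: both inputs are assumed, not sourced.
miles_per_lifetime = 500_000   # assumed miles a human drives in a lifetime
lifetimes = 10                 # "multiple" human lifetimes

required_miles = miles_per_lifetime * lifetimes
max_error_rate = 1 / required_miles  # safety-critical errors per mile, upper bound

print(f"{required_miles:,} miles -> under {max_error_rate:.0e} errors/mile")
```

Even with these conservative guesses, the bar works out to one safety-critical mistake in millions of miles, which is why a handful of good FSD drives tells you very little.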



I wonder how many years ago Waymo was where FSD is today?

It is hard to say. But in 2009, the Google Self-Driving Car Project, before it became Waymo, took on its "1000 miles - zero interventions" challenge: completing ten challenging 100-mile routes around California with zero interventions:

In early 2009, when Waymo was first founded as the “Google Self-Driving Car Project,” Google founders Sergey Brin and Larry Page challenged our first engineers to drive autonomously without human intervention or disengagements along ten challenging 100-mile routes in our home state of California. By December 2009, the team had completed their first route, and nine months later in mid 2010, we had wrapped up the last.

Obviously, their system back then was not super reliable overall, but they were able to do some routes with zero interventions.

You can see the entire playlist here:

 
Developing for specific routes is exactly the wrong approach to AVs, and even Elon is getting suckered back into it with that stupid Chuck's-left-turn nonsense. If he wants a gimmick, just go back to lidar with HD maps.

You misunderstand. Back in 2009, Google simply picked 10 routes as a test to see if their self-driving car could do them with zero interventions. They did not develop the system for those specific routes. Back then, Google really just wanted to see if self-driving was even possible. The self-driving Waymo has today is generations ahead of what Google was doing in 2009. Waymo does not develop for specific routes; Waymo is generalized FSD.

Also, Elon never used lidar and HD maps. So I am not sure how he could "go back" to it if he never used them in the first place.
 
Lidar doesn't work without HD maps because it cannot recognize objects or read signs. Tesla used lidar with vision but vision makes lidar redundant. If you're going to use lidar just drop vision.
 

Please stop. Everything you wrote in that sentence is wrong. Lidar does work without HD maps. Lidar is an active sensor that detects objects and features around the car. It does not need HD maps. Tesla never used lidar with vision. If you use lidar, you don't drop vision. At a minimum, you still need vision to detect traffic light colors. And if you use vision, lidar is not redundant since lidar works in some conditions where vision does not. AVs use both lidar and vision. One does not make the other redundant.
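The reason lidar works without ambient light is that it is an active, time-of-flight sensor: it times its own laser pulse rather than relying on the scene being lit. A minimal ranging calculation (the function name is mine, illustrative only):

```python
# Why lidar works in the dark: it emits its own laser pulse and times
# the reflection, so ambient light is irrelevant to ranging.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Range to a target: the pulse travels out and back, so d = c*t/2."""
    return C * round_trip_s / 2.0

# A return after 200 nanoseconds puts the target roughly 30 m away.
print(tof_distance_m(200e-9))
```

What the pulse timing cannot give you is color, which is one reason vision stays in the stack for things like traffic lights.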
 
I think 500k driverless miles in metro Phoenix is significant because driverless requires very low safety critical error rate. Also, metro Phoenix is more complicated driving than Chandler. So this stat implies to me that Waymo has achieved very low safety critical error rate in city driving.
"Metro Phoenix Region" includes Chandler. 500k is pretty much in line with my estimates of 500 miles/day. 800 days at that rate would be 400k and that white paper said they'd done 66k or so before Covid hit. Plus presumably some testing downtown and while service was suspended in Chandler.

Developing for specific routes is exactly the wrong approach to AV and even Elon is getting suckered back into it with that stupid chuck's left turn nonsense. If he wants a gimmick just go back to lidar with hd maps.
A "general solution" is the gimmick.
 
This always has me thinking. I get that LIDAR can augment vision and fill in some gaps, but how can there be a place where vision doesn't work at all? That would mean humans couldn't drive there.
I'm trying to think of places where vision systems might not work - or might not work well enough by themselves to be "safe". Bear with me.

Massive glare blinding the car from the side/rear: it can still see ahead for traffic lights, but has no safe view of approaching traffic from the side/rear.

I'm not convinced that cars without radar or LIDAR can safely reverse through darkened areas. Sometimes even we get out of the car and look around, maybe with a flashlight, before backing out of a really dark parking lot, a dark cluttered alley, or a narrow, overgrown country lane. Vision-only cars have no provided light on the sides and very little at the rear; I presume we're not talking about night-vision cars.

How about detecting potholes at night, determining where the walls are in an unlit street tunnel, or seeing a wire strung across the road from a downed power pole? I'm not sure whether vision, radar, or LIDAR would detect any of these, but there's a chance one of them would.

Also, maybe it does mean humans can't drive in a particular location because it's vision-challenging, but there's no reason a good autonomous car can't. That might even be a good use case: a pitch-black road where the headlights failed or were turned off. Could be some rescue or military applications too.

Maybe there are better examples. Has anyone found a place where FSD fails to see?
 