
Waymo Makes History: First Fully Self Driving Car With No Driver

Look, I'm all for saying we don't know Waymo's full capabilities until we see them in action, but at the same time the reality is we've seen a lot more of Waymo's self-driving than we've seen of Tesla's. At some point, putting a bit more odds in Waymo's corner makes some sense to me... (as far as seeing who gets there first).

We shall see.

Actually, we do know the capabilities of LIDAR: LIDAR does not work in inclement weather. It sees the reflections from snow, rain, etc., as obstacles. There's some research on trying to filter out the bad data, but as it stands, it's still a theoretical research topic, on the same scale of difficulty as trying to clean up photogrammetry stitching errors (if not harder).
 
No one outside the company knows the inner workings of Waymo's self-driving software.
What we do know from various presentations and articles is that the data from each of their sensors plays a huge role.

The radar/vision doesn't exist just to contextualize the lidar data. They have independent functions.

Radar exists to see and continuously track approaching vehicles, pedestrians, and cyclists from all directions in inclement weather. These radars can see "underneath and around vehicles, tracking moving objects usually hidden from the human eye."

Cameras exist to detect and classify objects, especially objects that are defined by their color, such as "traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles." These high-resolution cameras allow Waymo to "detect small objects like construction cones far away even when we’re cruising down a road at high speed. And with a wide dynamic range we can see in a dark parking lot, or out in the blazing sun — or any condition in between."

Lidar exists to see and continuously track "shapes in three dimensions, detect stationary objects, and measure distance precisely."

The camera and lidar systems each run their own separate neural network model.

One NN does object detection and classification using images.
The other does object detection and classification using 3D point clouds (see the sketch below).
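To make the two-model idea concrete, here is a minimal sketch of two independent detection pipelines with a late-fusion step. Everything here (class names, fields, the fusion rule) is a hypothetical stand-in for illustration, not Waymo's actual architecture:

```python
# Illustrative sketch only: two independent detectors whose outputs are
# fused downstream. Names and values are hypothetical, not Waymo's design.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "traffic_light_red"
    distance_m: float  # estimated range to the object
    source: str        # which sensor pipeline produced it

def camera_nn(image) -> list[Detection]:
    """Stand-in for an image-based detection/classification network."""
    # Cameras are the only sensor here that can read color, so color-coded
    # objects (lights, cones, school buses) come from this pipeline.
    return [Detection("traffic_light_red", 45.0, "camera")]

def lidar_nn(point_cloud) -> list[Detection]:
    """Stand-in for a point-cloud detection network."""
    # Lidar contributes precise 3D shape and distance, independent of light.
    return [Detection("pedestrian", 12.3, "lidar")]

def fuse(camera_dets: list[Detection], lidar_dets: list[Detection]) -> list[Detection]:
    # Late fusion: each pipeline contributes detections on its own; neither
    # exists merely to contextualize the other.
    return camera_dets + lidar_dets

if __name__ == "__main__":
    for det in fuse(camera_nn(image=None), lidar_nn(point_cloud=None)):
        print(f"{det.source}: {det.label} at {det.distance_m} m")
```

The structural point is that either pipeline still produces usable detections if the other degrades, which is what "independent functions" implies.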


You can find the relevant clip at 10 minutes 20 seconds

TLDR: The Waymo system is a fully complementary system.
Actually, if you go further back, there has been quite a bit of information that Google released about their system. Primarily they relied on lidar to sync the car's environment model to a premapped area. Vision and radar are only supplementary (for example, cameras detect traffic light colors and recognize signs; radar handles vehicle/pedestrian detection for collision avoidance/prevention, similar to most conventional use in cars). This approach suggests the system would not be reliable in areas that have not been mapped yet, which matches how Google/Waymo has released their system thus far: geofenced to a specific area.
Google's Robot Car Can't Explore New Roads, and That's a Big Problem
How Google's self-driving cars detect and avoid obstacles - ExtremeTech

The references are a bit dated, but unless they threw this approach out the window (I don't see why they would, given it plays to their core strength, which is mapping, specifically HD maps, not regular lower-resolution GPS maps), I imagine they are still using a similar approach.

This is flipped from vision-based approaches, which primarily rely on vision to determine the environment in real time, while any HD maps only serve as a supplement.
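As a toy illustration of what "sync the car's environment model to a premapped area" could mean, here is a brute-force scan-matching sketch: correlate a live lidar occupancy grid against a prior map grid to recover the car's offset. The grid representation and exhaustive search are simplifying assumptions, not Google's actual method:

```python
# Toy map-relative localization: find the shift that best aligns a live
# lidar scan with a premapped occupancy grid. Illustration only.
import numpy as np

def best_offset(prior_map: np.ndarray, scan: np.ndarray, search: int = 5):
    """Return the (dy, dx) shift that best aligns scan with prior_map."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(scan, dy, axis=0), dx, axis=1)
            score = float((shifted * prior_map).sum())  # overlap score
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

prior_map = np.zeros((50, 50))
prior_map[20:30, 25] = 1.0                 # a mapped wall
scan = np.roll(prior_map, 3, axis=1)       # live scan: car is offset by 3
print(best_offset(prior_map, scan))        # -> (0, -3), undoing the offset

# With no prior map (all zeros) every offset scores the same, and
# localization fails -- which is why a map-first system stays geofenced
# to areas it has already mapped.
```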
 
Lidar certainly can see when it rains or snows. Ignoring reflections from snow/rain falling down has been figured out.

This is flipped from vision-based approaches, which primarily rely on vision to determine the environment in real time, while any HD maps only serve as a supplement.

While it is certainly plausible that Waymo uses their five lidars (basically 2 x 360 degrees) as the primary sensor network, and why wouldn't they, that is a far cry from saying the car can't drive in inclement weather while, say, a Tesla could.

Really, one car (Waymo) has 2 x 360-degree lidar, 360-degree vision, and 360-degree radar coverage... and the other car (Tesla) has basically 2/3 of its field of view covered by single cameras alone for anything beyond low-speed, close-by maneuvering purposes...

No, I don't find it very plausible that Waymo should fail where Tesla will succeed, seeing-wise.

Now, whether or not Waymo will make a difference or is merely a pioneer that will get there first but be relegated to some niche, that is a more interesting question IMO.
 
Lidar certainly can see when it rains or snows. Ignoring reflections from snow/rain falling down has been figured out.

I've seen conflicting information on this. Can you give me a source for this information? I've seen a few press releases and news articles about Ford's system, but do you have a link to the original research paper? Or can you show me another company, such as Waymo or Mobileye, that has also developed algorithms that allow LIDAR to see through rain/snow?
 
@aWalkingShadow You already mentioned the Ford research. Here's some on Waymo: "Krafcik also told Bloomberg the new sensor package on the Waymo Chrysler Pacifica is "highly effective in rain, fog, and snow," which have typically been trouble for LIDAR systems thanks to the reflective nature of water in the air."

Google’s Waymo invests in LIDAR technology, cuts costs by 90 percent

Waymo is already talking/testing/teaching the cars - not just how to see or drive - but how to skid on unploughed snowy roads.

Waymo starts testing in Michigan to master snow and ice

Again, from Tesla we have seen one fair-weather California video. Waymo is already driving in snow...
 
Lidar certainly can see when it rains or snows. Ignoring reflections from snow/rain falling down has been figured out.

Nonsense. Waymo didn't even begin snow testing until this year (Tahoe in the spring, Detroit this fall). Yes, there has been work toward trying to cancel out spurious reflections (still an ongoing research problem among the many companies and universities involved in LIDAR). But you can also take the same man-hours you would otherwise have put toward LIDAR problems and put them toward eliminating photogrammetry mis-stitches instead.

The same thing applies to everything else. Yes, it's technically "possible" to use a suite that has LIDAR plus radar and cameras in a "radar and cameras only" scenario. But if you have LIDAR, you're going to allocate your resources more toward working with the LIDAR datastream as your 3D mapper, whereas if you don't, all of your resources will go toward working just on radar and cameras. That's just the way it is. You're not going to put LIDAR into a vehicle and then dedicate all of your resources toward ignoring it. A company like Tesla is constantly working with visual and radar data, because that's what they have to work with. It's 100% of their focus.

I'm not saying it's bad to have LIDAR. LIDAR is great! A very high-quality datastream. But as it stands at present, it's not a practical datastream to add to consumer vehicles. So you don't want your "autonomous vehicle development" wasted on a technology that you can't actually realistically use if your goal is for all cars to be self-driving. And it's much easier to add a LIDAR-like datastream into a system that wasn't built around it than it is to take it out of a system that was built around it. Add-in means suddenly, "congrats, you no

Forgot to mention another issue with what Waymo is doing: Waymo's vehicles reportedly consume 2-4 kW on their self-driving systems. On an EV, that's yet another complete non-starter scenario; it would kill your range. Again: Tesla is faced with the much harder task of deploying self-driving that works on real-world consumer electric vehicles.
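For a rough sense of scale, here is a back-of-envelope calculation using the 2-4 kW figure above; the 250 Wh/mile base consumption and 40 mph average speed are illustrative assumptions, not measured numbers:

```python
# Back-of-envelope: how much EV range does a constant compute load eat?
# Assumed base consumption and average speed are for illustration only.
def range_loss(compute_kw: float, base_wh_per_mile: float = 250.0,
               avg_mph: float = 40.0) -> float:
    """Fraction of range lost to a constant self-driving compute load."""
    extra_wh_per_mile = compute_kw * 1000.0 / avg_mph
    return extra_wh_per_mile / (base_wh_per_mile + extra_wh_per_mile)

for kw in (2.0, 4.0):
    print(f"{kw:.0f} kW -> about {range_loss(kw):.0%} of range lost")
# 2 kW -> about 17% of range lost; 4 kW -> about 29% of range lost
```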
 
What is redundancy, really, in this context? Strictly speaking, redundancy would mean doubling up on everything (two sets of cameras in every direction, two sets of radars in every direction, etc.).

Of course no one does that. At the same time, everyone knows that the different sensors have different capabilities and limitations. A radar sees through fog (cameras don't), cameras see text/color (radars don't), etc., so neither a lidar, radar nor sonar could ever fully replace a blocked video camera, and a video camera can never fully replace a blocked lidar or radar.

So I guess my question is: Can we consider all sensors (pointing in the same directions) redundant to each other?
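One way to make that question concrete is a toy capability matrix: sensors overlap on some capabilities and not others, so "redundant" only holds per capability. The rows below are deliberate simplifications for illustration, not a claim about any production suite:

```python
# Toy capability matrix: which sensor provides which capability.
# Rows are simplified assumptions for illustration.
CAPABILITIES = {
    "camera": {"color": True,  "range_3d": False, "fog": False, "dark": False},
    "radar":  {"color": False, "range_3d": False, "fog": True,  "dark": True},
    "lidar":  {"color": False, "range_3d": True,  "fog": False, "dark": True},
}

def covers(failed: str) -> dict[str, bool]:
    """For each capability the failed sensor had, can the others supply it?"""
    others = [s for s in CAPABILITIES if s != failed]
    return {cap: any(CAPABILITIES[o][cap] for o in others)
            for cap, has in CAPABILITIES[failed].items() if has}

print(covers("camera"))  # {'color': False}: nothing else reads color/text
print(covers("lidar"))   # {'range_3d': False, 'dark': True}: partial cover
```

By this framing the sensors are complementary with partial overlap, not strictly redundant: each one has at least one capability the others cannot backfill.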
 
I consider this conversation redundant. :D People (including me) obviously believe what they want to believe, wherever their personal thoughts on the matter align. It is futile to argue with limited data available and biases guiding how we fill in the empty spaces. I am happy to agree that not enough is known yet to say much more. And what can be said is not really worth bickering over. Waste of time.

Luckily nobody seems to be disputing that there are different approaches to the self-driving question: in suite design (mostly vision vs. the full monty), in implementation paradigm (visual AI NN vs. careful mapping and rules), in target market (consumer cars vs. ride-hailing and commercial fleets), and in deployment method (step-by-step Level 2-3-4-5 vs. Level 4/5 from the beginning with step-by-step geofencing)...

The difference of opinion is over which combo is (likely) leading and winning.

In any case, there are genuine, legitimate differences, though every player on the market may actually be mixing and matching things a lot more than we would know. It will be interesting to see which method will provide the results.

So far it seems the full monty, careful mapping and rules, ride-hailing, Level 4/5 from the beginning with step-by-step geofencing is the approach that will likely be first on the market. That is no guarantee of overall victory, of course. As George Hotz says, it is possible Waymo deploys something in the hundreds, and then the others come along and ship in the hundreds of thousands a bit later.

Then again it may be that the likes of Musk and Hotz will be left behind if they can't solve the big questions fast enough.
 
@aWalkingShadow You already mentioned the Ford research. Here's some on Waymo: "Krafcik also told Bloomberg the new sensor package on the Waymo Chrysler Pacifica is "highly effective in rain, fog, and snow," which have typically been trouble for LIDAR systems thanks to the reflective nature of water in the air."

Google’s Waymo invests in LIDAR technology, cuts costs by 90 percent

Waymo is already talking/testing/teaching the cars - not just how to see or drive - but how to skid on unploughed snowy roads.

Waymo starts testing in Michigan to master snow and ice

Again, from Tesla we have seen one fair-weather California video. Waymo is already driving in snow...

I looked into the Bloomberg article you mentioned, and the "highly effective in rain, fog, and snow" quote is actually in reference to LIDAR.
Alphabet’s Waymo Cuts Cost of Key Self-Driving Sensor by 90%

I don't see anywhere in either of your articles where they say LIDAR can see through rain or snow, but rather that their sensor suite as a whole can operate in those conditions.

Now, granted, it doesn't really matter in the end if LIDAR itself can see through rain/snow, as long as Waymo's overall sensor suite allows it to drive in those conditions. But it does add to the conversation of whether Tesla is going with the right approach.
 
Meanwhile, my Tesla with the latest and greatest software darted into another lane with vehicles galore without so much as a beep. Makes you wonder if there is ANY thought of safety in the current code base. :eek:

But keep up the Waymo bashing. It'll give us all something to laugh about six months from now. And who knows? Tesla may have introduced dancing emojis on the Big Screen by then.
 
...it doesn't really matter in the end if LIDAR itself can see through rain/snow...


"Super smart sensors: Ford uses LiDAR sensors that are so powerful, they can even identify falling snowflakes and raindrops.
Ford’s autonomous vehicles generate so many laser points from the LiDAR sensors that some can even bounce off falling snowflakes or raindrops, returning the false impression that there’s an object in the way. Of course, there’s no need to steer around precipitation, so Ford – working with University of Michigan researchers – created an algorithm that recognizes snow and rain, filtering them out of the car’s vision so it can continue along its path."


Driverless cars have a new way to navigate in rain or snow


"Here’s how it works: Ford’s autonomous cars rely on LiDAR sensors that emit short bursts of lasers as they drive along. The car pieces together these laser bursts to create a high-resolution 3D map of the environment. The new algorithm allows the car to analyze those laser bursts and their subsequent echoes to figure out whether they’re hitting raindrops or snowflakes.

When a laser goes through the rain or snow, part of it will hit a raindrop or snowflake, and the other part will likely be diverted towards the ground. The algorithm, by listening to the echoes from the diverted lasers, builds up a picture of the “ground plane” as a result, said Jim McBride, technical leader for autonomous vehicles at Ford.
“If you record not just the first thing your laser hits, but subsequent things, including the last thing, you can reconstruct a whole ground plane behind what you’re seeing, and you can infer that a snowflake is a snowflake,” he told Quartz.
Additionally, the algorithm checks for the persistence of a particular obstacle. A laser beam is unlikely to hit a raindrop twice, for example, allowing the algorithm to rule it out as an obstacle, McBride told Quartz."
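Here is a minimal sketch of the persistence check McBride describes: a real obstacle keeps showing up in roughly the same place across consecutive scans, while a raindrop is unlikely to be hit twice. The occupancy-grid representation and threshold are illustrative assumptions, not Ford's actual algorithm:

```python
# Toy persistence filter: keep returns that recur across scans, drop
# one-off "raindrop" returns. Grid size and threshold are illustrative.
import numpy as np

def persistent_obstacles(scans: list, min_hits: int = 2) -> np.ndarray:
    """Keep grid cells occupied in at least min_hits of the given scans."""
    hit_counts = np.sum(np.stack(scans), axis=0)
    return hit_counts >= min_hits

rng = np.random.default_rng(0)
wall = np.zeros((20, 20), dtype=bool)
wall[5:15, 10] = True                          # a real obstacle
scans = [wall | (rng.random((20, 20)) < 0.02)  # plus random "precipitation"
         for _ in range(3)]
filtered = persistent_obstacles(scans)
print(f"raw hits in one scan: {scans[0].sum()}, after filter: {filtered.sum()}")
# The wall persists across all three scans and survives; random raindrop
# returns rarely land in the same cell twice and are discarded.
```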
 
...Waymo or Mobileye) are using a similar algorithm as well.

Good question.

Waymo has done lots of tests in hot as well as rainy regions like Kirkland, WA, but not much in snow in the past.

"Google's self-driving car program, now known as Waymo, has been working on driving in rain for years and even put windshield wipers on their car's sensor domes to maintain visibility. The cars will also drive more cautiously in the rain. However, according to a Waymo Medium post discussing the limitations of self-driving technology, "For now, if it’s particularly stormy, our cars automatically pull over and wait until conditions improve." "


There's a LIDAR company that claims:

"Immunity to noise – Leddar can detect targets in harsh or low visibility conditions such as rain, snow, fog, dust or during night-time."


 
.....you have pre-mapped data that has been geofenced...


So presumably they will need multiple scans of the same road? Once for clear, once for snow-covered? Does a road need to be re-scanned a third time after it has been ploughed since banks of snow change its topography?

Think of the fun people have trying to update the 2D maps in their sat navs. Imagine the fun when the data is 3D and needed in weather-specific versions.

The OTA will be OTT.

A pop up message saying "premapped data is inaccurate due to heavy snowfall" is going to save an awful lot of work and stress.

It's a good job these systems are being built with redundancy. It's going to be needed. ;)
 
I looked into the Bloomberg article you mentioned, and the "highly effective in rain, fog, and snow" quote is actually in reference to LIDAR.
Alphabet’s Waymo Cuts Cost of Key Self-Driving Sensor by 90%

I don't see anywhere in either of your articles where they say LIDAR can see through rain or snow, but rather that their sensor suite as a whole can operate in those conditions.

You are right that it is not spelled out exactly; our insight into how these companies operate and implement things is always limited. You are also right that my primary reference was the Ford research. It is fair to ask, but in reality we don't have access to all that inside info.

But here's the big thing: this mainly comes down to the "we believe what we want to believe" angle. The bias angle.

People on TMC happily use, say, Comma.AI's comments as proof positive that Tesla is doing the right thing. So what Comma.AI does with vision supposedly applies to Tesla.

Then someone posts how Ford is solving lidar issues, to point out contemporary research in the lidar scene. And suddenly that has nothing to do with Waymo without links proving it?

The objective reality is, we know precious little about how, for instance, Tesla will be able to solve winter driving. We know much more about how Waymo is doing; we have articles on this and public testing. From Tesla we have absolutely nothing on winter driving.

Yet for whatever reason, the notion persists in this thread that Waymo is the one with a problem.

We believe what we want to believe. That's the way confirmation bias works. Once we have settled on a thought (say, vision works and lidar sucks), we weigh different information points differently based on it.

Elon did a really, really smart thing by badmouthing lidar. It was a Jobsian master stroke. That still resonates with people.

Whether or not he can capitalize on that in autonomous driving results, of course, is a different matter. We shall see.

Now, granted, it doesn't really matter in the end if LIDAR itself can see through rain/snow, as long as Waymo's overall sensor suite allows it to drive in those conditions. But it does add to the conversation of whether Tesla is going with the right approach.

The suggestion earlier in this thread was that Waymo is worse off than Tesla because it has lidar. I don't think this adds to that strange notion.
 