
Waymo Makes History: First Fully Self Driving Car With No Driver

@Tam
I've already acknowledged that I know Ford has algorithms that can see through rain/snow. What I was asking was whether other companies (such as Waymo or Mobileye) are using similar algorithms.

Why would we assume they are not? We don't make such silly assumptions about computer vision either. The assumption that Waymo does no filtering on its lidar input is absurd.

It is like saying Tesla can't estimate distance to objects from the side cameras because there is no proof that Tesla uses the required computing techniques, and because there are no dual cameras on the sides to see in 3D. We don't assume such things either, and for good reason.
 
Lidar certainly can see when it rains or snows. Filtering out reflections from falling rain and snow has been figured out.
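To make that filtering point concrete, here is a toy sketch of one family of techniques (range-scaled neighbor counting, loosely in the spirit of dynamic-radius outlier removal). All function and parameter names are invented for illustration, not taken from any vendor's stack: falling snowflakes produce sparse, isolated returns, while real surfaces produce dense clusters.

```python
import math

def filter_precipitation(points, base_radius=0.2, min_neighbors=2):
    """Drop sparse, isolated returns (typical of falling rain/snow).

    points: list of (x, y, z) lidar returns in meters.
    The neighbor-search radius grows with range, since point density
    naturally drops with distance from the sensor.
    """
    kept = []
    for p in points:
        rng = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
        radius = base_radius * max(rng, 1.0)  # range-scaled radius
        neighbors = sum(
            1 for q in points
            if q is not p and math.dist(p, q) <= radius
        )
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept

# A dense cluster (a wall) survives; a lone mid-air return (a snowflake) is dropped.
wall = [(5.0, 0.0, 0.0), (5.0, 0.1, 0.0), (5.0, 0.2, 0.0), (5.0, 0.1, 0.1)]
flake = [(3.0, 4.0, 2.0)]
print(filter_precipitation(wall + flake))
```

Production systems do this far more cleverly (intensity, multiple returns per pulse, temporal consistency), but the basic insight is the same: precipitation returns don't look like surfaces.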

While it is certainly plausible that Waymo uses its five lidars (basically 2 x 360 degrees) as the primary sensor network (why wouldn't they?), that is a far cry from saying the car can't drive in inclement weather while, say, a Tesla could.

Really, one car (Waymo) has 2 x 360 degree lidar, 360 degree vision and 360 degree radar coverage... and the other car (Tesla) has basically 2/3 of its field of view covered by single cameras alone for anything beyond low-speed, close-by maneuvering...

No, I don't find it very plausible that Waymo should fail where Tesla will succeed, seeing-wise.

Now, whether or not Waymo will make a difference or is merely a pioneer that will get there first but be relegated to some niche, that is a more interesting question IMO.
Did you respond to the wrong comment perhaps? I never mentioned weather at all, only that the difference in the mapping based approach means Google won't operate the car in an unmapped area. You can have the exact same hardware, but use either approach (vision primary or lidar/maps primary).

I also never mentioned Tesla, but was talking in general. There are a lot of other players that use vision as primary also (Mobileye, comma.ai, AutoX).
 
Meanwhile, my Tesla with the latest and greatest software darted into another lane with vehicles galore without so much as a beep. Makes you wonder if there is ANY thought of safety in the current code base. :eek:

But keep up the Waymo bashing. It'll give us all something to laugh about six months from now. And who knows? Tesla may have introduced dancing emojis on the Big Screen by then.

But hang on a minute. Hasn't your own hard-won scepticism with Tesla taught you to be wary of the claims of autonomous systems developers?

Should Waymo get a sudden benefit of your doubt?

My 2c: "Yeah, nice demo. I'll believe it when I drive it / it's independently tested"

Plus LIDARs look lame :D
 
Did you respond to the wrong comment perhaps?

No, but it is possible I got a different meaning from the message than you intended and I quoted a bit poorly. My primary sensor reference was to this:
Primarily they relied on lidar to sync the car's environment model to a premapped area. The vision and radar are only supplementary (for example, cameras detect traffic light colors and recognize signs; radar detects vehicles/pedestrians for collision avoidance/prevention, similar to most conventional use in cars).

I never mentioned weather at all

Again, just to be clear, my weather comment was preceding your quote in my message, so it was directed at the thread in general, not you. Sorry for any confusion there.

only that the difference in the mapping based approach means Google won't operate the car in an unmapped area. You can have the exact same hardware, but use either approach (vision primary or lidar primary).

Yes, there certainly are approach differences, but here's where I think some of you guys (if not you in particular, then certainly others) may be jumping to premature conclusions... do you really think Waymo cannot expand their approach... or that Tesla will ship Level 4/5 FSD without geofencing from day 1?

That's the biggest thing that puzzles me in conversations like this. We tend to grant infinite possibility to our favored technology choice and look at the "competing" choice in the harshest light possible. Vision AI is suddenly Level 4/5 global in our thinking, and Waymo's maps approach cannot possibly gain out-of-map redundancy later, because it doesn't have it today. Chances are, neither is true...

Reality is, we have absolutely no idea when and how autonomous driving will be available in the world and which approach gets there fastest, is the most common or is the best.

I also never mentioned Tesla, but was talking in general. There are a lot of other players that use vision as primary also (Mobileye, comma.ai, AutoX).

Of course. Even the new Audi A8, the first consumer production car shipping with Lidar, uses MobilEye vision as primary and Lidar as secondary.
 
...banks of snow change its topography...

My understanding of pre-mapping is that it provides numerous constant clues/references such as buildings, trees, telephone poles...

In good weather, camera and LIDAR can detect lane markers fine.

However, when they cannot detect lane markers (because construction workers have just covered the road with new asphalt, or snow has covered it), the camera and LIDAR can try to match those constant references such as buildings, trees and telephone poles in the database against what they scan in real time, calculate the car's exact location, and keep the car in lane even when no lane markings are visible.

The problem is that cameras and LIDAR are known not to work in rain and snow, so how can they scan through rain and snow in real life to know where the real-life buildings, trees and telephone poles are?

However, Ford says its LIDAR can, and Leddar says its LIDAR can.

So we are good.

If both camera and LIDAR fail to scan real-life structures due to rain and snow, RADAR can, so the system can match against the pre-mapped database and try to keep the car in lane.
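As a deliberately simplified sketch of that localization idea (lane markings invisible, but fixed landmarks still pin down the car's position against the map), assuming data association and heading are already solved; all names and numbers here are made up:

```python
def localize(map_landmarks, observed, ids):
    """Estimate the car's (x, y) position by matching observed
    landmarks against a pre-built map.

    map_landmarks: {landmark_id: (x, y)} absolute positions from the map
    observed: list of (dx, dy) offsets of each landmark relative to the car
    ids: which map landmark each observation corresponds to
    """
    # Each matched landmark gives one estimate of where the car must be;
    # averaging them smooths out sensor noise.
    estimates = [
        (map_landmarks[i][0] - dx, map_landmarks[i][1] - dy)
        for (dx, dy), i in zip(observed, ids)
    ]
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n)

# The map knows a pole at (10, 5) and a building corner at (12, 9).
landmark_map = {"pole": (10.0, 5.0), "corner": (12.0, 9.0)}
# The car sees the pole 4 m ahead / 1 m left, the corner 6 m ahead / 5 m left.
position = localize(landmark_map, [(4.0, 1.0), (6.0, 5.0)], ["pole", "corner"])
print(position)  # (6.0, 4.0)
```

Real systems solve rotation and correspondence too (scan matching / ICP-style registration), but this is the core trick: the lanes come from the map once you know where you are.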
 
But hang on a minute. Hasn't your own hard-won scepticism with Tesla taught you to be wary of the claims of autonomous systems developers?

Why would we not assess each individual player by their own reputation? Certainly we can and should adjust that reputation as time goes by and we learn more, but Waymo is a self-driving pioneer that has been out there for nearly a decade. We know a lot about their autonomous progress. To lump that in with Tesla's would seem terribly unfair to Waymo.

Should Waymo get a sudden benefit of your doubt?

I know Tesla does get that a lot; even now in this thread, Tesla's approach gets a lot more benefit of the doubt than Waymo's... for @NerdUno, Tesla also got it until he was burned by the experience. That too was originally understandable, because AP1 gave Tesla a good reputation. Of course we learned differently later.

My 2c: "Yeah, nice demo. I'll believe it when I drive it / it's independently tested"

Perfectly fair, but where I do disagree is in grouping it with Tesla's demo. Waymo is already at the pilot stage, hauling passengers in the car. There is good reason to give more merit to that than to Tesla's 2016 FSD video, for example.

The future is uncertain, we can note that, but the present is best assessed with best possible knowledge and best possible assumptions. In that, IMO Waymo certainly is ahead of Tesla. Now, the general approach differences are of course a fair discussion.
 
No, but it is possible I got a different meaning from the message than you intended and I quoted a bit poorly. My primary sensor reference was to this:

Again, just to be clear, my weather comment was preceding your quote in my message, so it was directed at the thread in general, not you. Sorry for any confusion there.

Yes, there certainly are approach differences, but here's where I think some of you guys (if not you in particular, then certainly others) may be jumping to premature conclusions... do you really think Waymo cannot expand their approach... or that Tesla will ship Level 4/5 FSD without geofencing from day 1?

That's the biggest thing that puzzles me in conversations like this. We tend to grant infinite possibility to our favored technology choice and look at the "competing" choice in the harshest light possible. Vision AI is suddenly Level 4/5 global in our thinking, and Waymo's maps approach cannot possibly gain out-of-map redundancy later, because it doesn't have it today. Chances are, neither is true...

Reality is, we have absolutely no idea when and how autonomous driving will be available in the world and which approach gets there fastest, is the most common or is the best.

Of course. Even the new Audi A8, the first consumer production car shipping with Lidar, uses MobilEye vision as primary and Lidar as secondary.
You are jumping to conclusions a bit.

Even if Google sticks to the current approach, they can get to L5 by "mapping the world". This may have seemed unthinkable years ago, but now with crowdsourcing it's entirely possible.

And my comment didn't say vision-based systems would operate without a geofence on day one. The premapping requirement is one reason for a geofence, but there are other reasons too (legal jurisdiction, road type, traffic levels, weather, for example).

Vision based approaches have different challenges that they have to solve (detecting a safe path in the absence of clear lane lines, landmark recognition).

They're just two different ways to reach the same goal. However, expecting companies that focus primarily on vision to be better at vision seems a reasonable assumption, just as it's reasonable to expect companies that focus primarily on lidar to be better at lidar processing.
 
You are jumping to conclusions a bit.

The point is, we all are. It's only natural. There is a lot of that in this thread.

Even if Google sticks to the current approach, they can get to L5 by "mapping the world". This may have seemed unthinkable years ago, but now with crowdsourcing it's entirely possible.

Sure. And IMO with Google in Waymo's court it was always possible, even without crowdsourcing, given that their effort to map the world has been so formidable in the past - assuming, of course, sufficient redundancy to handle exceptions such as roadworks.

And my comment didn't say vision-based systems would operate without a geofence on day one. The premapping requirement is one reason for a geofence, but there are other reasons too (legal jurisdiction, road type, traffic levels, weather, for example).

True, but others here have suggested the ability to work everywhere to be a positive for vision AI and a negative for Waymo. All those things you list likely play a role for Waymo too, not simply the fact that they have maps in the center of their approach.

In reality, we have no idea how this will play out. It is possible the vision AI guys are, for example, so far behind that Waymo will simply expand its maps step by step faster, and/or that by the time vision AI catches up, Waymo's redundancy will make up for the non-mapped portions as well. That, IMO, is a perfectly plausible scenario. Even though Waymo talks of geofenced areas as a first step, that doesn't mean their decade of work hasn't yielded solutions outside of that as well. We should not automatically confuse methodical steps with inability. (The same goes for assessing, say, Audi's self-driving: what they chose for their first Level 3 system is not representative of what their system is capable of in general, and has been for years now.)

Then again, at least in theory, it is possible the likes of Tesla and Comma.AI will surprise us all by having been massively faster and launching, say, a country-wide Level 4/5 next year. That is possible, though I'm not sure how plausible. Their value promise is that by focusing laser-sharply on deep learned visual driving, they will get there faster than those who work more methodologically on a comprehensive sensor suite, mapping and implementing driving policy. It is a clear value promise, easy to understand and obviously a lot of people have "bought" it on TMC. We shall see how it plays out.

In the end, we just don't know yet.

They're just two different ways to reach the same goal. However, expecting companies that focus primarily on vision to be better at vision seems a reasonable assumption, just as it's reasonable to expect companies that focus primarily on lidar to be better at lidar processing.

Certainly, if we took two companies that started at the same time with two different approaches, that would be one thing. But this is comparing Waymo and Tesla. There is a massive gulf of history between them. To ignore that, IMO, means putting extraordinary faith in Tesla's approach being better, not merely noting that one approach differs from the other. Waymo has had so much more time, and time on the road, working on this that it is perfectly plausible even their "secondary" vision is ahead of Tesla's vision at this very moment.
 
Certainly, if we took two companies that started at the same time with two different approaches, that would be one thing. But this is comparing Waymo and Tesla. There is a massive gulf of history between them. To ignore that, IMO, means putting extraordinary faith in Tesla's approach being better, not merely noting that one approach differs from the other. Waymo has had so much more time, and time on the road, working on this that it is perfectly plausible even their "secondary" vision is ahead of Tesla's vision at this very moment.
Not to mention plenty of cash to spend without a problem, so they can buy better hardware, which simplifies a lot of things, while Tesla has less money, less time and fewer resources, and needs to do it better so it can fit on a cheaper processor.
Waymo can geo-reference whatever they want; after all, if you simply put hundreds of cars in a city with a single goal, scan every damn street 10 times, how many DAYS do you need? Not many.
I think they don't have many cars, so it makes no sense at all to let them navigate a bigger area; they have enough customers as it is.
When the system is proven reliable, you can double the cars and create another 'geo-referenced area'; if that works, no problem, then you can start with mass production... which, of course, is a problem, since they don't have a dedicated production line like Tesla.
So I think Waymo has a LOT more chance of bringing autonomy to various cities around the world where it gains them more (more people = more time the car is occupied), but it will have trouble scaling and bringing it to everyone in the next 5-6 years. Tesla, on the other hand, has a lot of production capacity planned for the next 1-2 years, and as we've seen, a couple of years can make a lot of difference.

Keep in mind that when Waymo started, computing capability was EXTREMELY limited compared to now, and of course all the AI work was a newborn thing and even they didn't put faith in it. I suspect Google would have changed its mind about AI if Waymo were starting today, but as it is, they have software that works and changing course would be foolish at the moment. It will probably happen eventually, though, and they would gradually rewrite all the code for a new approach. (Like SpaceX, which keeps working on FH when they know BFR will be better: it might seem better to simply cancel FH and be done with it, but since they are nearly there and they need some years to work on BFR, they first complete the 'old tech' and then start with the new.)
 
but it will have trouble scaling and bringing it to everyone in the next 5-6 years. Tesla, on the other hand, has a lot of production capacity planned for the next 1-2 years, and as we've seen, a couple of years can make a lot of difference.

Then again, to change this, all Waymo needs is one partnering car company (of which they already have at least two), or to buy a car company (not a problem for Google), should it become necessary.

Waymo is clearly not aiming to become a volume car maker, though, their approach to autonomous is through the commercial fleet at this stage. We shall see where it goes from there.

Keep in mind that when Waymo started, computing capability was EXTREMELY limited compared to now, and of course all the AI work was a newborn thing and even they didn't put faith in it.

Sure, I get that is the vision AI premise:

This is the Musk/Hotz argument, and I guess adopted by a lot of the Silicon Valley community on deep learning vision AI. The OP @Trent Eady is a Tesla-positive writer on Seeking Alpha, whose opinion certainly aligns with this vision.

It is not a bad argument, but it needs to be noted there is a lot of hyperbole behind it - and a self-serving purpose. Let me explain.

The self-serving purpose is that this "vision AI" community wants, and believes it can, jump ahead of the more comprehensive autonomous driving projects (think multi-redundant suites, redundant computers and systems, rigorously implemented and tested driving policy, responsibility taken by the car company for the driving, etc.) through cheap sensor suites (on the extreme end, think cell-phone-level cameras), fleet learning and especially deep learning for driving (think Tesla FSD, comma.ai, no responsibility for the driving taken by the maker of the system).

Why is this a self-serving purpose? Because this vision AI community does not have the resources or the time (they are behind on both) to go the comprehensive route. To jump ahead, they must rely on aggressive deep learning and fleet validation, which they see as their opportunity and believe is the disruptive opportunity more traditional players are missing with their redundant sensors and lidars and manual labelling/teaching and what not.

To put their idea to the extreme it goes something like this: Let's just strap cheap cameras on cars, hook them up to a barely powerful-enough generic CPU/GPU running a neural network, drive them a lot (on the roads and perhaps in simulators) so the NN learns to drive, deploy this to massive fleet with data collection, hand out the fleet data to regulators (shadow mode and/or real mode), rinse and repeat enough times and we're there. That's basically the disruptive idea. It is not a bad idea.
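To illustrate that recipe at toy scale, here is behavioral cloning in miniature: fit a "policy" to logged human steering. A single hand-picked feature (lane offset) stands in for camera pixels, and a linear model stands in for the neural network; every name and number here is invented for illustration.

```python
def train_policy(samples, lr=0.01, epochs=500):
    """Fit a trivial linear 'policy': steering = w * lane_offset + b.

    samples: list of (lane_offset, human_steering) pairs logged from
    human driving -- the behavioral-cloning recipe in miniature.
    """
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            err = pred - y
            w -= lr * err * x   # gradient step on squared error
            b -= lr * err
    return w, b

# Synthetic "fleet data": humans steer proportionally against lane offset.
data = [(x / 10, -0.5 * x / 10) for x in range(-10, 11)]
w, b = train_policy(data)
print(w, b)  # learns roughly w ≈ -0.5, b ≈ 0
```

The real version swaps the one feature for millions of pixels, the linear model for a deep network, and 21 samples for fleet-scale data, but "imitate the humans, then validate on the fleet" is the whole pitch in one loop.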

But regulatory-wise, success of this disruptive idea depends on selling the notion to regulators that the fleet data of this will be sufficient proof of the safety of their system. Not controlled "clinical" trials, not a comprehensive approach, but an aggressive machine learning approach relying on commodity hardware deployed as quickly as possible on vast consumer networks.

And that's one reason you have guys like Musk/Hotz/Eady talking about how unethical it would be to deny this route. The success of the concept they are rooting for depends on it. I'm not saying they don't believe it; I'm just saying this additional angle affects the opinion a lot, just as a company with a more rigorous approach might advocate more rigorous testing prior to approval.

Now, of course all of these players will mix and match. Some "vision AI" guys will have secondary sensors and redundancy more than others - while some traditional players will certainly employ also techniques similar to the vision AI guys. In the end they might even all end up in the same place, having just taken wildly different routes to get there.

We shall see who succeeds, who gets there first and who is right. I do share the concern about what an early, high-profile crash might do to autonomous efforts. Let's hope no one moves too fast and sets it back for everyone.

The premise is plausible. Whether or not it works out, remains to be seen.
 
But hang on a minute. Hasn't your own hard-won scepticism with Tesla taught you to be wary of the claims of autonomous systems developers?

Should Waymo get a sudden benefit of your doubt?

My 2c: "Yeah, nice demo. I'll believe it when I drive it / it's independently tested"

Plus LIDARs look lame :D

I think you missed the point. The point is you don't drive it. You ride in it. Those aren't crash dummies riding around in the Waymo vehicles right now. They're real people. Do you honestly think none of them would have been jumping up and down if they felt they were riding in an unsafe vehicle?? It's almost as if you don't want to believe this is actually happening. How could anyone do this to poor Tesla?
 
So presumably they will need multiple scans of the same road? Once for clear, once for snow-covered? Does a road need to be re-scanned a third time after it has been ploughed since banks of snow change its topography?

They don't map just the road. They map everything, for precise localization. Mobileye's claimed innovation was the ability to do something similar with cameras and a small file size.

It is unclear what Tesla is doing now.

Lidar gives easy distance and velocity information. Cameras require more development work to build the environment model.
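A toy contrast of the two sensors, assuming nothing beyond the standard pinhole-camera model (the numbers are made up): a lidar return already contains range, while a single camera has to infer it indirectly.

```python
import math

def lidar_point(range_m, azimuth_deg, elevation_deg):
    """A lidar return already contains distance: just convert the
    (range, azimuth, elevation) measurement to Cartesian x, y, z."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (range_m * math.cos(el) * math.cos(az),
            range_m * math.cos(el) * math.sin(az),
            range_m * math.sin(el))

def camera_distance(focal_px, real_height_m, pixel_height):
    """A single camera has no direct range measurement; one classic
    workaround is the pinhole model with an assumed real-world size."""
    return focal_px * real_height_m / pixel_height

print(lidar_point(10.0, 0.0, 0.0))      # (10.0, 0.0, 0.0)
print(camera_distance(1000, 1.5, 100))  # 1.5 m object spanning 100 px at f = 1000 px
```

The camera estimate is only as good as the assumed object size (or the stereo/motion geometry used instead), which is exactly the "more development work" the post refers to.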
 
Actually, if you go further back, there is quite a bit of information that Google released about their system. Primarily they relied on lidar to sync the car's environment model to a premapped area. The vision and radar are only supplementary (for example, cameras detect traffic light colors and recognize signs; radar detects vehicles/pedestrians for collision avoidance/prevention, similar to most conventional use in cars). This approach suggests that the system would not be reliable in areas that have not been mapped yet (which matches how Google/Waymo has released their system thus far: geofenced to a specific area).
Google's Robot Car Can't Explore New Roads, and That's a Big Problem
How Google's self-driving cars detect and avoid obstacles - ExtremeTech

The references are a bit dated, but unless they threw this approach out the window (and I don't see why they would, given it plays to their core strength, which is mapping, specifically HD maps, not regular lower-resolution GPS maps), I imagine they are still using a similar approach.

This is flipped from vision based approaches which primarily rely on vision to determine the environment in real time, while any HD maps only serve as supplement.

I have a family friend who works at Waymo, and according to them (as of late this summer) their vehicles can still only travel in pre-mapped areas, because the maps are a requirement for using the LIDAR. Chris Urmson also talked about this back in 2013, in his TED Talk I believe. Obviously this is something they are trying to overcome for the future.
 
What worries me in the Waymo v. Tesla discussion has to do with safety. Waymo has obviously addressed not running over pedestrians and not swerving into other lanes that already have vehicles in them. My recent experiences with our Tesla suggest that Tesla hasn't even begun this exercise. They don't even use the cameras and sonar that are readily available in the vehicles. Following a dotted line seems pretty primitive in the scheme of things. When the line disappears and your car starts darting all over the highway (sometimes without any warning at all), it tells me Tesla is still several light years away from technology that even comes close to Waymo's. I appreciate the advantages of geofencing, but Tesla doesn't seem to operate within ANY boundaries when it comes to the safety of those driving or riding in their vehicles.
 
Excuse my ignorance, but why is it Vision AI vs Mapping/LIDAR? Doesn't Waymo use both? Are people arguing that Tesla's Vision AI is stronger (or will be stronger) than Waymo's?

And if Waymo does have strong Vision AI, why are they restricted to only pre-mapped areas? Or is it just out of caution?
 
I have a family friend who works at Waymo, and according to them (as of late this summer) their vehicles can still only travel in pre-mapped areas, because the maps are a requirement for using the LIDAR. Chris Urmson also talked about this back in 2013, in his TED Talk I believe. Obviously this is something they are trying to overcome for the future.
Thanks James, that seems to verify my impressions of how Waymo's system works.
 
Excuse my ignorance, but why is it Vision AI vs Mapping/LIDAR? Doesn't Waymo use both? Are people arguing that Tesla's Vision AI is stronger (or will be stronger) than Waymo's?

And if Waymo does have strong Vision AI, why are they restricted to only pre-mapped areas? Or is it just out of caution?
It comes down to the fundamental approach of how the different systems work.

The LIDAR/mapping-based approach starts with a map (which can be gathered by the same sensors, driven manually by a human through that environment), and the LIDAR is used to sync the car to that map (by recognizing landmarks in the environment). This map already has all the features prelabeled. The cameras and radar add information on top of that (for example, traffic light status and signage details, which lidar can't detect), but they are not what the car uses for localization. What this means is that if you don't have a map of the area beforehand, there are a lot of road features the car won't be able to recognize reliably. That doesn't mean it will immediately crash when leaving a premapped area (the other sensors ensure that won't happen), but it won't be able to drive with the same reliability it has in premapped areas.

The vision-based approach does not depend on a map, but rather on the AI recognizing certain road features (lane lines, curbs, etc.). Lidar is only used in a supplementary way for object detection (similar to how radar is used); it is not used for syncing to a map. HD maps are used to reduce processing requirements and to cope with poor weather/conditions (where lane lines and road features may be obscured); however, they are not primary. The goal is to develop the vision AI to the point where it has core competency in recognizing all road types.

Again, you can have exactly the same sensor suite (with both types), but use either approach.
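The two pipelines described above can be caricatured in a few lines of code. Every name and structure here is invented for illustration, not anyone's actual architecture; the point is only that identical sensor inputs feed two different primary paths:

```python
def map_primary(lidar_scan, camera_frame, hd_map):
    """Map-primary: lidar localizes against a prebuilt map; lane
    geometry comes prelabeled from the map; cameras only add what
    lidar cannot sense (e.g. traffic-light color)."""
    if lidar_scan["area"] not in hd_map:
        return None  # off the map: no reliable localization
    lanes = hd_map[lidar_scan["area"]]["lanes"]
    return {"lanes": lanes, "light": camera_frame["light"]}

def vision_primary(lidar_scan, camera_frame, hd_map=None):
    """Vision-primary: lanes are detected live from pixels; a map,
    if present, is only a supplement (ignored in this sketch)."""
    return {"lanes": camera_frame["lanes"], "light": camera_frame["light"]}

hd_map = {"downtown": {"lanes": ["L1", "L2"]}}
scan, frame = {"area": "suburbs"}, {"lanes": ["L1"], "light": "green"}
print(map_primary(scan, frame, hd_map))  # None -- unmapped area
print(vision_primary(scan, frame))       # works wherever the camera can see
```

Both functions take the same sensor inputs, which is the thread's point: the hardware suite doesn't dictate the approach; the software architecture does.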
 