The Verge rides in a Waymo robotaxi with no safety driver

People naturally and understandably think of this technology competitively, e.g. Tesla vs. Waymo. I do too.

But I also think the bigger picture is that a rising tide lifts all boats. If Waymo can turn Waymo One into a large-scale, profitable (at least in terms of the unit economics) commercial service without safety drivers, that's proof that Level 4 robotaxi technology can work and can work as a business.

The two main things that distinguish Waymo's tech from Tesla's tech are lidar and lidar HD maps. I believe Tesla is much better off 1) waiting to see whether Waymo can commercialize lidar robotaxis and 2) waiting to see whether Tesla itself can commercialize non-lidar robotaxis. If Waymo can and Tesla can't, then Tesla can launch a geofenced Level 4 program with lidar robotaxis and lidar HD maps.

Conversely, GM could put surround cameras and high-grade compute in its production cars and use large-scale fleet data to train Cruise's lidar robotaxis. This would combine the strengths of Tesla's approach with the strengths of the Waymo/Cruise approach.

So, for everyone's sake, let's hope that Waymo is successful in scaling up these driverless rides. I hope Waymo's tech works and the driverless mode of operation proves to be robust as they scale it up. Maybe even more importantly, I hope Waymo eventually discloses some quantitative data on why they're so confident this is safe to do now. Those safety stats could go a long way in quelling skepticism about driverless cars.
 
I believe Tesla is much better off 1) waiting to see whether Waymo can commercialize lidar robotaxis and 2) waiting to see whether Tesla itself can commercialize non-lidar robotaxis.

It’s good to see that you still think literally anything is always good news for Tesla. The simple truth is that Tesla is going to win every race, no matter what competitors do. Just listen to Autonomy Day if you don’t believe me. They even showed figures from Google papers to prove their massive lead!

 
If Waymo can and Tesla can't, then Tesla can launch a geofenced Level 4 program with lidar robotaxis and lidar HD maps.

Yeah, I mean, it took Waymo ten years, so Elon could probably do it in like thirty, forty-five minutes. He’s just waiting for a dominant competitor to prove it’s possible first, before even trying. That’s just good business sense.
 
Waymo is too tentative. Tesla will risk spilling blood, Waymo won't.

Tesla does seem less risk-averse than Waymo. Tesla's philosophy seems to be that if it's 1% safer than the average human, deploy it. Waymo's philosophy seems to be that it should be much, much safer than the average human, even if that means it takes longer to deploy it.

But the most important difference between the two companies, I think, is the five pillars of Tesla's large-scale fleet learning approach:

1. Automatic curation of rare, diverse, and high-entropy training examples for fully supervised learning of computer vision tasks. (Cruise CTO Kyle Vogt says this is important.)

2. Weakly supervised learning of computer vision tasks (e.g. semantic segmentation of free space).

3. Self-supervised learning of computer vision tasks (e.g. depth mapping) or self-supervised pre-training for tasks that are fine-tuned with fully supervised learning. (This is probably what Dojo is intended to accelerate.)

4. Self-supervised learning for road user prediction tasks (e.g. predicting cut-ins; a rough sketch of the auto-labelling idea follows this list).

5. Imitation learning for planning tasks (e.g. path prediction), probably combined with an explicit, hand-coded planner, and possibly used to bootstrap some form of real world reinforcement learning.
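To make the automatic-labelling idea in pillar 4 concrete, here is a rough, hypothetical sketch (my own toy code and thresholds, not anything Tesla has described): the logged future of a neighbouring vehicle supplies the cut-in label, so no human annotation is needed.

```python
# Toy illustration of "the future is the label" for cut-in prediction.
# All names, units, and thresholds here are made up for the example.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedVehicle:
    t: float               # timestamp in seconds
    lateral_offset: float  # metres from the centre of the ego lane

def auto_label_cut_ins(
    track: List[TrackedVehicle],
    history_s: float = 2.0,           # how much past motion to feed the model
    horizon_s: float = 3.0,           # how far ahead the log supplies the label
    in_lane_threshold_m: float = 0.9,
) -> List[Tuple[List[float], int]]:
    """For each moment in a logged track of a neighbouring vehicle, emit
    (recent lateral offsets, did it cut in within the horizon) pairs."""
    examples = []
    for now in track:
        history = [v.lateral_offset for v in track
                   if now.t - history_s <= v.t <= now.t]
        future = [v for v in track if now.t < v.t <= now.t + horizon_s]
        if not future:
            continue  # no logged future, so no label for this moment
        cut_in = int(any(abs(v.lateral_offset) < in_lane_threshold_m
                         for v in future))
        examples.append((history, cut_in))
    return examples
```

Every car in the fleet that gets cut off (or doesn't) generates examples like these for free, which is the whole point of the fleet-scale argument.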

With ~1,000x more vehicles than any other company and ~500x more vehicles than all U.S. autonomous vehicle fleets combined, Tesla can collect commensurately more automatically labelled training data with pillars 4-5 and commensurately higher-quality manually labelled training data with pillar 1.

For pillars 4-5, if performance scales with data like top-1 error on ImageNet, that means Tesla should have roughly 10x better performance on these tasks than competitors. If performance scales like top-5 error, then it's roughly 30x better performance. For pillar 1, I'm not yet aware of any research that attempts to quantify anything analogous.
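For what it's worth, here is the back-of-the-envelope arithmetic behind those 10x and 30x figures. The power-law form (error scaling as data^-alpha) is an assumption on my part, and the exponents are simply the ones that make the quoted numbers come out, not measured values:

```python
# error ∝ data ** (-alpha)  =>  improvement factor = data_ratio ** alpha

data_ratio = 1_000  # ~1,000x more fleet data than competitors

for label, alpha in [("top-1-like scaling", 1 / 3), ("top-5-like scaling", 1 / 2)]:
    improvement = data_ratio ** alpha
    print(f"{label}: ~{improvement:.0f}x lower error")

# top-1-like scaling: ~10x lower error
# top-5-like scaling: ~32x lower error (i.e. roughly 30x)
```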

The scaling rate for different tasks is different, anyway, so these numbers should be taken with a grain of salt.
 
Google doesn't have a good track record of deploying impactful hardware products :p

Right! Google doesn’t know anything about hardware. They are just a bunch of clueless software engineers who just happen to have the largest and most sophisticated datacenters on the planet. I’m pretty sure they bought it all from Dell. Their hardware has so little impact that it only handles 12% of all the internet’s traffic and 90% of all searches. Their hardware is pretty much irrelevant!
 
$400K per vehicle. No wonder Tesla went away from radar. I remember Elon saying his main concern with this technology was cost.

Absolutely! Radar is expensive. That’s why Tesla only uses a single, low-quality radar. It’s clearly completely adequate for seeing things like police cruisers.

Tesla on Autopilot crashes into police car
 
Tesla can collect commensurately more automatically labelled training data with pillars 4-5 and commensurately higher-quality manually labelled training data with pillar 1.

For pillars 4-5,

Oops. Everywhere I said “pillars 4-5”, I meant to say “pillars 2-5”. All of them except pillar 1. Pillars 2-5 use automatic labelling; pillar 1 uses manual labelling.

Edit to clarify: every pillar except pillar 1 uses automatic labelling, because every technique except fully supervised learning does. Weakly supervised learning, self-supervised learning, imitation learning, and reinforcement learning all generate their own labels, so pillars 2-5, not just 4-5, use automatic labelling.
 
Google doesn't have a good track record of deploying impactful hardware products :p
Google hardware lover here. Google Wifi products are superb. I love talking to my Google Home and asking it to adjust the temperature. Love my Pixel phone. Hopefully Waymo will be disruptive when it finally gets off the ground. Each year Waymo says next year will be the year.
 
David Silver from Udacity (not to be confused with David Silver from DeepMind) wrote an eloquent post about this:

“Waymo’s response to the key question of what makes its vehicles safe enough to be driverless is, essentially, “trust us”.

And so far that works, at least for Waymo, which has done virtually everything right and caused no significant injuries, much less fatalities, in its ten years of existence.

Presumably, though, as Waymo ramps up miles and riders, collisions and injuries will happen. At that point, “trust us” probably won’t seem so sensible.”​
 
“Waymo’s response to the key question of what makes its vehicles safe enough to be driverless is, essentially, “trust us”.

I know, right??? They should be more like Tesla and just say something really honest, like:

“In the 3rd quarter, we registered one accident for every 4.34 million miles driven in which drivers had Autopilot engaged... By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 498,000 miles.”

I mean, there’s no way at all to misconstrue such a clear comparison of highway miles to all miles.

Tesla is at the forefront of transparency, and other companies should emulate them!
 
A Waymo rider who talked to Ars Technica and another Waymo rider who did a Reddit AMA (archive) both said Waymos have a disengagement about every 5 rides or so. If the average ride is ~10 miles, that’s a disengagement every ~50 miles. (The miles-per-disengagement figures reported to the California DMV exclude most disengagements because the reporting requirements are lax.) Let’s call it one disengagement per 500 miles to be generous.

Humans crash on average every ~500,000 miles. So, if 1 disengagement = 1 crash, Waymo would need to improve ~1,000x to be as safe as the average human. At first, this left me confused about why Waymo is deciding to do driverless rides.
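Spelling out that back-of-the-envelope math (all inputs are the rough figures from the two rider reports above, not official Waymo numbers):

```python
rides_per_disengagement = 5
miles_per_ride = 10
miles_per_disengagement = rides_per_disengagement * miles_per_ride  # ~50

generous_miles_per_disengagement = 500   # 10x benefit of the doubt
human_miles_per_crash = 500_000          # rough average for human drivers

gap = human_miles_per_crash / generous_miles_per_disengagement
print(gap)  # 1000.0 -> if every disengagement were a crash, Waymo would
            # need to improve ~1,000x just to match the average human
```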

Well, maybe 1 disengagement ≠ 1 crash. Brad Templeton recently wrote about this:

“Companies generally will train their safety drivers to intervene at any situation that looks dangerous. They tell them not to wait to assure the danger is certain. If the car is not braking for a pedestrian, they don’t keep it going to see if it will hit the pedestrian. The safety driver is told to take over. Later, the team will play the recording back and try to determine what would have happened if the driver had not intervened. Ideally, they will turn the whole situation into a simulator scenario, so that they can test that question in a dynamic way. This is a good approach which many teams do, but less mature teams may not have the resources to do this on every event.​

If you do it this way, you can come up with a more interesting number, what can be called “necessary interventions” or at a higher bar, “accident preventing interventions.” You will be very concerned with the latter. The former might include things like a vehicle wandering out of its lane on an empty road — that would not cause an accident, and people are guilty of this all the time — but you would like to know why it happened and fix it if need be.​

A large number of interventions should get classed as “cautionary.” The safety driver grabbed the wheel, but the system would have done the right thing if not. These are actually very positive events, since how vehicles perform in dangerous situations is one of the most important things to track.”
In June 2019, Templeton also wrote this:

“Last week, my colleagues at The Information published some interesting leaks from GM/Cruise, about how they are failing to meet important internal milestones (Paywall.)​

In particular, the report states that the forecast is that Cruise will, by the end of 2019, have a vehicle that performs at between 5% and 11% of the safety level of average human driving, when it comes to frequency of crashes. As such, Cruise will miss its 2019 goal of deploying a commercial service, though it might deploy one with safety drivers.”
This is using a different way of measuring safety than disengagements. Templeton argues this way is better. Apparently, this is the metric companies like Cruise use internally. (You can read more about it in the article.)

It wouldn’t make sense for Waymo to be at 0.1% of average human safety while Cruise projects it will soon be at 5%+ (unless Cruise’s projection assumes crazy exponential progress). Cruise is ostensibly behind Waymo: the California DMV figures are a bit of a mess, but Waymo’s number is a lot better than Cruise’s.

The distinction between cautionary and necessary interventions could potentially explain it. If Waymo has determined that in 99.9%+ of its disengagements a crash would not have occurred anyway, then it might make sense to deploy driverless rides despite the high disengagement rate.
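That 99.9% figure isn't arbitrary, by the way. Using the same rough numbers as above (mine, not Waymo's), it's about where the rate of truly necessary interventions would match the average human crash rate:

```python
miles_per_disengagement = 500
necessary_fraction = 1 - 0.999   # share of disengagements that were genuinely crash-preventing

miles_per_would_be_crash = miles_per_disengagement / necessary_fraction
print(miles_per_would_be_crash)  # ~500,000 miles, about the average human crash rate
```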

Waymo has remote monitoring, but as I understand it, that can’t be used to avert a crash in the same way a safety driver can.
 
A Waymo rider who talked to Ars Technica and another Waymo rider who did a Reddit AMA (archive) both said Waymos have a disengagement about every 5 rides or so. ...
Both of those articles are nearly a year old. One would hope Waymo has improved since then.
  1. Was the disengagement serious? Was the safety driver just being paranoid?
  2. There is a lot of ebb and sway in driving. We can call it dangerous, but it would not necessarily cause an accident.
I'm confident Waymo's disengagement rate is much higher than those reports suggest. The biggest problem for Waymo that I've heard of is that the software was/is too tentative: difficulty with congested unprotected left-hand turns and merging. If there was a disengagement because the car wasn't doing anything, that validates what I've heard. From reading reports, the navigation software still avoids left-hand turns and will take a circuitous route when there is a passenger. I wouldn't be surprised if it still does that when they launch a public service.
 
Both of those articles are nearly a year old. One would hope Waymo has improved since then.

Yeah, but have they improved 10,000x?

I'm confident Waymo's disengagement rate is much higher than those reports suggest. The biggest problem for Waymo that I've heard of is that the software was/is too tentative: difficulty with congested unprotected left-hand turns and merging. If there was a disengagement because the car wasn't doing anything, that validates what I've heard.

Good point. I've heard anecdotal examples of the safety driver taking over just because the Waymo van was sitting still for too long. I think that would fall under Brad Templeton's term “cautionary disengagements”, but it isn't even always cautionary.

I saw a reporter on Twitter say the Waymos have gotten less hesitant over the last year, which is great.

From reading reports, the navigation software still avoids left-hand turns and will take a circuitous route when there is a passenger. I wouldn't be surprised if it still does that when they launch a public service.

Interesting info, thanks.
 