Welcome to Tesla Motors Club

Is Tesla closer than we think?

The Waymo vehicles without safety drivers rely on a few crutches:

  • driving only at low speeds
  • driving only along approved routes
  • starting and stopping routes at approved spots
  • falling back on remote assistance from humans
  • using hand-annotated HD lidar maps that are updated daily
  • constrained to a tiny geofenced area

Waymo driverless cars can drive up to 65 mph, not low speeds.

Waymo does not use hand-annotated HD maps. Waymo uses computer-generated HD maps that update automatically over the air, not daily.

But yes, Waymo has some restrictions. When you are putting live humans in the back seat of a driverless car, it makes sense to take safety precautions to make sure your driverless car won't injure or kill your customers.

Here is the Waymo ODD taken straight from the first responders guide:

[Image: table of the Waymo ODD from the first responders guide]
 
It should not be surprising — it should be obvious — that L4 in a highly constrained environment with lots of crutches is a much easier problem than L4 in the wild, with basically no constraints.

For Tesla to achieve human-level L4 driving with their FSD software would be a vastly larger technical achievement than getting to human-level L4 driving within Waymo’s constraints.

It’s not clear to me which is more difficult: making a driverless robot work in Waymo’s playpen or making an L2 robot work in the wild. It’s possible they’re about equally difficult.

What we cannot accept as sound reasoning is that L4, irrespective of constraints, is better or more impressive or more advanced than L2 in the wild simply because 4 is a higher number than 2. That is folly.

Waymo’s technology could not support an L2 system in the wild because it depends on crutches that Waymo only has within its playpen. If you stripped away the crutches and forced Waymo employees to re-develop the software for L2, I reckon you’d (eventually) end up with something comparable to FSD Beta.

Conversely, if you took Tesla’s technology and built a playpen for it in Arizona with all the same crutches Waymo uses, I bet you’d eventually end up with something comparable to Waymo’s driverless proof of concept.

If anything is going to break through the challenges in perception, prediction, and planning that continue to confound AVs, it will be the application of new approaches or new advances in old approaches — such as 4D vision, multi-task learning, self-supervised learning, imitation learning, and reinforcement learning — at the million-vehicle scale, with thoughtful data curation (using things such as active learning and shadow mode).
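
The "shadow mode" curation mentioned above can be sketched roughly like this. This is a hypothetical illustration, not Tesla's actual pipeline: the candidate model runs silently alongside the human driver, and only frames where the model's plan disagrees with what the driver actually did are flagged for upload. All names (`disagrees`, `curate`, the action strings) are invented for the example.

```python
# Hypothetical sketch of shadow-mode data curation: the model "shadows" the
# human driver, and only disagreement frames are kept for training.

def disagrees(model_action: str, driver_action: str) -> bool:
    """A disagreement is any frame where the model would have acted differently."""
    return model_action != driver_action

def curate(frames):
    """Keep only the interesting frames; the fleet discards the rest locally."""
    return [f for f in frames if disagrees(f["model"], f["driver"])]

frames = [
    {"t": 0, "model": "keep_lane", "driver": "keep_lane"},
    {"t": 1, "model": "brake",     "driver": "keep_lane"},  # model too cautious?
    {"t": 2, "model": "keep_lane", "driver": "swerve"},     # model missed something
]
uploaded = curate(frames)  # only frames t=1 and t=2 get uploaded
```

The point of the design is that the fleet itself does the filtering: with a million cars, uploading everything is infeasible, so only the frames that would actually teach the network something leave the vehicle.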

Solving L4 in the wild with this data is a fundamentally different problem, and a fundamentally easier one, than solving it with the data you can get from a few hundred vehicles. It requires neural networks to generalize much less. It trains them with an amount of data commensurate with what we've seen in successful AI projects.

Waymo has driven less than 1,000 fleet-years in total. Artificial agents that play modern, complex 3D games like StarCraft and Dota are trained on a different order of magnitude of experience: in the ballpark of 100,000 years, rather than 1,000.
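
A quick back-of-the-envelope calculation shows why the fleet-scale argument matters. The numbers here are my own assumptions (a million-vehicle fleet, roughly one hour of driving per car per day), not figures from either company:

```python
# How fast does a million-car fleet accumulate driving experience, compared to
# Waymo's roughly 1,000 fleet-years total? (Assumed inputs, for illustration.)
HOURS_PER_YEAR = 365 * 24        # 8,760 hours in a year
fleet_size = 1_000_000           # "million-vehicle scale"
hours_per_car_per_day = 1        # assumption: ~1 driving hour per car per day

fleet_years_per_day = fleet_size * hours_per_car_per_day / HOURS_PER_YEAR
days_to_match_waymo = 1_000 / fleet_years_per_day
# fleet_years_per_day is about 114, so the fleet matches Waymo's entire
# driving history roughly every 9 days.
```

Under these assumptions, a consumer fleet re-creates Waymo's total accumulated experience about every week and a half, which is the scale gap the argument above rests on.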

This is why we have to look beyond shallow comparisons between Waymo and Tesla. It is too simplistic to say Waymo has more advanced AI because 4 is a bigger number than 2. We have to look at the size of the problem — its scope, its constraints, its crutches, and also the resources, i.e. the data, that a company can use to solve it.
 
Waymo does not use hand-annotated HD maps.

From The New York Times (March 2017):

“So far the drive to create digital maps has been slow. Google’s former self-driving car division — now a company called Waymo — has created maps for roads around its headquarters in Mountain View, Calif., and a handful of other cities, including Austin, Tex., and Kirkland, Wash.
Waymo creates the maps by driving around cars equipped with spinning lidar units mounted on their roofs that shoot out laser beams, creating images of the road and the surroundings. Human engineers, in a time-consuming process, then go over the images and tag the objects that are found, like stop signs, buildings, stoplights and do-not-enter signs.”
 
What happens when real world conditions don't match the mapping data?

There are remote Waymo operators that will be alerted, and they can create new paths or override the warning, etc.

Perhaps Waymo believes that at some point, a single human operator can monitor a sufficient number of cars simultaneously, such that the cars can operate profitably.
 
What happens when real world conditions don't match the mapping data?

In a less advanced system like L2 Ford BlueCruise or GM Super Cruise, the system would shut down.

In a more advanced system like the Waymo L4 "Driver" (that's not a real human), there are layers of other systems to query in order to make a decision. It's not unlike the Tesla camera that recognizes a speed sign: when that fails, the system does not shut down; it goes to archived GPS data to continue the drive.

But of course, when all of them fail, Waymo calls it a "pause" while the car waits for additional inputs from the remote operator, or for one of the company's human safety drivers to drive it manually.

Note that the remote operator cannot "drive" the car but can help it by answering questions: is there an obstacle in front, is the rear clear, is that really a traffic cone... By inputting those sensor-data-like answers, the operator enables the car to decide to take an action, like driving forward.

The problem is that when the operator inputs that data, it can also make it more difficult for the car to make its own decisions.
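
The layered fallback described above can be sketched as a simple decision chain. This is my own illustrative model, not Waymo's actual architecture: each layer is tried in order, and only if every onboard layer fails does the car "pause" and wait for remote assistance. All names are hypothetical.

```python
# Sketch of a layered fallback chain: live perception first, archived map data
# second, and a remote-assistance "pause" only when every onboard layer fails.

def decide(layers, scene):
    """Try each layer in order; return the first layer that yields a decision."""
    for name, layer in layers:
        decision = layer(scene)
        if decision is not None:
            return name, decision
    return "remote_assist", "pause"   # all onboard layers failed

# Example layers for reading a speed limit (illustrative keys, not a real API).
perception = lambda scene: scene.get("camera_speed_limit")   # may return None
archived_map = lambda scene: scene.get("map_speed_limit")    # may return None

layers = [("perception", perception), ("map", archived_map)]
```

The design choice this illustrates is graceful degradation: a failed sensor reading demotes the system to a lower-fidelity source rather than shutting it down, and the human is only the last link in the chain.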
 
L5 is "driverless" by definition. But I will be ecstatic for L2+ or L3- without geofencing. Something the competitors cannot do.
Thank you! This man gets it. People keep saying "Level 5" and A) most apparently have no idea what it is, and B) when they really dig into the levels, most realize that Level 3 is more than sufficient (and is actually what many think Level 5 is anyway!).
 
...Waymo’s technology could not support an L2 system in the wild because it depends on crutches that Waymo only has within its playpen. If you stripped away the crutches and forced Waymo employees to re-develop the software for L2, I reckon you’d (eventually) end up with something comparable to FSD Beta...

1) It's a different philosophy for Google/Waymo, going back to 2009: it already tried L2 and concluded that it does not trust safety in the hands of human drivers, because humans are not reliable; they can get complacent, get tired, and fall asleep... It wants to shift the responsibility for safety to the automation system.

It doesn't mean Waymo is incapable of doing L2. It has already proven that its automation has been capable of reliably avoiding crashes since 2009. It wants to take responsibility, so there should be either a trained safety driver at the wheel or its own system driving with no one at the wheel; not a consumer at the wheel.

It was easier to train the machine with a safety driver at the wheel. Now that there are no safety drivers, consumer rides let the car get into unexpected scenarios without a human trainer onboard.

My speculation is that Waymo consumer rides will take a long time, years, decades, or our whole lives, to be perfected due to the difficulty of intelligence.

In the meantime, I think if Waymo implemented a more rigid L4 scenario, like shipping companies with unchangeable routes and unchangeable depots, that would be much more manageable for the automation system than the 50 square miles in Chandler, as long as there are no detours that introduce new wrinkles into the equations.

2) Tesla has a different philosophy. Tesla is not the one at fault when there's an accident while its automation system is at work: It's the driver's fault and that's why it's an L2. It believes that's how Tesla's system will get better. The massive amount of data generated by the fleet contributed by the cars' owners would help to train the system to become better very soon.
 
From The New York Times (March 2017):

“So far the drive to create digital maps has been slow. Google’s former self-driving car division — now a company called Waymo — has created maps for roads around its headquarters in Mountain View, Calif., and a handful of other cities, including Austin, Tex., and Kirkland, Wash.
Waymo creates the maps by driving around cars equipped with spinning lidar units mounted on their roofs that shoot out laser beams, creating images of the road and the surroundings. Human engineers, in a time-consuming process, then go over the images and tag the objects that are found, like stop signs, buildings, stoplights and do-not-enter signs.”

From 2017. They can auto-label most objects now. They don't need to do that by hand anymore.
 
What happens when real world conditions don't match the mapping data?

The autonomous car tries to figure it out based on its real-time perception, following its driving policy to stay as safe as possible. But if it is confused, it can "phone a friend". A remote observer can give it a clue. Remote observers don't control the car; they merely give the car a path or label an object to help it figure things out.

For example, if an autonomous car gets stuck behind a double parked delivery truck because it is not sure if the delivery truck is going to move or not, the remote observer can tag the delivery truck as a "stopped object" so the autonomous car then knows to go around it. Later of course, the engineers will refine the software with this edge case so that the autonomous car will understand it better the next time it happens, and hopefully won't need help again.
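
The "phone a friend" flow in the delivery-truck example can be sketched like this. It is a hypothetical illustration of the mechanism described above, not Waymo's software: the remote observer never drives; they only add a label to the car's world model, and the car's own planner then makes the decision. All names here are invented.

```python
# Sketch of remote assistance as label injection: the observer tags an object,
# and the car's own planner uses the tag to resolve its uncertainty.

def plan(world):
    """The car's own policy: wait behind an obstacle unless it is known-stopped."""
    obstacle = world.get("obstacle_ahead")
    if obstacle is None:
        return "proceed"
    if obstacle.get("label") == "stopped_object":
        return "go_around"            # safe to treat it as parked
    return "wait_and_ask"             # unsure: request remote assistance

def remote_hint(world, label):
    """Remote observer tags the obstacle; they cannot steer, brake, or accelerate."""
    world["obstacle_ahead"]["label"] = label

world = {"obstacle_ahead": {"kind": "delivery_truck"}}
first = plan(world)                   # car is stuck, asks for help
remote_hint(world, "stopped_object")
second = plan(world)                  # car decides on its own to go around
```

Note that in this framing the human only changes the car's beliefs, not its actions, which is exactly why the operator "cannot drive the car" yet can still unstick it.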
 
Perhaps Waymo believes that at some point, a single human operator can monitor a sufficient # of cars simultaneously, such that the cars can operate profitably.

No. Waymo hopes to eventually get rid of all remote operators. Waymo is improving the software until eventually the cars can handle everything and they won't need any remote operators at all.
 
Tesla has had Level 4 since at least 2016 😉


Tesla also has a consumer autonomy product in 50 states, which is 5000% better than Waymo.
Looks pretty deployed to me! No hands on the steering wheel.

Oh, a video that took hundreds of attempts to record, with an average of one safety disengagement every 3 miles.
Similar to what they were in the 2019 Autonomy Day test rides and 2021 FSD Beta 8.2.
Glad we see your logic in the open; it's not different from what it was in 2017.

 
This discussion thread is a great example of many discussions on this forum. It starts with a question on Tesla’s possible future achievements (or lack thereof), and it ends up in a « What about Hillary’s emails?!? » « what about Waymo?!? ». Now, I never interacted with a Waymo sales rep (if there is such a thing), but I sure interacted with a Tesla rep back in 2016 when ordering my AP2, and FSD was months away, well, obviously, once these damned regulators would let their beautiful achievement be unlocked!

Well, since Elon said back in April that the FSD subscriptions were a « sure thing » for next month, that gives us about a week to witness another achievement. So much winning…
 
No. Waymo hopes to eventually get rid of all remote operators. Waymo is improving the software until eventually the cars can handle everything and they won't need any remote operators at all.

Yes, the goal is to have none, but in the meantime it's to reduce their number as much as possible. At their current pace, they'll never eliminate remote operators. Right now, they have roadside assistance AND remote operators.
 
Yes, the goal is to have none, but in the meantime it's to reduce their number as much as possible. At their current pace, they'll never eliminate remote operators. Right now, they have roadside assistance AND remote operators.

Waymo is improving their FSD as well as learning a lot about the logistics of a driverless ride-hailing service. I am more optimistic than you. I believe that Waymo will reduce remote assistance faster than you think. I am really waiting for Waymo to deploy their ride-hailing service in SF with the 5th Gen. I think we will see a big improvement in the SF service.

By the way, I am not convinced that we will be able to send our cars out as driverless robotaxis to make money for us as Elon has touted, at least not anytime soon. Tesla does not have the logistics to support a driverless ride-hailing service. What happens when my Tesla gets stuck somewhere with a helpless passenger in the car? Tesla has no way to help my car. So I guess the passenger has to take over? Or what happens if my Tesla in driverless mode, gets into an accident on the other side of town while I am at work? What if the passenger decides to have my car take them on a 1000 mile road trip without telling me? There are good reasons why Waymo is geofencing, and using roadside assistance and remote operators for their driverless rides!

For these reasons, I suspect Tesla will keep driver supervision for a long time. And if you think Teslas will never get stuck even after they roll out "FSD", think again. If there is one thing that Waymo has taught us, it is that even super good FSD will still get stuck sometimes. Achieving FSD that never gets stuck is a much harder task than people thought.
 
The Waymo vehicles without safety drivers rely on a few crutches:

  • driving only at low speeds
Waymo goes up to 65 mph; is that low speed to you? The driverless Waymo goes 45+ mph routinely in JJRicks's videos.
  • driving only along approved routes
They drive anywhere within their 50-square-mile geofence, which is about the size of SF if I remember correctly.
They don't have approved routes; they do blacklist roads for various reasons (construction, accidents, technical issues, etc.).
That is quite different from having only approved routes; this is not a train or a bus.

  • starting and stopping routes at approved spots
It's not like they have approved spots; they have general areas where you can be picked up or dropped off (residential areas, businesses, etc.).
Which makes sense: you don't want to pick up or drop off rides on the side of a highway, on major streets, or at private businesses that don't approve, or Waymo will be sued.
However, they do blacklist areas for various reasons.
  • falling back on remote assistance from humans
As has been stated, the remote ops don't joystick the car in any way; they can't accelerate, brake, or e-stop.
People have this magical thinking that Tesla will just flip a switch, every car will be magically autonomous, and Tesla will just go on vacation.
It will be hilarious when Tesla starts employing remote ops in a geofenced city where they are attempting to run a service, and people start scrambling to explain how remote assistance is okay now.
 
New guy here planning to order an M3 Long Range AWD this week. Looking for some advice regarding ordering the FSD computer at this time for $10,000. Do the knowledgeable folks here recommend buying it now, or waiting until the FSD progresses in the next 2 years? Thank you for your advice.