Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
Tesla is no more generalized than Waymo, Cruise, Zoox, NIO, Huawei, Mobileye, etc.

If this is true, then why aren't these companies testing their generalized solutions in more locations at once? What benefit do they have to assigning a fleet of 50 to a single city, when they could deploy one vehicle to 50 of the largest cities in the US to demonstrate their technology?

Just so happened to be looking at how long it's been taking Waymo to begin a new service in NYC, and found a familiar face asking a question:
They began that mapping work in 2021. It's 2023 now, and we still don't have insight on when they'll begin service. Surely a generalized solution doesn't take that long to expand.
 
I'm saying Tesla's system has a better foundation for generalized autonomy than one that's reliant on careful fine-tuning for specific cities. It's not yet at the point where they can remove the driver, but some day it might be.
Sounds like the game Tesla plays - send out an undercover team and car for a couple of weeks to dial in an intersection for UPLs, as well as other viral-video intersections and scenarios. And even then it takes a few revisions before they have a minimally working solution. It's contrived and unscalable.
 
That was my point, people keep conflating "legal liability" with "DDT responsibility" and it leads them to factually wrong conclusions like "You can't have an L3 system without the car maker taking liability"

You can, because one has literally nothing to do with the other in SAE terms. I agree it might well impact how consumers react to such a system, but it's not required for it to be a certain level.
I respectfully disagree. I don’t think you can in practice. Once you claim the car does the full OEDR in some ODD you open yourself up to all sorts of legal liability regardless if you say “I assume liability” specifically or not.
 
We can personally confirm or reject any claim Tesla makes about FSD Beta, but there's no way to verify anything Waymo has said.
No we can’t. We can test a particular version with its current map data on a particular route. That is hardly the same thing.

What makes you think Tesla has a better solution than anyone else? They have been playing catchup and copying everyone else, and still are. Can you name one novel approach that Tesla is using that the others aren’t?

Tesla's system is deployable nowhere, while others are deployed driverless somewhere. Waymo’s Driver is built with the sole purpose of being generalizable. They don’t deploy a separate version for each city.

This is a competition in reliability.
 
That 80% figure implies people mostly living within the dense core/boundaries of the major well-known cities (e.g. Atlanta, Dallas, Miami, etc). But it's instructive to look at what they consider a city/urban area, especially as it relates to your local area.

For example, the one giant blob they call "Atlanta" actually covers a ton of smaller towns and suburbs that are very much less urban than Atlanta itself. And in the list of "urban areas" there are a ton of small towns that most people would consider "out in the country". In these large blobby areas there are a ton of people driving significantly more than this way-overblown "35 miles a day" just to get to work and back; had a planned job change/move worked out in 2020/2021, I'd have been living and commuting within that Atlanta blob and driving close to 90 miles every day. But I wouldn't consider where I grew up/planned to live to be anything "urban" based on actual lifestyle and living conditions.

Point is, a lot of these "urban" areas where it's suggested people will do away with personally-owned vehicles in favor of shared/autonomous robotaxis are not nearly as dense as the "in a city" figure would imply.
 
What makes you think Tesla has a better solution than anyone else?

I didn't say better; I said more generalizable. Objectively, FSD Beta is less performant than Waymo or Cruise or Mercedes right now. But eventually, they'll upgrade their firmware to be level 3. And when they do, it will be level 3 across all of North America, and they will have passed Mercedes both in scope and performance overnight. And then after that, they'll upgrade their firmware to be level 4. And when they do, it will be level 4 across all NA, and they will have surpassed Waymo and Cruise both in scope and performance.

The novel approach is working within hardware constraints, such that their firmware can be deployed to a fleet of millions of existing vehicles.
 
I didn't say better; I said more generalizable. Objectively, FSD Beta is less performant than Waymo or Cruise or Mercedes right now. But eventually, they'll upgrade their firmware to be level 3. And when they do, it will be level 3 across all of North America, and they will have passed Mercedes both in scope and performance overnight. And then after that, they'll upgrade their firmware to be level 4. And when they do, it will be level 4 across all NA, and they will have surpassed Waymo and Cruise both in scope and performance.

The novel approach is working within hardware constraints, such that their firmware can be deployed to a fleet of millions of existing vehicles.
This reasoning reminds me of the underpants gnomes.

1. Build cars and promise autonomy.
2. ???
3. Profit.

Are you seriously claiming that Tesla’s secret sauce for building the most reliable product is to use the cheapest possible hardware on the cars?
 
Are you seriously claiming that Tesla’s secret sauce for building the most reliable product is to use the cheapest possible hardware on the cars?

Again, you're really bad about putting words in my mouth. I didn't say most reliable. I said most generalizable. Obviously covering a car in LIDAR will help guarantee it will never hit another 3D object. But maybe vision-only will be sufficient.
 
Again, you're really bad about putting words in my mouth. I didn't say most reliable. I said most generalizable. Obviously covering a car in LIDAR will help guarantee it will never hit another 3D object.
You said that Tesla's novel approach was to work within hardware constraints. I am glad we agree that more modalities, better sensing, and redundancy play a role in reliability.
But maybe vision-only will be sufficient.
Now you're saying "maybe", above you said "when" and "eventually", as if it were a mere matter of a few releases and iterations. This is the problem with the whole Tesla camp tbh.

Right now almost everyone in computer vision research says "maybe", and that it will require a few real breakthroughs for it to pan out. 95% isn't going to cut it for safety-critical. Not 99.99% either.
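A quick back-of-envelope sketch of what those percentages mean in miles. The per-mile framing and independence assumption are mine, not something stated in the post:

```python
# Rough illustration: translate a per-mile success rate into expected
# miles between failures, assuming failures are independent per mile.
def miles_between_failures(success_rate_per_mile: float) -> float:
    """One failure expected every 1/(1 - p) miles."""
    return 1.0 / (1.0 - success_rate_per_mile)

for rate in (0.95, 0.9999):
    print(f"{rate:.2%} per-mile success -> one failure every "
          f"{miles_between_failures(rate):,.0f} miles")
```

Even 99.99% per mile works out to a failure roughly every 10,000 miles, which is why "a few nines" still falls short for safety-critical driving.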
 
OK. You may want to pass that message on to Elon. :D

Just to refresh, this was the original post I was referring to:



Your claim was that the “vast majority of people” would not benefit from a “city-only” approach. My point was that this approach is perfectly fine since a “vast majority” (80%) of people live in those cities and travel on average less than 30 miles a day. If you add in some limited access highways you probably get 75% of all use cases.
Yes, I know. My point was that people living in cities travel outside the cities so one can’t generalize the abilities and usefulness of a geofenced FSD system based on where someone lives.

See also @gtae07 ’s post above.
 
I would say Mercedes is far less performant than any of them. Its only claim to fame is being level 3 but it's so restricted that you really can't compare it to the others.
Apples and oranges. L3 eyes-off in traffic jams is nice, and surely you understand Drive Pilot operates in L2 as well, where the human has to watch the road and is legally driving? How is their L2 more restricted than, for example, Blue Cruise or Super Cruise?

Which are "the others"?
 
Now you're saying "maybe", above you said "when" and "eventually", as if it were a mere matter of a few releases and iterations. This is the problem with the whole Tesla camp tbh.

Few things about the future are 100% guaranteed, or 100% impossible. And only Siths deal in absolutes. That's the reason I began this conversation in the first place: most people here seemed to agree that Tesla achieving level 4 this year was impossible. It might be improbable, but that's a far cry from impossible.

It's worth remembering that Elon Musk himself often quantifies many of his ambitious goals as having a "non-zero probability." But if you had to identify one characteristic that has helped Elon build his companies in the past, it would be perseverance in the face of long odds.
 
Few things about the future are 100% guaranteed, or 100% impossible. And only Siths deal in absolutes. That's the reason I began this conversation in the first place: most people here seemed to agree that Tesla achieving level 4 this year was impossible. It might be improbable, but that's a far cry from impossible.
They have gotten from 5 miles per failure two years ago to perhaps 10-12 miles per failure on recent versions. How do you reckon they would get to 50,000 miles? Level 4 on the whole North American road network? LOL. Give me a break.

This year is impossible. That's just a scientific fact. Even if someone had a break-through in CV tomorrow, they wouldn't likely have the time to implement it by end-of-year.
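To make that extrapolation concrete, a back-of-envelope sketch. It uses the rough figures quoted above (5 then ~10 miles per failure over two years, 50,000 as a driverless threshold), and the constant-doubling assumption is mine, not measured data:

```python
import math

# Rough thread figures, not measured data.
start_mtbf = 5.0        # miles per failure, two years ago
current_mtbf = 10.0     # miles per failure, recent versions
target_mtbf = 50_000.0  # suggested threshold for driverless operation
years_elapsed = 2.0

# If miles-per-failure keeps doubling at the observed pace,
# how long until the target is reached?
doublings_per_year = math.log2(current_mtbf / start_mtbf) / years_elapsed
doublings_needed = math.log2(target_mtbf / current_mtbf)
years_needed = doublings_needed / doublings_per_year
print(f"~{years_needed:.0f} years at the observed doubling pace")
```

At one doubling every two years, closing a 5,000x gap takes on the order of a couple of decades, which is the gap the "this year" claim has to explain away.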
 
First, no one really knows all the details of Tesla, Waymo or GM's approaches. We can make inferences based on observations, tweets (which can't actually be verified) and other available data, but there's still a lot we don't know. Because it's an available consumer product, people like verygreen can hack the system and see more data to give us a better idea, but even so a lot remains unknown.

Broadly there seem to be two approaches - getting hyper-accurate 'HD' mapping data so the car can navigate without processing anything and developing a human-capable system so the car doesn't need any data and can figure everything out locally. Clearly, all 3 companies are doing a mixture of the two but it appears that Waymo and GM are tilting more towards the former while Tesla is more on the other side.

The problem with the HD mapping approach is that heavily relying on mapping data to lessen the required processing abilities commits you to perpetually spending a large amount of resources to keep the mapping data up to date. Even with such efforts there will necessarily be a lag between any street level change and the mapping data, leading to issues.

I agree with @willow_hiller - the approach Tesla appears to be taking seems to be more generalizable. It may well be that all the companies' approaches wind up coalescing somewhere in the middle. If you think about a human driver, we are able to drive in a new area but are better drivers in familiar areas because we have a 'map' in our memory and know what to expect. The same applies to a computer.
 
First, no one really knows all the details of Tesla, Waymo or GM's approaches. We can make inferences based on observations, tweets (which can't actually be verified) and other available data, but there's still a lot we don't know. Because it's an available consumer product, people like verygreen can hack the system and see more data to give us a better idea, but even so a lot remains unknown.

Broadly there seem to be two approaches - getting hyper-accurate 'HD' mapping data so the car can navigate without processing anything and developing a human-capable system so the car doesn't need any data and can figure everything out locally. Clearly, all 3 companies are doing a mixture of the two but it appears that Waymo and GM are tilting more towards the former while Tesla is more on the other side.

The problem with the HD mapping approach is that heavily relying on mapping data to lessen the required processing abilities commits you to perpetually spending a large amount of resources to keep the mapping data up to date. Even with such efforts there will necessarily be a lag between any street level change and the mapping data, leading to issues.

I agree with @willow_hiller - the approach Tesla appears to be taking seems to be more generalizable. It may well be that all the companies' approaches wind up coalescing somewhere in the middle. If you think about a human driver, we are able to drive in a new area but are better drivers in familiar areas because we have a 'map' in our memory and know what to expect. The same applies to a computer.
But this is all based on prejudice rather than actual research? Am I right?

Waymo's Head of Research has said that if you cannot drive without a map, you cannot deploy autonomous vehicles at scale. There is constant roadwork and repainting of lines in a city and on the highway, yet Waymo drives there without interventions, reading hand-held signs and hand signals. How do you reckon that is?


Obviously Cruise and Waymo and other robotaxi companies have 360 Lidar, 360 Radar and 360 microphones and better cameras, plus sensor cleaning too, so this is not about maps.

We do know a lot about actual performance in the MTBF metric. Here FSDb sucks: 10-15 miles vs Waymo's 20,000+. We do know that Waymo has driverless operations and Tesla does not. We know that Tesla's progress in terms of MTBF is slow after 2 years of iterating.

If Tesla on Hw4 is driverless in the Boring tunnel in Vegas in three years from now I'd be shocked. That's a lot harder than you think. All these small details with ingress/egress on top of the actual driving...
 
I respectfully disagree. I don’t think you can in practice. Once you claim the car does the full OEDR in some ODD you open yourself up to all sorts of legal liability regardless if you say “I assume liability” specifically or not.

But that wasn't the discussion topic.

Folks keep claiming that since Tesla won't "officially" say they're liable, they CAN'T have a system >L2.

Which is factually untrue. The word liability does not even appear in SAE J3016, let alone anything about requiring a system maker to make any statements about it.

I'm not arguing you couldn't sue them ANYWAY for a >L2 system failure. I mean, people have sued over just L2 system failures (though they've generally lost).

That has nothing to do with determining the SAE level of the system though.


Then when it does, you can pay $2,000 for firmware acceleration, which is just a bug fix in the original acceleration algorithm.

I'm not sure you understand what the word bug means...