Autonomous Car Progress

Don't be so quick to judge the ranking. It really depends on how you weight the different metrics. Zoox has driverless, but only a few of their custom pod vehicles, which only drive a 2-mile loop on their HQ campus, and only for employees so far. Tesla does not have driverless, but they have hundreds of thousands of vehicles using FSD beta with supervision everywhere in the US. And FSD beta is improving and, at least in theory, could eventually become "eyes off" if the intervention rate gets good enough.

So which is better: four driverless pods that only drive a 2-mile loop with employees, or 600k cars with self-driving that requires driver supervision? I would put Tesla ahead of Zoox since the Zoox driverless is so limited. Zoox has not shown any ability to scale whatsoever. At least Tesla is testing FSD beta at scale, even if the software is not "eyes off" yet. And AFAIK, Nvidia does not have any AV deployments at all; they just have hardware, simulations and ADAS in some cars. So I would put Tesla, which is testing FSD beta on 600k cars, ahead of Nvidia.
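To illustrate what I mean about weighting, here is a toy sketch; all the scores and weights are made up for illustration, not taken from any actual ranking:

```python
# Toy illustration of how metric weights can flip a ranking.
# All scores and weights are made up for illustration only.

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-metric scores (0-10) using the given weights."""
    return sum(weights[m] * scores[m] for m in weights)

# Hypothetical 0-10 scores on two metrics.
tesla = {"driverless_capability": 0, "scale_of_testing": 9}
zoox = {"driverless_capability": 6, "scale_of_testing": 1}

# Weight driverless capability heavily and Zoox ranks first...
w1 = {"driverless_capability": 0.8, "scale_of_testing": 0.2}
# ...weight scale heavily and Tesla ranks first.
w2 = {"driverless_capability": 0.2, "scale_of_testing": 0.8}

for w in (w1, w2):
    print("Tesla:", weighted_score(tesla, w), "Zoox:", weighted_score(zoox, w))
```

Same data, opposite conclusions. That is why I don't put much stock in any single ranking.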
I'm suddenly seeing Obi-Wan talking about a "certain point of view". :) I'm excited that Tesla is in the top 5.
 
Interesting that Mobileye is committed to tele/remote operation for full self-driving. How long before Tesla copies them? :)

Just to be clear, Mobileye does not say that tele-operation is required for all full self-driving. They say tele-operation is only needed for driverless/robotaxi operation, since there is no human in the car to intervene if the AV gets stuck. It is not needed for "eyes off" on consumer cars, since the human can serve the role of tele-operator if the car gets stuck.

Also, Mobileye does not want to get into the tele-operation business. That is why they decided that they will not manage their own robotaxi fleets. Instead, they will license Mobileye Drive to other companies who will manage the robotaxi fleet and tele-operations.
 
Naysayer:

Engineering whistleblower explains why safe Full Self-Driving can't ever happen


"artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist."

One crash per how many miles:
Waymo+Cruise: 60,000
US drivers: 600,000
Tesla Autopilot+drivers: 4.85 million

It sounds like the machine alone, without drivers, crashes more often than humans driving alone.

Human drivers alone crash more often than the human/machine hybrid.

It sounds like the hybrid is best, so Waymo should let humans help its machines drive.
 
"artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist."

This is missing a critical "yet."

There's a computer scientist named Ajeya Cotra who is doing some interesting research on benchmarking the performance of machine learning models. After trying many different metrics, her team landed on "biological anchors" as a metric that is both correlated with capability and comparable across different architectures. The idea is to compare a model's parameter count to the size and configuration of biological brains; presumably, human-level intelligence arrives at approximately the point when it becomes feasible to train a model comparable in scale to a human brain.

So far, GPT-2 is roughly comparable to the brain of a honeybee, and GPT-4 is up to the level of a squirrel. So we're still far off, but Cotra's median prediction (the point where arrival is as likely before as after) for reaching human level is in the late 2030s.
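For a rough sense of the scale involved, here is a back-of-envelope sketch. The synapse counts are common order-of-magnitude estimates from the neuroscience literature, GPT-4's parameter count is not public so it is omitted, and none of this is Cotra's actual methodology:

```python
# Back-of-envelope "biological anchors" comparison. Brain figures are
# rough order-of-magnitude estimates; GPT-4's parameter count is not
# public, so it is omitted. This is a sanity check, not Cotra's model.
import math

gpt2_params = 1.5e9        # published GPT-2 parameter count
honeybee_synapses = 1e9    # ~1 billion synapses (rough estimate)
human_synapses = 1e14      # ~100 trillion synapses (rough estimate)

for name, synapses in [("honeybee", honeybee_synapses),
                       ("human", human_synapses)]:
    gap = math.log10(synapses / gpt2_params)
    print(f"GPT-2 vs {name}: {gap:+.1f} orders of magnitude")
# GPT-2 vs honeybee: -0.2 -> roughly comparable, as claimed above
# GPT-2 vs human:    +4.8 -> still about five orders of magnitude away
```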
 
One crash per how many miles:
Waymo+Cruise: 60,000
US drivers: 600,000
Tesla Autopilot+drivers: 4.85 million

It sounds like the machine alone, without drivers, crashes more often than humans driving alone.

Human drivers alone crash more often than the human/machine hybrid.

It sounds like the hybrid is best, so Waymo should let humans help its machines drive.

Those stats are misleading because they are not apples to apples. The Waymo+Cruise stats are from city driving, where collisions are more frequent, whereas the Tesla AP stats are from highway driving, where collisions are less frequent.

Certainly, adding an attentive driver can make a self-driving system safer. Mobileye argues that there is a natural synergy between humans and machines because their failure modes are different. Humans often fail in easy or routine driving situations because we are tired or distracted. Machines do not get tired or distracted, but they can fail in more difficult driving situations where an attentive human would not. So the FSD system can add safety in the routine scenarios where humans get tired or distracted, and an attentive human can add safety in the edge cases that the FSD system can't handle.

But I think implementing an effective human/machine hybrid self-driving system is tricky, because the human and the machine need to be in sync. If the human is attentive and ready to intervene but the machine is fine and does not need an intervention, that adds no extra safety. If the machine needs an intervention but the human is either not attentive or does not realize they need to intervene, that adds no extra safety either. The human can even be attentive but intervene too late, after the FSD failure has already occurred, which again adds nothing.

The only way for a human/machine FSD system to be safer is for the human to be both attentive and able to recognize that an intervention is needed at the exact moment the FSD requires one. So you need a way to keep the driver attentive at all times (a driver monitoring system), and the human also needs to be able to identify when the FSD system will require an intervention. When your FSD system is poor, this is less of an issue, since the human knows they need to intervene a lot; the human will be ready. But as the FSD gets more reliable and interventions become less frequent, it gets harder and harder for the human to know when an intervention is required. They may decide not to intervene because they think the FSD can handle a situation, only for it to fail.

So I believe there will be diminishing returns in safety as the FSD improves. Eventually, your FSD will be "safe enough" that having a safety driver does not add much safety, and may even reduce safety if the human intervenes and causes crashes when the FSD did not actually need an intervention. I think we saw cases with Waymo where a safety driver had a crash that simulation showed the Waymo Driver would have handled safely. The safety driver actually reduced safety by not trusting the Waymo Driver. I think that was a key moment when Waymo realized they were ready to remove the safety drivers and go driverless.
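To make the diminishing-returns argument concrete, here is a toy model; every rate in it is hypothetical:

```python
# Toy model of the diminishing-returns argument. All rates are
# hypothetical. Assume a crash happens only when the system needs an
# intervention AND the human fails to catch it in time.

def hybrid_crash_rate(machine_failures_per_mile, p_human_catches):
    """Crashes per mile for the human+machine hybrid."""
    return machine_failures_per_mile * (1 - p_human_catches)

# Early, unreliable system: frequent failures keep the human alert.
print(f"{hybrid_crash_rate(1 / 10_000, p_human_catches=0.999):.1e}")    # 1.0e-07

# Mature system: failures are rare, so assume a complacent human only
# catches a few of them. The hybrid barely improves on the machine
# alone (1e-06 per mile), and a bad intervention could even erase that.
print(f"{hybrid_crash_rate(1 / 1_000_000, p_human_catches=0.10):.1e}")  # 9.0e-07
```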

But it also depends on what type of FSD you are trying to build. If your product is designed for consumer cars, it makes sense to go the human/machine hybrid approach since there will be a human in the driver seat anyway. That is why so many carmakers are doing "hands off" systems that require driver supervision. But Waymo is pursuing a robotaxi business model. A human/machine hybrid does not make sense for robotaxis. And as you probably know, Waymo tried the human/machine hybrid approach in 2013 and found that the human was too complacent. At that moment, they decided to go all-in on the driverless approach, believing that it is better to focus on making the machine safer than humans, rather than rely on the human to improve safety.
 
Engineering whistleblower explains why safe Full Self-Driving can't ever happen

"artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist."
And DeKort bizarrely claims AGI is "the only path to true FSD". Whatever "true FSD" means. I just want to wake up at my destination. Don't need AGI for that, as Waymo shows.

One crash per how many miles:
Waymo+Cruise: 60,000
US drivers: 600,000
Tesla Autopilot+drivers: 4.85 million
Completely different crash types.

Tesla: severe crashes, e.g. airbag deployments
US drivers: police reported crashes (~10x higher than airbag deployments)
Waymo: every single contact event with a moving or stationary object or person, no matter how minor, even if no damage

Show me a 100k mile Tesla that's pristine, never any body work and not a single wheel scrape from a curb or scratch or ding or window crack from a bumper or bollard or road debris. Then tell me about 4.85 million miles.

Real numbers, like the Swiss Re study of Waymo driverless miles, show Waymo is much safer than humans. Also note that >90% of the Waymo crashes filed with the DMV are the fault of the other (i.e. human) party, not Waymo. And the few that were Waymo's fault were exceedingly minor, e.g. scraping a curb or brushing a parked car. A >10:1 ratio is a pretty strong clue, no?

Human drivers, he said, are scanning their environment all the time
Right. Except when they're drowsy. Or drunk. Or high. Or texting. Or reading a clever billboard. Or distracted by a miniskirt.....

Self-driving cars would have to clock billions to hundreds of billions of miles using their current methods to achieve a fatality rate in line with that of human drivers: one per 100 million miles, a 2016 study by Rand found.
Repeatedly debunked. Why exclude the 99%++ of wrecks which are not fatal? Makes no sense. Again, see the Swiss Re study for confidence intervals with a fraction of that many miles.
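To see why counting all crashes instead of only fatalities needs so many fewer miles, here is a rough Poisson sketch; the precision target is a common rule of thumb I'm supplying, not Rand's or Swiss Re's actual method:

```python
# Why counting all crashes needs far fewer miles than counting only
# fatalities: with a Poisson event model, the confidence interval
# tightens with the NUMBER of observed events, so the miles required
# scale inversely with the event rate. Rates are the thread's figures;
# the precision target is a common rule of thumb, not Rand's method.

def miles_needed(events_per_mile, rel_precision=0.10):
    """Miles so a 95% Poisson CI has ~+/-10% relative half-width.

    With N events the relative half-width is ~1.96/sqrt(N),
    so N ~= (1.96 / rel_precision) ** 2 (~384 events).
    """
    n_events = (1.96 / rel_precision) ** 2
    return n_events / events_per_mile

print(f"{miles_needed(1 / 100e6):.2g} miles")  # fatalities: ~3.8e10
print(f"{miles_needed(1 / 600e3):.2g} miles")  # police-reported: ~2.3e8
```

Same statistical confidence, two orders of magnitude fewer miles, just by counting the crashes that actually happen.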

"They can do a lot of it. They'll make progress, which is why they are where they are," he said. "But they will not get far enough to where they're better than a human."
I don't know who "they" are in this quote, but Waymo is safer than the average human already. So what's he trying to say?
 
I don't know who "they" are in this quote, but Waymo is safer than the average human already. So what's he trying to say?
Subjectively, I too think Waymo is safe: it hasn't collided with many obstacles, there have been no serious damages or injuries, and there has been no human fatality. (One dog hidden by a parked car jumped out in front and was fatally hit. A backup driver was present but said it was unavoidable by both machine and human, so I won't count that one for now.)

However, Cruise has gotten so many complaints about stalls that it casts doubt on whether the technology is intelligent enough to stop freezing up.
 
Tesla: severe crashes, e.g. airbag deployments
US drivers: police reported crashes (~10x higher than airbag deployments)
Actually wasn't it established up thread that it's only about 3x higher than airbag deployments?
 
One factor is sometimes overlooked. FSD may occasionally require human intervention, but the human driver may not recognize it and neglect to intervene.

This situation can often be alleviated if FSD itself recognizes that an intervention is needed and alerts the human driver. In fact, Tesla's Autopilot as well as FSD do that.

So, how often does the car recognize such situations? If the car recognized 90% of such cases, the likelihood of a serious problem would shrink to 10% of what it would otherwise be.
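To put toy numbers on that, assume a serious problem requires the car to fail, fail to alert, and the unaided human to miss it as well; both probabilities below are hypothetical:

```python
# Toy numbers for the 90% example. A serious problem needs the car to
# fail, fail to alert, AND the unaided human to miss it too. Both
# probabilities below are hypothetical.
p_car_alerts = 0.90      # car recognizes it needs help (the 90% above)
p_human_unaided = 0.50   # hypothetical: attentive human catches it anyway

p_uncaught = (1 - p_car_alerts) * (1 - p_human_unaided)
print(f"{p_uncaught:.0%} of needed interventions go unnoticed")  # 5%
```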

This is especially important with remote control, where no driver is present until a remote operator connects to the car. A workaround is a help button that robotaxi passengers can press to alert a remote operator. I would guess current robotaxis have that already.
 
Actually wasn't it established up thread that it's only about 3x higher than airbag deployments?
I don't think so, but I'm not certain. Anecdotally, I don't know anyone who's been in an airbag wreck in the past 20+ years. That covers a couple dozen wrecks among family, friends and myself. I can't square that with a 28% deployment rate.

The 2023 CRSS document says there are ~6 million police-reported crashes a year. At 28%, that's 1.7M airbag deployments. I've never seen a number above 600k, though I can't find a good recent source. The doc also says this:
Complex sample design features employed in CRSS data collection should be considered in analysis of the CRSS data. Treating the CRSS sample as a simple random sample in estimation may cause severe bias to both point estimates and standard error estimates.
I think your 28% treats the data as a simple random sample, as they warn against. It seems you need special s/w to find the actual percentage:
Specialized computer software for complex survey data analysis, such as SAS PROC SURVEY procedures and SUDAAN procedures, should be used for CRSS data analysis along with proper design statements
All IMHO. I didn't read the entire 353 pages (!!) and didn't fully understand every part I did read.
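To illustrate the kind of bias they're warning about, here is a minimal sketch of weighted vs. unweighted estimation; the weights are invented to show the mechanism, not the actual CRSS design weights:

```python
# The kind of bias the CRSS manual warns about: if severe crashes are
# over-sampled, each record needs its design weight. Numbers here are
# invented to show the mechanism, not the actual CRSS weights.
import numpy as np

# 1 = airbag deployed. Suppose deployment crashes are over-sampled 4:1.
deployed = np.array([1] * 280 + [0] * 720)
weights = np.array([1.0] * 280 + [4.0] * 720)  # hypothetical design weights

print(f"unweighted: {deployed.mean():.0%}")                        # 28%
print(f"weighted:   {np.average(deployed, weights=weights):.0%}")  # 9%
# Same records, very different population estimate.
```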
 
I don't think so, but I'm not certain. Anecdotally, I don't know anyone who's been in an airbag wreck in the past 20+ years. That covers a couple dozen wrecks among family, friends and myself. I can't square that with a 28% deployment rate.
28% among police-reported accidents. It doesn't include accidents that are not reported to the police. As mentioned up thread, there are states where accidents are not required to be reported to the police, only to the DMV. The NHTSA data does not capture those accidents.
The 2023 CRSS document says there are ~6 million police-reported crashes a year. At 28%, that's 1.7M airbag deployments. I've never seen a number above 600k, though I can't find a good recent source. The doc also says this:

I think your 28% treats the data as a simple random sample, as they warn against. It seems you need special s/w to find the actual percentage:

All IMHO. I didn't read the entire 353 pages (!!) and didn't fully understand every part I did read.
I couldn't find any reliable source for modern airbag deployment stats back then, which is why I did it based on CRSS as a sanity check. Older stats are not particularly useful given that the number of airbags in cars has gone up drastically (with Tesla being an example), and older cars without airbags have been retired from the road. For example, a modern car with side airbags can deploy those without deploying the regular front airbags, so stats from an era when most airbags were front-only (or even driver-only) would fail to accurately represent the relative prevalence.

Yes, doing it all based on just the CRSS does treat it closer to a random sample, but my analysis actually doesn't do that for the general case, only among police-reported accidents. I don't really see any obvious factors that would necessarily change the stats by another 3.3x.
 
28% among police-reported accidents.
To be clear, I was only talking about police reported accidents among family and friends. Not parking lot scratches and such.
For example, a modern car with side airbags can deploy those without deploying the regular front airbags,
Yes, that can happen. On the flip side, algorithms have improved over the years to reduce unnecessary deployments which were more common in early cars.
Yes, doing it all based on just the CRSS does treat it closer to a random sample, but my analysis actually doesn't do that for the general case, only among police-reported accidents.
The CRSS data is not random even among police reported accidents. They intentionally over-sample certain types of accidents and under-sample others. I don't know how much it affects your 28%, but they explicitly warn about possible "severe bias" in point estimates.

I'm not saying 28% is wrong, just that it doesn't line up with my experience. My gut feel is also influenced by time spent looking at late-model wrecked cars. Many have salvage titles, but virtually all went to auction instead of being repaired. I'd say 10-15% have blown bags, and I always figured these represented the "worst of the worst" in terms of severity, since late-model cars are usually worth repairing and keeping. Again, purely anecdotal, but another reason I can't get comfortable with 28%. I'd much rather have a hard number, of course. It seems really weird that nobody publishes total deployments per year. Almost like they're hiding it.
 
What is the population density requirement for these companies to operate? An urban area could be 1 million people, or it could be 2,000.

The robotaxi companies will go to cities where they feel they can make a profit. So the population density and the demand for a robotaxi service do need to be high enough. But I don't think the robotaxi companies have said what that number is.
 
What is the population density requirement for these companies to operate? An urban area could be 1 million people, or it could be 2,000.

This will depend entirely on the maintenance requirements of the vehicles, including unusual events like retrieving them when they're stuck or broken.

If vehicles can almost always be unstuck by remote assistance, and can drive themselves to a large maintenance facility, then the population threshold for service drops significantly. The only local staff would be someone to recharge and clean the vehicles a few times per day. This might even be partly done by gig-economy contractors.

Likewise, if they can let any CAA-licensed tow-truck driver unstick a vehicle for $50, it's much easier to support low-density service than if they need full-time staff in a chase vehicle.
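To put rough numbers on that, here's a toy break-even sketch; every figure in it is a hypothetical placeholder:

```python
# Toy break-even sketch for low-density service. Every number is a
# hypothetical placeholder; the point is only that lighter local
# support (gig cleaning, pay-per-rescue tows) lowers the population
# threshold.

def min_population(daily_support_cost,
                   margin_per_trip=3.0,         # $ profit per trip (made up)
                   trips_per_capita_day=0.02):  # demand per resident (made up)
    """Population needed for trip margins to cover local support costs."""
    trips_needed = daily_support_cost / margin_per_trip
    return trips_needed / trips_per_capita_day

print(round(min_population(2000)))  # full-time local staff: ~33,333 people
print(round(min_population(300)))   # gig cleaners + $50 tow rescues: 5,000
```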

The robotaxi companies haven't said because they don't know yet. They haven't launched their purpose-built vehicles, nor do they know what market share they'll be able to obtain (how many trips per capita they can expect).