
Autonomous Car Progress

Note that the teleoperators do not directly drive the car. They merely provide suggestions to the car but the car never disengages from autonomous driving. They can draw a recommended path if the autonomous car is not sure what path to take. For example, if the autonomous car is not sure how to navigate a construction zone, the teleoperator can draw a path for the car to take. And they can label objects if the car is not sure how to respond to an object. For example, we see that the car is confused by a stopped delivery truck.

I used to mention this as a strategy to avoid a “disengagement,” and people thought I was being silly. @Bitdepth

 
They aren't as reckless with your life as Elon Musk is.
In Elon's mind, he is saving many more people than those who get killed. I tend to agree with Elon, but I'm not going to be one of the first to test this life-saving tech with my eyes closed.

On the flip side, talking about Waymo: I've felt for a long time that if they made a wider release of their tech, they would save many lives. For example, they said a couple of years ago that freeway driving isn't challenging. So why not release a Waymo freeway driver to car companies? Waymo won't risk blood on their hands; they would lose millions in lawsuits if that happened, plus millions in brand reputation for Waymo and Google. Because of such strong risk aversion, we won't get Waymo soon enough to save lives.
 
Because the most advanced AV really isn't that advanced. o_O
It's bizarre to me that people expect the very first real AV to be so advanced that it works everywhere. Why wasn't the Wright brothers' plane able to fly across the Atlantic?
Yeah, in ten years we will be looking back at today's "driverless" cars and laughing at how bad they were. In twenty years we will look back at the tech from 2030 and laugh at how primitive it was. By that time, it will probably be considered barbaric to let a human drive in urban environments.
 
In Elon's mind, he is saving many more people than those who get killed. I tend to agree with Elon, but I'm not going to be one of the first to test this life-saving tech with my eyes closed.

On the flip side, talking about Waymo: I've felt for a long time that if they made a wider release of their tech, they would save many lives. For example, they said a couple of years ago that freeway driving isn't challenging. So why not release a Waymo freeway driver to car companies? Waymo won't risk blood on their hands; they would lose millions in lawsuits if that happened, plus millions in brand reputation for Waymo and Google. Because of such strong risk aversion, we won't get Waymo soon enough to save lives.
Yeah, that's not how I interpret Waymo's safety data. It seems like they're only now reaching a bit better than average human safety in their geofenced area. And to achieve that level of safety they need remote assistance frequently.
You seem obsessed with this idea of sacrificing lives to develop autonomous vehicles. Are you saying that Tesla should deploy robotaxis before they're safer than humans?
 
... You seem obsessed with this idea of sacrificing lives to develop autonomous vehicles.
No. What makes you think that? If you're saying that even when driverless cars are safer than the average human there will still be deaths, then yes, that's true. There is a sacrifice with humans driving, and also with robots driving if they have faults but are still better than the average human driver.
Are you saying that Tesla should deploy robotaxis before they're safer than humans?
No. Although "safer than humans" might be debatable; exactly what does that mean? I don't think the public will allow this until the chance of death from a robot is very close to zero. I would argue for a lower threshold, like 2x safer than the average human. Anyone can decide to engage and disengage.
 
No. What makes you think that? If you're saying that even when driverless cars are safer than humans there will still be deaths, then yes, that's true. There is a sacrifice with humans driving, and also with robots driving if they have faults but are still better than the average human driver.
Because you keep bringing it up! I'm glad you've clarified that you simply mean they should be deployed once they're safer than humans. I don't think Waymo vehicles are as safe as you think. It's a very hard thing to determine. One incident every 130k miles isn't that great, and their sample size of 6 million miles is too small. I'm certainly optimistic though.
I get confused around here because some people talk about Tesla like they're close to release, but they're not even remotely close to human-level performance yet.
 
... I don't think Waymo vehicles are as safe as you think. ...
Perhaps I think some humans are super dangerous. :p I was hoping Waymo would release what they have to car makers (they would likely have to drive costs down) and allow driverless operation where it works: freeways with no difficult merging required. But you are most likely correct; Waymo is not as safe as I think it is. I do know for sure, from talking to several members of the team, that they are safety first, safety second, and safety third. I don't think that mentality is going to work. Tesla is going to eat their lunch by being willing to take some risk. An example of that risk is releasing a beta to the public, which requires supervision, which we know is dangerous.
 
I don't think Waymo vehicles are as safe as you think. It's a very hard thing to determine. One incident every 130k miles isn't that great, and their sample size of 6 million miles is too small. I'm certainly optimistic though.

I agree that 6M miles is not enough to definitively quantify Waymo's safety. And taken out of context, 1 incident per 130k miles does not look all that good, but in context the stat is actually not that bad. Unlike humans, Waymo did not have any accidents where the car left the road. There were very few cases where Waymo directly caused an accident. Also, most of the Waymo accidents were minor; the severe accidents were caused by human drivers violating the rules of the road. These are all good signs that Waymo cars are pretty safe on their own. The big thing Waymo needs to do now to make their cars safer is to improve their defensive driving skills in order to better avoid accidents caused by humans driving badly.
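For a rough sense of how wide the uncertainty actually is, here is a back-of-the-envelope sketch (the 6 million miles and one incident per 130k miles are the figures quoted in this thread, and the interval is a simple normal approximation to a Poisson count, not Waymo's own methodology):

```python
import math

miles = 6_000_000             # total driven miles, per the thread
miles_per_incident = 130_000  # incident rate quoted in the thread

incidents = miles / miles_per_incident   # ~46 incidents implied
# Normal approximation to a 95% Poisson interval on the count
half_width = 1.96 * math.sqrt(incidents)
lo = (incidents - half_width) / (miles / 1_000_000)
hi = (incidents + half_width) / (miles / 1_000_000)
print(f"~{incidents:.0f} incidents; 95% CI roughly "
      f"{lo:.1f} to {hi:.1f} incidents per million miles")
```

Even at six million miles, the implied rate could plausibly sit anywhere from about 5.5 to 10 incidents per million miles, which supports the point that the sample is too small for firm conclusions.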

I do know for sure, from talking to several members of the team, that they are safety first, safety second, and safety third. I don't think that mentality is going to work. Tesla is going to eat their lunch by being willing to take some risk. An example of that risk is releasing a beta to the public, which requires supervision, which we know is dangerous.

Your approach of "Screw safety. Release FSD now. Make faster progress" would backfire on Tesla. If Tesla releases FSD Beta wide to the entire fleet before it is ready and it causes a lot of severe accidents, the PR against Tesla would be horrible. Regulators would crack down on Tesla. It won't matter if Tesla makes faster progress on FSD if regulators shut the whole thing down or the public loses all faith and does not want to use Tesla's FSD anymore. Tesla won't "eat anybody's lunch" if their entire FSD program gets shut down!

Yes, there is such a thing as being too cautious. But if Tesla takes too much risk, that could backfire as well. It is a tightrope act; there is a right middle ground.

Sure, maybe Waymo's FSD progress is not as fast as some would like. Maybe they are too cautious. But Waymo released a very positive safety report. Public trust in Waymo is as high as ever. Waymo is expanding their driverless ride-hailing. They are making steady progress. They are the world leader in FSD. And they are doing it in a way where the public and regulators trust and support them. It is hard to see why Waymo would consider that a bad approach.
 
This was in the Australian media regarding Honda's Lvl 3.

2021 Honda Legend set to gain Level Three autonomous driving capability

So according to this article, L3 does refer to Traffic Jam Pilot. Looks like I was right.

"Honda says its Level Three system, dubbed 'Traffic Jam Pilot', will launch first on the brand's flagship Legend sedan, with a Japanese showroom debut confirmed for before the end of March, 2021.

Like other Level Three autonomous systems, Traffic Jam Pilot enables the vehicle to take full control of driving – including steering, braking and accelerating – in "certain conditions", such as in a traffic jam on a motorway (as its name suggests)."
 
1% of the time is quite a lot. Say each event lasts for 20 seconds, that's one intervention every half hour.

I think we need a better metric than "1% of the time" or "99.9%." 20 seconds is an arbitrary event duration, which got you one intervention every half hour. If the duration were 1 minute or 5 minutes instead, it would look a lot better: one intervention every 1.7 or 8.3 hours, respectively.

I'd like to see number of interventions per mile or per hour, hopefully specifying highway and city.
 
I found this video from 12 News Channel from back in Feb 2020 that shows a car deliberately swerving in front of a Waymo car several times. The police say that the driver was trying to cause a crash on purpose.


So yeah, autonomous cars will have to deal with idiots, drunk drivers etc...
 
Finally some good news on V2V & V2Infra: time to focus efforts on worthwhile solutions.
The Federal Communications Commission today voted to add 45MHz of spectrum to Wi-Fi in a slightly controversial decision that takes the spectrum away from a little-used automobile-safety technology.

The spectrum from 5.850GHz to 5.925GHz has, for about 20 years, been set aside for Dedicated Short Range Communications (DSRC), a vehicle-to-vehicle and vehicle-to-infrastructure communications service that's supposed to warn drivers of dangers on the road. But as FCC Chairman Ajit Pai today said, "99.9943 percent of the 274 million registered vehicles on the road in the United States still don't have DSRC on-board units." Only 15,506 vehicles have been equipped with the technology, he said.
FCC takes spectrum from auto industry in plan to “supersize” Wi-Fi
 
California Public Utilities Commission approved a state regulatory framework for commercial autonomous ride-hailing.

Robotaxi operators will be allowed to charge fares for driverless rides in California.

Cruise and Waymo say they plan to launch commercial autonomous ride-hailing in CA.

The CPUC ruling requires a would-be robotaxi company to receive approval from the DMV before it begins the application process with the CPUC through what’s called a “Tier 3 Advice Letter” process, the process used to approve rates for energy utilities and water and sewer operations. The process can take months to years.

In comments submitted prior to the CPUC vote, several robotaxi companies suggested that the DMV and CPUC processes occur simultaneously, and that the CPUC handle robotaxi approvals the way it handles ride-hailing services, rather than through the more onerous and time-consuming Tier 3 method. CPUC Commissioner Genevieve Shiroma, who spearheaded the ruling, said the process could be revisited in the future.

Robotaxi companies can now win approval to operate in California