Autonomous Car Progress

The Ghost speaker makes it sound as if all other systems suffer from the inability to see and react to unlabelled or unclassified objects. I would say straight away that that cannot be true. It is utterly obvious nonsense.

They want to tell us that, for example, a Tesla completely ignores an object in the street that it cannot categorize? Nobody in his right mind would program a car like that. Of course an automated car will be programmed to stop in front of an uncategorized object, rather than crash into it.

We have examples from a while back of FSD Beta not seeing unlabeled or unclassified objects, and videos from several months ago of FSD Beta hitting or almost hitting unrecognized objects. So it did happen. But Tesla fixed the problem when they introduced occupancy networks. So the Ghost speaker is not wrong, but he is speaking of older systems. He is describing early computer vision that relied on object classification techniques.

What is misleading is that the Ghost speaker ignores that this problem has since been fixed. He talks as if it is still a problem in order to push his "physics-based AI" approach as the solution. Perhaps Ghost's approach also solves the problem, but you can't ignore that other companies have their own methods for solving it and do so successfully today.
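
To make the distinction concrete, here is a minimal, hypothetical sketch in Python (not Tesla's, Ghost's, or anyone's actual stack; the function names, fields and thresholds are invented) of the difference between a planner that only reacts to objects it can label and one that also reacts to any occupied space in its path:

```python
# Hypothetical illustration only: contrast a classifier-only check that
# ignores detections it cannot label with an occupancy-style check that
# reacts to any occupied cell along the planned path.

KNOWN_CLASSES = {"car", "pedestrian", "cyclist", "truck"}

def should_brake_classifier_only(detections):
    """Old-style logic: only labelled objects in the path trigger a stop."""
    return any(d["label"] in KNOWN_CLASSES and d["in_path"] for d in detections)

def should_brake_with_occupancy(detections, occupancy_grid, path_cells):
    """Occupancy-style logic: any sufficiently occupied cell in the path
    triggers a stop, whether or not the object was classified."""
    if should_brake_classifier_only(detections):
        return True
    return any(occupancy_grid.get(cell, 0.0) > 0.5 for cell in path_cells)

# Example: an unclassified object (say, a fallen ladder) occupies a path cell.
detections = [{"label": "unknown", "in_path": True}]
occupancy_grid = {(3, 0): 0.9}               # cell (3, 0) is occupied
path_cells = [(1, 0), (2, 0), (3, 0)]

print(should_brake_classifier_only(detections))                              # False
print(should_brake_with_occupancy(detections, occupancy_grid, path_cells))   # True
```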
 
Mobileye Q3 results:
  • Revenue increased 38% year over year to $450 million in the third quarter.
  • Future business backlog continues to grow, with design wins achieved in 2022 (through Oct 1, 2022) projected to generate future volume of 54 million systems by 2030. This compares to 24 million systems delivered in 2022 (through Oct 1, 2022).
  • Our newer advanced ADAS products, such as SuperVision, contributed meaningfully to our revenue growth and resulted in Average System Price increasing to $53.0 in third quarter 2022 from $45.7 in the prior-year period.
  • Generated net cash from operating activities of $395 million in the 9 months ended October 1, 2022.
 
I didn't know:

"Strange as it may seem, California stops requiring that AV companies share disengagement data and collision locations as soon as they begin collecting passenger fares, as Waymo and Cruise now do. From that point forward, if an AV vehicle jeopardizes safety on the street—for instance, by causing a crash or blocking a transit line—the public won’t know unless the AV company chooses to publicize it (unlikely) or if a passerby reports the incident to 911 or posts about it on social media (unreliable)."

 
Perhaps that's all right. After all, we have traffic rules, we have traffic police, and we have media that will probably pick up interesting cases. If automated cars are reasonably safe, why have extra rules for them?

I see the opposite in Germany, a country and a population obsessed with rules. Those rules hinder progress.
 
I didn't know:

"Strange as it may seem, California stops requiring that AV companies share disengagement data and collision locations as soon as they begin collecting passenger fares, as Waymo and Cruise now do. From that point forward, if an AV vehicle jeopardizes safety on the street—for instance, by causing a crash or blocking a transit line—the public won’t know unless the AV company chooses to publicize it (unlikely) or if a passerby reports the incident to 911 or posts about it on social media (unreliable)."


I don't think it is that strange. CA requires companies to share disengagement data and to apply for several permits from the CA DMV and the CPUC before they can start collecting fares. The entire process, from testing with a safety driver to collecting fares for driverless rides, can take years and millions of miles of disengagement data. If the CA DMV feels your AV is not safe enough, they can deny or withdraw your driverless testing permit. If the CPUC feels that your AV is unsafe, they can deny or withdraw your permit to offer unpaid driverless rides or to collect fares.

So I think California probably feels that if you get through all that testing and data sharing and obtain the final permit to collect fares, your AV has "graduated" and you don't need to share data anymore. At that point, your AV should be "safe enough" in the eyes of both the CA DMV and the CPUC. Of course, nothing is perfect. But at that point, if accidents happen, the police can investigate, just like any accident involving a human driver. And I would assume that if a robotaxi gets into enough accidents, the CPUC could withdraw the permit to collect fares. Additionally, companies can be sued if a robotaxi is involved in an accident that causes damage, injury or a fatality.

So there are lots of checks both before and after a company starts collecting fares for driverless rides. Companies are NOT collecting fares from untested robotaxis with no disengagement data.
 
Other states don't require disengagement monitoring at all, but CA is not like other states.

There are several issues with no longer tracking disengagements once an AV is approved to start collecting fares.

The assumption is it has passed the disengagement test and is safe enough.

The issue is that there's no standard for the maximum number of disengagements allowed to pass.

The companies themselves don't believe their cars are now good enough to graduate from a testing program, because they are still testing on their own even after CA stops monitoring them.

Riders need to know how many disengagements occur so they can judge whether or not to use a particular company.

"One day after the California Department of Motor Vehicles granted the GM-owned autonomous driving firm Cruise a permit to begin charging passengers for rides in its self-driving Chevy Bolt EV robotaxis, one of these vehicles was involved in a crash. Injuries were disclosed, and per a report by the San Francisco Police Department, one of the AV’s passengers was transported to a hospital."

Occupant Of GM’s Cruise AV Involved In Collision Taken To Hospital, Says Police Report
 
Can't have disengagements when there is no driver! That Cruise crash was reported to the DMV:
I agree that there should be reporting of cars blocking traffic. They don't seem to be reporting that.
 
The assumption is it has passed the disengagement test and is safe enough.

The issue is that there's no standard for the maximum number of disengagements allowed to pass.

No, but these companies only get to charge a fare for a driverless ride AFTER the CPUC gives them permission and AFTER the CA DMV gives them a permit to even test without a driver. So I think it is assumed that if the CA DMV and the CPUC give them a permit to do paid driverless rides, they have passed the disengagement test, at least as far as the CA DMV and the CPUC are concerned. After all, if the CA DMV and/or the CPUC don't think the AV has passed the disengagement test, why give a permit to charge fares?

Disengagements by themselves are not the best metric, IMO. Disengagements only measure whether you disengage the autonomous driving. If you only disengage when the AV is about to make a mistake, then yes, the disengagement rate could be a good measure of how many errors the AV makes per mile. But you could let the AV make mistakes and not disengage the system, and the disengagement rate would look better than it really is since you did not disengage, even though the AV still made mistakes. Alternatively, an overly cautious safety driver could disengage too often, when the AV was not going to make a mistake, and the disengagement rate would look worse than it really is.

That is one reason why measuring safety is not easy. Ultimately, you want to measure "bad driving behavior" and minimize those events over a big enough and diverse enough sample of driving. But you need to define "bad driving behavior". Traffic violations, near misses, actual accidents, safety envelope violations, sudden braking, jerky lateral movement would be examples of "bad driving behavior". Of course, some instances of sudden braking might be good if they avoided a crash. So you would need to examine each case and see if it really was "bad driving behavior" or not.
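
As a rough sketch of that kind of metric (the event categories, field names and numbers below are invented for illustration, not any company's actual methodology):

```python
# Tally defined "bad driving behavior" events and normalize by miles driven,
# rather than counting disengagements alone. Illustrative categories only.

BAD_BEHAVIOR = {
    "traffic_violation", "near_miss", "collision",
    "safety_envelope_violation", "unjustified_hard_brake", "jerky_lateral_move",
}

def bad_events_per_1000_miles(event_log, miles_driven):
    """event_log: list of event-type strings; miles_driven: total miles."""
    bad = sum(1 for e in event_log if e in BAD_BEHAVIOR)
    return 1000.0 * bad / miles_driven

# Made-up data: 3 bad-behavior events over 25,000 miles.
log = ["unjustified_hard_brake", "near_miss", "disengagement", "near_miss"]
print(bad_events_per_1000_miles(log, miles_driven=25_000))  # 0.12 per 1,000 miles
```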

Ultimately, companies like Waymo and Cruise examine all their driving data over millions of miles and make a determination if the AV is "safe enough" in that specific ODD. If yes, and if they get a permit from the CPUC, then they deploy driverless in that limited ODD where the data says it is "safe enough".

The companies themselves don't believe their cars are now good enough to graduate from a testing program, because they are still testing on their own even after CA stops monitoring them.

That's a bit misleading. The companies are deploying driverless rides, so clearly they think the AVs are good enough to do some driverless rides. Of course they still test, because there is always room for improvement. Apple still does R&D on iPhones, but that does not mean that iPhones are not good enough.

Riders need to know how many disengagements occur so they can judge whether or not to use a particular company.

You can't measure disengagements in a driverless ride since there is no safety driver to disengage. You could measure accidents or remote assistance events.

Also keep in mind that all companies doing driverless rides could probably show very good disengagement rates. So it is not like one company will have a disengagement rate of 1 per 1,000 miles and another 1 per 30,000 miles, where you could say the company with 1 per 30,000 miles is clearly far better.

In 2021, Cruise reported a disengagement rate of 1 per 41,000 miles. And yet we've seen Cruise have several incidents of cars getting stuck and needing remote assistance. So a good-looking disengagement rate won't necessarily mean that the company won't have incidents or accidents.

I am not sure disengagement rates would really be informative once companies decide to remove the safety driver. If company A reports a disengagement rate of 1 per 41,000 miles and company B reports 1 per 41,500 miles, should the public really choose company B since they have the better disengagement rate? The fact that company B's rate is 500 miles per disengagement better than company A's does not mean that company B won't also have some incidents of cars getting stuck.
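
A quick back-of-the-envelope comparison (using the illustrative figures above) shows how little that 500-mile difference means in practice:

```python
# Expected disengagements over the same mileage for two nearly identical rates.
miles = 1_000_000
for company, miles_per_disengagement in [("Company A", 41_000), ("Company B", 41_500)]:
    expected = miles / miles_per_disengagement
    print(f"{company}: ~{expected:.1f} expected disengagements per {miles:,} miles")
# ~24.4 vs ~24.1: essentially indistinguishable for a rider choosing a service.
```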
 
...You can't measure disengagements in a driverless ride since there is no safety driver to disengage. You could measure accidents or remote assistance events...
Can't have disengagements when there is no driver!

When the automation system is overridden, as in an accident or when technicians pick up a stopped AV... that's a disengagement.
 
But the way that reads, it leads the reader to think the Cruise was at fault. It wasn't.
The car stopped in the intersection, which resulted in a collision with the oncoming speeding Prius (which probably thought the Cruise was going to accelerate to avoid them) .

Cruise changed their programming and admitted it had problems. They did a safety recall.


So I guess I’d say that is a 50/50 split.

But we’ll never see the video so we’ll never know for sure. My guess is that the Cruise had plenty of time to clear the intersection and the accident was easily avoidable, with no change in the initial “go” decision.
 
When the automation system is overridden, as in an accident or when technicians pick up a stopped AV... that's a disengagement.

Disengagements are only counted when the car is switched back into manual driving. So, if a technician actually turns off the autonomous driving and puts the car back into manual driving that is a disengagement. When remote assistance provides guidance without ever switching the car back into manual driving, that does not count as a disengagement. So if they override the system without actually turning the autonomous driving off, that won't be counted as a disengagement.

Perhaps we need to measure "interventions" or "overriding events" instead of "disengagements"? Simply measuring when the car is switched back into manual driving will not capture all the data we need to measure safety and reliability.
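
A tiny sketch of how differently the two counts can come out (the event types are invented for the example, not any regulator's categories):

```python
# "Disengagements" only capture switches back to manual driving; a broader
# "interventions" count would also include remote-assistance guidance and
# technician retrievals.

events = [
    {"type": "switch_to_manual"},       # counted today as a disengagement
    {"type": "remote_assistance"},      # guidance only, car stays in autonomous mode
    {"type": "technician_retrieval"},   # car collected after getting stuck
]

INTERVENTION_TYPES = {"switch_to_manual", "remote_assistance", "technician_retrieval"}

disengagements = sum(1 for e in events if e["type"] == "switch_to_manual")
interventions = sum(1 for e in events if e["type"] in INTERVENTION_TYPES)

print(disengagements, interventions)  # 1 3: the broader count tells a fuller story
```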
 
The idea is that the public needs to know about incidents when they ride in a car without a driver: for example, when the car intentionally stops right in the middle of an intersection and won't move until someone overrides its automation system and moves it out of the intersection, either a technician manually getting in and driving it or the police calling a tow truck to tow it away.

The automation has its own reasons for stopping right in the middle of the intersection. It works as programmed, but that doesn't mean it passed the driving test in this case. It's similar to Tesla phantom braking: it looks like there's no reason to brake, but the machine works as programmed. There is a reason for it to brake; human drivers just disagree with that reason (a shadow on the road, flashing lights from a tow truck three lanes away and well within the shoulder...).

Thus, instead of relying on social media, those incidents need to be recorded by the DMV for the public to know.

Whatever the terminology, "interventions" or "overriding events," the fleet still needs the DMV to monitor and record overridden-automation events.
 

Oh I agree that all AV incidents should be reported and made public. It could help prevent "fake news" or speculation. I don't like that we seem to only hear about a Cruise or Waymo incident from a social media post, which is often just a picture or a short video with no real context or details about what actually happened. That leads to a lot of often-wrong speculation.

I think total miles should also be shared so that the public can see how often the incidents happen and put them in context. Just sharing total incidents would be misleading. There is a big difference between a company that had 10 incidents over 1M driverless miles in 6 months and a company that had 10 incidents over only 1,000 driverless miles in 6 months.

I might also suggest only sharing incidents from, say, the last 6 months. AVs make such fast progress with new software versions that an incident from a year ago might not be relevant anymore (the software has changed so much since then). It would also be important to share the exact nature and cause of each incident so that people can see whether the AV was to blame, how serious the incident was, etc.

I do think it is important that we avoid knee-jerk reactions when an incident happens. Just because Cruise had an incident where a robotaxi got stuck does not necessarily mean that all Cruise AVs are unsafe and dangerous and you should not ride in them. I would not want incident reports to give people the wrong idea about AVs.
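
For instance, normalizing by miles makes the difference in the example above obvious (a minimal sketch, numbers illustrative):

```python
# Same incident count, very different rates once normalized by miles driven.
def incidents_per_100k_miles(incidents, miles):
    return 100_000 * incidents / miles

print(incidents_per_100k_miles(10, 1_000_000))  # 1.0 per 100k miles
print(incidents_per_100k_miles(10, 1_000))      # 1000.0 per 100k miles
```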
 
"The Board of Supervisors unanimously approved a resolution that calls on regulators to address safety and traffic concerns around the growing presence of autonomous vehicles (AVs) on San Francisco streets, and establishes an official city policy of San Francisco on those vehicles.

Among other recommendations, the city asked federal regulators to require AV operators to submit regular data on vehicle failures that block roadways and response times by company staff, and to notify relevant city authorities in the event of a cybersecurity incident.

The city also calls on regulators to collaborate on research to analyze pickup and drop-off impacts, expand access to crash and near-crash data, and limit deployment of vehicles in San Francisco unless they meet certain performance standards."


The two recommendations look reasonable to me. And I've said before that Cruise needs to address their incidents. But I would also caution against overreacting. IMO, the goal should be for everybody to work together to make AVs safer and better, not try to ban them or slow their progress.
 
It's a pendulum, it'll eventually get to the middle.
 