
Autonomous Car Progress

That's embarrassing.

Sounds like Pony.ai was trying to flex ("LeT mE sHoW yOU real FuLL sElF DriViNG, ElOn!!") in front of Tesla's Fremont factory and it wound up blowing up in their face. 😂

Pony.ai, backed by Toyota, gets its driverless testing permit in California suspended for hitting a divider.
Quote:
On Oct. 28, a Pony.ai vehicle operating in autonomous mode hit a road center divider and a traffic sign in Fremont after turning right, showed the technology firm’s accident report filed with the California Department of Motor Vehicles (DMV).

“There were no injuries and no other vehicles involved,” the company, backed by Japan’s Toyota Motor Corp, said in the report.

... It was unclear what aspect of this incident prompted the suspension.

“On Nov. 19, the DMV notified Pony.ai that the department is suspending its driverless testing permit, effective immediately, following a reported solo collision in Fremont, California, on Oct. 28,” the DMV said in a statement.

The regulator said Pony.ai has 10 Hyundai Motor Co Kona electric vehicles registered under its driverless testing permit, and that the suspension does not impact Pony.ai’s permit for testing with a safety driver.

The suspension comes only six months after Pony.ai became the eighth company to receive a driverless testing permit in California, joining the likes of Alphabet Inc unit Waymo as well as Cruise, backed by General Motors Co.
 
For those interested, Mobileye will be presenting at CES 2022.

The most interesting presentation will probably be the "under the hood" talk:

Mobileye Deep Dive – “Under the Hood” with Prof. Amnon Shashua

Join Mobileye CEO Prof. Amnon Shashua as he explains how the company will deliver economically viable consumer autonomous vehicles (AV) to the world. He will unveil new chip technology, share progress on radar and lidar technology, and, for the first time, disclose details about Mobileye’s approach to enabling fully autonomous solutions across vehicle types and use cases around the globe. The 2022 “Under the Hood” session is not to be missed, as Shashua shows how Mobileye is rewriting the AV game.

When: Wednesday, Jan. 5, 2022, 11:30 a.m. PST.

Where: Las Vegas Convention Center, Room W326, Las Vegas.

Livestream: Watch live on the Intel Newsroom.

 
But increase the sensor fusion error. There is no free lunch.

There is no free lunch because it costs money.

Whether it introduces sensor fusion error depends on how the data from it is used.

Simply having Lidar doesn't tell us anything about how it's being used.

I could easily see Lidar being used only to cross-check things. Is the "vidar" (vision-based depth) working correctly? Are the depth estimates from the cameras accurate, or is some issue causing them not to be?

There are numerous low-contrast situations on the road where Lidar would be extremely valuable as a source of additional data.

Tesla itself uses Lidar for R&D purposes.

GM doesn't put Lidar on its Super Cruise vehicles, but Lidar is used to build the maps Super Cruise relies on.
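The cross-check idea above can be sketched in a few lines. This is purely illustrative (no real AV stack works exactly this way); the function name, tolerances, and data are all made up. It compares camera depth estimates against lidar ranges for the same points and flags when the camera pipeline drifts out of tolerance:

```python
# Hypothetical sketch of lidar-as-cross-check: compare camera depth
# estimates against lidar ranges for the same points, and flag when the
# camera pipeline disagrees too often. All names and numbers are illustrative.

def depth_disagreement(camera_depths, lidar_depths, rel_tol=0.1):
    """Return the fraction of points where camera and lidar disagree
    by more than rel_tol (relative to the lidar range)."""
    assert len(camera_depths) == len(lidar_depths)
    bad = 0
    for cam, lid in zip(camera_depths, lidar_depths):
        if abs(cam - lid) > rel_tol * lid:
            bad += 1
    return bad / len(camera_depths)

# Example: camera underestimates one of four ranges by ~25%
cam = [10.2, 20.1, 14.8, 30.0]
lid = [10.0, 20.0, 15.0, 40.0]
if depth_disagreement(cam, lid) > 0.2:  # 1 of 4 points disagrees -> 0.25
    print("camera depth suspect; fall back to conservative behavior")
```

The point is that the lidar here never drives the planner; it only gates confidence in the vision output, which sidesteps most of the fusion-conflict problem.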
 
There is no free lunch because it costs money.
No, I don't mean it that way.

I mean, if you want higher accuracy, you need to figure out how to integrate the sensors and how to handle conflicts between them.

For example, if you want a good map you can use Google. If you want a better map, you can take OSM, TomTom, Google, etc. and try to integrate them in some intelligent way. But the result is only as good as the integration; the resulting map can also be worse than just Google.
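To make the map-merging point concrete, here is a toy sketch (all provider names and values are hypothetical) of one simple integration rule, majority vote over an attribute such as a speed limit. If the vote rule is bad, or the extra sources are noisy, the merged value can indeed be worse than the best single source:

```python
# Toy sketch of integrating one map attribute (a speed limit) from
# several providers by majority vote. All data here is hypothetical.
from collections import Counter

def merge_speed_limit(values):
    """Majority vote across providers; None means 'no data'."""
    known = [v for v in values if v is not None]
    if not known:
        return None
    value, _count = Counter(known).most_common(1)[0]
    return value

# Three providers disagree about one road segment:
providers = {"google": 50, "osm": 50, "tomtom": 80}
print(merge_speed_limit(providers.values()))  # 50
```

Note the failure mode: if two low-quality sources happen to agree on a wrong value, the vote outvotes the one accurate source, which is exactly the "worse than just Google" outcome.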
 
Before or after IPO ?

Not sure. I think the IPO is planned for sometime next year (2022). Mobileye just got a permit to test in Paris so it will probably be some time before they launch a fully public ride-hailing service. I imagine the fully public ride-hailing in Paris will probably be after the IPO. However, it looks like employees from Galeries Lafayette Paris Haussmann will get to be early riders for the testing. So they may get to test the autonomous rides before the IPO.
 
Cruise not performing well and CEO gets the axe?
Quote:
General Motors Co (GM.N) said on Thursday that Dan Ammann, the chief executive of its majority-owned Cruise self-driving car subsidiary, is leaving the company, effective immediately.

The U.S. automaker did not give a reason for the departure of Ammann, a former GM president and chief financial officer.
On Dec. 1, Cruise was denied the ability to charge fares for its robotaxi service because the city didn't like how it always double-parked.
 
Cruise not performing well and CEO gets the axe?
Quote:
General Motors Co (GM.N) said on Thursday that Dan Ammann, the chief executive of its majority-owned Cruise self-driving car subsidiary, is leaving the company, effective immediately.

The U.S. automaker did not give a reason for the departure of Ammann, a former GM president and chief financial officer.
On Dec. 1, Cruise was denied the ability to charge fares for its robotaxi service because the city didn't like how it always double-parked.
Do you have a source for the latter part (Cruise's application being denied)? The article you linked was previously discussed; it was just the SFMTA calling for the application to be denied, and it says nothing about the application actually being denied. On December 6th, Cruise responded, claiming what they did was legal. I was able to google the full response:
https://www.cpuc.ca.gov/-/media/cpu...lication-for-driverless-deployment-permit.pdf

That is the latest document at CPUC if you search for Cruise, so I don't think there is a final decision yet on the application.
You're right, they claim UltraCruise is being developed in-house.


I'm surprised that people think such a system is a good idea. I'm still skeptical that it will ever be released (and that applies to FSD Beta as well).
Moving the discussion to this thread.

I don't think there is a natural or logical limit to ADAS. People who have their own cars do want more and more automation. So, not surprising at all that we are getting progressively more capable ADAS systems.
 
Moving the discussion to this thread.

I don't think there is a natural or logical limit to ADAS. People who have their own cars do want more and more automation. So, not surprising at all that we are getting progressively more capable ADAS systems.
I have no doubt that people want such a system, I just don't think it will end up being safe. If FSD Beta is currently safe (I think it might be), it's because it's so bad that it induces hypervigilant monitoring. It's hard for me to imagine a system that drives competently for thousands of miles between errors being safe without very invasive driver monitoring.
I don't think either side of this argument is going to convince the other. We don't have any data because no one has released such a system to the public.
 
I have no doubt that people want such a system, I just don't think it will end up being safe. If FSD Beta is currently safe (I think it might be), it's because it's so bad that it induces hypervigilant monitoring. It's hard for me to imagine a system that drives competently for thousands of miles between errors being safe without very invasive driver monitoring.
I don't think either side of this argument is going to convince the other. We don't have any data because no one has released such a system to the public.
Do you think AP/NOA is safer? Do you think FSD will get to AP/NOA-level safety?

Seems to me there is a gradual decrease in attentiveness as the software gradually gets better. I don't think there is any "step" decrease in attentiveness. Even people who start using AP/FSD Beta later on are naturally less trusting in the beginning, until they learn the way the software works. They adjust their level of attentiveness to match their experience with the software.

There will be accidents; the question is whether it's worse or better than manual driving. So far the data says better than manual driving.
 
Do you think AP/NOA is safer? Do you think FSD will get to AP/NOA-level safety?

Seems to me there is a gradual decrease in attentiveness as the software gradually gets better. I don't think there is any "step" decrease in attentiveness. Even people who start using AP/FSD Beta later on are naturally less trusting in the beginning, until they learn the way the software works. They adjust their level of attentiveness to match their experience with the software.

There will be accidents; the question is whether it's worse or better than manual driving. So far the data says better than manual driving.
I don't know if AP/NoA is safer (the data Tesla provides doesn't correct for when and where AP is used).
I don't think supervised FSD will ever achieve AP/NoA safety because the environment people will use it in has much less margin for error.
The Uber safety driver responsible for the pedestrian fatality learned the way the software worked by doing the route 73 times. I don't think it's possible for individual drivers to determine the required level of attentiveness.
What I do think dramatically improves safety are systems like AEB, lane departure avoidance, etc.
 
I don't know if AP/NoA is safer (the data Tesla provides doesn't correct for when and where AP is used).
You don't know if it's safer than FSD Beta? Of course it is; AP is definitely better than FSD Beta. Wait, below you say AP is better than FSD Beta ...

I don't think supervised FSD will ever achieve AP/NoA safety because the environment people will use it in has much less margin for error.
There are definitely more chances of smaller accidents with FSD Beta. But "ever" is a long time.

The Uber safety driver responsible for the pedestrian fatality learned the way the software worked by doing the route 73 times. I don't think it's possible for individual drivers to determine the required level of attentiveness.
In millions of miles of AV driving there has been only one fatal accident. I think that is safer than human driving. BTW, there was a lot of discussion about whether the driver could have prevented the accident even if they had been paying attention or driving manually. The crash victim seemed to suddenly appear on the road out of nowhere.

What I do think dramatically improves safety are systems like AEB, lane departure avoidance, etc.
In our Toyota van, lane departure avoidance is quite bad and my wife asked me to turn it off.
 
You don't know if it's safer than FSD Beta? Of course it is; AP is definitely better than FSD Beta. Wait, below you say AP is better than FSD Beta ...


There are definitely more chances of smaller accidents with FSD Beta. But "ever" is a long time.


In millions of miles of AV driving there has been only one fatal accident. I think that is safer than human driving. BTW, there was a lot of discussion about whether the driver could have prevented the accident even if they had been paying attention or driving manually. The crash victim seemed to suddenly appear on the road out of nowhere.


In our Toyota van, lane departure avoidance is quite bad and my wife asked me to turn it off.
I mean I don't know if it's safer for people to use AP/NoA or drive manually. I also have no idea how safe FSD Beta is (how many miles has it actually been used? how many collisions have there been and what was the severity? I doubt there are enough miles driven to determine safety and of course the software is always changing...)
I have no doubt that FSD will be safer than the average human driver when it's out of beta (i.e. robotaxi capable). It's supervised FSD that I'm concerned about.
The average fatality rate in the US is 1 per 100 million miles of driving, so that's actually a horrible safety record. The video that Uber released was extremely misleading, and it was later determined that the crash was entirely avoidable. I'm always surprised when this comes up that people think it's OK to drive on suburban streets at night so fast that they would not be able to stop for a person in the road.
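The "horrible safety record" claim is just arithmetic against the 1-per-100-million-mile baseline. A quick back-of-envelope check, using an illustrative (hypothetical) fleet mileage rather than any real program's numbers:

```python
# Back-of-envelope check: the US human baseline is roughly 1 fatality
# per 100 million vehicle miles. A test fleet with F fatalities in N
# miles has a rate of F / N * 1e8 per 100M miles. The 3M-mile fleet
# figure below is hypothetical, chosen only for illustration.

def fatalities_per_100m_miles(fatalities, miles):
    return fatalities / miles * 100_000_000

# Hypothetical fleet: 1 fatality in 3 million test miles
rate = fatalities_per_100m_miles(1, 3_000_000)
print(rate)        # ~33.3 per 100M miles
print(rate > 1.0)  # True: far worse than the human baseline
```

Any fleet with one fatality in "millions" of miles is therefore tens of times worse than the average human driver, not better; you'd need on the order of 100 million fatality-free miles before the comparison even becomes a wash.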
 
I don't think there is a natural or logical limit to ADAS. People who have their own cars do want more and more automation. So, not surprising at all that we are getting progressively more capable ADAS systems.

To me this is clearly a case of "be careful what you wish for".

The natural limit to ADAS is the ability of a human to oversee an autonomous system.

For that to begin to happen, the ADAS has to be useful where the driver actually uses it, and the ADAS has to perform well enough that the human trusts it unconsciously.

What I've seen is the illusion of capability more than I've seen actual capability.

Like BlueCruise offers hands-free highway driving, but it's been dumbed down to where it forces customers to put their hands on the steering wheel during corners.

Autopark would be great if it wasn't so slow.

FSD Beta you'd have to be completely nutty to trust. Even when it's not making a mistake, I've found myself canceling out because of a lack of trust.

I don't expect FSD Beta to get pulled in its infancy, because it lacks the consistency for anyone to trust it. But I do expect there will be a time when FSD Beta works so well that people begin to trust it, with terrible consequences.

That might make me seem like I'm anti-Tesla or anti-FSD, but it's actually because I'm anti-Human. :)

The human is prone to failing and can't be relied on. It's also the fact that when a robot kills a human, the statistics don't matter because there is a media uproar. It also won't matter one bit in the media's eyes that the driver was supposed to be paying attention. AP1 didn't make it past one or two fatalities before nags were introduced.
 
I have no doubt that FSD will be safer than the average human driver when it's out of beta (i.e. robotaxi capable). It's supervised FSD that I'm concerned about.
I think supervised FSD is a terrible idea and is going to fall on its face, so I agree with you on that one.

I don't doubt that some version of FSD (or a competitor's version) will be safer than the average human driver, but I don't think that's a good comparison.

The entire comparison to average driver should be thrown out.

The reason is that most accidents aren't really accidents, so why should FSD's driving be compared against them?

I'm not a drunk so why should my driving score be reduced because of drunks?
I don't drive while distracted so why should my driving score be reduced by those who do?
I'm not an inexperienced driver so my score shouldn't be driven down by the inexperienced.
I'm not a million years old so my score shouldn't be driven down by them.
I don't race on the roads so my score shouldn't be driven down by them.
I don't road rage so my score shouldn't be driven down by those who do.

Once you throw out all the trash drivers the human score is a LOT better.

I also don't want an autonomous system's "score" to be driven down by all the human nutjobs running into it.
 
That might make me seem like I'm anti-Tesla or anti-FSD, but it's actually because I'm anti-Human. :)

The human is prone to failing and can't be relied on.
Cue the "humans are bad drivers" comments, to which I ask "compared to what?"
It's possible that humans are even worse at supervising automation systems than they are at driving.
In our Toyota van, lane departure avoidance is quite bad and my wife asked me to turn it off.
That doesn't mean it wouldn't be safer for it to be on. How does your wife feel about FSD Beta?