Autonomous Car Progress

Say after a storm the visualizations that the car uses against the mapping no longer exist. Say Fort Myers was mapped and self-driving cars were operating in the city. Now a storm rearranges the landscape so the visualizations don't match the mapping. Can the car still drive and find its way?
I'll show you an example using Yandex localizing in the snow, and you tell me if that is what you mean by a snowstorm rearranging the landscape.


Localization layers can include information from radar, lidar, and feature extraction from cameras. That is why sensor fusion is used: not all sensor types fail in exactly the same way, which provides redundancy. I would also add that people shouldn't drive in a snowstorm or on snow-covered roads that have not been cleared. I can't imagine Cruise or Waymo sending out their cars in those conditions even if they can localize accurately.
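
To make the fusion-for-redundancy idea concrete, here is a minimal sketch (my own illustration, not any vendor's actual pipeline) of inverse-variance weighted fusion. The sensor names and numbers are made up; the point is that a degraded sensor simply stops dominating the localization estimate:

```python
import numpy as np

# Hypothetical per-sensor position estimates (x, y in meters) with variances.
# A degraded sensor (e.g., snow-blinded camera features) reports a large
# variance and therefore contributes little to the fused estimate.
estimates = {
    "camera_features": (np.array([10.6, 4.7]), 4.00),  # degraded in snow
    "lidar":           (np.array([10.0, 5.1]), 0.25),
    "radar":           (np.array([ 9.9, 5.0]), 0.50),
}

def fuse(estimates):
    """Inverse-variance weighted fusion of independent position estimates."""
    weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
    total = sum(weights.values())
    return sum(w * estimates[name][0] for name, w in weights.items()) / total

print(fuse(estimates))  # stays close to lidar/radar; the noisy camera barely moves it
```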

Another technology being researched for autonomous driving is localizing ground-penetrating radar (LGPR), which works well in snow.

 
Mountain View, California-based Nuro winds down operations in Phoenix in an apparent cost-saving move. Tempe, Arizona operations continue.
Nuro told employees that the Phoenix depot location would be closed by October 1, according to an internal email viewed by TechCrunch. It will continue to operate out of its Tempe, Arizona facility, and corporate employees will not be affected. However, several autonomous vehicle operators (AVOs) in Phoenix have been laid off as a result.
 
Hyundai Group introducing Level 3 autonomous tech this year

The Genesis G90 will be available with Level 3 autonomous driving technology in South Korea by the end of 2022, at speeds up to 80 km/h (50 mph).


 
I was talking about storms like hurricanes and tornadoes rearranging the landscape. I wonder if snow, ice, dirt, and mud buildup is a problem for the ground-penetrating radar. If the outside of the car looks like this, the underside may be worse:
[attached image]
 
Interesting article by Mobileye on the camera vision redundancy they employ to improve reliability of pedestrian detection:

"To increase the accuracy of detection and enhance the safety of such vulnerable road users, Mobileye’s computer-vision ADAS technology employs not just one method of detection, but several operating in parallel – each processing the same camera feeds:
  • Classic Pattern Recognition enables the system to automatically identify and classify objects and other road users.
  • Full Image Detection does the same for larger objects in close proximity to the vehicle.
  • The Segmentation method labels individual pixels and groups of pixels to better identify smaller elements in the driving environment (such as pedestrians and cyclists).
  • The Top-View Free Space method identifies other objects and road users as distinct from the road surface.
  • Wheel Detection classifies other vehicles by identifying their wheels.
  • Vidar employs cutting-edge deep learning to create a lidar-like 3D model of the driving environment for increased situational awareness.
On top of these, we implement specific algorithms dedicated to detecting baby strollers, wheelchairs, and open car doors – particularly important elements that vehicles are likely to encounter in their driving environment. Further algorithms identify and monitor the orientation, posture, and gestures of pedestrians to better recognize their situation and predict what they might do next."
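
To illustrate the parallel-redundancy idea in that quote, here is a rough sketch of running several detection methods on the same frame and merging the results. The detector names echo the article, but the bodies are hypothetical stand-ins, not Mobileye's actual code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g., "pedestrian"
    box: tuple        # (x, y, w, h) in pixels
    confidence: float

# Placeholder stand-ins for the parallel methods named in the article;
# the real implementations are proprietary neural networks.
def classic_pattern_recognition(frame): return []
def segmentation(frame): return []
def wheel_detection(frame): return []

def detect_parallel(frame, detectors):
    """Run every method on the same camera frame; keep the union of hits.

    Union (rather than intersection) maximizes recall: a pedestrian missed
    by one method is still reported if any other method finds them. A real
    system would then cluster overlapping boxes (e.g., by IoU) before
    handing them to tracking.
    """
    hits = []
    for detector in detectors:
        hits.extend(detector(frame))
    return hits

detections = detect_parallel(frame=None,  # placeholder for a camera frame
                             detectors=[classic_pattern_recognition,
                                        segmentation,
                                        wheel_detection])
```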

 
That kind of redundancy tells me Mobileye is ahead in the lab. But obviously to me, Tesla remains ahead in the real world. I expect that to continue for years to come.

IMO, Tesla's real-world lead is fragile. Tesla only appears to be in the lead now because they have deployed so-called "FSD" to more cars. But "in the lab," as you say, Tesla is behind. So when the competition deploys what they have "in the lab," Tesla's real-world lead will disappear. And they are already starting to do that. Mobileye has deployed SuperVision in China to tens of thousands of cars. I suspect Mobileye will start deploying SuperVision in the US soon. Mobileye will be able to deploy superior L2 to more cars than Tesla. Furthermore, Mobileye is testing L4, which Tesla does not have, and is getting ready to deploy L4 robotaxis and L4 personal cars in the next couple of years. Not to mention other companies deploying real-world L4 to more and more areas. Tesla's real-world lead won't last for years to come. My guess is that by 2025, Tesla's real-world lead will be completely gone. Why do you think Elon is so insistent on deploying FSD beta wide by the end of this year? He knows time is running out for Tesla. In just a couple of years, the competition will flood the market with better FSD!
 
IMO, Tesla's real-world lead is fragile. […]
I'm not sure things are that bad for Tesla, but I don't completely disagree with you either. My concern is that Tesla is more concerned with being first rather than being the best. I could definitely see others passing them unless Tesla makes some fundamental changes in its approach over the next 2-3 years. It's just hard to be convinced that vision-only can reach the same level that others will.
 
At Tech Day, Xpeng shared some news on their FSD progress:

When it debuted on the automaker’s G9 SUV in September, XNGP (navigation-guided pilot) was announced as XPeng’s last step before achieving fully autonomous driving. XNGP combines all scenarios of the automaker’s existing ADAS capabilities (highway, city, parking) into one holistic system that will soon no longer require high-precision maps to function – essentially opening up its availability to any and all areas.

At Tech Day 2022, XPeng shared that XNGP is backed by major hardware upgrades, including 508 TOPS of computing power, a dual-LiDAR system, 8-megapixel HD cameras, and a new software architecture called XNet, which operates using a closed-loop, self-evolving AI and data system.

XPeng’s XNet differs from its first-generation visual perception architecture by adopting a deep neural network that was developed in-house to deliver visual recognition with human-like decision-making capabilities, drawing data from multiple cameras.

The company explained that its autonomous driving technical stack can reach 600 PFLOPS, increasing the training efficiency of the autonomous driving model by over 600 times. To that note, model training can be significantly reduced from 276 days to just 11 hours.

For added texture in regard to XNet’s streamlined efficiencies, it now only uses 9% of its Orin-X chip’s processing power, compared with 122% before optimization. These upgrades have enabled XPeng to establish an entirely closed-loop autonomous driving data system (data collection, labeling, training, and deployment) that utilizes lightning-fast machine learning to consistently self-improve. Per the release:

XPeng’s highly efficient AI capabilities enable consistent and unsupervised machine learning and rapid iterations in training models, resolving over 1,000 rare corner cases each year. This highly efficient closed-loop AI and data system has helped reduce the incident rate for the Highway NGP by 95%.
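
For what it's worth, the quoted figures are internally consistent; a quick arithmetic check (my own, not from the release):

```python
# 276 days at a 600x training speedup should land near the quoted 11 hours.
days, speedup = 276, 600
print(days * 24 / speedup)  # 11.04 hours
```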

 
IMO, Tesla's real world lead is fragile. Tesla only appears to be in the lead now because they have deployed so-called "FSD" to more cars. But "in the lab" as you say, Tesla is behind. So when the competition deploys what they have "in the lab",
Do we know what Tesla has in their lab? I would expect Tesla to continue to improve their "FSD" too, so they may not fall so far behind the competition by the time those are deployed in the US.
 
At Tech Day, Xpeng shared some news on their FSD progress: […]

The English replay of their Tech Day is coming up tonight - XPENG - Official Website | XPENG Motors – XPENG (Global)
 
Do we know what Tesla has in their lab? I would expect Tesla to continue to improve their "FSD" too, so they may not fall so far behind the competition by the time those are deployed in the US.

Tesla has shown us what is "in the lab" on AI Day. But I realize Tesla's FSD is not static. It will continue to improve too. The question is how good will Tesla's vision-only become? Can Tesla offer L4, or better yet L5, with their vision-only approach before the competition is able to offer L4/L5 with their sensor fusion approach, especially since we are already seeing the competition deploy limited L4? My feeling is that Tesla will improve their vision-only FSD but it will likely require driver supervision for a while still. And if Tesla is offering FSD with driver supervision while the competition is offering FSD with no driver supervision, Tesla is going to lose their "lead".
 
Tesla, Waymo, NVIDIA, Wayve Panel Discussion: Overcoming Top Challenges in Autonomous Vehicles


Super interesting. Just from the bits I've listened to so far, the panelists are very well informed. It's great hearing from experts in the AV space. Thanks for sharing. I will watch the whole thing.

One bit on the modular vs. end-to-end approach really caught my attention. Drago (Waymo) argues for the modular approach but says that the trend is towards fewer modules that do more tasks. Alex (Wayve) argues for the end-to-end approach. It's a fascinating debate IMO; a schematic contrast of the two is sketched below.
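
For anyone unfamiliar with the terms of that debate, this is a purely illustrative sketch (every function here is a hypothetical placeholder, not either company's real architecture): a modular stack passes interpretable intermediate outputs between stages, while an end-to-end model maps sensors to controls directly:

```python
# Placeholder stage implementations so the sketch runs; real AV stages are
# large learned models and optimization-based planners.
def perception(sensor_data):   return {"objects": [], "lanes": []}
def prediction(world):         return {"trajectories": []}
def planning(world, futures):  return {"path": [], "speed": 0.0}
def controls(plan):            return {"steer": 0.0, "throttle": 0.0}

def modular_drive(sensor_data):
    """Modular stack: interpretable hand-offs between stages (Drago's side)."""
    world = perception(sensor_data)      # detect objects, lanes, signals
    futures = prediction(world)          # forecast other agents' motion
    plan = planning(world, futures)      # choose a path and speed profile
    return controls(plan)

def end_to_end_drive(sensor_data, model):
    """End-to-end: one learned map from sensors to controls (Alex's side).

    Fewer hand-designed interfaces, but the intermediate reasoning is
    harder to inspect and validate in isolation.
    """
    return model(sensor_data)
```

The trend Drago describes (fewer modules that do more tasks) sits between the two: stages get merged into larger learned blocks while keeping some inspectable interfaces.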
 
Tesla has shown us what is "in the lab" on AI Day. […] My feeling is that Tesla will improve their vision-only FSD but it will likely require driver supervision for a while still.
And, even more important to the folks in this forum who are Tesla owners, if Tesla's vision-only improvements come to rely on a 10x processing power increase and 4x camera resolution increase (in addition to potentially more cameras) in HW4, then what happens to FSD in the fleet of existing vehicles? Does it just become stuck at current L2 capability?
 
IMO, it is very likely the current fleet on HW3 will be stuck at L2.