
Autonomous Car Progress

Before the pandemic, I think the airport is where the real $ was. Many riders are on expense accounts, where cost doesn't matter and only reliability and comfort do. I always got an Uber or coach for myself and never shared. Uber was a game-changer since I had a real-time update and didn't have to worry whether my reservation was forgotten or the driver was running behind.

I think I used to pay $25/day for airport parking at SFO and SJC, if it was available. It made more sense to get an Uber or coach so I wouldn't have to bother with reserving or figuring out parking and risking damage to my car.

It's actually the opposite. Ridership is very low because their 5 x 10 mile service area (~2% of the Phoenix metro area) is useless. As such, they only have ~0.5 revenue cars per square mile. Let's say each service vehicle can cover 5 square miles with reasonable response times. So right now they have 10 fully staffed service cars for only 25 revenue cars. That's awful, with higher costs than a taxi!

A viable business model, e.g. one that cherry-picks high-traffic routes (airport, downtown, ASU campus), would have 5-50 revenue cars per square mile. That's one service car per 25-250 revenue cars, a very manageable cost structure.

The same math applies if they cover the entire metro area well enough to be a 2nd/3rd-car replacement (and even a 1st-car replacement for urban singles, the elderly, etc.). Figure 30k cars, similar to Uber/Lyft in a well-served metro area and something like 5% of all local traffic. That's 10 revenue cars per square mile and a very manageable ratio of 1 service car to 50 revenue cars.
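To put rough numbers on the ratios above, here's a quick back-of-the-envelope sketch in Python. Every figure is one of the guesses from this post (5 sq mi per service car, the stated car densities), not Waymo's actual numbers:

```python
# Back-of-the-envelope fleet economics using the assumptions above.
# All inputs are illustrative guesses, not actual Waymo figures.

def service_ratio(area_sq_mi, revenue_cars_per_sq_mi, sq_mi_per_service_car=5):
    """Return (revenue cars, service cars, revenue cars per service car)."""
    revenue_cars = area_sq_mi * revenue_cars_per_sq_mi
    service_cars = area_sq_mi / sq_mi_per_service_car
    return revenue_cars, service_cars, revenue_cars / service_cars

scenarios = {
    "Today (5 x 10 mi zone)":      (50, 0.5),    # ~2% of the metro
    "Cherry-picked routes (low)":  (50, 5),
    "Cherry-picked routes (high)": (50, 50),
    "Full metro, 2nd-car usage":   (3000, 10),   # ~30k cars total
}

for name, (area, density) in scenarios.items():
    rev, svc, ratio = service_ratio(area, density)
    print(f"{name}: {rev:,.0f} revenue cars, {svc:,.0f} service cars, "
          f"{ratio:.1f} revenue cars per service car")
```

The first scenario prints 25 revenue cars against 10 service cars (a 2.5:1 ratio), which is the cost problem described above; the other scenarios show the ratio climbing to 25-250:1.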
 
...Careful observation indicates Tesla sets their confidence dial at 78.3%. ...
I was 91.5% confident that you were kidding around.
But you never can tell, because the crowd on TMC is not a Normal distribution.
In particular, comparisons of Tesla vs. Waymo reveal a strongly Bimodal characteristic, which can skew the data towards a low Coefficient of Humor.
 
If we're going to be really honest about it, L4 is autonomous driving.

The very definition of L5 is designed so heavily around both what humans are good at and what humans are bad at that it's badly flawed. It's simply not realistic from an AI perspective, and it's not even realistic about average human capability. Most humans have weather restrictions, and I think all of us have had moments driving in poorly marked construction zones going "wtf???".

Good thing that SAE doesn't say "the average human", right?
The average human statistics are skewed by drunk drivers, bad drivers and teenagers.

Heck, teenage drivers account for 12% of US traffic fatalities each year.

SAE says "skilled human drivers".

"As specified herein, Level 5 is distinguished from Level 4 by the fact that it is not operationally limited to a specific operational design domain and can rather operate on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle."

"“Unconditional/not ODD-specific” means that the ADS can operate the vehicle on-road anywhere within its region of the world and under all road conditions in which a conventional vehicle can be reasonably operated by a typically skilled human driver."


That eliminates not just the drunk drivers, bad drivers and teenagers, but also the drivers with 17-year-old cars, since more than half the cars on the road are 17 years old or older.

A capable L5 would have to be better than a skilled driver in the latest car model with all the active safety and ADAS features.
 
"As specified herein, Level 5 is distinguished from Level 4 by the fact that it is not operationally limited to a specific operational design domain and can rather operate on-road anywhere that a typically skilled human driver can reasonably operate a conventional vehicle."
"Typically skilled" is not the same as "skilled". In fact, I'd say it sounds pretty close to average, though probably excluding cases of impairment.
 
Could you elaborate? By definition, geofencing constrains you to a smaller area, and your system has to deal with the idiosyncrasies of that smaller geofenced area. If your argument is that "in theory" it still works if you expand your geofenced area, sure, but that is certainly non-trivial, and we're not in a position to fully understand the level of time, money and resources required to scale up "generally" to large swaths of the country with the approach Waymo has taken.

There is zero additional time, money and resources required for Waymo to drive outside their geofenced zone, since they are already doing it. Not only do they test in 25 cities, they test using different hardware/sensor configs and with reduced maps (presumably also with no map).

The myth is that Waymo is stuck in Phoenix because their system isn't general; that if your system is geofenced, then it doesn't work anywhere else; that the minute you cross the HD map boundary, "you lose all self-driving capability," as Tesla fans put it.

An AV consists of three separate systems: Perception, Behavior Prediction and Planning. Let's start with perception. It consists of object detection, classification & tracking (distance, velocity, orientation), object semantics, freespace, etc.

One acronym I'll be using is MTBF (Mean Time Between Failures): the miles you can drive between your perception system having a major error that could lead to an unsafe maneuver or accident. For example, the desired MTBF for Waymo's/Mobileye's perception systems is in the millions of miles.

An example of a failure is not detecting a pedestrian/vehicle/static object, or inaccurately estimating the distance, velocity, orientation or semantics of a pedestrian or vehicle.
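To make the MTBF idea concrete, here's a toy calculation. The error count is invented purely for illustration; the point is how far a six-figure mileage with even a couple of failures sits from a millions-of-miles target:

```python
# Toy perception-MTBF estimate. The error count below is hypothetical.
fleet_miles = 100_000          # e.g., driverless miles inferred for Phoenix
major_perception_errors = 2    # invented count of safety-relevant misses

mtbf_miles = fleet_miles / major_perception_errors
print(f"Observed perception MTBF: {mtbf_miles:,.0f} miles")      # 50,000 miles

target_mtbf = 2_000_000        # "millions of miles" target
print(f"Shortfall vs. target: {target_mtbf / mtbf_miles:.0f}x")  # 40x
```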

We can infer that Waymo has driven 100k+ miles with no driver in Phoenix. The question then becomes:

Do you shape-shift into an alien when you travel from one city to another? (This is actually very important for the perception system.)
Do your cars transform like Autobots when you go from one city to another? (Again, very important for the perception system.)
Does your car transform into a UFO and levitate when you travel between cities? (This is vital for the prediction system.)
Do you walk backwards like in Tenet when you visit other cities? (This is again vital for the prediction system.)

If you answered yes to any of these questions, you should go to Phoenix, and if your logic is correct, Waymo's perception and prediction systems will fail and run you over/rear-end you. You would be in for a big payday.

What about the people in Phoenix? There are 19 million visitors from around the world who visit Phoenix each year. All the millions of tourists who fly/drive into Phoenix are in danger of being run over/rear-ended if Waymo's perception & prediction are brittle, not general, and bound to instantly fail.

But none of that is true, which is what allows Waymo's NNs to generalize. This is why SDC companies test in a dozen cities (cities that contain highways, suburbs and urban cores, and cities with a variety of weather conditions like sun, snow and rain) and use the experience gained from those cities to develop one driver. The road system isn't as diverse as most think. If I showed you 10 pictures from 10 different cities, you would think they were pictures from the exact same city. Why? Because more than 90-95% of the US road system is virtually identical. That 90-95% is what I'd expect if I did an OpenStreetMap analysis of US roads.
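For the curious, here's roughly what that OpenStreetMap analysis could look like with the osmnx library (the city choices are mine, and this only tallies road-class mixes; it's a sketch of the idea, not a study):

```python
# Sketch: compare the OSM road-class mix of two cities with osmnx.
# If the post's claim holds, the distributions should look nearly identical.
import osmnx as ox

def road_class_mix(place):
    """Fraction of drivable road segments per OSM 'highway' class."""
    G = ox.graph_from_place(place, network_type="drive")
    edges = ox.graph_to_gdfs(G, nodes=False)
    # The 'highway' tag can be a string or a list; take the first entry.
    classes = edges["highway"].apply(lambda h: h[0] if isinstance(h, list) else h)
    return classes.value_counts(normalize=True)

for place in ["Tempe, Arizona, USA", "Chandler, Arizona, USA"]:
    print(place)
    print(road_class_mix(place).head(8), "\n")
```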

I'm not just making this up.

This is demonstrated by three systems that are about to release in the coming months.

  • FSD Beta, which works anywhere in the US, currently at a safety disengagement roughly every 1-5 miles.
  • Huawei Advanced Autopilot, which will be released in 6 months and works anywhere in China (orders of magnitude harder to drive in than the US), at a claimed 1,000 km per safety disengagement.
  • Finally, Mobileye's SuperVision, which works anywhere in the world thanks to REM HD maps; it will be released in around 5 months and will function anywhere in China, which again is orders of magnitude harder to drive in than the US.
 
I get how Perception systems and the underlying components work. I train and develop object detection / computer vision algorithms on a daily basis at work. But there are several nuances that your argument above does not capture.

At a base level, the component NNs and algorithms can generalize fairly well since, as you pointed out, for the most part things look quite similar across the US. But there is a HUGE long tail of edge cases that driving around in Phoenix is not going to capture. The 80-90% solution in 2021 is quite easy to arrive at. It is that last 10-20% that is excruciatingly hard to nail down, because there are so many edge cases that need to be handled well. Tesla has been focused on this long tail for a long time now and has been able to capture a decent chunk of it by virtue of having a huge fleet of cars on the road constantly capturing new training data and being able to run algorithms in shadow mode, hand in hand with the impressive back-end ML infrastructure they have built out, which Karpathy has described in several talks. I'm sure Waymo has also had to do a lot of this to improve their system to where it is (which is quite impressive) in their current areas of operation.
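For readers unfamiliar with shadow mode, here's a minimal sketch of the general idea: a candidate model runs passively, and frames get flagged for upload when its output disagrees with what the driver actually did. The names, trigger logic and threshold are hypothetical, not Tesla's actual implementation:

```python
# Minimal "shadow mode" trigger sketch. All names/thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: str
    model_steering: float   # candidate model's proposed steering angle (rad)
    human_steering: float   # what the human driver actually did (rad)

def shadow_mode_triggers(frames, disagreement_threshold=0.15):
    """Flag frames where model and driver disagree enough to be worth
    uploading as potential long-tail training examples."""
    return [f for f in frames
            if abs(f.model_steering - f.human_steering) > disagreement_threshold]

frames = [
    Frame("a", model_steering=0.02, human_steering=0.01),   # agreement
    Frame("b", model_steering=0.40, human_steering=-0.10),  # disagreement
]
print([f.frame_id for f in shadow_mode_triggers(frames)])   # ['b']
```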

As someone who has trained cutting-edge, real-time object detection systems for many years now, I will say that while the initial 80-90% solution is relatively easy and can generalize fairly well, there is a ridiculous long tail of edge cases that a small geofenced area is not going to capture. And you need to capture those, train on those, and improve your component models on those. Unfortunately, despite all the hype in ML and computer vision, the fact of the matter remains that for most of these component building-block algorithms, brute-force training with large amounts of high-quality data that sufficiently samples the long tail of edge cases you care about is the only real thing you can do to improve them (along with some clever-ish self-supervised/weakly-supervised learning tricks).

And then there are the system-level algorithms/reasoners/trackers that run on top of these component blocks and are ultimately in charge of the final reasoning and actions taken by the self-driving software. Those get even more complex, and their long tail gets even worse, because it isn't just about a particular type of intersection or configuration of traffic but an amalgamation of all the factors impacting each underlying component algorithm, along with nuanced things that may be specific to a certain state/city/town/neighborhood.

As someone who works on this stuff all the time, I strongly disagree that this is as simple as flipping a geofencing switch for Waymo. I'm not saying that their approach can't generalize at all, or that they will have to start from scratch each time. They will have a pretty good base system that will still work, but you cannot get away from the additional things in the long tail of scenarios you will be exposed to as you expand your geofencing, and factoring all that in, updating your component and higher-level algorithms to acceptably handle these scenarios, will take a bunch of time, money and resources.

Even if Waymo is relying on a different sensor suite, that doesn't obviate the need for this work to happen, and it is also why Tesla continues to collect more data and improve their algorithms to better handle the long-tail stuff. Waymo's technological choices may let them surmount this issue more easily than Tesla's vision-only approach, but they will still need to put in the extra work and resources to get things to generalize at the level they need when expanding their geofenced regions of operation.
 
I wouldn't waste any time discussing progress or comparisons until Tesla releases V9 beta.

At this point, Tesla is the only fsd developer that matters. Then there's Waymo and the other fsd developers doing the Waymo-approach. Things are looking bad for the Waymo-approach. It'll be clear within a year.

Previously I had hoped multiple approaches would provide safe, reliable, and practical fsd, but considering the immense challenge of the problem, there's likely only going to be one approach that "wins".
 
I get how Perception systems and the underlying components work. I train and develop object detection / computer vision algorithms on a daily basis at work. But there are several nuances that your argument above does not capture.

At a base level, the component NNs and algorithms can generalize fairly well since, as you pointed out, for the most part things look quite similar across the US. But there is a HUGE long tail of edge cases that driving around in Phoenix is not going to capture. The 80-90% solution in 2021 is quite easy to arrive at. It is that last 10-20% that is excruciatingly hard to nail down, because there are so many edge cases that need to be handled well.
You missed the part where I said that Waymo has been testing in 25 cities. SDC companies test in the hardest cities for a reason: to increase the number of cases their system can handle.

Tesla has been focused on this long tail for a long time now and has been able to capture a decent chunk of it by virtue of having a huge fleet of cars on the road constantly capturing new training data
This is simply not true. Tesla has NOT been focused on the long tail. They have been focused on developing driver assists to match what they had on AP1 with Mobileye and on developing the features they promised under EAP. This is proven by the fact that FSD Beta fails in simplistic situations. You are not dealing with edge cases that show up once every million miles when your system is crashing every 1-5 miles on average on city streets.


and being able to run algorithms in shadow mode, hand in hand with the impressive back-end ML infrastructure they have built out, which Karpathy has described in several talks.
Have you seen any talks on any other company's infrastructure?
As someone who has trained cutting-edge, real-time object detection systems for many years now, I will say that while the initial 80-90% solution is relatively easy and can generalize fairly well, there is a ridiculous long tail of edge cases that a small geofenced area is not going to capture. And you need to capture those, train on those, and improve your component models on those.
Again, like I have said, Waymo's NNs are trained on datasets from 25 cities. For perception in favorable weather, they are not 80-90% there; they are 100% there.
And then there are the system-level algorithms/reasoners/trackers that run on top of these component blocks and are ultimately in charge of the final reasoning and actions taken by the self-driving software. Those get even more complex, and their long tail gets even worse, because it isn't just about a particular type of intersection or configuration of traffic but an amalgamation of all the factors impacting each underlying component algorithm, along with nuanced things that may be specific to a certain state/city/town/neighborhood.

As someone who works on this stuff all the time, I strongly disagree that this is as simple as flipping a geofencing switch for Waymo. I'm not saying that their approach can't generalize at all, or that they will have to start from scratch each time. They will have a pretty good base system that will still work, but you cannot get away from the additional things in the long tail of scenarios you will be exposed to as you expand your geofencing, and factoring all that in, updating your component and higher-level algorithms to acceptably handle these scenarios, will take a bunch of time, money and resources.
So explain to me: how did Huawei create a system that works anywhere in China?
And how did Mobileye create a system that works anywhere in the world, including China?

This isn't some theory. It already happened. This is something that no Tesla fan can admit, because it completely invalidates their narrative. So they simply ignore it.
Even if Waymo is relying on a different sensor suite, that doesn't obviate the need for this work to happen
If you had worked with lidar data, you would know that it has orders of magnitude less variance than camera data and needs orders of magnitude less training data.
and it is also why Tesla continues to collect more data and improve their algorithms to better handle the long-tail stuff.

Waymo's technological choices may let them surmount this issue more easily than Tesla's vision-only approach, but they will still need to put in the extra work and resources to get things to generalize at the level they need when expanding their geofenced regions of operation.

If you crash one minute after you leave your garage, you are not dealing with the long tail.

Waymo, Cruise and Tesla all operate in SF, and we know their disengagement rates. It's not even close.
 
At this point, Tesla is the only fsd developer that matters.

This comes across as incredibly arrogant. You may prefer the Tesla approach. You may even think Tesla has the best chance of "winning". But you can't just outright dismiss other FSD efforts, especially when companies like Waymo and Cruise have cutting-edge, state-of-the-art FSD. To say that AV companies with some of the most advanced FSD in the world don't matter is ridiculous.

Then there's Waymo and the other fsd developers doing the Waymo-approach. Things are looking bad for the Waymo-approach. It'll be clear within a year.

That's your opinion, and it is entirely subjective. Most people don't think Waymo is doing badly at all. In fact, the general consensus is that Waymo is the leader in autonomous driving. But you keep making these declarative predictions that everybody will fail within a year, except Tesla of course. So far, you've been wrong.

Previously I had hoped multiple approaches would provide safe, reliable, and practical fsd, but considering the immense challenge of the problem, there's likely only going to be one approach that "wins".

IMO, if there is one approach that "wins", it is likely to be sensor fusion, because sensor fusion, done right, increases safety. But within the sensor fusion camp, there are likely to be several variations that "win". At the end of the day, the safest FSD will win.
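To illustrate the "done right" part: in the simplest textbook case, fusing two independent, unbiased range measurements by inverse-variance weighting always yields a lower-variance estimate than either sensor alone. This is a generic sketch with made-up numbers, not any particular AV stack:

```python
# Textbook inverse-variance fusion of two independent range estimates.
# The fused variance is always <= the smaller of the two input variances.

def fuse(z1, var1, z2, var2):
    """Optimal linear fusion of two independent, unbiased measurements."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    fused = w1 * z1 + (1 - w1) * z2
    fused_var = 1 / (1 / var1 + 1 / var2)
    return fused, fused_var

# Hypothetical numbers: noisy camera range estimate + tighter radar estimate.
dist, var = fuse(z1=48.0, var1=4.0, z2=50.0, var2=1.0)
print(f"fused distance: {dist:.1f} m, variance: {var:.2f}")  # 49.6 m, 0.80
```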
 
I wouldn't waste any time discussing progress or comparisons until Tesla releases V9 beta.

At this point, Tesla is the only fsd developer that matters. Then there's Waymo and the other fsd developers doing the Waymo-approach. Things are looking bad for the Waymo-approach. It'll be clear within a year.

Previously I had hoped multiple approaches would provide safe, reliable, and practical fsd, but considering the immense challenge of the problem, there's likely only going to be one approach that "wins".

Oh look. The guy who told us Tesla would be at ~150k miles per safety disengagement in 6-9 months, 7 months ago.

The guy who said Tesla would have Level 5 by the end of 2021, a couple of months after his 6-9 month prediction had evidently failed. The same guy is now telling us to wait another year. Then next year he will say to just wait another year.

Rinse and repeat.

Meanwhile, Tesla's FSD is still at a safety disengagement per drive. The same as it was in 2016, the same as it was in 2019, and still the same in 2021.

At this point, Waymo, Cruise, Mobileye, Huawei, etc. are the only FSD developers that matter. Then there's Tesla, with webcam-grade cameras at compromised angles that are susceptible to inclement weather and have no air or water cleaning solution. Things are looking bad for Tesla. It is already clear and evident with Waymo going driverless in Phoenix, and it will continue to be when Huawei's and Mobileye's door-to-door autopilot releases in China in a few months.
 
SAE says "skilled human drivers".

It says "typically skilled human drivers"

To me that means average licensed human drivers

Drunk drivers don't skew the operating domain of human drivers.
Teenager accidents don't skew the operating domain of human drivers.
Bad drivers are likely bad drivers because they operate in domains beyond their capability. Some even try to drive on public roads. :p

All a human driver has to do is pass a test that doesn't even cover operating domains beyond ideal situations. This makes it extremely unfair to the robot, because it has to operate safely in situations human drivers aren't even tested in. My "L5 status" is untested, as I choose not to drive in difficult situations: in the middle of Paris, or during white-out blizzard conditions. I don't even take uncontrolled left turns across multiple lanes of heavy traffic (I don't have the patience to wait).

If human drivers had to drive safely in all situations, we'd have a substantially smaller operating domain than we assume we're capable of. Instead, we give up safety margin in order to get places in non-ideal situations. An autonomous driving system can't give up safety margin.

I'm not willing to give up safety margin, so I'll remain an L4 driver unless cookies are involved. If I need cookies I might qualify for L5 status during that drive. That exception can't be given to autonomous cars, as they are not cookie monsters.

When SAE does write expected-safety requirements for autonomous vehicles, I hope they reject "typically skilled" and instead go with "skilled drivers". That way it really does exclude drunks, inexperienced drivers, old drivers (who run into buildings), etc.
 
When SAE does write expected-safety requirements for autonomous vehicles, I hope they reject "typically skilled" and instead go with "skilled drivers". Or better yet, it should use German drivers as the frame of reference.

I suspect that when IEEE P2846 is released at the end of this year, it will define AV safety more precisely than "typically skilled human driver."
 
Yes, great. But it's inexplicable that those with the same hardware (presumably!) as the new builds (sans radar) would now be put on a lower priority list for getting "Tesla Vision." Given the v8.x abandonment, we have become bastard stepchildren with one certainty: we will not see "Tesla Vision" FSD (or ANY FSD for city streets) until Tesla validates with the radar-less vehicles.

Illogical and a really bad move from a PR standpoint. (Are they charging $15K for the new FSD? If not, I paid the same $10K for FSD in 2018 and 2020, so why would I be put at the back of the line?) Our cars have all the hardware. Software can easily ignore the radar, so this is truly a mystery.
 

Do you want your Autopilot to suddenly be restricted to 75 MPH because they turned your radar off? (In addition to disabling summon, etc.) Or would you rather wait until they bring Tesla Vision to feature parity before they turn off your radar?
 
Cross-posting from another thread because, to me, this is a fascinating and entertaining example of a company’s claims about the autonomy software they’ve developed looking pretty dubious. The employee reviews paint a picture of the biggest “sugar” show at a big “tech” company I’ve ever heard of, with the exceptions of Nikola and Theranos.



What did Xpeng actually develop?

There is little visibility into Xpeng's software. I'm unsure how much of Xpeng's ADAS software they actually own, control, or developed in-house.

Xpeng's ADAS software is called Xpilot. The latest production release of Xpilot is Xpilot 3.0.

Xpilot 3.0 uses a system called IPU-03 (third-gen Intelligent Processing Unit). IPU-03 is made by a company called Desay SV Automotive. Desay seems like the equivalent of Mobileye and the IPU series seems like the equivalent of Mobileye's EyeQ series.

IPU-03 isn't just chip hardware; in fact, the chip itself comes from Nvidia. As with Mobileye's EyeQ, IPU-03 appears to be an integrated hardware-software package.

This is from Desay's press release:

"Recently, Desay SV Automotive announced the launch of its third generation Intelligent Processing Unit (IPU-03). This re-inforces its commitment to become one of the future leading Level-4 players in the autonomous vehicle domain. With the introduction of the IPU-03, Desay SV Automotive re-affirms and is resolute to achieve this goal in the foreseeable years. Powered by NVIDIA’s Drive AGX Xavier platform, the IPU-03 will enable Xpeng Motors of China to achieve Level 3 autonomous driving capability in the company’s latest and future car model launches.​

Amongst the many significant intelligent features that Desay SV Automotive is able to offer are : 1) High-Speed Lane Change Assist (LCA) which assists the driver in making safe lane changing during high speed drive; 2) Safe Distance Assist (SDA) which assists the driver in keeping safe distance from other vehicles while in traffic jam; 3) Active Parking Assist (APA) which assists the driver in making easy parking; and 4) Automated Valet Parking (AVP) which enables the vehicle to perform self-parking (without driver). These are some intelligent features which are expected out of a Level-3 Autonomous Vehicle System. Desay SV Automotive has cleverly integrated multiple signals and information derived from the multitude and array of vehicle sensors (e.g. radars, lidar, camera, ultra-sonic, etc.) and performs complex data processing as well as fusion of derived information. All these in-house development work were performed with a high degree of knowledge in deep & machine learning algorithms coupled with strong artificial intelligence capabilities. The seamless operations of these intelligent functions exhibited by IPU03 is a testimony of those capabilities."​
Another press release:

"Available in China, the Xpeng P7 is one of the world’s leading autonomous EVs and carries the Desay SV automatic driving domain control unit – the IPU-03. Through multi-sensor data collection, the IPU-03 calculates the vehicle’s driving status and provides 360-degreee omnidirectional perception with real time monitoring of the surrounding environment to make safe driving decisions."​

On its website, Desay claims to have developed an autonomous driving system that encompasses perception, localization, path planning, decision-making, and control.
Desay’s IPU-03 runs Blackberry's QNX OS, a proprietary, closed source real-time operating system.

Nvidia says of the Xpeng P7:

"Development of the P7 began in Xpeng’s data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.​

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.​

Xpeng is also using NVIDIA DRIVE OS software, in addition to the DRIVE AGX Xavier in-vehicle compute, to run the XPilot 3.0 system. The open and flexible operating system enables the automaker to run its proprietary software while also delivering OTA updates for new driving features."​
So, how much of Xpilot 3.0 did Xpeng actually develop, versus buy from suppliers, namely Desay and Nvidia?

How much of the backend infrastructure for Xpilot did Xpeng actually develop vs. purchase?



“This is for the most part not a real company”

I'm reading Glassdoor reviews for "Xmotors.ai", Xpeng's R&D subsidiary in Mountain View. The reviews are dismal.


"This is for the most part not a real company. Probably of 75% of the people here are not working on real projects. As others have mentioned, most of the people here spending time appropriating or copying demos from academic research or other companies and then passing it off as their own work. Only a quarter of the people are doing anything actually related to technology that will be deployed or used in production, the rest are a glorified marketing team. Investors have paid enormous sums of money to fund a US marketing campaign who's only goal is to attract more investment through PPTs. Real shareholder value. The admin is also horrendous. Never have I seen such unprofessionalism in the admin they have hired. Benefits promised at hiring are not delivered. Promised stock options never was real."​


"A lot of engineering time is spent on preparing unnecessary demos. This can often be a nuisance because it can distract you from your current project and bring down your productivity. This problem is more serious than it seems."​


"They talked about the company’s goal of making the world better, solving challenges and being the next Tesla to lure you to accept the offer. Once you come to work, you find people here are busy with producing demos used to show to the headquarter. And as mentioned by others “There are lots of talk,lots of planning,lots of meetings, but no actual action”. Senior managers in China crazy about demos, they have no direction and vision."​


"Poor management, poor (or non-existing, i.e fake) stock option policy, poor technology execution."​


"Many decisions that are made seem to be strange and beyond understanding of the people. An example is sometimes projects will start in the US office and then without much explanation we are asked to provide all results to the China team and stop working on projects. And then for some reason certain weeks after that, the project will be transferred back with the US team. And then later the entire project is outsourced to a contractor. And then the contract is cancelled."​


"* The stock options were too good to be true. They aren't granted, basically a broken promise​
* Press releases about Apple IP theft means you won't have any career prospects after working here​
* Senior management does not have technical credentials -- resulting mediocre or low quality middle management being hired -- naturally resulting in low quality talent being hired generally.​
* No technical questions asked during interviews. Speaks to quality of hiring practices. I was asked something along the lines of "how many binaries have I compiled in the last month"."​


"Lots of talk,lots of planning,but no actual action. Company talks like it is the next Tesla but actually have produced nothing of value. They have an electric car but otherwise it is all marketing. Investors and employees were tricked into thinking this is a technology company. It is not. It is a marketing company. No technical questions during interview and all the fancy presentations given to investors and management were copied from unrelated academic presentations."​



"you will lose your skills here because there no actual development. stock options are fake. no one wants to hire someone from a company with all the public news about FBI investigation. leaders have no experience."​


And it goes on. I'm not cherry picking these examples. Go see for yourself.
 