Google AI: Revisiting the Unreasonable Effectiveness of Data

Is Xpeng using production fleet learning with these vehicles or not? And if so, in what form exactly? What sources substantiate this?

There are lots of vehicles equipped with cameras, radar, and ultrasonics that don't upload sensor data and don't do firmware updates.

Yes, they are, and they also do OTA/FOTA updates. Their recent update was for Navigation Guided Pilot (NGP), a Navigate on Autopilot equivalent.

Did you even watch the tech presentation video I linked?

In Xpeng's 2021 Q1 Investor conference they said: "we’re able to collect highly valuable rate cases with our customers using NGP. We’re now able to achieve fast iterations of our algorithm on a weekly basis, based on our advanced closed-loop data capabilities. With the growing number of P7s on the road, I believe Xpeng will have the largest and fast growing Smart EV fleet close-loop data capabilities on China’s road network."
 
There is little visibility into Xpeng's software. I'm unsure how much of Xpeng's ADAS software they actually own, control, or developed in-house.

Xpeng's ADAS software is called Xpilot. The latest production release of Xpilot is Xpilot 3.0.

Xpilot 3.0 uses a system called IPU-03 (third-gen Intelligent Processing Unit). IPU-03 is made by a company called Desay SV Automotive. Desay seems like the equivalent of Mobileye and the IPU series seems like the equivalent of Mobileye's EyeQ series.

IPU-03 isn't just chip hardware; in fact, the chip itself comes from Nvidia. As with Mobileye's EyeQ, IPU-03 appears to be an integrated hardware-software package.

This is from Desay's press release:

"Recently, Desay SV Automotive announced the launch of its third generation Intelligent Processing Unit (IPU-03). This re-inforces its commitment to become one of the future leading Level-4 players in the autonomous vehicle domain. With the introduction of the IPU-03, Desay SV Automotive re-affirms and is resolute to achieve this goal in the foreseeable years. Powered by NVIDIA’s Drive AGX Xavier platform, the IPU-03 will enable Xpeng Motors of China to achieve Level 3 autonomous driving capability in the company’s latest and future car model launches.​

Amongst the many significant intelligent features that Desay SV Automotive is able to offer are : 1) High-Speed Lane Change Assist (LCA) which assists the driver in making safe lane changing during high speed drive; 2) Safe Distance Assist (SDA) which assists the driver in keeping safe distance from other vehicles while in traffic jam; 3) Active Parking Assist (APA) which assists the driver in making easy parking; and 4) Automated Valet Parking (AVP) which enables the vehicle to perform self-parking (without driver). These are some intelligent features which are expected out of a Level-3 Autonomous Vehicle System. Desay SV Automotive has cleverly integrated multiple signals and information derived from the multitude and array of vehicle sensors (e.g. radars, lidar, camera, ultra-sonic, etc.) and performs complex data processing as well as fusion of derived information. All these in-house development work were performed with a high degree of knowledge in deep & machine learning algorithms coupled with strong artificial intelligence capabilities. The seamless operations of these intelligent functions exhibited by IPU03 is a testimony of those capabilities."​
IPU-03, in turn, runs Blackberry's QNX OS.

Then, Nvidia says:

"Development of the P7 began in Xpeng’s data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.​

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.​

Xpeng is also using NVIDIA DRIVE OS software, in addition to the DRIVE AGX Xavier in-vehicle compute, to run the XPilot 3.0 system. The open and flexible operating system enables the automaker to run its proprietary software while also delivering OTA updates for new driving features."​
So, how much of Xpilot 3.0 did Xpeng actually develop, versus buy from suppliers, namely Desay and Nvidia?

How much of the backend infrastructure for Xpilot did Xpeng actually develop vs. purchase?
 
I'm reading Glassdoor reviews for "Xmotors.ai", Xpeng's R&D subsidiary in Mountain View. The reviews are dismal.


"This is for the most part not a real company. Probably of 75% of the people here are not working on real projects. As others have mentioned, most of the people here spending time appropriating or copying demos from academic research or other companies and then passing it off as their own work. Only a quarter of the people are doing anything actually related to technology that will be deployed or used in production, the rest are a glorified marketing team. Investors have paid enormous sums of money to fund a US marketing campaign who's only goal is to attract more investment through PPTs. Real shareholder value. The admin is also horrendous. Never have I seen such unprofessionalism in the admin they have hired. Benefits promised at hiring are not delivered. Promised stock options never was real."​


"A lot of engineering time is spent on preparing unnecessary demos. This can often be a nuisance because it can distract you from your current project and bring down your productivity. This problem is more serious than it seems."​


"They talked about the company’s goal of making the world better, solving challenges and being the next Tesla to lure you to accept the offer. Once you come to work, you find people here are busy with producing demos used to show to the headquarter. And as mentioned by others “There are lots of talk,lots of planning,lots of meetings, but no actual action”. Senior managers in China crazy about demos, they have no direction and vision."​


"Poor management, poor (or non-existing, i.e fake) stock option policy, poor technology execution."​


"Many decisions that are made seem to be strange and beyond understanding of the people. An example is sometimes projects will start in the US office and then without much explanation we are asked to provide all results to the China team and stop working on projects. And then for some reason certain weeks after that, the project will be transferred back with the US team. And then later the entire project is outsourced to a contractor. And then the contract is cancelled."​


"* The stock options were too good to be true. They aren't granted, basically a broken promise​
* Press releases about Apple IP theft means you won't have any career prospects after working here​
* Senior management does not have technical credentials -- resulting mediocre or low quality middle management being hired -- naturally resulting in low quality talent being hired generally.​
* No technical questions asked during interviews. Speaks to quality of hiring practices. I was asked something along the lines of "how many binaries have I compiled in the last month"."​


"Lots of talk,lots of planning,but no actual action. Company talks like it is the next Tesla but actually have produced nothing of value. They have an electric car but otherwise it is all marketing. Investors and employees were tricked into thinking this is a technology company. It is not. It is a marketing company. No technical questions during interview and all the fancy presentations given to investors and management were copied from unrelated academic presentations."​



"you will lose your skills here because there no actual development. stock options are fake. no one wants to hire someone from a company with all the public news about FBI investigation. leaders have no experience."​


And it goes on. I'm not cherry picking these examples. Go see for yourself.
 
I'm reading Glassdoor reviews for "Xmotors.ai", Xpeng's R&D subsidiary in Mountain View. The reviews are dismal.
This is no different than attacking the person instead of the idea.
Elon is often considered a pretty awful person- pedo tweets, firing people he doesn't like, poor husband, lying about ex employees, abusive to employees, takes credit for things he didn't do, anti-Union, COVID denier, etc. There are whole podcasts about this. But we don't discuss that when we are discussing what Tesla as a company has actually done.
 
This is no different than attacking the person instead of the idea.
Elon is often considered a pretty awful person- pedo tweets, firing people he doesn't like, poor husband, lying about ex employees, abusive to employees, takes credit for things he didn't do, anti-Union, COVID denier, etc. There are whole podcasts about this. But we don't discuss that when we are discussing what Tesla as a company has actually done.

What a total non-sequitur! 😝

Read the reviews!
 
I can find every single category of complaint you found on the Tesla Glassdoor page as well. So we're in agreement Tesla can't be collecting a lot of data?

Tesla has very good reviews overall and we have pretty good visibility into its internal operations through investigative journalism and the like.

A large majority of Glassdoor reviews from Xpeng's Silicon Valley subsidiary report a shocking degree of either fraudulence, incompetence, or both, when it comes to Xpilot software development.

Combined with my discovery that a significant amount of Xpilot's functionality may have just been bought off the shelf from suppliers, this looks very bad for Xpeng.

It reminds me of Theranos talking about its amazing in-house R&D, which was actually in complete disarray and never produced anything of note, while the blood testing machine it actually used was purchased off the shelf from a supplier.
 
There is little visibility into Xpeng's software. I'm unsure how much of Xpeng's ADAS software they actually own, control, or developed in-house.

Xpeng's ADAS software is called Xpilot. The latest production release of Xpilot is Xpilot 3.0.

Xpilot 3.0 uses a system called IPU-03 (third-gen Intelligent Processing Unit). IPU-03 is made by a company called Desay SV Automotive. Desay seems like the equivalent of Mobileye and the IPU series seems like the equivalent of Mobileye's EyeQ series. IPU-03 isn't just chip hardware; in fact, the chip itself comes from Nvidia. As with Mobileye's EyeQ, IPU-03 appears to be an integrated hardware-software package.
No, it isn't. It's a multi-domain controller; it replaces the hundreds of ECUs found in cars with a single computer unit. It facilitates communication between the main chip (the Nvidia chip, in this case) and the car's actuators. You either design and build your own hardware or work with tier-1 suppliers to create one based on the chip of your choice. Xpeng used Nvidia's Xavier.

This has nothing to do with the AV software that Xpeng is actually creating/created.

"Desay SV argues that: tier-1 suppliers and OEMs will collaborate in the following two ways in the area of autonomous driving domain controller:​
First, tier-1 suppliers are devoted to making middleware and hardware, and OEMs develop autonomous driving software. As tier-1 suppliers enjoy edges in producing products at reasonable cost and accelerating commercialization, automakers are bound to partner with them: OEMs assume software design while tier-1 suppliers take on the production of hardware and integration of middleware and chip solutions.​
Second, tier-1 suppliers choose to work with chip vendors in solution design and research and development of central domain controllers and then sell their products to OEMs. Examples include Continental ADCU, ZF ProAI and Magna MAX4."​


IPU-03, in turn, runs Blackberry's QNX OS.
QNX is the most popular OS for providing functional safety, security, and real-time operation, and it is a standard in the auto industry. It's in over 150 million cars. SDC companies use either QNX or Linux. Disparaging Xpeng because they use the QNX secure OS is like disparaging a company because they run their software on Linux or Windows.


Then, Nvidia says:

"Development of the P7 began in Xpeng’s data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.​

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.​

Xpeng is also using NVIDIA DRIVE OS software, in addition to the DRIVE AGX Xavier in-vehicle compute, to run the XPilot 3.0 system. The open and flexible operating system enables the automaker to run its proprietary software while also delivering OTA updates for new driving features."​
So, how much of Xpilot 3.0 did Xpeng actually develop, versus buy from suppliers, namely Desay and Nvidia?

How much of the backend infrastructure for Xpilot did Xpeng actually develop vs. purchase?

A lot of SDC companies use Nvidia Drive OS, including Zoox, because it has an embedded RTOS, hypervisor, NVIDIA CUDA libraries, NVIDIA TensorRT, and other components optimized to provide direct access to the DRIVE AGX hardware acceleration engines, which is needed for accelerating the deep learning models that Xpeng will be developing.

Nvidia DRIVE OS is NOT Nvidia DRIVE AV or Nvidia DRIVE IX.

This has nothing to do with the AV software that Xpeng is actually creating/created.

Again, what you are doing is equivalent to disparaging companies for using Windows/Android/Linux and the drivers that come with them.
 
Rather than have an unbiased discussion of YOUR OWN MODEL, you resort to attempting to disparage and discredit a company because what they are doing completely throws a wrench into your own logic (or should I say illogic). Because none of this is actually based on logic, deductive reasoning, research, or data. If it were, you would actually research what others are doing and welcome the news. But no, it's 100% pure fanaticism.

For anyone wondering what DRIVE OS is and what it isn't, take a look below.

NVIDIA DRIVE® OS is a foundational software stack consisting of an embedded real-time operating system (RTOS), NVIDIA Hypervisor, NVIDIA® CUDA® libraries, NVIDIA TensorRT™ and other modules that provide you access to the hardware engines. DRIVE OS offers a safe and secure execution environment for applications such as secure boot, security services, firewall and over-the-air (OTA) updates.

Details:
  • Multiple guest operating systems
  • 64-bit user space and runtime libraries
  • NvMedia APIs for hardware-accelerated multimedia and camera input processing
  • CUDA parallel computing platform
  • Graphics APIs: OpenGL, OpenGL ES, EGL with EGLStream extensions
  • Deep learning libraries: TensorRT, cuDNN

For anyone wondering what QNX OS (which is integrated into Nvidia Drive OS by Nvidia) is and what it isn't, take a look below.

QNX OS for Safety​

Streamline your products’ functional safety certifications with a microkernel operating system pre-certified specifically for safety-critical embedded systems, and toolchains pre-qualified for building these systems. Ideal for building complex safe systems, the QNX OS for Safety is a full-featured, deterministic OS designed for use in every sector where functionally safe, reliable embedded software is critical: medical devices, industrial controls, aerospace, automotive, power generation, robotics and rail transportation.


For anyone wondering what a multi-domain controller like the IPU-03 is, take a look below.

Multi Domain Controllers​

Today, every electronic control system in the car, such as Instrument Cluster, Infotainment, Anti-lock braking system, Engine Management System, Transmission Control Unit, and Body Control Management is a self-sufficient unit with its own sources like ROM, RAM memory, microprocessor or microcontroller, I/O and power supply. The idea of automotive domain controller is to replace multiple distributed ECUs with a single powerful central computer with multi-core. Multi-core processing technologies integrate multiple ECUs into one single chip. In a multi-core solution, these individual ECUs retain separated and independent processing space. However, a lot of redundant components like housing, drivers, wire and harnesses, and power supplies can be eliminated. These not only largely save cost but also the component's weight and space. Moreover, communication between ECUs is within the processor itself instead of communicating over an external network like CAN or LIN, this will reduce the data latency and system complexity considerably.

 
I think you're confusing the IPU-03 sold by Desay SV with other products sold by Desay SV or other companies. Or you're simply misunderstanding the functionality offered by the IPU-03.

The IPU-03 is an autonomous driving domain controller. This is what the IPU-03 does in the Xpeng P7, according to a press release:

"Available in China, the Xpeng P7 is one of the world’s leading autonomous EVs and carries the Desay SV automatic driving domain control unit – the IPU-03. Through multi-sensor data collection, the IPU-03 calculates the vehicle’s driving status and provides 360-degreee omnidirectional perception with real time monitoring of the surrounding environment to make safe driving decisions."​

Like Mobileye, Desay appears to have developed an end-to-end autonomous driving system, encompassing perception, localization, path planning, decision-making, and control.

Xpeng doesn't seem competent to develop such a system in-house, at least not at their Silicon Valley subsidiary. I would venture further that the dysfunction at their Silicon Valley office probably extends to their offices in China as well.
 
A lot of SDC companies use Nvidia Drive OS, including Zoox

Source? If you Google any of the following search terms:

"zoox" "nvidia drive os"

"zoox" "nvidia drive"

"zoox" "nvidia os"

"zoox" "nvidia operating system"

"zoox" "nvidia" "operating system"


You get no relevant results except for a blog post that doesn't say Zoox is using (or has ever used) Nvidia Drive OS.
 
I did a little searching to see if I could find any info on what operating systems Zoox or other AV companies use.

For Zoox, I found a job posting that hints that Zoox may use some form of "real-time Linux" (archive.org, archive.is). Maybe Automotive Grade Linux?

If I were an AV company, I would want to use an OS that is either a) my own proprietary OS or b) a free and open source OS. Or a mix of (a) and (b). I would feel uneasy about using a proprietary, closed source, licensed OS owned and controlled by another company.
 
Ok, so let's discuss data collection. You claim Tesla has a wealth of data, more than anyone else, and this creates a lead in autonomy.
But what about the quality of the data? Every time I see this, I hear Tesla has X million/billion miles of data. Except all they generally have is metadata. They clearly are not uploading video of every car driving all the time, or even just on AP. We can all tell that, since they don't upload gigabytes after every drive. Machine learning / vision systems need this raw video as the "training signal" to do anything.

So why does it matter at all that Tesla has a bunch of cars out in the world? Aren't 1,000 vehicles logging data all the time with professional drivers giving feedback more useful than 1,000,000 cars just sending back disengagement metadata and a video clip now and then, with the trigger to send those video clips created by the very same network that is being trained?

99% of my disengagements with AP are just because AP is totally incapable of doing the thing I need it to (like slowing down for a red light or moving over for stopped traffic a quarter mile ahead). How do my disengagement stats even help to learn where AP needs to be trained more, or even form a useful metric on if changes are leading to a more useful system?

Tesla's own fracturing of autopilot among AP/EAP/FSD with different feature sets makes this even worse. My car doesn't stop for stop signs/red lights, but other cars do! So I have way more disengagements than a FSD car, just because I didn't want to pay $5K for this feature. Now my data is also worth less to Tesla. It seems if this data was so darn valuable to them, creating immense future value, that they would want as many people using as many features as much as possible, not maximizing their revenue this quarter.
I think neural nets benefit from lots of "dirty" data. Training on clean professional data isn't of much value.
 
If I were an AV company, I would want to use an OS that is either a) my own proprietary OS or b) a free and open source OS. Or a mix of (a) and (b). I would feel uneasy about using a proprietary, closed source, licensed OS owned and controlled by another company.
Then you can't fly on aircraft or use medical devices. Huge numbers of safety critical systems rely on 3rd party operating systems that have been proven to support safety critical needs, and come pre-certified and fully supported.

Look up Wind River.

Nobody writes their own OS. It's insane. It's a total waste of developer time. Everyone needs an OS- it's one of the things you buy because it's a commodity, not a differentiator.
 
Then you can't fly on aircraft or use medical devices. Huge numbers of safety critical systems rely on 3rd party operating systems that have been proven to support safety critical needs, and come pre-certified and fully supported.

Look up Wind River.

Nobody writes their own OS. It's insane. It's a total waste of developer time. Everyone needs an OS- it's one of the things you buy because it's a commodity, not a differentiator.

This is a non-sequitur. Airplanes and medical devices aren't software-differentiated products. An AV is. An AV essentially is software.

Tesla OS for the infotainment system is based on Linux. Similarly, I imagine companies like Tesla and Zoox are probably writing their own Linux-based RTOS. That's why I mentioned Automotive Grade Linux.

There's also Real-Time Linux, which uses the PREEMPT_RT patch for the Linux kernel.

On top of an RTOS version of Linux, you can run ROS (the Robot Operating System), which, despite the name, is not an OS, but helps with real-time robotics applications.
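For anyone unfamiliar, here is roughly what a trivial ROS node looks like. This is a generic sketch assuming a ROS 1 / rospy environment; the node and topic names are just placeholders, not anything from an actual AV stack.

```python
# A trivial ROS 1 node (rospy), just to illustrate that ROS is middleware
# running on top of the OS, not an OS itself. Node/topic names are placeholders.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("demo_publisher")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from a ROS node"))
        rate.sleep()

if __name__ == "__main__":
    main()
```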

Waymo's RTOS is probably based on the Linux kernel (or possibly the Zircon kernel).

The companies using Blackberry's proprietary QNX OS for AVs seem to be almost entirely legacy auto companies that don't know how to write software.
 
Ok, so let's discuss data collection. You claim Tesla has a wealth of data, more than anyone else, and this creates a lead in autonomy.
But what about the quality of the data? Every time I see this, I hear Tesla has X million/billion miles of data. Except all they generally have is metadata. They clearly are not uploading video of every car driving all the time, or even just on AP. We can all tell that, since they don't upload gigabytes after every drive. Machine learning / vision systems need this raw video as the "training signal" to do anything.

So why does it matter at all that Tesla has a bunch of cars out in the world? Aren't 1,000 vehicles logging data all the time with professional drivers giving feedback more useful than 1,000,000 cars just sending back disengagement metadata and a video clip now and then, with the trigger to send those video clips created by the very same network that is being trained?

99% of my disengagements with AP are just because AP is totally incapable of doing the thing I need it to (like slowing down for a red light or moving over for stopped traffic a quarter mile ahead). How do my disengagement stats even help to learn where AP needs to be trained more, or even form a useful metric on if changes are leading to a more useful system?

Tesla's own fracturing of autopilot among AP/EAP/FSD with different feature sets makes this even worse. My car doesn't stop for stop signs/red lights, but other cars do! So I have way more disengagements than a FSD car, just because I didn't want to pay $5K for this feature. Now my data is also worth less to Tesla. It seems if this data was so darn valuable to them, creating immense future value, that they would want as many people using as many features as much as possible, not maximizing their revenue this quarter.

As a data scientist I will give you some examples of how this can work, and why it matters. I will give you an example using an extremely simplified algorithm. I made an "AI" (lol, logistic regression) algorithm for detecting when a person enters and exits an elevator a few years ago. This machine learning algo is actually in production on Huawei phones.

I had people collect data for this. Because the use case is fairly narrow, you don't need that much data to get a working algorithm with high sensitivity and specificity (basically, it catches real events and has a low false positive rate). High enough to be good enough for production, because errors aren't going to kill anyone anyway.
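For reference, these are just the standard definitions of those two metrics; nothing here is specific to the elevator project, and the example numbers are made up.

```python
# Standard definitions of the two metrics mentioned above.
def sensitivity(true_pos, false_neg):
    # True positive rate: share of real events the model catches.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # True negative rate: 1 minus the false positive rate.
    return true_neg / (true_neg + false_pos)

print(sensitivity(99, 1))    # 0.99
print(specificity(995, 5))   # 0.995
```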

But say I wanted to get the algorithm to 99.9999999% accuracy in real-world operation. How could I do that? If I had 1,000 "professional" elevator users, would that get me there? No, because even then I probably wouldn't capture enough weird cases, which are the only ones I really need to go from 99% to 99.99999%. And in fact, if they are all professionals, they may use the phone and elevator in similar ways, which is exactly what I don't want. I want diversity.

What would I want? I would want data from every user with a Huawei phone. How do I know what to collect? Should I just collect everything? Of course not; that is inefficient. What I would do is put the production algorithm on the phones (99% accurate). Then I would only care about the cases where the algorithm's prediction and actual reality differ. For instance, if the algorithm is just using the accelerometer, but the phone also has a barometric pressure sensor, then I can use the latter to construct a ground truth (or something closer to one).

Then I would pull only the conflicting cases. This is a small percentage of the data. Then I would retrain the algorithm (now likely more complicated than a logistic regression) and get better accuracy, say 99.999%.

When I deploy this algorithm, I start getting less feedback, because there are fewer cases where the model is wrong. So it either takes longer, or I have to deploy to more users until I get even more data (like an order of magnitude more) to try the same thing again and build a bigger, better model.
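As a toy sketch of that loop (all names here are illustrative, not the actual implementation): keep only the sessions where the shipped model and the barometer-derived pseudo-label disagree, and fold those back into the next round of training.

```python
# Toy sketch of disagreement-triggered data collection. `model` and
# `pressure_label` are placeholders for the shipped classifier and the
# barometer-based pseudo ground truth, respectively.
def collect_hard_cases(sessions, model, pressure_label):
    hard_cases = []
    for session in sessions:
        predicted = model.predict(session["accelerometer"])
        actual = pressure_label(session["barometer"])   # approximate ground truth
        if predicted != actual:                          # keep only disagreements
            hard_cases.append((session, actual))
    return hard_cases                                    # a small % of all sessions

def next_model_version(trainer, base_dataset, hard_cases):
    # Fold the newly mined edge cases back into training, redeploy, repeat.
    return trainer.fit(base_dataset + hard_cases)
```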

The point being: this process is useful even for simpler algorithms that still need edge-case data! Autonomous cars have even sparser edge-case data.

It is hilarious as a data scientist to hear people thinking you will get enough sparse edge case data from some test cars. No. Way.

As for people without FSD, the FSD algorithm can still run in the background making predictions, and pull data only when its prediction is grossly different from the user's action.
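A rough sketch of what that "shadow mode" comparison could look like is below. This is purely illustrative; the field names and thresholds are assumptions, not Tesla's actual implementation.

```python
# Illustrative shadow-mode check: compare the background planner's output to
# what the driver actually did, and flag the clip only when they diverge.
def plans_diverge(predicted, actual, steer_tol=0.15, speed_tol=3.0):
    steer_err = abs(predicted["steering_angle"] - actual["steering_angle"])
    speed_err = abs(predicted["target_speed"] - actual["speed"])
    return steer_err > steer_tol or speed_err > speed_tol

def maybe_queue_clip(predicted, actual, clip_buffer, upload_queue):
    if plans_diverge(predicted, actual):
        upload_queue.append(list(clip_buffer))  # queue only the disagreeing clip
```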
 
As for people without FSD, the FSD algorithm can still run in the background making predictions, and pull data only when its prediction is grossly different from the user's action.

People also underestimate Tesla's increasing ability to identify and collect edge cases as the car's NN predictions improve. Once Tesla Vision can accurately position / predict all road objects, Tesla can ask the fleet for more odd situations, allowing them to tackle the trailing 9s quicker. You can imagine all sorts of odd situations (a rough sketch of possible triggers follows the list):

  • Car on the highway abruptly brakes with no car ahead of it
  • Car ahead is in a left turn lane but goes straight
  • There's a double yellow line ahead but cars are crossing it
  • Cars in all lanes are stopped for no obvious reason
  • Better understanding of circumstances / road object positions / velocities just prior to an accident
  • Etc.
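Here's a rough sketch of how triggers like the ones above might be expressed once the perception outputs are trustworthy. The structure, field names, and thresholds are hypothetical, not Tesla's actual campaign system.

```python
# Hypothetical fleet "trigger" rules evaluated on perception outputs.
from dataclasses import dataclass

@dataclass
class Snapshot:
    ego_decel: float            # m/s^2, positive while braking
    lead_vehicle_dist: float    # metres to nearest vehicle ahead (inf if none)
    ahead_in_left_turn_lane: bool
    ahead_goes_straight: bool

def hard_brake_no_lead(s: Snapshot) -> bool:
    # "Car on the highway abruptly brakes with no car ahead of it"
    return s.ego_decel > 4.0 and s.lead_vehicle_dist > 100.0

def turn_lane_but_straight(s: Snapshot) -> bool:
    # "Car ahead is in a left turn lane but goes straight"
    return s.ahead_in_left_turn_lane and s.ahead_goes_straight

TRIGGERS = [hard_brake_no_lead, turn_lane_but_straight]

def should_upload(snapshot: Snapshot) -> bool:
    # Flag the clip for upload if any campaign trigger fires.
    return any(trigger(snapshot) for trigger in TRIGGERS)
```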
 
For instance, if the algorithm is just using the accelerometer, but the phone also has a barometric pressure sensor, then I can use the latter to construct a ground truth (or something closer to one).
Yes, everything you said makes sense up to here. The issue here is ground truth. There is no ground truth when a Tesla customer car is out driving. You have no idea if the customer disconnected because of danger or because they just wanted to go get a donut, and you have no idea if you missed that speed limit sign. Tesla drivers are not training or even testing the system every time they drive, because the feedback the system can get is minimal, and all learning systems need a feedback signal.

As for people without FSD, the FSD algorithm can still run in the background making predictions, and pull data only when its prediction is grossly different from the user's action.
Only for a very narrow set of actions. Without FSD active, there is no planned route, so FSD has no idea whether the user turned at that intersection because they meant to or because there was construction. If you watched a video of someone driving in the city and had no idea of their destination, how would you determine when they did something unexpected? Even if they have NAV on, how often does it get ignored because of traffic, missed turns, or new information?

People also underestimate Tesla's increasing ability to identify and collect edge cases as the car's NN predictions improve. Once Tesla Vision can accurately position / predict all road objects, Tesla can ask the fleet for more odd situations, allowing them to tackle the trailing 9s quicker. You can imagine all sorts of odd situations:
You can imagine them, but Karpathy specifically says he hand-codes specific detections to collect data. They teach it a few stop signs, then hope it can pick up more and more stop sign variations. They are nowhere near just having the system learn by itself and just record and upload "odd" situations.

Check out this video of him from a year ago. He's talking about how they collect all the variations of stop signs so they can train the model on them. These are single-frame images of single objects, which humans then have to classify. They are nowhere near complex conditional cases like "all lanes are stopped for no reason".

None of this is to say that Tesla gets no value from the cars on the road; clearly they do. But this data is much less valuable per mile than data from a dedicated test fleet, especially when the functionality is so basic that users are fully expected to take over for the system constantly.
 
They are nowhere near just having the system learn by itself and just record and upload "odd" situations.

Yup, I wasn't saying that the car understands what is odd by itself. As the NN predictions improve, the Tesla team can ask for odd situations because the car is better able to understand more about its surroundings.

If the NN predictions aren't good, they'll get more trash data to sift through.