
How does FSD beta get smarter?

Understand that there are many different tiers here. At the bottom are the basic visualization layer(s) that create the world view for the car. Above that are prediction layers that predict behavior, and above that are decision layers that drive car behavior and responses. (This is all a horrible simplification, of course.) The thing that makes the FSD beta exciting is that the bottom-most visualization layer has been completely reworked and gives much more detailed and accurate information (thank HW3 for making the necessary processing power available for that).
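To make that tiering concrete, here's a rough Python sketch of how a layered pipeline like that could be organized. Every name and structure below is my own illustration of the idea, not Tesla's actual code:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# All names here are hypothetical, purely for illustration.

@dataclass
class DetectedObject:
    kind: str                      # "car", "pedestrian", "stop_sign", ...
    position: tuple                # (x, y) in meters, relative to the car
    velocity: tuple = (0.0, 0.0)   # (vx, vy) in m/s

@dataclass
class WorldView:
    objects: List[DetectedObject] = field(default_factory=list)

def perception_layer(camera_frames) -> WorldView:
    # Bottom tier: build the world view from raw sensor data.
    # (This is the layer the FSD beta rewrite replaced.)
    # Stub: a real system would run neural nets over the frames.
    return WorldView(objects=[DetectedObject("car", (30.0, 0.0), (15.0, 0.0))])

def prediction_layer(world: WorldView) -> Dict[int, tuple]:
    # Middle tier: guess where each object will be one second from now.
    return {i: (o.position[0] + o.velocity[0], o.position[1] + o.velocity[1])
            for i, o in enumerate(world.objects)}

def decision_layer(world: WorldView, predictions: Dict[int, tuple]) -> dict:
    # Top tier: choose a driving response from the world view plus predictions.
    # Stub rule: ease off if anything is predicted to be within 20 m ahead.
    too_close = any(px < 20.0 for px, _ in predictions.values())
    return {"throttle": 0.0 if too_close else 0.3, "brake": 0.2 if too_close else 0.0}

def drive_one_tick(camera_frames) -> dict:
    world = perception_layer(camera_frames)
    predictions = prediction_layer(world)
    return decision_layer(world, predictions)

print(drive_one_tick(camera_frames=None))  # {'throttle': 0.3, 'brake': 0.0}
```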

So right now the car is running two distinct "stacks" .. the old stack for NoA and Smart Summon, and the new stack for City Streets (aka FSD). I'm sure as FSD stabilizes the Tesla engineers will migrate NoA and Smart Summon to use the new stack since (a) it will work far better, and (b) they only want to support one stack, not two, and (c) it will give a more fully integrated feel to AP overall, since the move from city streets to freeways will be seamless.

That's a huge bummer that they have left the old version on NoA. I find that driving on the LA freeways is a fairly dangerous experience on NoA. The car often fails to slow down for curves that require it. It also gets way too close to the barriers when in the far left lane. Bummer that they didn't just use the rewrite to improve everything at once.
 

I think it's just a matter of one step at a time .. you will see it reworked sometime later in 2021 I suspect.
 
And yet there were supposed to be robotaxis already!

This is constantly being taken out of context. At Autonomy Day, Elon said that by the end of 2020 there would be 400K Tesla cars on the road with the capability to become robotaxis “at the flip of a switch”, i.e. whenever the software was ready. He was touting Tesla’s hardware suite and over-the-air rollout ability. Not promising robotaxis this year.

I knew as soon as he said it that it’d be misquoted over and over and wish he had phrased it differently.
 

No, he said, "next year, for sure, we'll have a million robotaxis on the road. The fleet wakes up with an over the air update."

(Autonomy Day video, 2:04:41)

There is nothing about needing approval to turn the fleet on. He clearly states that there WILL be millions of robotaxis on the road.
 
Ok, first of all, “over a million” does not equal “millions”. Tesla just made their millionth car in March.

Second, I didn’t say anything about approval. I said I thought it was pretty clear from context that when Elon said “robotaxis on the road” he was referring to rolling stock capable of FSD whenever the software update is ready (as compared to say Waymo with a fleet of mere dozens of prototypes), not a fully built-out robotaxi network by 12/31/20. For one thing, neither Elon nor anyone else at Tesla has ever given a timeline for having a Tesla Network app ready (I’m not counting the very rough proof of concept they used at a demo once).

I certainly agree Elon could save himself some trouble and choose his words more carefully, and his timelines are famously optimistic, but there are no false promises or intent to mislead there.

FWIW, I do think the first wide FSD software update will be released by end of year, but it was never going to be perfectly capable of Level 5 right out the gate. That’s just not how software works.
 
Second, I didn’t say anything about approval.

No, but Elon did.

Elon Musk in 2019 said:
"I feel very confident predicting autonomous robotaxis for Tesla next year," said Musk. "Not in all jurisdictions, because we don't have regulatory approval everywhere. But I'm confident we'll have regulatory approval somewhere next year. From our standpoint, if you fast-forward a year, maybe a year and three months…we'll have over a million robotaxis on the road."


He explicitly said that robotaxis would be approved in at least one jurisdiction in 2020.

That's clearly not happening.

(and 2021 ain't looking great right now either, given Dojo was still a full year out as of a couple of months ago)



I'm sure this has been explained before, but there are so many long threads that I thought I'd just straight up ask for a clear explanation.

My understanding is that when you file a bug report on a non-beta FSD Tesla, Tesla never reviews this information, and thus the car's ability to navigate on Autopilot doesn't improve. It's never been clear to me why Tesla would include a bug-filing feature but not use it to improve the system.

That feature is so if you open a service ticket with Tesla, there's a bookmark in the logs about the problem the technician can easily reference.

Other than that use, the data just sits locally on the car.

Imagine the amount of human review needed if they were going to look at every bug report from every person in a fleet of over 1 million cars (and likely to almost double next year).

Some folks post about how they were doing a bug report dozens of times a day for incorrect speed limits for example (because they incorrectly thought someone reviewed every report)... multiply that by fleet size and yikes.
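If it helps to picture it, the voice "bug report" is essentially just a timestamped bookmark written into the car's local logs. A minimal sketch, assuming a hypothetical log path and record format (nothing here is Tesla's real implementation):

```python
import json
import time

LOCAL_LOG = "/var/log/carlog.jsonl"   # hypothetical path, for illustration only

def file_bug_report(note: str) -> None:
    """Append a bookmark to the local log. Nothing is uploaded anywhere;
    a service technician can search for these entries later."""
    entry = {
        "type": "bug_report_bookmark",
        "timestamp": time.time(),
        "note": note,                 # whatever the driver said after "bug report"
    }
    with open(LOCAL_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

file_bug_report("speed limit shown as 45, road is posted 35")
```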


Now I'm seeing the FSD beta videos, and it appears people are filing bug reports and Tesla is putting out very quick updates, fixing very big issues in a matter of days.


The FSD beta testers have a special button on their screens (the extra, white camera icon you can see slightly to the right of the top middle in videos) that captures (and actually DOES send to Tesla) a MUCH more detailed set of logs (including video content) than what the standard bug report feature captures.

As you note, the beta fleet is vastly smaller, so they can afford the manpower to review those reports, which they couldn't do fleet-wide.


I think the fact the car doesn't learn locally has been explained by others now, so two other questions of yours seem to remain outstanding:

Sorry if this is a stupid question, but it just seems like Tesla was ignoring basic issues like going down curvy roads, and yet now they're able to take and implement feedback immediately. People keep mentioning the neural network, but I don't understand how that neural network works (I assume it's doing what I mentioned above - using driver engagement as a sign that its decision was faulty.)

Currently the only thing the system is using NNs for is detection/recognition.... basically it's the vision system.

Following is a very simplified explanation:


The NNs are fed input from the sensors and try to figure out what it's seeing.... identifying that thing over there is a sedan moving ahead of me at 38 miles per hour, that other thing is a pedestrian standing on the corner, those things up ahead are stop lights that are currently green, etc...

So that gets to training... let's say Tesla has a system that can't identify stop signs (it can now, but once could not).

So Tesla sends out a campaign to the fleet "send me a picture from this specific front camera every time you stop at a GPS location that the map says has a stop sign" (lots, but not all, stop signs are in the map data).

This sends a flood of photos back to Tesla. They have humans manually go through them and label the stop sign in the data.

This labeled data is then used to "train" the NN that will need to recognize stop signs, so that it "learns" what stop signs look like from different angles, in different light, etc...

With each round of new fleet data and training, it gets better at reliably recognizing stop signs under a very wide variety of conditions and situations.

Eventually you get the feature where it'll stop at them on its own because it's good enough to (along with the map data) realize it's seeing them virtually all the time.
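As a toy illustration of that collect/label/retrain cycle, here is what the training step might look like in PyTorch, using synthetic tensors in place of the real fleet snapshots and a tiny network in place of the real vision stack. This is purely a sketch of the general technique, not Tesla's pipeline:

```python
# Toy version of the "train on newly labeled fleet images" step.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend these are 64x64 RGB crops sent back by the fleet campaign,
# with human-applied labels: 1 = stop sign, 0 = not a stop sign.
images = torch.randn(256, 3, 64, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A tiny classifier standing in for the real (much larger) vision network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):            # each campaign round = more data, more training
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        logits = model(batch_images)
        loss = loss_fn(logits, batch_labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# The retrained weights would then ship to the fleet in a software update.
```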


Thanks, I just watched both of these. First off, it's amazing that there are people intelligent enough to code like this. It is beyond my mental capacity.

I will say that I'm still confused about AP's learning capabilities. In the videos, he mentions that the cars can predict scenarios based on data sets, but of course, its predictions can be inaccurate because there are an infinite number of nuanced situations. But then he talks about how a car can - for example - fail to recognize an occluded stop sign. And if it does, Tesla can ask the fleet to look for a number of these instances. Then they train the fleet to recognize occluded stop signs.


Right.... as in the above example, now that it recognizes clear stop signs, they could tell the fleet to send photos of, say, "any time your map data says there's a stop sign but the NN doesn't think there is one," and that will likely capture some occluded situations they can use to train the NN to recognize them.
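That kind of map-vs-NN disagreement trigger is conceptually tiny. A sketch (hypothetical function, just to show the idea):

```python
def should_upload_snapshot(map_says_stop_sign: bool, nn_detected_stop_sign: bool) -> bool:
    # Campaign rule: the map expects a stop sign here but the vision NN
    # doesn't see one -- likely occluded, faded, or unusual, so worth a snapshot.
    return map_says_stop_sign and not nn_detected_stop_sign

print(should_upload_snapshot(True, False))   # True  -> send photos back for labeling
print(should_upload_snapshot(True, True))    # False -> nothing interesting here
```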


This implies - and he discusses at the end of the longer video - that when a driver disengaged Autopilot, Tesla knows the car didn't act as planned

Naah... maybe you just felt like taking over... or maybe you saw a woman with a stroller up ahead and didn't want to take any chances... or maybe there was a pothole which the current system will just drive right into and you wanted to avoid it... the car has no idea why you disengaged.... and the "disengagement report" it creates is tiny, basically just GPS coordinates of where you were and what method was used (brake, AP switch, wheel pull, etc).

Those tiny reports might be useful for fleet aggregation, like "Hey, the fleet shows that a huge % of users all disengage this same way at this same spot; maybe we should send out a campaign to capture pictures there and see what's up" (or they might just be able to look at that spot on Google and it's obvious what the issue is).
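To make that concrete, a disengagement report of the kind described could be just a few fields, and the fleet-side aggregation is basically counting how many cars disengage near the same spot. A sketch with invented field names:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class DisengagementReport:         # hypothetical, tiny on purpose
    lat: float
    lon: float
    method: str                    # "brake", "stalk", "wheel_pull", ...

def hotspot_counts(reports, precision=3):
    """Bucket reports by rounded GPS coordinates (~100 m at 3 decimals)
    and count how many disengagements landed in each bucket."""
    return Counter((round(r.lat, precision), round(r.lon, precision)) for r in reports)

fleet = [
    DisengagementReport(34.0522, -118.2437, "wheel_pull"),
    DisengagementReport(34.0521, -118.2436, "brake"),
    DisengagementReport(37.7749, -122.4194, "stalk"),
]
print(hotspot_counts(fleet).most_common(1))
# [((34.052, -118.244), 2)] -- lots of cars bailing out at the same spot
```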


My confusion is that he mentions the AI can make fixes on the fly without an update

It definitely cannot do that.



And then I've seen mentions of Dojo. What is Dojo and what does it have to do with the car learning to drive better?

Can anyone give a simple explanation?

Dojo is basically an AI/NN training supercomputer that will be able to train on massive amounts of data and, ideally, handle a significant amount of the labeling of it automatically (whereas again this is largely all done manually by humans right now)
 
My guess: when you file a bug report, most likely someone/some algorithm will look at it. Most likely it will be ignored, but sometimes they decide that it was useful. If it was useful, most likely it will be labelled and added to the dataset for the neural network. A few times it will be so useful that they decide to add it to the unit test set and start the process of gathering similar situations to improve that scenario. Most of the time they decide it is a data problem; sometimes they decide that it is a software 1.0 problem and make it a task for some engineer to look at when they have time...
 
My guess. When you file a bug report, most likely someone/some algorithm will look at it

Your guess is incorrect.

A conventional bug report isn't even sent to Tesla. At all.

It sits local on the car- where a Tesla service center tech can look at it if you open a service call, and otherwise is never seen.


The "special" button only the FSD beta folks get is an entirely different system where every report is sent to Tesla for review, but is obviously only in a tiny # of cars relative to the fleet.
 
Ok thanks. But some events are flagged in the car, "cut-in detection with snow covering lane markers" etc. Maybe sometimes "bug report" is being flagged for upload. With 4D and Dojo my guess is that more data will be flagged for upload, and some users are reporting higher WiFi load. Exactly what is being flagged I guess we won't find out.
 
He explicitly said that robotaxis would be approved in at least one jurisdiction in 2020.

That's clearly not happening.

Ummm.......Florida allows autonomous cars right now. All Tesla would need to do is take responsibility for their actions.
 
Florida allows autonomous cars if they can obey all traffic rules and other laws at all times.

As the FSD demo videos show us- Teslas can't do that yet.

So...no.



Ok thanks. But some events are flagged in the car, "cut-in detection with snow covering lane markers" etc. Maybe sometimes "bug report" is being flagged for upload. With 4D and Dojo my guess is that more data will be flagged for upload, and some users are reporting higher WiFi load. Exactly what is being flagged I guess we won't find out.


Nothing gets "flagged" for the bug report voice function.

You say bug report, and it basically just puts a bookmark in the local car logs for the service center to reference later.

That's it.

Nothing at all goes to Tesla for any review.


Now, they could send out a targeted campaign that's basically "Send us any bug report that happens within 10 seconds of stopping at a stop sign," for example.... but AFAIK the logs it tags are for the UI, not the Autopilot computer, and it doesn't capture any pics/video, so I don't see how that'd be at all useful unless they were seeing something like "Spotify crashes every time it stops at a stop sign."
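For what it's worth, that kind of targeted campaign would boil down to a simple time-window match between bookmark timestamps and stop events. A sketch with invented names, assuming epoch-second timestamps:

```python
def reports_near_stop_signs(bookmark_times, stop_event_times, window_s=10):
    """Return bug-report bookmarks filed within `window_s` seconds of the
    car stopping at a stop sign (all inputs are epoch-second timestamps)."""
    return [b for b in bookmark_times
            if any(abs(b - s) <= window_s for s in stop_event_times)]

print(reports_near_stop_signs([100.0, 500.0], [95.0, 800.0]))  # [100.0]
```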
 
Tesla describes the answer to OP's question here:
GENERATING GROUND TRUTH FOR MACHINE LEARNING FROM TIME SERIES ELEMENTS - Tesla, Inc.

"In various embodiments, the result of vehicle controls such as a change in speed, application of braking, adjustment to steering, etc. are retained and used for the automatic generation of training data. In various embodiments, the vehicle control parameters are retained and transmitted at 411 for the automatic generation of training data.

At 411, sensor and related data are transmitted. For example, the sensor data received at 401 along with the results of deep learning analysis at 405 and/or vehicle control parameters used at 409 are transmitted to a computer server for the automatic generation of training data. In some embodiments, the data is a time series of data and the various gathered data are associated together by the computer server. For example, odometry data is associated with captured image data to generate a ground truth. In various embodiments, the collected data is transmitted wirelessly, for example, via a WiFi or cellular connection, from a vehicle to a training data center. In some embodiments, metadata is transmitted along with the sensor data. For example, metadata may include the time of day, a timestamp, the location, the type of vehicle, vehicle control and/or operating parameters such as speed, acceleration, braking, whether autonomous driving was enabled, steering angle, odometry data, etc. Additional metadata includes the time since the last previous sensor data was transmitted, the vehicle type, weather conditions, road conditions, etc. In some embodiments, the transmitted data is anonymized, for example, by removing unique identifiers of the vehicle. As another example, data from similar vehicle models is merged to prevent individual users and their use of their vehicles from being identified.

In some embodiments, the data is only transmitted in response to a trigger. For example, in some embodiments, an incorrect prediction triggers the transmitting of the sensor and related data for automatically collecting data to create a curated set of examples for improving the prediction of a deep learning network. For example, a prediction performed at 405 related to whether a vehicle is attempting to merge is determined to be incorrect by comparing the prediction to the actual outcome observed. The data, including sensor and related data, associated with the incorrect prediction is then transmitted and used to automatically generate training data. In some embodiments, the trigger may be used to identify particular scenarios such as sharp curves, forks in the roads, lane merges, sudden stops, or another appropriate scenario where additional training data is helpful and may be difficult to gather. For example, a trigger can be based on the sudden deactivation or disengagement of autonomous driving features. As another example, vehicle operating properties such as the change in speed or change in acceleration can form the basis of a trigger. In some embodiments, a prediction with an accuracy that is less than a certain threshold triggers transmitting the sensor and related data. For example, in certain scenarios, a prediction may not have a Boolean correct or incorrect result and is instead evaluated by determining an accuracy value of the prediction.

In various embodiments, the sensor and related data are captured over a period of time and the entire time series of data is transmitted together. The time period may be configured and/or be based on one or more factors such as the speed of the vehicle, the distance traveled, the change in speed, etc. In some embodiments, the sampling rate of the captured sensor and/or related data is configurable. For example, the sampling rate is increased at higher speeds, during sudden braking, during sudden acceleration, during hard steering, or another appropriate scenario when additional fidelity is needed."
 
Another part from that link describes how it runs in shadow mode:

"FIG. 4 is a flow diagram illustrating an embodiment of a process for training and applying a machine learning model for autonomous driving. In some embodiments, the process of FIG. 4 is utilized to collect and retain sensor and odometry data for training a machine learning model for autonomous driving. In some embodiments, the process of FIG. 4 is implemented on a vehicle enabled with autonomous driving whether the autonomous driving control is enabled or not. For example, sensor and odometry data can be collected in the moments immediately after autonomous driving is disengaged, while a vehicle is being driven by a human driver, and/or while the vehicle is being autonomously driven. In some embodiments, the techniques described by FIG. 4 are implemented using the deep learning system of FIG. 1. In some embodiments, portions of the process of FIG. 4 are performed at 207, 209, and/or 211 of FIG. 2 as part of the process of applying a machine learning model for autonomous driving."
