Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
The next big milestone for FSD is version 11. It is a significant upgrade with fundamental changes to several parts of the FSD stack, including a totally new way to train the perception NNs.

From AI Day and the Lex Fridman interview, we have a good sense of what might be included.

- Object permanence both temporal and spatial
- Moving from “bag of points” to objects in NN
- Creating a 3D vector representation of the environment all in NN
- Planner optimization using NN / Monte Carlo Tree Search (MCTS)
- Change from processed images to “photon count” / raw image
- Change from single image perception to surround video
- Merging of city, highway and parking lot stacks a.k.a. Single Stack

Lex Fridman's interview of Elon, starting with FSD-related topics.


Here is a detailed explanation of Beta 11 in layman's terms by James Douma; the interview was done after the Lex podcast.


Here is the AI Day explanation, in 4 parts.




Here is a useful blog post asking a few questions to Tesla about AI day. The useful part comes in comparison of Tesla's methods with Waymo and others (detailed papers linked).

 
Sorry, I didn't take that post as a specific response to my question. I felt there was a better way to make the same point than what I perceived as a snarky tone. If I misread, I apologize; otherwise I'll ignore it, since every time a conversation relevant to the thread comes up, someone ultimately posts a reply that derails the thread into pages of back-and-forth attacks, making discussion of the topic impossible at worst and a chore at best.


I'm very confused how you didn't take it as a specific response to your question when I directly quoted your question in writing the reply. It was specific to your vehicle (HW4) and the thing you asked about (the amount of data being downloaded); I gave reasons why that might be the case for your car, and even tagged Green to see if he had anything to add. I've re-read my reply three times now and can't find how you found any snark there either.


Are you sure you read the post I'm actually talking about where I directly addressed you?

 
Tesla didn’t send me 4gb to be lost during a reboot
Most of this data is not as interesting as you think. Network communication requires both upload and download to ensure all of the video is uploaded correctly. If you look at the amount of download across multiple days of heavy upload, it should be roughly proportional. For example, my Google WiFi usage for my Model 3 shows 11GB download against 181GB upload (including the software update), or roughly 6% download relative to upload.

This data to ensure correct upload indeed doesn't last past a reboot. Other downloaded data, over the proportional amount, could be what others have suggested, such as campaigns to find interesting scenarios to collect data.
 
I'm very confused how you didn't take it as a specific response to your question when I directly quoted your question in writing the reply. It was specific to your vehicle (HW4) and the thing you asked about (the amount of data being downloaded); I gave reasons why that might be the case for your car, and even tagged Green to see if he had anything to add. I've re-read my reply three times now and can't find how you found any snark there either.


Are you sure you read the post I'm actually talking about where I directly addressed you?

I mean no disrespect to anyone offering their opinion, as I enjoy chatting with everyone speculating on topics I find fun and interesting (FSDb). I appreciated your answer, as it definitely gave me more to research, but I didn't take it as the only answer and was more or less inviting others to discuss the topic further. The "yes? I already answered you" was what I was referring to, as it seemed your answer was meant to close the topic unless Green had anything to add. I don't want the thread to spin off into the 100 posts of madness it has before, so either way I appreciate your thoughts on the matter and look forward to yours, and others', insight into how this sometimes awesome, sometimes stupid, FSDb is progressing.
 
I mean no disrespect to anyone offering their opinion, as I enjoy chatting with everyone speculating on topics I find fun and interesting (FSDb). I appreciated your answer, as it definitely gave me more to research, but I didn't take it as the only answer and was more or less inviting others to discuss the topic further. The "yes? I already answered you" was what I was referring to, as it seemed your answer was meant to close the topic unless Green had anything to add. I don't want the thread to spin off into the 100 posts of madness it has before, so either way I appreciate your thoughts on the matter and look forward to yours, and others', insight into how this sometimes awesome, sometimes stupid, FSDb is progressing.
I was preparing to ditch this thread as too hostile, but have found that ignoring a handful of posters has made it tolerable again and still informative.
 
Except that's literally impossible because the entire firmware is flashed as a single block, and checksummed that way.

You can't change "part" of it and survive a reboot.

Now, could Tesla entirely change the way they secure and verify the AP computers code to ALLOW what you describe?

Sure.

Does it work that way NOW? No.
So... I have a problem with the "The firmware is flashed as a single block and checksummed that way" statement.

Sure, that's the firmware. The usual meaning of firmware is, say, in a C-language sense, code and #DEFINES which are set at compile time. It's trivial to set up a checksum on something like that.

The values that are #DEFINEd are treated, in a way, by a programmer as a variable. But they're not, really: a #DEFINEd value is recognized by a compiler as a fixed value. So if the programmer uses a variable such as x_is_my_variable, but the #DEFINE says that x_is_my_variable is 0x03, then, at compilation time, every instance of x_is_my_variable is set to 0x03. Depending upon the language (I'm looking at you, oddball versions of FORTRAN), sometimes there's a fixed location that gets 0x03, and everywhere that uses x_is_my_variable has a pointer set in the namespace that points at that location with 0x03 in it. Which leads to a certain amount of hilarity when some programmer makes the mistake of writing a line with

x_is_my_variable = 0x5

And suddenly that fixed value of 0x03 is 0x05 everywhere in the program. (There are FORTRAN and other languages nowadays that Detect That and prevent it from happening.)

But I have never run into a program that hasn't used scratch memory for variables. If one has a loop like

for (i = 0; i < 10; i++)
{
    do_something_or_other_with_i(i);
}

then somewhere there's a variable being incremented away. In the above code, note that "i" is initialized to zero at the beginning of the loop. But it sure doesn't stay fixed.

But it's perfectly possible to have a checksummed pile of code that, on first run, fills up a sub-block of FLASH RAM with a particular set of values, and then uses and/or modifies those values as time moveth on, surviving reboots and other niceties since it's in FLASH. Is that block of FLASH RAM part of the firmware checksum? It could be, if it's included in the compile. Otherwise, nope. Might it have its own checksum, put in there by the program, and recalculated and re-written every time something changes? Sure.

I mean, we already know that Tesla uses the FLASH for more than just code. A few years ago we all found out that Tesla had been saving log files to FLASH; since there wasn't, at that time, much free space in the FLASH, the incessant writing of log files actually wore out the FLASH, followed by people complaining about how their computers were going bad, enterprising technicians copying the FLASH and physically replacing the chip, and all that jazz.

Does anybody think that the log file area was part of the checksummed code base? Uh. Nope.

Now, is Tesla actually doing any of the above? No idea, they haven't said. We're all having fun guessing with Occam, but Occam's not a bunch of CS types with a bit in their collective teeth.

Let me be clear about my chops, or lack of them, here. Yes, I'm a EE. EE's are resistors, capacitors, inductors, transistors and all that. But I've also done some serious DSP work, in and out of assembly, written ridiculous amounts of C/C++/FORTRAN/PASCAL code and $RANDOM assembly for more and different microprocessors and controllers than I can easily remember. By percentages, mostly diagnostic code to find bugs in honking huge piles of hardware. But RAM tests, parity checking, checksums, and just overall weird stuff (VHDL is fun, too). Real CS majors make my piddly attempts at making sure the hardware's not dead look silly, but, well, I do have the background. From register flip-flops on up to ISRs and worse.

And if five minutes of thought on my part can come up with schemes to defeat, "It's all checksummed! It's frozen!", then I shudder to think what a real pro could do.

Finally: While I have thrown around the idea of self-modifying code a bit, I should say that most programmers hate the notion, with good reason. Writing regular code with all its inevitable piles of bugs is bad enough; making that code self-modifying is like getting out that .45-cal automatic and taking careful aim at one's foot. Those people who like to mess with "proof of correctness" in highly reliable software probably regard any mention of self-modifying code with disgust and a "Just... don't go there" comment. I've done that kind of thing in very limited circumstances and observed other coders do the same. How much of this Tesla does... well, they probably don't. But that's "probably".
 
I mean, we already know that Tesla uses the FLASH for more than just code. A few years ago we all found out that Tesla had been saving log files to FLASH; since there wasn't, at that time, much free space in the FLASH, the incessant writing of log files actually wore out the FLASH, followed by people complaining about how their computers were going bad, enterprising technicians copying the FLASH and physically replacing the chip, and all that jazz.

Does anybody think that the log file area was part of the checksummed code base? Uh. Nope.


Except those log files were saved on the flash of the media computer, not the driving computer.

Because if someone hacked the media computer with bad code the worst they could really do is give you bad directions to somewhere.... or play country music I guess.

The code on the driving computer is a bit more important to ensure is correct, is what Tesla put there, and hasn't changed.

If you remain unclear on this, here's Tesla explaining the flash thing was the media computer-


Go back and re-read the quotes from Green I posted. The entire rootfs of the driving computer is checksummed as a single blob.

It can not be changed without a full update of the whole thing.

Nothing else on the driving computer survives a reboot (nor would you want it to)
 
So if the FSD computer instructions can only be modified with an honest-to-goodness firmware update, and behavior is somehow altered with map updates/data, can tolerances be modified in the map data, with FSD hooking into that data and changing behaviors that way, effectively allowing changes to persist beyond reboots until the next alteration?
 
So if the FSD computer instructions can only be modified with an honest-to-goodness firmware update, and behavior is somehow altered with map updates/data, can tolerances be modified in the map data, with FSD hooking into that data and changing behaviors that way, effectively allowing changes to persist beyond reboots until the next alteration?


I already suggested that exact thing regarding people reporting nearer-stop-line stops, again in this thread, recently. Here's the post in question:

FWIW the pretty recently discovered "each drive gets real time updated map data" - especially HIGHLY DETAILED map data, down to exact locations of crosswalks, # and type of lane, etc removes a LOT of that suspicion to my mind.

In fact, it makes even MORE sense that the degree of "improvement" varies by location too, since the amount of newer/better map data will vary by location for any given drive.

Also consider the degree of behavior change you can manage with JUST map data.

For example, say you want to, I don't know, have every FSD car stop 2 feet closer to the intersection at stop signs... Just send a global map update moving all mapped stop lines forward 2 feet.

No need to touch NNs, driving code, or anything else checksummed.
 
behavior is somehow altered with map updates / data
I suppose route data could even include information about traffic light attributes assuming the vehicle software knows what to do with it. For example with 11.4.3 today, it tried to creep into the intersection to go straight on red with a lot of cross traffic. Here's what the TeslaCam recording looks like:
11.4.3 not blinking red.jpg


So every 3 frames, both lights appeared solid red, then one off, then the other off. This was reflected in the visualization too, with blinking red lights and blue cross traffic. It might be that FSD Beta doesn't know this particular intersection uses these types of traffic lights, or the software doesn't yet know to specially handle this data; alternatively, the neural networks could be trained to treat very brief/one-frame blinks as still solid red.

And just to confirm that the download happens while uploading, here's what is probably video being uploaded, perhaps for this attempted red-light disengagement:
tesla wifi realtime proportional.png


Most likely the upload is just going to some data collection endpoint and the server response/download isn't anything based on the video content. The proportion of download to upload here is around 3%, and notice in the graph how increases/decreases in upload have a matching change in download.
 
I already suggested that exact thing regarding people reporting nearer-stop-line stops, again in this thread, recently.
I was thinking more along the lines of behavior tolerances for things not necessarily "map" related, such as a road closed for a day, a construction worker holding a stop sign, or a school bus stop sign deployed while letting kids off: things that aren't in the same place every time. Or maybe I'm misunderstanding what you're suggesting. As I read yours and Mardak's posts, I was leaning toward the idea that the map data contains more than just "there's a stop sign here, do this at it." It's more "this is what a stop sign looks like, this is what a person holding a stop sign looks like, if it's attached to a bus this is what it looks like, and here are the rules for that instance; now use this to make good decisions with the tolerances set in your firmware to get safely to your destination." But as mentioned, the firmware would need to "understand" what to do with that data. Hopefully that makes sense, and I apologize if that's also what you were saying and I just didn't get it.
 
At a high level, the FSD Computer takes inputs to generate outputs. The types of inputs have different lifetimes of being fixed/unmodified:
  • realtime: camera video, motion and location sensors
  • per trip: route data with additional map data
  • per navigation map update: road connectivity and lanes data
  • per software update: neural networks and FSD Beta logic
such as a road closed
Matching up the above types of inputs to this example:
  • realtime: the cameras could see a road closed sign
  • trip: online routing might force a turn instead of continuing straight into the closed road
  • navigation: if the road is closed long enough for extended construction, the road could be disconnected
  • software: neural network could realize the sign means you must turn even if the route wants to go straight
So even between vehicle software updates, there are a lot of inputs that change even when taking the same route. But the software will need updates to handle new types of capabilities, such as stopping for a school bus with its stop sign extended.

To give you a sense of how big an impact the per-trip route data has, look at the 11.4.x release notes' first entry:

- Improved control through turns, and smoothness in general, by improving geometry, curvature, position, type and topology of lanes, lines, road edges, and restricted space. Among other improvements, the perception of lanes in city streets improved by 36%, forks improved by 44%, merges improved by 27% and turns improved by 16%, due to a bigger and cleaner training set and updated lane-guidance module.

Where if the current route's map data is incorrect or missing, the "lane-guidance" portion will be relatively ineffective.
 
I suppose route data could even include information about traffic light attributes assuming the vehicle software knows what to do with it. For example with 11.4.3 today, it tried to creep into the intersection to go straight on red with a lot of cross traffic. Here's what the TeslaCam recording looks like:
View attachment 948257

So every 3 frames, both lights appeared solid red, then one off, then the other off. This was reflected in the visualization too, with blinking red lights and blue cross traffic. It might be that FSD Beta doesn't know this particular intersection uses these types of traffic lights, or the software doesn't yet know to specially handle this data; alternatively, the neural networks could be trained to treat very brief/one-frame blinks as still solid red.

And just to confirm that the download happens while uploading, here's what is probably video being uploaded, perhaps for this attempted red-light disengagement:
View attachment 948258

Most likely the upload is just going to some data collection endpoint and the server response/download isn't anything based on the video content. The proportion of download to upload here is around 3%, and notice in the graph how increases/decreases in upload have a matching change in download.
Wowzers, a smoking gun. Many pages ago I complained that 11.4.2, I think, came to a halt at a red light, first in line, then attempted to go through it. The weird part is that earlier, under 10.69.x, the car had been doing that same thing in a different direction at the same intersection. Intermittently, yes, but noticeably just at this one location, implying that it was the hardware at that intersection, and the Tesla's response to it, that was the problem. Looks like you've actually caught it in the act.

Some of the posters at the time suggested aliasing (i.e., not sampling fast enough) for some class of lights that flickered faster than the human eye could perceive. Looks like you've nailed that as the cause.

After we all discussed this, I asked if anybody knew how to contact Tesla development. Two things:
  • Crickets.
  • Somebody else referenced a paper where this problem was analyzed.
How do we get to the developers? Do they know about this? Are we reduced to tweeting to Musk?
 
How is this not a downloadable parameter set?
Well it is, but it's in the map data which apparently is set up for that.

I'm not the source of the claim that the FSD code can't be parameterized. And personally, I don't see why it absolutely cannot be, even with checksum verification of the download and/or the loaded execution code. I would think that the invariable check-summed code could call external files stored anywhere accessible, and have the routines embedded to update them and make use of them, just as it makes use of stored or live navigation data.

Regarding security, loading of parameter files could have encryption and source-verification layers even if they are read from the less secure infotainment computer or whatever.

But it doesn't matter what I think is possible, in the face of credible information to the contrary. I'm deferring to the conclusions of verygreen who understands the FSD code architecture better than any other outsider, as far as I know.

In any case, I think that almost all of the consistent/repeatable mid-version behavioral changes, reported here by users and experienced by myself, can be explained within the realm of map detail information. This includes corrections to stop-line locations, hints to suppress confusion regarding turn lanes vs on-route lanes and so on.

That in itself still doesn't prove that only navigation hints can be elements of drive-time or other mid-version data updates; but I'm saying the observed behavior is consistent with that theory, as put forth by verygreen (and amplified here by knightshade).

As a related note: It's long been suggested that Tesla's goal is not to rely too heavily on detailed maps, but to be as accurate and independent as possible through the perception stack. So I was thinking about whether the introduction and increasing use of map details actually runs counter to the philosophy. I actually don't think those things are mutually exclusive.

If we can agree that map details are helpful to performance, there's little reason not to use them even if they may not be needed as much in the long term. They don't necessarily preclude or weaken the developing ability to understand the road environment.

For example, there could be background real-time internal comparison and scoring of the perception stack's conclusions versus the corresponding map hints. In each instance, a determination must be made as to which piece of information will rule the planning decision. But in any case, disagreements between the two are useful performance data when uploaded to the Mothership to a) provide input for newer updated maps, and/or b) to register perception errors for further training attention in upcoming versions. It's another dimension of the Shadow Mode concept, in which different software modules or code stacks in the car can supply performance information and training examples to refine the next round.
 
It's long been suggested that Tesla's goal is not to rely too heavily on detailed maps, but to be as accurate and independent as possible through the perception stack. So I was thinking about whether the introduction and increasing use of map details actually runs counter to the philosophy.
Ideally, Tesla is training the neural networks with and without map details as otherwise basically requiring correct map data becomes a crutch like early FSD Beta's usage of radar. For example, radar handling the "easy" cases probably made vision less capable in handling trickier scenarios where radar was providing bad data. Similarly, wrong map data in construction zones can cause FSD Beta to incorrectly cross over mispredicted white lines based on map data assuming oncoming traffic is separated by a median when in reality there's new double yellow lines redirecting traffic to share the same side of the road.

Fortunately, map data can be updated for each trip to overcome some of these prediction errors vs dealing with radar issues would have required software updates. This remote maps capability allows for closer-to-real-time improvements and is probably useful in staying within HW3 compute requirements, so it'll be interesting to see how Tesla balances these tradeoffs when squeezing in more capabilities to the existing fleet.

there could be background real-time internal comparison and scoring of the perception stack's conclusions versus the corresponding map hints
Yeah, disagreement between map data and neural network perception is a good baseline trigger to collect data. Even better, and probably similar to what is already done for autolabeling, is comparing future, current and past knowledge. For example, if past map knowledge doesn't indicate there's a dedicated right turn lane, and current vision from a distance believes your lane is forking into 2 straight lanes, then future data from a few seconds later, with a clearer view of the painted right-turn arrows, can result in automatically sending back this intersection structure, so FSD Beta getting updated route data stays left at the fork.
 
Yes, though this location memory, and corresponding adjustments, are understood to be taking place at the Mothership. To the point of your last sentence, this centralized database response has the extremely important side effect of helping everyone in the fleet who travels on the same streets.

It also allows a kind of voting or confirmation system, in which they can decide how many auto-reported incidents of rough roads, potholes, and generally new and/or temporary deviations from the stored map are needed before they will propagate the adjustment to all users.

As ever, I implore Tesla to implement an offline report generation system allowing interested and capable users to upload helpful map details directly. Perhaps known and trusted users would find their reports have a quicker effect, and perhaps this would include initial testing of the proposed changes on the next drives of the reporting users themselves. This would allow us to squash annoying repeated bugs more quickly for ourselves, while allowing Tesla to vet the information before widespread deployment.
Although I agree that Tesla should create an offline map update download site, I would rather they just come out publicly and tell people to support OSM. It seems that's what Tesla uses for at least some of the mapping data. If Tesla has a contract with TomTom for map data, they should state that as well.

I somehow think that Tesla doesn't want users downloading map updates directly to their cars. Too much could go wrong.
 
I somehow think that Tesla doesn't want users downloading map updates directly to their cars. Too much could go wrong.
I agree with this, and that's why I talked about the vetting process at Tesla headquarters. I don't think users would tell their cars directly, but through a mostly auto-curated database update. As I said, there could be a weighting system in which experienced users develop a positive track record, so their inputs receive a higher weighting and perhaps a prioritized position in the queue. A first-time or occasional user's input might wait for a few more affirming inputs, or in simple terms have a weaker vote.

Regarding OSM, I don't have any personal experience with it, and I don't know if it affords the kinds of parameters that Tesla feels they need for high-quality fleet-sourced maps going into the future.
 
So here's today's update on 11.4.3, - my first and only two drives of the day, which I felt comfortable for collectively about 20 seconds on before disabling FSDj.

1.) Driving down a road that's straight and wide, yellow lanes - and has plenty of visibility, the type that FSD normally never has trouble with. I had a left turn (no green arrow), the car moved into the turn lane, and then came to a complete stop, on a green light, with no cars coming the other direction. It then stated in the display something to the effect of "Stopped for Traffic Control. Press the Accelerator to continue." Those were roughly the words. I remember "traffic control" specifically, and asking me to press the pedal. What on earth? I hit the pedal, it turned left, and then immediately came to a complete stop in its own lane when a car came towards me on the side street, because you know, it doesn't know the width of itself.

2.) This next one was a DOOZY. I was leaving Northgate (just outside Seattle, for those wondering), and getting on the on-ramp to I5 back to Seattle, (the ramp right by the Best Buy, if you want to repeat this). I saw the flashing sign getting onto this ramp saying that you had to wait for the green light up ahead to go (i.e. metering in effect), and I know this green light/red light setup for traffic flow is on a sharp bend. Well, of course the car didn't read that first sign, it just went round the bend onto I5 pretty briskly, - it then saw the red light LATE, slowed down dramatically, but NOT fast enough to not run the red light by about 10 ft, and then - because the car was past the red light and I guess could no longer see it clearly (even though it should), - just kept going and zipped down onto I5. Wow. Just. Wow.

So, figured I'd follow up on this. Turns out I did, in fact, save number 2 above to my clips. So here it is, in all its abominable glory (including its painfully slow entry onto the merge ramp):


Also, here's the car making a right turn and getting into the far left lane to be able to turn left (as it should - though this was far from an attractive maneuver in its entirety, and any passenger would have been appalled). It stopped at the red turn signal arrow, and then seconds later decided it didn't care it was a red arrow and started accelerating, which is when I took over.


@jebinc @Ramphex
 