Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

FSD Beta 10.69

The reason everyone is talking about this is that the law in every state I’m aware of says the speed limit takes effect at the exact spot where the sign is posted.

Every speed trap by a cop will utilize this exact thing when a speed limit drops from 50 to 35 etc.

The point everyone is concerned about is that letting FSDb drive this way puts you at much greater risk for the very common tickets cops trying to meet a quota in a smaller town will be writing.
This is another reason why FSD needs speed-of-traffic awareness.

In some places I drive, the very casual, late-triggered slow-downs are exactly what the rest of traffic is doing, so it’s a good thing there.

In the kinds of places you’re describing, yeah, totally the wrong thing to do and will result in getting a ticket.

A lot of my interventions would go away if I could configure FSD to go up to 10 mph over the limit if that’s what the rest of traffic is doing (eg I’ve reached the speed limit, but the cars in front are going faster than me leaving a widening gap and the cars behind me are leaving no gap).
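For what it's worth, the kind of logic being asked for could be sketched in a few lines. Everything here is invented for illustration (the function name, inputs, and the 10 mph cap are assumptions, not anything Tesla has published):

```python
def target_speed(speed_limit, lead_car_speed, rear_gap_closing, max_over=10):
    """Hypothetical set-speed logic: follow traffic up to max_over mph
    above the posted limit, but only when the car ahead is pulling away
    and traffic behind is pressing; otherwise hold the limit."""
    if lead_car_speed > speed_limit and rear_gap_closing:
        return min(lead_car_speed, speed_limit + max_over)
    return speed_limit

# Limit 45, traffic ahead doing 53, cars stacking up behind:
print(target_speed(45, 53, True))   # 53 (within the +10 cap)
print(target_speed(45, 58, True))   # 55 (capped at 45 + 10)
print(target_speed(45, 40, True))   # 45 (traffic is slower; hold the limit)
```

The point is that the cap stays configurable while the decision to exceed the limit is driven by what surrounding traffic is actually doing.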
 
The reason everyone is talking about this is that the law in every state I’m aware of says the speed limit takes effect at the exact spot where the sign is posted.

Every speed trap by a cop will utilize this exact thing when a speed limit drops from 50 to 35 etc.

The point everyone is concerned about is that letting FSDb drive this way puts you at much greater risk for the very common tickets cops trying to meet a quota in a smaller town will be writing.
Sure, there's all that, but it's much more TMCish to talk about how very, very dangerous it is...

It's a joke, son, I say a joke........
 
Actually, no, it's not subjective. I've objectively/quantitatively measured it several times, so there's no denying that it's real.


Exactly. They made a big deal over making sure FSD comes to a complete stop before the stop sign (arguably something you're much less likely to get a ticket for), yet this doesn't seem to matter. The really baffling part is that the rate of deceleration is different for AP and TACC than it is with FSD.
Right, so show us your evidence. Just like when you attested to having presented prior evidence and defended it in one of our trauma conferences, right, remember? Or do you disagree, disagree, disagree? The fact is, it is very subjective. Subjectivity does not determine what is real versus what is perceived but not real.
 
It's subjective because there are many uncontrolled variables. I'll say it again: try presenting your above evidence as objective truth to anyone in academic medicine, for example, and see how far that gets you. Have you done a randomized, controlled, double-blinded study? No peeking now...
 
This is another reason why FSD needs speed-of-traffic awareness.

In some places I drive, the very casual, late-triggered slow-downs are exactly what the rest of traffic is doing, so it’s a good thing there.

In the kinds of places you’re describing, yeah, totally the wrong thing to do and will result in getting a ticket.

A lot of my interventions would go away if I could configure FSD to go up to 10 mph over the limit if that’s what the rest of traffic is doing (eg I’ve reached the speed limit, but the cars in front are going faster than me leaving a widening gap and the cars behind me are leaving no gap).
Why is it that you can’t make FSDb go 10 mph over the speed limit?

That is perhaps the most useful thing I have gotten from FSDb so far; AP is no longer locked to 5 mph over on city/surface streets.
 
Why is it that you can’t make FSDb go 10 mph over the speed limit?

That is perhaps the most useful thing I have gotten from FSDb so far; AP is no longer locked to 5 mph over on city/surface streets.
I do. That’s what I said in my last paragraph.

I often have to intervene to set the speed 5 or 10 mph higher than the speed limit because of traffic (using the scroll wheel which is very much an intervention).

I’d like it to automatically do that for me. I don’t always want to go 10 mph over. But I do want it to know it can go over if everyone else is.
 
Right, so show us your evidence. Just like when you attested to having presented prior evidence and defended it in one of our trauma conferences, right, remember? Or do you disagree, disagree, disagree? The fact is, it is very subjective. Subjectivity does not determine what is real versus what is perceived but not real.
These days, they say words are violence.

One MD offers up a Second Opinion, the other says no, Follow the Science!

So here, it seems, we have Doc on Doc violence:
A Surgical Strike, as it were...or even worse, a shocking lack of Professional Courtesy.
This is obviously a dispute to be resolved by the "Medical Bored", based on recommendations from the Centers for Deceleration Control.
 
These days, they say words are violence.

One MD offers up a Second Opinion, the other says no, Follow the Science!

So here, it seems, we have Doc on Doc violence:
A Surgical Strike, as it were...or even worse, a shocking lack of Professional Courtesy.
This is obviously a dispute to be resolved by the "Medical Bored", based on recommendations from the Centers for Deceleration Control.
Nah, one is just a denier that doesn't like it when someone contradicts him with evidence. I get it, it's hard being wrong!

I'll have to make some direct measurements again tomorrow when I get off work comparing the deceleration of FSD with TACC and AP. I'm not sure that it will matter but I'll post them anyway.
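If it helps, the comparison described above can be quantified from any speed log sampled at fixed intervals. This little sketch just averages the rate of speed loss; the numbers are made up for illustration, not actual measurements:

```python
def avg_decel_mph_per_s(samples, dt=1.0):
    """Average deceleration in mph/s from speeds sampled every dt seconds."""
    return (samples[0] - samples[-1]) / (dt * (len(samples) - 1))

# Hypothetical slow-downs, one speed reading per second:
fsd_run  = [50, 46, 41, 37, 33]
tacc_run = [50, 48, 46, 44, 42]
print(avg_decel_mph_per_s(fsd_run))   # 4.25 mph/s
print(avg_decel_mph_per_s(tacc_run))  # 2.0 mph/s
```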
 
No, they can't survive a reboot



He's the one who told us a while back this can't be done (and in some detail why)--

Sorry I haven't replied; been busy.

I've read the twitter post. And I understand about checksums and verify making sure that a load works as advertised. FWIW, I work on systems like that every day, and I'm not joking.

Having said that, the systems upon which I work do do things that are not volatile and survive reboots. Alarm logs are the obvious ones. There are components that simply do not get updated unless they're power cycled. And every smart pack has a store into which data may go that survives a reboot. It's FLASH RAM, and designed to look like a hard drive.

Now, having said all that: There's a decent suggestion that there are things in a Tesla that survive reboots. The obvious one is camera calibration. Put a new camera on a Tesla or drive a spanking new one around, and there's X miles that one has to go before some kind of algorithm runs and Saves the Camera Cal Data somewhere. Once that's done - it doesn't get repeated. And if you asked me where that cal data was being kept, my knee would Jerk Right Up and say the driving computer flash somewhere.

And it's a nickel bet du jour that the camera cal data is read by the controller on reboot or flash upgrade. From that perspective: The camera stuff is data that can be read by the CRC-checked firmware load.

So, what if there's some more data that actually consists of parameter data, as I suggested in my post? A new firmware load may very well have a set of default values that's CRC checked and all that. But there's nothing set in stone preventing Tesla from coming down from on high and varying some of that data.

This is, I guess, a fun argument to be having. We got no proof either way, really. But I think that guy's twitter post doesn't prove anything one way or the other.
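To make the point concrete, here's a toy model of the idea above: the code image is integrity-checked with a CRC, while a separate parameter blob lives outside the checked region and can change without failing the verify step. The layout and parameter names are invented; this is a sketch of the concept, not Tesla's actual scheme:

```python
import zlib

FIRMWARE = b"\x7fELF...code and weights..."            # CRC-protected image
PARAMS = {"follow_distance": 2, "creep_speed_mph": 3}  # outside the CRC

def firmware_crc(image):
    """CRC32 of the protected image, as an unsigned 32-bit value."""
    return zlib.crc32(image) & 0xFFFFFFFF

baseline = firmware_crc(FIRMWARE)

# Changing a parameter leaves the firmware checksum untouched,
# so a verify of the code image still passes.
PARAMS["creep_speed_mph"] = 5
assert firmware_crc(FIRMWARE) == baseline
```

Under that kind of layout, a checksum check proving the code image is unmodified says nothing about whether tunable data next to it has changed.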
 
Sorry I haven't replied; been busy.

I've read the twitter post. And I understand about checksums and verify making sure that a load works as advertised. FWIW, I work on systems like that every day, and I'm not joking.

Having said that, the systems upon which I work do do things that are not volatile and survive reboots. Alarm logs are the obvious ones. There are components that simply do not get updated unless they're power cycled. And every smart pack has a store into which data may go that survives a reboot. It's FLASH RAM, and designed to look like a hard drive.

Now, having said all that: There's a decent suggestion that there are things in a Tesla that survive reboots. The obvious one is camera calibration. Put a new camera on a Tesla or drive a spanking new one around, and there's X miles that one has to go before some kind of algorithm runs and Saves the Camera Cal Data somewhere. Once that's done - it doesn't get repeated. And if you asked me where that cal data was being kept, my knee would Jerk Right Up and say the driving computer flash somewhere.

And it's a nickel bet du jour that the camera cal data is read by the controller on reboot or flash upgrade. From that perspective: The camera stuff is data that can be read by the CRC-checked firmware load.

So, what if there's some more data that actually consists of parameter data, as I suggested in my post? A new firmware load may very well have a set of default values that's CRC checked and all that. But there's nothing set in stone preventing Tesla from coming down from on high and varying some of that data.

This is, I guess, a fun argument to be having. We got no proof either way, really. But I think that guy's twitter post doesn't prove anything one way or the other.
Why does it have to be an issue of external parameter changes versus inviolate fixed parameter configuration in the released build?

Isn't it possible that Tesla has programmed some conditional evaluation of various FSD parameters, based on some combination of date/time, ego location, recent Drive history etc.? They could well be conducting studies of performance sensitivity to various parameter settings. There may be at least two, or any number of value sets that are being swapped in and out for performance evaluation. This could explain the numerous reports of seemingly changing FSD personality or performance on different days or weeks.

The typical owner/driver has at least one set of standard drives, aka commutes. In order to evaluate and optimize FSD across a huge set of owner/drivers, and across each of their somewhat standard commutes, I can easily imagine that they would like to gather performance data with Config A for, say, a week, and then Config B for the following week, or any other combination of scenarios they deem useful. I don't follow the argument that this is impossible without external intervention to force downloads of configurations - the experimentation plan could be baked into the released firmware already.

Remember also that there are constant references to the FSD computer's ability to run one stack in Shadow mode while the other stack runs in Real Drive control mode. At a minimum, these two could be swapped according to a pre-planned schedule - though I think it could be a much more sophisticated and code-efficient plan of varying configurations if they want. None of that demands interactivity from the Mothership, though of course it could be even more useful if they had such a capability.
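As a sketch of how little code that kind of pre-planned swap would take: a release could ship several parameter sets and pick one deterministically, e.g. by ISO week parity, with no interaction from the Mothership. All names and values below are invented for illustration:

```python
import datetime

# Two hypothetical parameter sets shipped inside one release.
CONFIGS = {
    "A": {"decel_profile": "gentle", "lane_change_bias": 0.3},
    "B": {"decel_profile": "firm",   "lane_change_bias": 0.6},
}

def active_config(today):
    """Deterministically alternate configs week by week, baked into the release."""
    week = today.isocalendar()[1]  # ISO week number
    return CONFIGS["A"] if week % 2 == 0 else CONFIGS["B"]

print(active_config(datetime.date(2022, 10, 3)))   # ISO week 40 -> Config A
print(active_config(datetime.date(2022, 10, 10)))  # ISO week 41 -> Config B
```

A schedule like this would produce exactly the pattern people describe: a week or so of one "personality," then a different one, with nothing downloaded in between.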
 
Why does it have to be an issue of external parameter changes versus inviolate fixed parameter configuration in the released build?

Isn't it possible that Tesla has programmed some conditional evaluation of various FSD parameters, based on some combination of date/time, ego location, recent Drive history etc.? They could well be conducting studies of performance sensitivity to various parameter settings. There may be at least two, or any number of value sets that are being swapped in and out for performance evaluation. This could explain the numerous reports of seemingly changing FSD personality or performance on different days or weeks.

The typical owner/driver has at least one set of standard drives, aka commutes. In order to evaluate and optimize FSD across a huge set of owner/drivers, and across each of their somewhat standard commutes, I can easily imagine that they would like to gather performance data with Config A for, say, a week, and then Config B for the following week, or any other combination of scenarios they deem useful. I don't follow the argument that this is impossible without external intervention to force downloads of configurations - the experimentation plan could be baked into the released firmware already.

Remember also that there are constant references to the FSD computer's ability to run one stack in Shadow mode while the other stack runs in Real Drive control mode. At a minimum, these two could be swapped according to a pre-planned schedule - though I think it could be a much more sophisticated and code-efficient plan of varying configurations if they want. None of that demands interactivity from the Mothership, though of course it could be even more useful if they had such a capability.
We have seen no indication that something like this is happening. In fact, the last I heard is that they have to use both processors to get FSD to run, so there is no longer any true “shadow mode”.

Although it is possible to imagine all sorts of scenarios for the inconsistent performance of FSDb, the most likely explanation is that it is not robust enough to deliver consistent performance over slightly varying inputs. This is probably because their training sets do not yet include enough samples.
 
Although it is possible to imagine all sorts of scenarios for the inconsistent performance of FSDb, the most likely explanation is that it is not robust enough to deliver consistent performance over slightly varying inputs. This is probably because their training sets do not yet include enough samples.

I don't think any driving behavior is determined directly by a neural network at the moment, is it?

I think only perception is driven by neural networks, and personally I haven't noticed any degradation in perception performance in varying scenarios. Unless there's a drastic reduction in perception confidence or trajectory forecasting that's not visualized on the display.

But assuming perception quality is relatively equal across all circumstances, what would explain day-to-day variation in FSDb performance?
 
We have seen no indication that something like this is happening. In fact, the last I heard is that they have to use both processors to get FSD to run, so there is no longer any true “shadow mode”.

Although it is possible to imagine all sorts of scenarios for the inconsistent performance of FSDb, the most likely explanation is that it is not robust enough to deliver consistent performance over slightly varying inputs. This is probably because their training sets do not yet include enough samples.
I don't think any driving behavior is determined directly by a neural network at the moment, is it?

I think only perception is driven by neural networks, and personally I haven't noticed any degradation in perception performance in varying scenarios. Unless there's a drastic reduction in perception confidence or trajectory forecasting that's not visualized on the display.

But assuming perception quality is relatively equal across all circumstances, what would explain day-to-day variation in FSDb performance?
No one really knows for sure; this is all speculation that makes for interesting conversation.
 
At the presentation they showed the slide about how much has been done in improving FSD, like 35 releases and 75,778 models trained. Elon mentioned that the way to measure the improvement is to calculate the increase in time between interventions; however, he kept the data to himself.

I just wonder, considering what has been done, how much better FSD has become over the last year. Is it as significant as the presentation suggested? Has anyone kept track of the interventions he or she has made over the past year?

I have a nagging feeling that the improvements aren’t as great as was suggested during the presentation. I am still waiting for my FSD (need the camera upgrade). Hope someone can enlighten us.
 
Interesting breakdown thread of a slide from AI Day by Whole Mars Blog:


Basically, they're using crowd-sourced data from multiple trips through an intersection. But they're not using it to create an HD map that would become stale relatively quickly; they're using it to generate ground-truth for training data labeling.
 
We have seen no indication that something like this is happening. In fact, the last I heard, is that they have to use both processors to get FSD to run, so there is no longer any true “shadow mode”.
To be clear, my (speculative) argument about planned configurability does not, at all, hinge on whether or not there is a Shadow Mode stack running. I simply mentioned that swapping that in, if it exists, would be one way to implement an FSD configuration change. As I wrote immediately after that, there are more efficient ways. A simple swapping of a declaration header that sets system parameter values would add only a tiny amount of code to the release, and would open up far more possibilities than an A-B swap. Perhaps I shouldn't have mentioned Shadow Mode as an example, if it distracts from the general point re configurability.

While on that topic, though, it's not very clear to me that Shadow Mode is really dead and gone. @Green only said that, at some point last year, he detected that the compute had expanded beyond one of the two processors on board. That's not the same as saying there's no Shadow Mode. Also, the popular conclusion that it was Game Over for FSD HW3 dual redundancy may not have aged well; that was based on the older and fairly different code.
Although it is possible to imagine all sorts of scenarios for the inconsistent performance of FSDb, the most likely explanation is that it is not robust enough to deliver consistent performance over slightly varying inputs. This is probably because their training sets do not yet include enough samples.
For sure this is possible, and none of us outsiders knows for sure. But I believe that a number of users have reported observing a week or so's worth of repeatedly good drives, followed by another period of consistently worse drives. This sounds less random than one would expect from a simple lack of robustness.

My main point in the post, however, was that it's certainly feasible, and arguably very desirable, for Tesla to run different parameterized configurations of the FSD stack within the context of a particular release version, without requiring software-modifying interactions with the Mothership. This point was made to address a running side argument about whether released code could be modified, "survive reboot" etc.
 
Basically, they're using crowd-sourced data from multiple trips through an intersection. But they're not using it to create an HD map that would become stale relatively quickly; they're using it to generate ground-truth for training data labeling.
With a million Teslas on the road it would hardly be stale for long. It looks like they now have crowd-sourced map building similar to Mobileye's.
Another mystery is how good their simulation environment for testing the planner is. Maybe I missed it, but they appear to use the simulation environment only to generate training data for perception NNs.
 
At the presentation they showed the slide about how much has been done in improving FSD, like 35 releases and 75,778 models trained. Elon mentioned that the way to measure the improvement is to calculate the increase in time between interventions; however, he kept the data to himself.

I just wonder, considering what has been done, how much better FSD has become over the last year. Is it as significant as the presentation suggested? Has anyone kept track of the interventions he or she has made over the past year?

At the presentation they showed the slide about how much has been done in improving FSD, like 35 releases and 75,778 models trained. Elon mentioned that the way to measure the improvement is to calculate the increase in time between interventions; however, he kept the data to himself.
In case anyone is interested in Elon's statement on FSD metrics.

"How many miles between necessary interventions...that is safety critical",... "Fundamental metric we are measuring every week"
Unfortunately we have no idea how that metric is changing over time or what determines "safety critical".

@2:55:05
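For reference, the metric itself is trivial to compute if one logs one's own drives; here's a sketch with an invented log format (what counts as "safety critical" is of course the hard, undisclosed part):

```python
def miles_between_interventions(drives):
    """drives: list of (miles_driven, safety_critical_interventions) per drive."""
    total_miles = sum(m for m, _ in drives)
    total_events = sum(n for _, n in drives)
    if total_events == 0:
        return float("inf")  # no interventions observed yet
    return total_miles / total_events

week = [(120.5, 2), (88.0, 0), (45.3, 1)]  # made-up drive log
print(miles_between_interventions(week))   # ~84.6 miles per intervention
```

Tracking this week over week would let any owner see whether the trend Elon describes actually shows up in their own driving.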
 