FSDb a couple questions

You're on FSD Beta with HW4, right?

Did you have it with an HW3 car?
I am on HW4. My previous car was an Autopilot 1 car. My sister-in-law has a 2.5 car upgraded to HW3, so I have some experience with all versions. HW4 was rock solid until a week or so ago, then it started doing odd things. We’re having a discussion in another thread on whether it’s doing things that are wonky by design to gather data on disengagements and human driver tolerances.
 
Some people have curious fantasies.

Intentionally programming the car to do dangerous maneuvers would be a good way for a company to lose very, very large stacks of cash. It could also lead to minor legal difficulties for those involved, like negligent homicide or manslaughter charges. From a civil perspective, it would be a slam dunk for gross negligence, which invalidates any liability waivers.

And, on top of these inconveniences, the stock would likely do some phantom braking of its own.
 
Every one of us clicked a waiver that we would remain in control at all times, ready to take over when, in Elon’s own words, it “does the wrong thing at the worst possible time”. Just the act of him saying this means that this behavior is expected. I don’t think it’s too far-fetched for the map data to be updated daily to produce a different result, with the tolerances set for the sake of obtaining disengagement and success data. It’s not intentionally programming dangerous maneuvers per se, but if that’s an unforeseen consequence at a stop sign, or red light, or roundabout, or any one of the thousands of posts on here about a near miss or forced disengagement to avoid a crash, then so be it: the data was captured and we knew what we were signing up for.
 
"Wonky by design" (your words), implies intentionally making the car perform an unsafe maneuver. Maybe your definition of wonky means something else. Maybe you just have an inarticulate manner of expressing yourself and type things that are opposite what you mean. But, from what I recall of the thread you refer, I do believe the silly proposal was that Tesla was intentionally causing the car to do the wrong thing. That's a far cry from warning you that the car might do the wrong thing (presumably because of immature software, not because of intent) and to do so would hardly further Tesla's ambitions to create autonomous machines.

The other thing you mentioned is that "wonky by design" was to gather data on human driver tolerances. Performing a study like this without explicit informed consent would be a serious ethical violation in any research field. Doing this sort of study would require disclosing to participants that the study is intended to measure their tolerance to dangerous situations. The disclosures that accompany FSD Beta make no mention that testers are being studied for their "tolerance" to any situation. It's also unclear what the purpose of collecting such data would be, since it hardly furthers the aims of autonomous machines. But, it's your fantasy. You're entitled to it.

Good luck with your HW4 vehicle and your career as a lab monkey.

Bye.
 
"Wonky by design" (your words), implies intentionally making the car perform an unsafe maneuver. Maybe your definition of wonky means something else. Maybe you just have an inarticulate manner of expressing yourself and type things that are opposite what you mean. But, from what I recall of the thread you refer, I do believe the silly proposal was that Tesla was intentionally causing the car to do the wrong thing. That's a far cry from warning you that the car might do the wrong thing (presumably because of immature software, not because of intent) and to do so would hardly further Tesla's ambitions to create autonomous machines.

The other thing you mentioned is that "wonly by design" was to gather data on human driver tolerances. Performing a study like this without explicit informed consent would be a serious ethical violation in any research field. Doing this sort of study would require disclosing to participants that the study is intended to measure their tolerance to dangerous situations. The disclosures that accompany FSD beta make no mention that testers are being studied for their "tolerance" to any situation. It's also unclear what the purpose of collection such data would be, since it hardly furthers the aims of autonomous machines. But, it's your fantasy. Your entitled to it.

Good luck with your HW4 vehicle and your career as a lab monkey.

Bye.
Lol, combative much? Wonky: adjective, askew or off center; unstable. I would venture to say this characterizes most beta products. I love your rose-colored glasses on how the world works, as if no corporation would ever have an army of legal professionals on hand to redefine the boundaries of their actions. I wouldn’t worry about my ability to articulate as much as the comprehension issue your reply seems to have. I never said the software was told to explicitly perform a dangerous maneuver. The idea was that the parameters were changed, perhaps to the extreme, in an effort to collect data on how the hardcoded FSD firmware interpreted said changes and the output that resulted, which very well could be unfavorable (this is where we go back to the worst-thing-at-the-worst-time waiver). But there’s no ethical challenge here, as we all explicitly agreed we would be in control at all times.

Regardless, I did not come here for insults or combative conversation, just friendly discussion. It’s cool to not agree with what I think; in fact, it’s preferred, as that is how we all learn about the cars and software we enjoy and have fun guessing what Tesla is up to. But your implied tone is both unnecessary and doesn’t invite casual, friendly discussion; in fact, it appears to fuel a more hostile debate, which I have no interest in, so “bye” to you, good sir or ma’am.
 

I can understand why @Supcom is taking your comments the way he is. The way you are making your comments can be read in a very bad light, even though you don't intend them to be read that way.

However, I don't think Tesla would be coding things specifically to manipulate customer disengagements... meaning I don't think they are going to code things in order to gauge, um, "what they can get away with," which is kind of how I am reading your comments. I don't think user disengagements would really be a good metric, generally, to help improve code, because people disengage for all kinds of reasons, real and imaginary.

I will use the windshield wipers as an example, though, of actually using user actions to help coding. Back in 2019 Tesla was working out some bugs in the automatic windshield wipers, and they explicitly used physical user feedback to help improve the code. In that case they explicitly stated the intention and told people what they were doing and what they wanted people to do.

I think there can be other ways user feedback (disengagements) can help guide code, but in a general sense, it would have to be guided.
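
To make that last point concrete, here is a tiny, purely hypothetical sketch (the fields and labels below are made up, not anything from Tesla's telemetry): raw disengagement events only become a useful signal once someone labels why the driver took over, which is the kind of guidance I mean.

```python
# Hypothetical sketch, not Tesla's actual pipeline: raw disengagement counts
# are noisy because people disengage for all kinds of reasons.
from collections import Counter

# Made-up events; the "reason" label would have to come from deliberate
# follow-up (driver feedback, telemetry review): that is the guidance.
disengagements = [
    {"reason": "safety", "location": "roundabout"},
    {"reason": "impatience", "location": "stop sign"},
    {"reason": "habit", "location": "on-ramp"},
    {"reason": "safety", "location": "stop sign"},
]

# Only the labeled safety-relevant events say anything about the software.
relevant = [d for d in disengagements if d["reason"] == "safety"]
print(Counter(d["location"] for d in relevant))
# Counter({'roundabout': 1, 'stop sign': 1})
```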
 
Maybe Supcom is right, and I articulate poorly, and I’m in denial. Sorry, Supcom. It’s just so hard for me to think something isn’t up, as two weeks ago I was having flawless drives and yesterday was riddled with disengagements. I’m not saying disengagements as in plowing into a guardrail or blindly darting into heavy traffic on purpose. But on a road that has been driven dozens of times, FSDb suddenly starts changing lanes for no apparent reason (into a lane that doesn’t make sense for upcoming turns), or picks a lane littered with imperfections that is also not the lane a person would normally be in for their destination. It feels suspiciously like they’re looking for a disengagement for “science”.
 
This has been discussed a lot for 11.4.x, but it appears to have moved away from relying on map data as much, like Elon said in November. It relies on vision and GPS, which is why it will enter turn lanes, take the wrong turn sometimes, go into the wrong lane directly before a turn, etc. IMO it was there in 11.3.6, but the dependency on maps has been decreased even more.

It's definitely not map data. You can see the map, the planner, and the roads the car handles fine all of the time.

And yes, there are map data updates with every drive, as Green pointed out. I think this data provides some input, but vision can also respond differently depending on the lighting, weather, cleanliness, other traffic, etc.
 
I figured, based on the new stop sign behavior every few days, that something was map-data based, but the different behavior in different conditions makes me nervous for true hands-off anytime soon, which Elon famously said (again) we’ll have this year (2023). Weather I can definitely understand, but lighting and windshield cleanliness they definitely need to figure out, as I don’t think the average customer will accept the position of the sun or spots on the windshield determining their vehicle’s ability to perform, when it’s out of beta that is.
 
Did you have a map update or a firmware update that precipitated the change in behavior? Without one of those two things, any change in behavior probably isn't going to be code based... perhaps a camera re-alignment could help you.
 
No visible updates, but it seems to be a commonly accepted theory that there are map data updates prior to each drive, which is why many of us are experiencing changes in stop sign and on- and off-ramp behavior at the same time. It seems the actual firmware updates are unmodifiable, period, but there are global parameters loaded (e.g., at stop signs, increase stop distance by x) that apply across the map and across the fleet, which might account for the large downloads I get every day with no visible update. I've been documenting this in video across two different model vehicles, and since both are acting in a similar manner, I assume it's not camera-calibration based. Green did a great explanation of this on his Twitter, and I think some theories were extrapolated from there.
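
Just to illustrate the theory (every name and number below is invented; this is not Tesla's firmware or its data formats): the idea is a fixed planner that reads tunable parameters pushed alongside the daily map data, so behavior can shift fleet-wide with no visible firmware update.

```python
# Purely speculative sketch of the "global parameters" theory above.
# All names and values are invented for illustration.
DEFAULTS = {"stop_sign_offset_m": 0.5, "creep_speed_mps": 0.8}

def load_drive_parameters(daily_overlay):
    """Merge a hypothetical daily parameter overlay onto fixed defaults."""
    params = dict(DEFAULTS)
    params.update(daily_overlay)
    return params

def stop_line_target(map_stop_line_m, params):
    """Fixed 'planner' logic; only the parameters vary between drives."""
    return map_stop_line_m - params["stop_sign_offset_m"]

# Two "days" with different pushed overlays stop at different points,
# even though the planner code itself never changed.
print(stop_line_target(30.0, load_drive_parameters({})))                           # 29.5
print(stop_line_target(30.0, load_drive_parameters({"stop_sign_offset_m": 2.0})))  # 28.0
```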
 
Instead of picking arguments, try rereading the thread with your brain on. You’re the only one claiming that a tap is sufficient. OP literally said “tap does not seem to do anything”.

I was just agreeing with OP’s observation and explained my workaround, which is similar to theirs.
Let's think of it this way: if the car is at a point where it is not moving because it is analyzing the situation, the tap is an action that moves the car out of that decision point into the next one. Basically, from analysis into a "holy crap, I need to drive now" state. It's basically enough to make the car move a smidgen.
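
Put another way, here is a toy sketch of that idea (invented states, nothing from actual FSD code): the small nudge is just an external event that pushes the planner out of a stuck analysis state and into motion.

```python
# Toy state machine for illustration only; not actual FSD internals.
def next_state(state, nudge):
    # A nudge is enough to break the deadlock at a decision point.
    if state == "analyzing" and nudge:
        return "driving"
    return state

print(next_state("analyzing", nudge=False))  # analyzing (car keeps sitting there)
print(next_state("analyzing", nudge=True))   # driving
```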
 
The below intersection looked like the picture 6 months ago. FSDb worked fine.

It's been redesigned. Now the northbound side has lane 1 (leftmost) as a dedicated left turn (unchanged), lane 2 is painted with chevrons and is not intended to be used, and lane 3 is straight or right turn. Northbound past the light, it went from two lanes to one.

Southbound now has 4 lanes. Lanes 1 & 2 are dedicated left turn (taking one of the original two northbound lanes), lane 3 is straight only, lane 4 is right only.

All paint and signs are clearly marked.

Now, going straight northbound, FSDb insists on using the chevron lane (the former straight lane). And despite the double-yellow line north of the light, it insists on using southbound lane 1 (the former northbound straight lane, now the southbound left-turn lane). It takes a safety intervention both when approaching the light northbound and to stay right of the double yellow north of the light.

Bottom line - this lane positioning is clearly ignoring whatever visual indicators it has to choose lanes, including being on the wrong side of a double yellow with cars facing you.

I'm sure it's using map data to determine lane choice, and that data has not yet been updated. I can't see FSD going far if it can't choose lanes based on road markings or signs instead of map data, especially when they conflict.

[Attachment: Screenshot_20230718_230134_Earth.jpg]
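
To put the complaint in concrete terms, here is a deliberately oversimplified sketch (invented structure, not how FSD actually selects lanes): if stale map lane topology outranks what vision sees painted on the road, the planner keeps choosing a lane that no longer serves that purpose.

```python
# Oversimplified illustration of map-vs-vision lane choice; invented logic.
def pick_lane(map_lanes, vision_lanes, trust_vision):
    """Return whichever lane the chosen source says is the through lane."""
    source = vision_lanes if trust_vision else map_lanes
    return source.get("straight", "none")

stale_map = {"straight": "lane 2"}       # pre-redesign: lane 2 was the through lane
current_vision = {"straight": "lane 3"}  # post-redesign: lane 2 is chevroned off

print(pick_lane(stale_map, current_vision, trust_vision=False))  # lane 2: wrong, needs intervention
print(pick_lane(stale_map, current_vision, trust_vision=True))   # lane 3: matches the paint
```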