Wiki MASTER THREAD: Actual FSD Beta downloads and experiences

Hi All,

After giving 11.3.3 a try a few weeks ago, I gave up on it because I couldn't complete my drive without multiple interventions. A few days ago I got 11.3.6 and have been giving it a go up to today. It has improved a bit over 11.3.3, but it's not something I want to keep using until there are more improvements.

What's improved in 11.3.6 over 11.3.3:
A) Biasing away from trucks is better; on a straightaway it won't drift to one side (or at least not as much)
B) Lane changes seem a bit faster (I have it set to Aggressive all the time), but still not fast enough for a busy metropolitan highway

What problems I still see (just from today's drive):
A) 11.3.6 still can't merge. Every merge attempt today was on a city street, and I had to intervene every time:
- In one instance it drove to the end of the merge lane, slowed down this time, drove a bit on the shoulder, and then attempted to merge. That's illegal where I live, and I had to intervene to squeeze in. (In this first instance it did not recognize that there was a merge lane at all, even though the lines and signs were clearly visible.)
- On the second merge attempt it again drove right to the end of the merge lane and changed lanes without signalling; I intervened so I could report that it needed to signal.
- On the third attempt it again drove right to the end of the merge lane. This time it signalled before changing lanes, but as it attempted to merge I noticed there was a car very close behind me. I don't know if they were there before or had tried to pass me while it was taking its time.
B) Sometimes the wheel jerks one direction or the other while turning, as if it suddenly decided it needed to turn wide and then changed its mind. It's so fast that if my hand is on the steering wheel I can inadvertently disengage FSD because my hand puts enough torque on the wheel. This may be a corrected version of the bug I saw in 11.3.3 where, when attempting to turn left, it would steer a bit to the right before steering left (or steer a bit left before turning right).
C) It occasionally treats the black filler used to seal cracks in the road as lane lines, even though the sealer is black and there are no actual lines on the road.
D) It completely pooched a left-hand turn; see the image below:
Screenshot from 2023-04-18 18-13-00b.png

1- Navigation says "turn left immediately"
2- The FSD visualization shows the left-hand turn lane, but the car doesn't attempt to change lanes; it proceeds forward at speed, then suddenly turns in front of the left-hand turn lane (luckily there were no cars there)
3- It drives forward and I had to intervene

What other drivers seem to think:
A) In busy highway traffic, no one would let my car change lanes; there's still just enough hesitation to make people assume I'm letting them pass (do I have to turn on Mad Max mode?)
B) I got honked at a few times: once at a stop sign (it took too long to start moving again from the stop), and twice while turning right, due to hesitation


Conclusion:
1) 11.3.6 has improvements over 11.3.3
2) Tesla needs to add a comment button for when you punch the accelerator or manually use the turn signal; those are interventions if I'm using FSD, and those mistakes are not being picked up
3) Merging has improved a bit, but just a bit; I think it's still dangerous to use
4) Highway lane changes (and I would also include merging) are done way too late, at the last minute. Merging happens right at the end of the merge lane rather than at the first opportunity, and changing lanes to get to an exit is also left really late. On a busy highway I usually start getting over about 2 km before my exit; FSD is doing it at around 500 m while trying to get across 4 lanes of heavy traffic. The navigation system should account for the number of lanes that need to be crossed to make sure there's enough time/distance to get all the way over to the other side of the highway, especially since FSD still hesitates long enough that other drivers may not let the car change lanes. (A rough back-of-the-envelope sketch of what I mean follows below.)
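
To put some numbers on point 4, here's a rough sketch of the lead distance crossing several lanes actually needs. The per-lane-change time and hesitation numbers are just my own guesses from watching it drive, not anything Tesla has published.

```python
# Back-of-the-envelope estimate of how far before an exit the car should start
# moving over. Every number here is my own guess, not anything from Tesla.

def lead_distance_m(lanes_to_cross: int,
                    speed_kmh: float = 100.0,
                    secs_per_lane_change: float = 6.0,  # guess: signal + find gap + move over
                    hesitation_secs: float = 4.0,       # guess: extra wait when nobody yields
                    safety_factor: float = 1.5) -> float:
    """Distance (metres) needed to cross `lanes_to_cross` lanes before an exit."""
    speed_ms = speed_kmh / 3.6
    time_needed_s = lanes_to_cross * (secs_per_lane_change + hesitation_secs)
    return time_needed_s * speed_ms * safety_factor

# Crossing 4 lanes of heavy traffic at 100 km/h:
print(round(lead_distance_m(4)))  # ~1667 m -- much closer to my 2 km habit than to 500 m
```

Even with fairly generous assumptions, starting at 500 m doesn't leave room for a single refused gap across 4 lanes.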


Again, this should just be considered a bug report. I really hope someone at Tesla reads these things to get more details. On my short 30 km drive I still had multiple interventions (all of the problems above were from TODAY). I think the progress is amazing, but... I'll be turning it off.
 
Good call, good call (to turn it off).
 
Conclusion:
2) Tesla needs to add a comment button for when you punch the accelerator or manually use the turn signal; those are interventions if I'm using FSD, and those mistakes are not being picked up
3) Merging has improved a bit, but just a bit; I think it's still dangerous to use
4) Highway lane changes (and I would also include merging) are done way too late, at the last minute. Merging happens right at the end of the merge lane rather than at the first opportunity, and changing lanes to get to an exit is also left really late. On a busy highway I usually start getting over about 2 km before my exit; FSD is doing it at around 500 m while trying to get across 4 lanes of heavy traffic. The navigation system should account for the number of lanes that need to be crossed to make sure there's enough time/distance to get all the way over to the other side of the highway, especially since FSD still hesitates long enough that other drivers may not let the car change lanes.
Supposedly Tesla does know when we press the accelerator. OG Beta Testers were informed that accelerator presses are tracked. The turn signal is a good question though... the wrong turn signals are the biggest consistent failure by FSD Beta for us. It gets into left/right turn lanes without signaling, signals way too late for turns (especially at higher speeds), and has lots of erroneous turn signals.

I agree that merging is still dangerous & illegal without proper turn signal use. And if I turn on the signal then it merges & completes an immediate lane change due to the turn signal. I've tried to get the timing down to shut off the turn signal at the right moment so it will just merge & not change lanes, but it's tough to find the right timing.

I've found lane changes on the highway for exits to be too early. It sits behind slow traffic needlessly when I can see far down the highway & know that there will be easy merge options closer to my exit that will move faster. On the other hand, it's way too slow to merge to pass slower traffic. FSD Beta basically never changes lanes automatically on the highway for me because I always initiate the lane changes first anticipating traffic before it realizes. The only time it changes lanes automatically is moving us right toward an exit.
 
I've noticed that the visual horizon on the display has moved in to what appears to be about 100 yards, whereas on the 10.69 version it was further out. IF (I do not know if this is true) FSD only sees 100 yards ahead, then it cannot make decisions about traffic further away than that, so it would just be guessing about when to change lanes for traffic ahead.
 
Not to contradict, but I actually noticed the opposite yesterday. Driving on a two-lane road I was impressed that it was seeing cars way down the road, easily 400 yards out. I could barely see the cars by eye, and it was looking way ahead. It's possible that in some display modes they cut off the top of the display to keep only the relevant info (cars) shown.
 
I think he means the line where you can no longer distinguish drivable and non-drivable space on the display. My theory (posted here when I first got v11) was that the "horizon line" visible there marks the "planning area": in other words, even though the car can see things further away, it is not considering them in its actions. That said, the issue being discussed likely also involves the difference in speed between the FSD vehicle and the soon-to-be lead vehicle, as I am finally experiencing FSD switching lanes to get into a faster lane when it should (before slowing, presumably because the car in front isn't that much slower) and even when it shouldn't (also before slowing, presumably because it's ignoring a red light displayed just beyond the "planning area", and no cars is a "faster lane" than stopped cars in the absence of said red light).
 
Um. That there is a display, meant to be seen by humans, in part to help said humans to Drive The Darn Car.

The SO's a Human Factors (Ergonomics) engineer. I'm not one, but I hang around enough to hear the SO gritting teeth and complaining about poor designs. One HFE guiding religion: Too Much Information Is Dangerous Because It Causes Distractions. Ask airplane pilots: too many bells and whistles going off means that bells and whistles get ignored, or distract the pilot from something more important. Sometimes followed by the discovery that an aircraft can't fly through a cliff that was mistaken for a cloud bank. There's been a lot of work to reduce the number of distractions in a cockpit so that when an alarm goes off, it's not masked by other, non-critical things.

I have got to believe that the people in charge of Tesla's display have that in mind. In fact, it's obvious that they do: all of you have seen those wonderful engineering displays that show wireframes, block objects, and zillions of other things on the occasional video that Tesla dumps out to the world to show that they're Working On Stuff. As far as I know, whether one is a shill, a hater, or what-all, one doesn't get just a cut-down version of that engineering display: one gets a display designed to work with ma-and-pa-sixpack drivers. And that means helpful displays, not distracting displays.

So, it's a pretty good guess that the range of the horizon isn't how far out the car can detect it, but rather what the interface designer thought was appropriate. Yeah, the car can't see as far as the Moon (although, with a Moon on the horizon, maybe it could!), but I'll bet that it can see and process stuff that's farther out than what's displayed on the screen. How much farther out? Durned if I know, and I'll bet there's no user-oriented specifications sheet that does say.
 
So, it's a pretty good guess that the range of the horizon isn't how far out the car can detect it, but rather what the interface designer thought was appropriate. Yeah, the car can't see as far as the Moon (although, with a Moon on the horizon, maybe it could!), but I'll bet that it can see and process stuff that's farther out than what's displayed on the screen. How much farther out? Durned if I know, and I'll bet there's no user-oriented specifications sheet that does say.
Guesses are cool and all, but I have provided observations that match my hypothesis, plus a possible explanation for a behavior I haven't experienced. Further, I'm not sure why you think a line indicating which objects the car is and isn't planning for would be irrelevant to a user display where the objects of primary focus are already another color. However, since I didn't explicitly state my first observation when I originally posted the hypothesis, it is worth noting that in my experience the vehicle also does not start braking for red lights or stop signs until they meet that horizon line (unless it is instead slowing for a lead vehicle in that space).
 
The conversation above piqued my interest, so I paid much closer attention on another drive. Interestingly, I noticed some conflicting things. Several things do change right at that "horizon line": the planned path cuts off there (a curve will show up as it appears at the line, while a poorly decided incorrect "lane change" will appear closer to the vehicle), and traffic-light frames aren't drawn (and filled blue, or whatever is relevant) until they cross it. But other things indicate that at least some attention is being applied to objects further out. The only example I saw on this drive was lead vehicles: vehicles going faster than me were still colored as lead vehicles after crossing the line, and even when a vehicle beyond the "horizon line" switched into my lane, it was also colored as a lead vehicle. I don't know whether this means Tesla doesn't use a "planning zone," uses multiple "planning zones," or has different behavior for different classes of objects, but it is interesting.
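
Just to illustrate what I mean by that last sentence (this is a made-up toy, not a claim about Tesla's actual software), a per-class "planning zone" would amount to filtering detections by a class-specific range before the planner considers them:

```python
# Toy illustration of the "planning zone" idea discussed above. The classes,
# ranges, and structure are invented for the sake of argument; nothing here
# reflects Tesla's actual software.

from dataclasses import dataclass

# Hypothetical per-class planning ranges, in metres
PLANNING_RANGE_M = {
    "lead_vehicle": 400.0,   # lead cars seem to be tracked well past the drawn horizon
    "traffic_light": 100.0,  # lights/stop signs only reacted to once inside the "horizon line"
    "stop_sign": 100.0,
}

@dataclass
class Detection:
    kind: str
    distance_m: float

def objects_for_planner(detections: list[Detection]) -> list[Detection]:
    """Keep only detections within their class-specific planning range."""
    return [d for d in detections
            if d.distance_m <= PLANNING_RANGE_M.get(d.kind, 100.0)]

scene = [Detection("lead_vehicle", 250.0),   # kept: beyond 100 m, but still planned for
         Detection("traffic_light", 150.0)]  # dropped: visible, but not yet braking for it
print(objects_for_planner(scene))
```

That would be consistent with lead vehicles staying tracked well past the drawn horizon while lights and stop signs only get reacted to once they cross it.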
 
Where there is a human-machine interface, there must be a good understanding by the human of what the machine will do and when it will do what it is supposed to do. We do not have that here. It is true, I suppose, that the car can see as far as or further than the driver, but I don't know that, so I am left to guess what the car will do. If I knew the car could see and react to stopped traffic a mile ahead, even beyond the displayed horizon, then I could make a rational decision to let the car continue and not intervene. If I knew the car could only see to the displayed horizon, I could and would intervene early to slow the car. It is true that too many distractions can cause a problem, but that is not the problem we face here. Instead of release notes telling me what percentage improvement something has, give me information on how to use the system and what its limitations are. I am a user of the system; tell me what it will and will not do, so I am not surprised and asking myself "what's it doing now?" That's another aviation phrase I've heard, normally followed by "Oh *sugar*!"
 
Yep. And I have no argument with your use of the words "Planning Zone"; I don't know if that's a term that popped up on Autopilot Day from Tesla, or if it's just your own way (and I truly don't mean anything bad by this) of internally hypothesizing and rationalizing what the car's algorithms are doing.

And now I think I'm going to respond to @old pilot's post because, actually, when it comes to Beta testing, this hypothesizing is kind of important.
 
@old pilot, very good points. And, presumably with you being a pilot, I have a couple of curious questions to ask you.

First off, I'm a reader of the internet funny pages, like Ars Technica and similar and have been reading about $RANDOM air disasters, on and off, for, well, forever.

So, I've heard of stuff like a voice yelling, "Pull up! Pull up!", or other various alarms, visual, audible, and tactile stuff like stick shakers (stalls?). And I think I read an article in the long-ago that talked about There Being Too Darn Many Alarms or something. So, with you being a pilot, I'm guessing this is stuff that you're familiar with?

The question, I guess, is how much training you get on these alarms. I'd like your opinion: for example, take that "Pull up!" audible message I've heard about. I'd guess that goes off when the flight path intersects something solid, like the ground or a mountain or what-all. But when they train pilots, are the exact criteria that set that alarm off explained in excruciating detail? And is that true for all other alarms that might go off in an airplane for which a pilot is certified? If so, is this an FAA mandate of some kind?

And, going along these lines, there's autopilots of the aviation kind. Um. I've messed about with Microsoft Flight Simulator bopping around with Cessnas and the like, which are of the flavor, "fixed altitude and heading", but that's it. I understand that the autopilots on bigger airplanes get a heck of a lot more complicated. Additionally, I've had the impression that aircraft autopilots aren't particularly good at, say, dodging other aircraft, there being a Lot of Sky out there that doesn't contain, say, trees and guardrails. Or parked aircraft. Or intersections with stop signs. Comment?

Having said that, let's return to Tesla.

First off: With regards to training on how to use it, there's not a lot. Yes, I've read the manual, and I know that the great majority of the public, including the FSD-b techie types on this forum, often don't read the manual in detail. One just gets in, double-thumps the gear shifter, and one is off and running. And observing. And trying to keep the car and oneself out of trouble.

And this is where it gets interesting. Put people in an environment where programmed things are happening with no explanations, and people are going to go pattern-searching. It's what people's brains do. We hypothesize cause and effect. Yeah, we can read the release notes; but those release notes are written in a lingo favored by programmers who are writing algorithms for self-driving cars, and neither a dictionary nor a grammar guide has been supplied.

Which leads to, well, interesting statements by people when they report how their Teslas are getting around when running FSD-b. No, there's not a person inside the FSD-b computer, but a lot of people (including myself) describe the actions of the car as if there was. (English majors call this the, "Pathetic fallacy".)

It gets worse than that. A very long time ago, I took the mandated 3-credit college course in Communications for non-majors and, no, this wasn't the one about putting transistors together. It was more about how people communicate with each other. Now, that was a very long time ago; there have been a ton of brain studies since on how the brain, memory, and all that actually work, so what follows may not match the current vision of reality held by those who actually know what's going on. But there's this idea from the course: people, according to this course, when they claim they know someone, have actually created a mental model of that person in their own brain. The better one knows that person, the better the mental model becomes; and, using that mental model (which comes naturally to people), one can predict what the known person may do, how they may feel about something, or what they may actually be thinking.

This whole mental-model building is supposedly not something that we actually think about doing: it's pretty much instinctual. (And I don't want to get into explanations of how a "subconscious" fits into the concept, or what happens to the owner of a mental model when somebody one knows well dies.) But it bothers people quite a bit when somebody one knows changes: it takes a while to adjust that mental model and, until that mental model gets updated, the owner of the model is going to make mistakes.

Back to Tesla and FSD-b. So, we can't help ourselves: we build mental models of what we think the car is doing, or what we think it is thinking. Then... here comes the change from FSD-b 10.69.3 to 11.3.6. And we had a rule book for neither 10.69.3 nor 11.3.6... except for the mental model we had built for the first one, which had to be adjusted for the second.

Yeah, the release notes help a bit. But that's only partial information. And people will make mistakes, that being what we do.

Might explain why so many people on this forum run around like there's steam coming out of their ears every time a new release shows up.

I wonder whether, in the future when FSD-type software may be available from multiple car manufacturers, drivers' ed classes will have to be segregated by the manufacturer of the FSD software that's out there.
 
"Pull Up, Pull Up" is simple very little thought involved. It's the ground proximity warning, as you can guess the airplane is about to hit the ground. The response is disconnect the auto pilot, firewall the throttles (something you would never do except for the fact that hitting the ground would destroy the engines anyway, so) pitch up to follow the flight director until clear of the danger. Kind of like getting the big red" Stop" warning in the car, you should Stop.

As to training, different airlines have different operating specs (procedures); some do yearly training, some every six months. It also depends on the type of operation you're flying (FARs Part 135 or 121). Most transport-category airplanes require specific training for the type of aircraft. The autopilots on most large aircraft can do a much better job flying than the human, simply because they can process data much faster and react to that information. Most autopilots can do autolandings with zero visibility, although some visibility is required for the human pilot to taxi to the gate. But all of this works because of ground-based navigation systems, even for the GPS systems, and it is all done without using visual cues. What is happening with self-driving cars is a visual system, and not in the tightly controlled environment that exists in the aviation industry, so, in my opinion, it is much more difficult.

The point I was making was that the release notes, as they currently are, are useless. As the operator of the system I don't need to know how the magic works; I need to know how to work the system and what its limitations are. When this is truly "Full Self Driving" and all I need to do is push a button and away we go, my need for all that information will no longer exist.
 
So, "planning zone" comes from my experience with these guys, but the verbiage may not be exactly right even in that context. TBH, I haven't watched autopilot day and don't know Tesla's terms, hence the quotes.
 
But all this works because of ground-based navigation systems, even for the GPS
Just FYI, my 6-seat Bonanza had a radar altimeter. No "pull up" annunciation, but it would give an alert when descending past a preset minimum altitude above the ground. No ground-based or GPS nav involved; almost as simple as a grocery store automatic door opener.
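
It really is about that simple. Purely as an illustration (made-up numbers, and obviously the real box is certified avionics, not a few lines of code), the whole alert boils down to a threshold check:

```python
# Toy sketch of a radar-altimeter minimum-altitude alert, just to show how
# simple the logic is. The numbers and behavior are illustrative only.

def radar_alt_alert(height_agl_ft: float, vertical_speed_fpm: float,
                    minimum_ft: float = 200.0) -> bool:
    """Alert when descending through the preset minimum height above the ground."""
    return vertical_speed_fpm < 0 and height_agl_ft <= minimum_ft

print(radar_alt_alert(height_agl_ft=180.0, vertical_speed_fpm=-500.0))  # True  -> chime
print(radar_alt_alert(height_agl_ft=180.0, vertical_speed_fpm=300.0))   # False -> climbing
```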

Comparing Tesla FSD functions to aviation autopilots is like comparing the internet to a transistor radio. Several orders of magnitude in multiple dimensions different.
 
Comparing Tesla FSD functions to aviation autopilots is like comparing the internet to a transistor radio. Several orders of magnitude in multiple dimensions different.
But Tron, you are quite right that the same human factors issues pertain. As a driver, a pilot and a product designer, I too have criticisms of Tesla’s interface designs.