@old pilot, very good points. And, with you being a pilot, I have a couple of curious questions to ask you.
First off, I'm a reader of the internet funny pages, like Ars Technica and similar, and have been reading about $RANDOM air disasters, on and off, for, well, forever.
So, I've heard of stuff like a voice yelling, "Pull up! Pull up!", and various other alarms: visual, audible, and tactile stuff like stick shakers (stalls?). And I think I read an article in the long-ago that talked about There Being Too Darn Many Alarms or something. So, with you being a pilot, I'm guessing this is all stuff you're familiar with?
The question, I guess, is how much training you get on these alarms. I'd like your opinion: for example, take that "Pull up!" audible message I've heard about. I'd guess it goes off when the flight path intersects something solid, like the ground or a mountain or what-all. But when they train pilots, are the exact criteria that set that alarm off explained in excruciating detail? And is that true for all the other alarms that might go off in an airplane for which a pilot is certified? If so, is this an FAA mandate of some kind?
And, going along these lines, there are autopilots of the aviation kind. Um. I've messed about with Microsoft Flight Simulator, bopping around with Cessnas and the like, whose autopilots are of the "hold a fixed altitude and heading" flavor, but that's it. I understand that the autopilots on bigger airplanes get a heck of a lot more complicated. Additionally, I've had the impression that aircraft autopilots aren't particularly good at, say, dodging other aircraft, there being a Lot of Sky out there that doesn't contain, say, trees and guardrails. Or parked aircraft. Or intersections with stop signs. Comment?
Having said that, let's return to Tesla.
First off: With regards to training on how to use it, there's not a lot. Yes, I've read the manual, and I know that the great majority of the public, including the FSD-b techie types on this forum, often don't read the manual in detail. One just gets in, double-thumps the gear shifter, and one is off and running. And observing. And trying to keep the car and oneself out of trouble.
And this is where it gets interesting. Put people in an environment where programmed things are happening with no explanations, and people are going to go pattern-searching. It's what people's brains do. We hypothesize cause and effect. Yeah, we can read the release notes; but those release notes are written in a lingo favored by programmers who are writing algorithms for self-driving cars, and neither a dictionary nor a grammar guide has been supplied.
Which leads to, well, interesting statements by people when they report how their Teslas are getting around when running FSD-b. No, there's not a person inside the FSD-b computer, but a lot of people (including myself) describe the actions of the car as if there were. (English majors call this the "pathetic fallacy".)
It gets worse than that. A very long time ago, I took the mandated 3-credit college course in Communications for non-majors, and, no, this wasn't the one about putting transistors together. It was more about how people communicate with each other. Now, that was a very long time ago; there have been a ton of brain studies since on how the brain, memory, and all that actually work, so what follows here may not match the current vision of reality held by those who actually know what's going on. But there's this idea from the course: people, when they claim they know someone, have actually created a mental model of that person in their own brain. The better one knows that person, the better the mental model becomes; and, using that mental model (which comes naturally to people), one can predict what the known person may do, how they may feel about something, or what they may actually be thinking.
This whole mental-model building is supposedly not something that we actually think about doing: it's pretty much instinctual. (And I don't want to get into the explanations of how a "subconscious" fits into the concept, or what happens to the owner of a mental model when somebody one knows well dies.) But it bothers people quite a bit when somebody one knows changes: it takes a while to adjust that mental model and, until that mental model gets updated, the owner of the model is going to make mistakes.
Back to Tesla and FSD-b. So, we can't help ourselves: we build mental models of what we think the car is doing. Or what we think it is thinking. Then... here comes the change from FSD-b 10.69.3 to 11.3.6. And we had a rule book for neither 10.69.3 nor 11.3.6... except for the mental model we had built for the first one, which had to be adjusted for the second.
Yeah, the release notes help a bit. But that's only partial information. And people will make mistakes, that being what we do.
Might explain why so many people on this forum run around like there's steam coming out of their ears every time a new release shows up.
I wonder: in the future, when different FSD-type software may be available from multiple car manufacturers, will drivers' ed classes have to be segregated by the manufacturer of the FSD software that's out there?