
Seriously, why does Elon feel the need to lie about even short-term FSD beta timelines?

that mimics an animal's brain

No, it ABSOLUTELY does not. This is an outdated model of how brains work from the 1940s that was disproven in the 1950s, just before computer neural networks were created. Attempting to paint this as in ANY way mimicking a brain is incorrect on every level.

will likely be misinterpreted until the neural network is fed enough inputs to understand what to do when it sees them.

Because images of trains haven't been collected by the fleet since 2016? And regardless of whether it knew the vehicle was a train, it attempted to cut into the path of an oncoming vehicle that it displayed on screen. That problem has existed since they stopped publishing the v8 "fsd" software to the extremely early testers. This isn't an object detection problem, it's a path planning problem, and there are countless examples of it happening.

Tesla's FSD is something like a toddler in this respect.

Toddlers have a concept of self-preservation. FSD does not. It doesn't work anything like a human of any age, and attempting to imply that it does is why Josh Brown died in 2016 and why we continue to see fatalities with AP and crashes with FSD.
 
This isn't an object detection problem, it's a path planning problem, and there are countless examples of it happening.
This is very true. 10.12.2 has issues with the planner, as evidenced by the car picking the wrong lanes or having trouble switching lanes at the last moment. I believe this is due to several hard-coded functions being swapped out for newly created neural nets, and older/redundant neural nets being deprecated or removed. I expect to see this type of behavior as they switch more of the hard-coded functions to neural nets and then tweak those NNs with data from testers.
 
Tesla has chosen a neural network approach to its FSD functionality. This technology involves creating a network of pseudo-neurons that mimics an animal's brain, then training the network on a series of inputs (video clips from Tesla vehicles, in this case) until it reliably produces the desired output (driving behavior, in this case). One implication of this approach is that unusual inputs (trains, airplanes on the ground, cruise ships) will likely be misinterpreted until the neural network is fed enough inputs to understand what to do when it sees them.

Tesla's FSD is something like a toddler in this respect. Parents carefully watch their toddlers because they know that toddlers, not fully understanding the world, are likely to do dangerous things, like touch hot stoves or walk into the street. Once, I was walking with two of my friends and their toddler when a police car zoomed past at high speed, with flashing lights and siren. Amazed, the toddler asked, "what was that?" Had we not been there, it's entirely possible that my friends' child would have been killed by that police car. A Tesla on FSD that doesn't do the right thing when it's confronted by a train is like that toddler, but it has no way to exclaim, "what was that?"

No, strike that; a Tesla's neural network is much less sophisticated than that of a human toddler. I haven't seen estimates in the last few years, but the last I heard, the best neural networks had a complexity comparable to that of an insect's brain. That's what Tesla's trying to do: it's trying to teach an insect to drive a car. That may sound ludicrous, but insects are pretty good at navigating the world, so it may well be an achievable goal, but it's not an easy task.
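To make the "train it until it reliably produces the desired output" idea concrete, here's a toy sketch of that kind of supervised learning. To be clear, this is not Tesla's code; the network size, the random "frames," and the labels are all invented for illustration, and a real system is vastly larger and more complicated:

```python
import numpy as np

# Toy stand-in for "video clips in, driving behavior out" supervised
# learning. Inputs and labels are random here; in a real system they'd
# be camera frames paired with what a human driver actually did.
rng = np.random.default_rng(0)
n_samples, n_pixels, n_hidden = 512, 64, 32
n_outputs = 3  # steering, braking, acceleration

X = rng.normal(size=(n_samples, n_pixels))   # fake "frames"
Y = rng.normal(size=(n_samples, n_outputs))  # fake "human driving" labels

W1 = rng.normal(scale=0.1, size=(n_pixels, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))

lr = 1e-2
for step in range(2000):
    H = np.tanh(X @ W1)   # hidden layer of "pseudo-neurons"
    pred = H @ W2         # predicted steering/braking/acceleration
    err = pred - Y
    # Backpropagation: nudge the weights to shrink the error. This
    # feedback loop is the "learning by exposure" part.
    dW2 = H.T @ err / n_samples
    dH = (err @ W2.T) * (1.0 - H**2)
    dW1 = X.T @ dH / n_samples
    W2 -= lr * dW2
    W1 -= lr * dW1

# An input the network never trained on (a "train," say) still produces
# *some* output -- just not a trustworthy one. There's no concept of
# danger in here, only whatever mapping the training data induced.
novel_frame = rng.normal(size=(1, n_pixels))
print(np.tanh(novel_frame @ W1) @ W2)
```

Those last lines are the whole point: a frame the network was never trained on gets whatever output the weights happen to produce, with no notion that the answer might be dangerous.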

You've chosen to emphasize the word "TRAIN" in your message as if it were self-evident what one is; but you know what a train is, and you know how dangerous one can be, because you've seen trains, in real life and in photos, movies, etc., for your whole life. Your own neural network is much more sophisticated than what's in a Tesla, so you can learn more quickly, and you deeply understand how dangerous it would be to stand in front of a train as it barrels down the tracks. A Tesla's neural network, by contrast, is a simple mapping of visual inputs to steering, braking, and acceleration outputs. It does not understand what a train is, or even what a car is -- although a Tesla can identify a car as such with good reliability, it doesn't really understand what a car is, much less a train. To the Tesla, a train is just an unidentified object, and maybe not even that; it might just register as an unknown mass of pixels.

As somebody with a background in both human cognitive psychology and computers, I am very impressed with what Tesla has managed to do with its FSD features. I also know, however, that it will take a lot more exposure to corner cases, like trains, before the system will be able to handle a wide enough array of driving situations for the system to be as safe as a human driver. Even then, there will be novel situations that may confuse it -- volcanic flows, helicopters landing on the street, chickens falling off a truck, etc. It'll be many years before a neural network like Tesla's will be able to reliably handle such situations.
Driving policy is written in traditional code.
 
@srs5694 well said. A logical, reasonable analysis. Now let's hear the "but I spent a lot of money..." and "but Elon said [insert phrase here] on Twitter..."
I was replying to @2101Guy's comment about FSD being unable to recognize trains or airplanes; I was attempting to explain why that's the case. I wasn't directly addressing the main point of this thread, which is a complaint about Musk's inaccurate timelines, or trying to either justify or complain about those inaccurate timelines. As a customer, I'm not happy with the sluggish progress on FSD development. Also as a customer, I went into it with my eyes open. I bought FSD for $5,000 in 2019, knowing that it might never amount to anything, and also knowing Musk's tendency to specify unrealistic timelines. (That Musk attribute was well known, even then.) For me, it was a gamble, and so far it has not paid off. It might yet pay off, but it'll be worth the $5,000 only if I get a working feature well before I sell the car (or if that eventually-working feature increases my car's resale value).
So are you implying that Elon (who also has a diverse educational/technical background like you) knows the last statement to be true as well, and is in fact intentionally misleading customers/potential customers by stating (repeatedly) timelines that are far different from what you (and 99% of everyone else) have come to conclude?

Or are you saying he literally is just that clueless and that far out of touch...constantly?
Neither. As I just said, I was simply commenting on the nature of neural networks in an effort to explain why they can do things that seem phenomenally stupid to humans.

I prefer not to draw any conclusions about Musk's state of mind or motives. There are a wide range of possibilities, and which of them is true doesn't really matter, unless maybe there's a lawsuit, in which case Musk's state of mind might be relevant. Of course, I'd prefer that Musk simply shut up about timelines and leave that to Tesla's PR department, except of course that Tesla doesn't have a PR department.
 
This is very true. 10.12.2 has issues with the planner, as evidenced by the car picking the wrong lanes or having trouble switching lanes at the last moment. I believe this is due to several hard-coded functions being swapped out for newly created neural nets, and older/redundant neural nets being deprecated or removed. I expect to see this type of behavior as they switch more of the hard-coded functions to neural nets and then tweak those NNs with data from testers.
Why would you use a neural net to plan out lane selection? That seems like the one thing that should probably be written in C++. I.e., if I need to turn left, I should enter a left-turn lane. It's not like it's some sort of probabilistic thing like vision, where you need to train some model. I guess one explanation is that the lanes are getting classified wrong because some neural network upstream of the planner didn't label them correctly, or didn't label them until it was already too late. Maybe the lane was occluded or something; who knows.
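For what it's worth, the deterministic logic being described here is easy enough to write down. A toy sketch (in Python rather than C++, purely for brevity; the lane labels and the maneuver input are invented and would have to come from some upstream perception or map module):

```python
# Toy rule-based lane selection: no learning, just "if I need to turn
# left, get into a lane that allows a left turn." The per-lane labels
# are assumed to come from an upstream perception or map module.
def pick_lane(lanes, next_maneuver):
    """lanes: allowed maneuvers per lane, listed left to right,
    e.g. [["left"], ["straight"], ["straight", "right"]].
    Returns the index of the lane to target."""
    # Prefer a lane explicitly marked for the maneuver.
    for i, allowed in enumerate(lanes):
        if next_maneuver in allowed:
            return i
    # Fallback: leftmost lane for a left, rightmost for a right.
    if next_maneuver == "left":
        return 0
    if next_maneuver == "right":
        return len(lanes) - 1
    return len(lanes) // 2  # middle-ish lane for going straight

lanes = [["left"], ["straight"], ["straight", "right"]]
print(pick_lane(lanes, "left"))   # -> 0
print(pick_lane(lanes, "right"))  # -> 2
```

And that's the catch: if the upstream network mislabels a lane, or labels it too late, the cleanest hand-written planner in the world still picks the wrong one.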
 
srs5694 said:
that mimics an animal's brain
No, it ABSOLUTELY does not. This is an outdated model of how brains work from the 1940s that was disproven in the 1950s, just before computer neural networks were created. Attempting to paint this as in ANY way mimicking a brain is incorrect on every level.
It's not accurate in every detail, but it is accurate to a first approximation. Neural networks are intended to mimic the way a brain works, but that mimicry is not exact or precise. I studied them in graduate school in the 1990s, along with actual human psychology, so I understand the basic principles, although I'm nowhere near the level of expertise that Tesla has working for it on this problem. In particular, neural networks are supposed to learn by exposure to inputs and by feedback, similar to the way animals learn, rather than by having rules and procedures guiding them, the way most computer programs are written. (Of course, the networks themselves are coded with conventional programming-language rules, but the tasks that the networks perform are not handled in this way.)
Because images of trains haven't been collected by the fleet since 2016? And regardless of whether it knew the vehicle was a train, it attempted to cut into the path of an oncoming vehicle that it displayed on screen. That problem has existed since they stopped publishing the v8 "fsd" software to the extremely early testers. This isn't an object detection problem, it's a path planning problem, and there are countless examples of it happening.
I confess I didn't watch the whole 17-minute video link that @2101Guy posted when I made my earlier reply. I did just now, and it's clear that the Tesla did detect the tram, and interpreted it as a vehicle, so this was not the sort of failure-to-detect issue that @2101Guy's description led me to believe it was:
2101Guy said:
Like...HOW...does FSD NOT see a TRAIN COMING DIRECTLY AT YOU in broad daylight?
What actually happened is a different problem. For those who don't want to watch the whole video, the incident in question takes place at about 16:15 in the video. The car tried to make a left turn into the path of an urban tram. The center display clearly shows the tram, and it's shown as such, so Tesla does seem to have enough data to have trained its neural network to correctly identify them, contrary to @2101Guy's summary. The light was green, but it was not a left-turn arrow. The video's resolution was too low for me to make out if the car correctly identified the solid-green light. Even if the tram had been another car, it would have been wrong for the Tesla to turn at that moment; but in the video, the car seemed to be about to make a turn straight into the path of the tram. Having watched the video, I can't say why the Tesla did what it did, but it was not a failure to recognize and identify the tram.
Toddlers have a concept of self-preservation. FSD does not. It doesn't work anything like a human of any age, and attempting to imply that it does is why Josh Brown died in 2016 and why we continue to see fatalities with AP and crashes with FSD.
Get real. My attempt to explain neural networks, however much you may disagree with my representation, does not put anybody at risk. Included in my explanation were multiple statements to the effect that the current state of Tesla's neural network is nowhere near human driving capability. How you get from that to my putting people's lives at risk is beyond me.
 
Why would you use a neural net to plan out lane selection? That seems like the one thing that should probably be written in C++. I.e., if I need to turn left, I should enter a left-turn lane. It's not like it's some sort of probabilistic thing like vision, where you need to train some model. I guess one explanation is that the lanes are getting classified wrong because some neural network upstream of the planner didn't label them correctly, or didn't label them until it was already too late. Maybe the lane was occluded or something; who knows.
I don't know, but my guess would be that Tesla is trying to make the car drive in situations where maps aren't available or are of poor quality. It's just as if I dropped you into a totally unfamiliar location and asked you to drive. You use your visual scanners (I love using that phrase since I heard it in Star Wars EP4 - LOL) to drive and pick lanes based on markers on the road. You don't know that a mile up the street there will be a dedicated left-turn lane; you just know that somewhere up ahead you'll need to turn left, so you move your car to the left lane and see what comes up when you reach the Del Taco that someone told you to make a left at. :)
 
What actually happened is a different problem. For those who don't want to watch the whole video, the incident in question takes place at about 16:15 in the video. The car tried to make a left turn into the path of an urban tram. The center display clearly shows the tram, and it's shown as such, so Tesla does seem to have enough data to have trained its neural network to correctly identify them, contrary to @2101Guy's summary. The light was green, but it was not a left-turn arrow. The video's resolution was too low for me to make out if the car correctly identified the solid-green light. Even if the tram had been another car, it would have been wrong for the Tesla to turn at that moment; but in the video, the car seemed to be about to make a turn straight into the path of the tram. Having watched the video, I can't say why the Tesla did what it did, but it was not a failure to recognize and identify the tram.
As I mentioned earlier, driving policy is coded in traditional code. The NN identified the tram, but for some reason the traditional code made the car turn straight into the path of the tram.
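If that's right, the policy-side check that failed is conceptually a simple yield gate. A toy sketch of the shape of such a rule (emphatically not Tesla's code; every threshold and input here is invented):

```python
# Toy yield gate for an unprotected left turn on a solid green.
# Inputs are assumed to come from perception; all thresholds are made up.
def time_gap(distance_m, speed_mps):
    """Seconds until an oncoming vehicle reaches the conflict point."""
    if speed_mps <= 0:
        return float("inf")  # not approaching
    return distance_m / speed_mps

def may_turn_left(light, oncoming):
    """oncoming: list of (distance_m, speed_mps) for approaching traffic.
    On a solid green (no arrow), yield unless every gap comfortably
    exceeds the time the turn itself takes."""
    TURN_TIME_S, MARGIN_S = 4.0, 2.0
    if light == "green_arrow":
        return True   # protected turn: oncoming traffic has a red
    if light != "green":
        return False  # red/yellow: don't start the turn at all
    return all(time_gap(d, v) > TURN_TIME_S + MARGIN_S for d, v in oncoming)

# A tram 30 m away doing 10 m/s is a 3 s gap -- the car must yield.
print(may_turn_left("green", [(30.0, 10.0)]))  # -> False
```

Whether the failure was in a gate like this or in the speed/distance estimates feeding it, only Tesla could say.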
 
believe this is due to several hard-coded functions being swapped out for newly created neural nets, and older/redundant neural nets being deprecated or removed.

This behavior showed up in FSD v8 videos, so it's not new behavior.

but it is accurate to a first approximation.

It's not even close, and you're just showing that you don't understand.

I studied them in graduate school in the 1990s, along with actual human psychology, so I understand the basic principles

You very obviously don't. And that's okay as long as you don't try to pretend you do. BTW, college intro to psych doesn't mean you understand the inner functions of the brain whatsoever.

I confess I didn't watch the whole 17-minute video link

Right, so you didn't have the necessary information, but you still thought a defense was necessary? Why? Why run off with half of the information about the situation, or none of it, and try to explain it away? That lets people discredit everything you've said out of hand, because you've demonstrated that you're willing to just make up an excuse. That doesn't help your cause, and it only serves to spawn all these side conversations.
 
srs5694 said:
I studied them in graduate school in the 1990s, along with actual human psychology, so I understand the basic principles
You very obviously don't. And that's okay as long as you don't try to pretend you do. BTW, college intro to psych doesn't mean you understand the inner functions of the brain whatsoever.
Apparently you didn't understand what I wrote. I clearly specified graduate school, not college intro psych. I took graduate-level classes on neural nets, and I've done cognitive psychology and neuroscience post-doctoral research. I admit that my PhD is not in neural nets per se, but I understand them better than you think I do.

What are your qualifications in this area?
Right, so you didn't have the necessary information, but you still thought a defense was necessary? Why? Why run off with half of the information about the situation, or none of it, and try to explain it away? That lets people discredit everything you've said out of hand, because you've demonstrated that you're willing to just make up an excuse. That doesn't help your cause, and it only serves to spawn all these side conversations.
Defense of what? I was not defending the car's behavior; I was trying to explain the limitations of a neural network -- it needs a lot of training to get even the basics right. It seems to me that you're the one who's guilty of going off half-cocked with your attacks on me.

It seems to me that this is basically a glass-half-full vs. glass-half-empty situation. You're choosing to emphasize the differences between computer neural nets and animal brain structures, whereas I was emphasizing their similarities. Compared to, say, a LISP program, neural nets are quite similar to the way a human brain works. Compared to how even an ant's brain actually works, a computerized neural net isn't very brain-like at all.
 
Nah, didn't show up for me until 12.2.

Were you a v8-testing YouTuber? Because those videos _ABSOLUTELY_ showed this behavior, and it has appeared in every single version since then. Your not having experienced it doesn't mean it isn't happening at all. That's another issue FSD faces: inconsistency.

took graduate-level classes on neural nets

Kay.

but I understand them better than you think I do.

Apparently not, because you're using a discredited mental model from the 1950s.

glass-half-full vs. a glass-half-empty situation

This is clearly the problem. You think there's even a glass. FSD is a parlor trick; it's a proof of concept. It's not a "beta" product; it wouldn't even pass any other company's threshold for a product. It's a lie, and you fell for it, and you're attempting to make pessimism-vs.-optimism comparisons. That right there is why Elon keeps lying, to get back to the original question: because people like you come by and excuse it.
 
I would ask the opposite: does it really matter when the update comes out? It's not like the car shut off and doesn't run. Yet there are hundreds of posts and tweets asking "when is the next update, and why is it taking so long?" My wife's ICE car hasn't had an update in the 5 years since we bought it. Why are we so dependent on new releases?
Updates that fix bugs, completely understandable. Excited for bug fixes as well.
 
Any personal attacks will not be tolerated in this thread. Please be nice. It’s not that hard to be nice.
It’s a forum about electric cars, not a place for you to come and attack others (that you will never meet) to make yourself feel good.
Please read posts before you attack someone about it. (Friendly reminder that the original post here was about Elon and FSD timelines)
Thanks
 
Any personal attacks will not be tolerated in this thread. Please be nice. It’s not that hard to be nice.
It’s a forum about electric cars, not a place for you to come and attack others (that you will never meet) to make yourself feel good.
Please read posts before you attack someone about it. (Friendly reminder that the original post here was about Elon and FSD timelines)
Thanks
That's ironic in this thread
 
Had FSD on our used 2020 MX. Did NOT spend the $12k on our 2022 MX and don't miss it. See, that's the problem: if they don't get it working, then when existing owners buy new, they won't spend the $ on wishes.
I don't see the problem. It's a perfectly valid strategy to wait for a product to be released before deciding on a purchase. Not everyone has the risk-tolerance to fund development efforts.