Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla FSD Beta Twitter Drama from Elon Musk


Elon reacts to criticism from faithful FSD Beta tester @James Locke. We talk with James to hear his side of the story.




This is a clip from Tesla Motors Club Podcast #24. The full podcast video is available here:


Podcast networks:
Apple Podcasts: ‎‎Tesla Motors Club Podcast on Apple Podcasts
Spotify: Tesla Motors Club Podcast
Amazon Music: https://music.amazon.com/podcasts/38bacc87-f8b7-4f5c-aa64-2db865214942
TuneIn: Listen to Tesla Motors Club Podcast on TuneIn
RSS: https://feeds.buzzsprout.com/1950101.rss
 
Really… 🙄

FSD has a long way to go, and Tesla is not going to be the leader in autonomous driving development for much longer if they can’t get past some fundamental issues (like relying solely on camera data and the wholesale elimination of ultrasonic sensors, a decision I can’t fathom).

On a recent trip, I noticed that the onboard computer was treating ALL speed limit signs as applicable and adjusting the vehicle speed to match, including those intended only for trucks (55 mph, sometimes 45 mph, in a zone meant to be 65 mph for cars; same for “trailers and vehicles towing”). Add to that random phantom braking, multiple “cameras unusable” messages on dark roads, and a “freak out” going over a rise in the road causing a panicked “take over steering manually NOW!” warning, complete with a loud screaming alert tone and a kick out of Autopilot for the remainder of the drive.

Simply put, Tesla is overpromising and underdelivering. I don’t expect Autopilot to be foolproof 100% of the time, but on a $50,000+ vehicle touted as cutting edge, it was disappointing. On the upside, I have had VERY good results with Autopilot on another often-driven route, so there’s promise. It sure as heck hasn’t impressed me to the point where I have much interest in running out to pay $15k for more, though, particularly given Elon’s well-documented pattern of overpromising (*cough cough* Cybertruck anyone? *cough cough*).

I predict Mercedes is going to end up eating Tesla’s lunch when it comes to a viable FSD implementation. They just seem to have a more focused approach and are getting results; their recent updates are very encouraging.

Still love my Tesla, but results are what matters.
 
@nebusoft did caveat that he didn't have any inside knowledge of how Tesla works, but I was surprised by the comment suggesting that Tesla addressed the left turn "edge case" with a glorified if statement to work around deficiencies of the neural networks. Tesla has presented at CVPR and AI Day how the Occupancy network allows FSD Beta to reason about occlusions, and sure enough, this was part of FSD Beta 10.69 release in addressing Chuck's turn.

So indeed, the existing neural networks were insufficient, and the Tesla engineers/architects correctly wanted to solve multiple hard problems by introducing a new neural network -- a much more complicated but robust fix compared to a special case condition.
 
I don't agree with Mike (on the right) that he paid to use FSDb, which is a limited-release closed beta. Yes, having the FSD package (either from lump sum or subscription) is a prerequisite, but getting into the closed beta was never guaranteed for any FSD owners/renters, regardless of how much they paid.

I also disagree with his assessments around how beta programs should work. Yes, people in the software development world have a certain expectation about the conventional ways betas work, but Tesla clearly is looking for a very particular type of feedback, and it's the type that doesn't require the participants to say/write anything. It's even more clear now that the report/snapshot button has been removed. The feedback is whatever telemetry they want to gather from your drives. If you feel the beta is a waste of time or inefficiently run, then drop out and wait for the GA release.

James' analogy comparing right turns with a house foundation also seems flawed to me. The car doing a left turn does not depend on being able to do a right turn, so the right turn ability is not foundational at all. Since the unprotected left is way more dangerous in general, it makes sense to put extra attention on that capability.

Overall, I found that clip to be a fairly unintelligent discussion with an unimportant guest. Didn't watch the whole video.
 
Really… 🙄

FSD has a long way to go, and Tesla is not going to be the leader in autonomous driving development for much longer if they can’t get past some fundamental issues (like relying solely on camera data and the wholesale elimination of ultrasonic sensors, a decision I can’t fathom).

On a recent trip, I noticed that the onboard computer was treating ALL speed limit signs as applicable and adjusting the vehicle speed to match, including those intended only for trucks (55 mph, sometimes 45 mph, in a zone meant to be 65 mph for cars; same for “trailers and vehicles towing”). Add to that random phantom braking, multiple “cameras unusable” messages on dark roads, and a “freak out” going over a rise in the road causing a panicked “take over steering manually NOW!” warning, complete with a loud screaming alert tone and a kick out of Autopilot for the remainder of the drive.

Simply put, Tesla is overpromising and underdelivering.
I've been saying the part in red for at least a year, probably longer. The rebuttals usually come from the fanboys who praise Elon for anything, no matter what, and excuse him for everything simply because of his SpaceX success and because, FSD aside, the rest of the Tesla product is pretty great. And those things are great, but I don't view them as excuses for the constant, and what appears to be intentional, deceptive marketing of the product. The constant overpromising (1 million robotaxis, Level 5 FSD by end of 2021, no hands on wheel needed by end of 2022, FSD complete by 12/31/2022, etc.) is not accidental. The CEO is very hands-on and highly intelligent. A visionary. He knows EXACTLY what the real-world capabilities of his FSD product are, and what it's not capable of doing anytime soon. We know it and see it... so to think he doesn't is absurd.

(moderator note: I rarely come to Elon’s defense regarding his FSD claims; however, the idea that “He knows…what it's (FSD) not capable of doing anytime soon” overstates his abilities. None of us have predictive powers.)
 
Last edited by a moderator:
James' analogy comparing right turns with a house foundation also seems flawed to me. The car doing a left turn does not depend on being able to do a right turn, so the right turn ability is not foundational at all. Since the unprotected left is way more dangerous in general, it makes sense to put extra attention on that capability.
I’m curious why you don’t see the relationship. To make a right turn you need to plan your path to the destination lane then evaluate the traffic coming from the left, estimating whether you have the time/space to make the turn.

To make a left turn you have to do the same, only double because you have traffic from 2 directions.

For the complex left turn you have to evaluate the traffic from the left, plan your turn trajectory and plan a stopping point.

The ‘skills’ required for a left turn automatically include those needed for a right turn, and right turns are universally considered a more basic skill, so James’ analogy and criticism are right on point.
 
I’m curious why you don’t see the relationship. To make a right turn you need to plan your path to the destination lane then evaluate the traffic coming from the left, estimating whether you have the time/space to make the turn.

To make a left turn you have to do the same, only double because you have traffic from 2 directions.

For the complex left turn you have to evaluate the traffic from the left, plan your turn trajectory and plan a stopping point.

The ‘skills’ required for a left turn automatically include those needed for a right turn, and right turns are universally considered a more basic skill, so James’ analogy and criticism are right on point.

So training Chuck's left automatically means solving for the right turn by your logic, correct? And given right turns, particularly at a T-intersection, haven't improved (based on my observations) since Chuck's turn has been optimized, then there's something going on outside of our visibility or understanding that makes right turns unique from lefts.

With ML via NN, I don't see that hierarchical structure of a foundational base leading to more complex abilities. The capabilities are achieved independently of each other because it's not based on heuristics. Doing right turn A correctly does not imply the car can do right turn B, even though we think the rules and scenarios are practically identical.

From the human perspective, the 2nd half of Chuck's left is the mirror image of a right turn at a T-intersection. And as simple as it is for our brains to realize this connection, I suspect for the car, it's two completely different things. Optimizing a right turn likely won't help this mirror image situation, which is why we don't automatically see a benefit in right turn behavior post-Chuck's turn.
 
So training Chuck's left automatically means solving for the right turn by your logic, correct? And given right turns, particularly at a T-intersection, haven't improved (based on my observations) since Chuck's turn has been optimized, then there's something going on outside of our visibility or understanding that makes right turns unique from lefts.

With ML via NN, I don't see that hierarchical structure of a foundational base leading to more complex abilities. The capabilities are achieved independently of each other because it's not based on heuristics. Doing right turn A correctly does not imply the car can do right turn B, even though we think the rules and scenarios are practically identical.

From the human perspective, the 2nd half of Chuck's left is the mirror image of a right turn at a T-intersection. And as simple as it is for our brains to realize this connection, I suspect for the car, it's two completely different things. Optimizing a right turn likely won't help this mirror image situation, which is why we don't automatically see a benefit in right turn behavior post-Chuck's turn.
That's a good point and you may be right. I would still argue that making a right turn is more important than Chuck's complex left, although in the end FSD needs to be able to do both so if they truly are independent of each other then the order doesn't actually matter (except to our perception).
 
That clip is a good representation of Tesla's challenge. It's all customer service based.

The podcasters have no concept of the technology (nor should they) and come at it with a lot of assumptions, leaps of false logic, and frustration as a result.

Elon's tweet is essentially correct. They are using a beta program. If you're not willing to accept that it's a beta system and not fully finished, don't sign up for it. You go into it knowing you are helping expose holes and improve it.

James didn't even tweet @elon or @tesla. (I'm not on Twitter, but is just a #FSDbeta in his tweet genuinely giving Tesla feedback on the beta?)
To me that tweet from James looks like stirring the pot.

The last guy, Mike, complains that he has the right to complain because he paid "good money".
But just before that, he said, "the buzz, I waited, I wanted to get the beta".
Up front, he knew he had to pay to use the beta. That's the deal. Why complain about the agreement afterwards?

Tesla maybe should be more communicative with beta users, partly because it may help develop the system, but more so to stop rampant misunderstanding of what's going on. Being opaque is probably counterproductive.
However, there may be many layers of legal, technological, and customer interface hurdles in the way of that.



 
@nebusoft did caveat that he didn't have any inside knowledge of how Tesla works, but I was surprised by the comment suggesting that Tesla addressed the left turn "edge case" with a glorified if statement to work around deficiencies of the neural networks. Tesla has presented at CVPR and AI Day how the Occupancy network allows FSD Beta to reason about occlusions, and sure enough, this was part of FSD Beta 10.69 release in addressing Chuck's turn.

So indeed, the existing neural networks were insufficient, and the Tesla engineers/architects correctly wanted to solve multiple hard problems by introducing a new neural network -- a much more complicated but robust fix compared to a special case condition.
Sorry I took so long to respond. To clarify my statement and why I made it: as I said, I make no claim to inside knowledge of Tesla, as they have strong opinions about sharing data outside the company (NDAs, legal drama, etc.). But I do have friends/colleagues who work in this space for other companies, and this seems to be the common thread in how the industry as a whole is solving self-driving. Neural nets don't just plug together like Lego bricks; there is a bunch of glue code tying the various parts of the system together. The notion that one super model does everything is a common misconception. When lots of models are layered together, there is a common pattern of handling one-off exceptions as special circumstances. For example, one of the Tesla engineers in the Q&A at AI Day 2 (at the very end) briefly mentioned a model specifically for navigating parking lots being released. If your car detects you're in a parking lot, it will likely use code specific to that environment rather than the general decision-making it would use on a normal street.

While I do think Tesla has done a ton of great work in improving FSD beta (I'm a satisfied customer and always impressed with the tech), there are very likely parts of the system where they go: "oh if you're in this mode, use this code over here instead." They unified/centralized parts of their perception network (such as occupancy like you mentioned), but that's only part of the picture. I think over time Tesla will continue to improve here, but even other companies such as Waymo, Cruise, etc... the idea that there is a magical neural net that does everything is very far fetched. It's lots of different models and pieces glued together. In some domains, as you try to broaden your neural net to more and more edge cases, it can cause your models to perform better in some cases and worse in others. It sometimes is easier to split off special handling modes of operation to their own pieces and simply toggle/flag those modes of operation.
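The "toggle a special-handling mode" pattern described above can be sketched roughly as follows. This is a purely illustrative Python sketch; every class and name in it is invented for the example and is not anything from Tesla's (or any company's) actual stack:

```python
# Illustrative "glue code" dispatching between specialized models.
# All names (ParkingLotPlanner, StreetPlanner, DrivingStack) are hypothetical.

class ParkingLotPlanner:
    """Stand-in for a model trained specifically on parking-lot scenes."""
    def plan(self, scene):
        return "low-speed maneuvering plan"

class StreetPlanner:
    """Stand-in for the general street-driving model."""
    def plan(self, scene):
        return "general driving plan"

class DrivingStack:
    """Glue layer: routes the current scene to a specialized model."""

    def __init__(self):
        self.parking = ParkingLotPlanner()
        self.street = StreetPlanner()

    def plan(self, scene):
        # One-off special case: if perception flags a parking lot,
        # use the dedicated model instead of the general one.
        if scene.get("environment") == "parking_lot":
            return self.parking.plan(scene)
        return self.street.plan(scene)

stack = DrivingStack()
print(stack.plan({"environment": "parking_lot"}))  # low-speed maneuvering plan
print(stack.plan({"environment": "street"}))       # general driving plan
```

The point of the sketch is just the dispatch: the "stack" is a glue layer routing between specialized pieces, not one monolithic network.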

You're right though that I don't know which part of the networks were having trouble with Chuck's left turns. It's possible it was purely in the occupancy network and it was just a simple update of having more/better labeled data. Which is why I gave the qualifier :) I'll try to be more careful in the future when being critical of their approaches when they haven't given me concrete examples of what they're doing, as speculation can certainly lead folks astray :)
 
It's probably going down a bit of a rabbit hole, but the glued-pieces idea is almost a certainty as FSDb gets bigger and more complicated. I assume high-priority things like pedestrian/bicycle/collision processing are always running, but there's only so much hardware core space, so lower-priority NNs and code load and execute the way the Navy uses hot racks. Then it takes time for the data pipeline to fill, the scene to be processed, and the results to reach statistical confidence, and the next thing you know a few seconds have passed. Enabling it all to happen might include crutches like stopping short of an intersection and creeping forward for extra processing time. If not enough time is available, one could experience general indecisiveness or bugs like blowing through intersection stop signs or not seeing traffic/signs/etc.
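The "creep forward to buy processing time" idea reads like a simple confidence-accumulation loop. Here is a toy sketch; the function, threshold, and scores are all made up for illustration, and a real planner would be vastly more involved:

```python
# Toy sketch: accumulate per-frame confidence before committing to a turn.
# Nothing here reflects Tesla's real code; it's only the control-flow idea.

def attempt_turn(frames, confidence_threshold=0.9):
    """Average per-frame confidence until it's high enough to commit.

    frames: per-frame "intersection is clear" scores in [0, 1].
    Returns ("go", n) after n frames once the running average clears the
    threshold, or ("creep", n) if it never does (keep creeping for a
    better view, then look again).
    """
    total = 0.0
    for n, score in enumerate(frames, start=1):
        total += score
        if total / n >= confidence_threshold:
            return ("go", n)   # enough agreement across frames: commit
    return ("creep", n)        # still unsure: creep forward, re-evaluate

# A clear intersection converges immediately...
print(attempt_turn([0.95, 0.97, 0.96]))  # ('go', 1)
# ...while noisy scores never clear the bar, so the car keeps creeping.
print(attempt_turn([0.5, 0.6, 0.55]))
```

The design point is just that committing waits until averaged evidence clears a bar, and creeping is what the planner does while it waits; if the bar is never cleared in time, you get exactly the indecisiveness described above.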
 
one of the Tesla engineers in the Q&A at AI Day 2 (at the very end) briefly mentioned about a model specifically for navigating parking lots being released
Are you referring to this part of the Q&A?


That seems to be describing a single stack that is capable of driving city streets, highways and parking lots as opposed to separate stacks that have special behaviors for each situation?
 
That is the part I was referring to. I suppose it depends on how you interpret the word "stack" in this context. The parking lot stack will be added to the "FSD Stack," which makes it sound like the stack is one thing, but the "stack" is many things layered and glued together, and not all parts of it are necessarily used in all scenarios. If it were just one model, a parking lot stack wouldn't be "added" to FSD; FSD would just be trained better for parking lots by adding more labels and data to the existing models. To me it sounds like there are special models/components used for parking lots which they are embedding into what they call the "FSD Stack" (a collection of all the various pieces). I expect there are other scenarios like this, but again, without looking at the code myself I can't confirm that :) I've both built my own ML systems and worked on many others (consulting for dozens of companies building ML applications/systems for various industries), and while we often referred to an application or system as "one thing," there were many subsystems and components specialized for particular scenarios. But again, the big caveat: Tesla may be doing it differently than what I've seen in the past.
 
I’m curious why you don’t see the relationship. To make a right turn you need to plan your path to the destination lane then evaluate the traffic coming from the left, estimating whether you have the time/space to make the turn.

To make a left turn you have to do the same, only double because you have traffic from 2 directions.

For the complex left turn you have to evaluate the traffic from the left, plan your turn trajectory and plan a stopping point.

The ‘skills’ requires for a left turn automatically include those needed for a right turn, and right turns are universally considered to be a more basic skill so James’ analogy and criticism is right on point.
I think what they were getting at is that right turns are not smooth. The car turns, maybe jerks back and forth a few times, then completes the right turn, which I can easily do with my right finger without thinking.
 
That clip is a good representation of Tesla's challenge. It's all customer service based.

The podcasters have no concept of the technology (nor should they) and come at it with a lot of assumptions, leaps of false logic, and frustration as a result.

Elon's tweet is essentially correct. They are using a beta program. If you're not willing to accept that it's a beta system and not fully finished, don't sign up for it. You go into it knowing you are helping expose holes and improve it.

James didn't even tweet @elon or @tesla. (I'm not on Twitter, but is just a #FSDbeta in his tweet genuinely giving Tesla feedback on the beta?)
To me that tweet from James looks like stirring the pot.

The last guy, Mike, complains that he has the right to complain because he paid "good money".
But just before that, he said, "the buzz, I waited, I wanted to get the beta".
Up front, he knew he had to pay to use the beta. That's the deal. Why complain about the agreement afterwards?

Tesla maybe should be more communicative with beta users, partly because it may help develop the system, but more so to stop rampant misunderstanding of what's going on. Being opaque is probably counterproductive.
However, there may be many layers of legal, technological, and customer interface hurdles in the way of that.
I am 76 and not a math major or a tech expert. We purchased our 2020 Tesla Model 3 in good faith and paid extra for FSD. This was early on in its development. We were in the beta program a while back, but we weren't driving enough to get decent scores, got frustrated, and dropped out. Now we all have access to FSD. I am willing to try it out, but I'm a bit apprehensive after reading and listening to the discussions. I am totally into safety; I do not see the point of trying to "cheat" the system, and I keep my hands on the wheel. I guess I am frustrated with it all, but I will persist. I wish there were not so much blaming and attacking when folks discuss their experiences. I come to this forum to hear about others' experiences and perhaps to get some tips and suggestions.
Stay safe, everyone.