
The catastrophe of FSD and erosion of trust in Tesla

...that there was a definition, or that the industry has been working on this for quite some time.

Yes, and as you were already corrected on- that standard is explicitly not for testing L2 systems

So it's weird you keep bringing something up that tells you in the description it's the wrong tool for this job.



I can claim I know vastly more than you, which has been my claim all along, and the evidence has borne it out.

It really has not.



No, they bashed out the NN long ago and have been adjusting parameters ever since. Now they're tuning weights and biases and handing it over to us testers.

Not only is this grossly false, you need look no further than Autonomy Day and then AI Day to know it- where they explained the multiple, fundamental, architectural changes they've made to the entire system.

Versus your nonsensical claim they just wrote "the NN" (like you don't understand there's more than one) and have just been tweaking weights for the last several years.
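
To spell out the difference for anyone following along, here's a toy PyTorch sketch (mine, obviously- nothing to do with Tesla's actual code). "Tuning weights and biases" is just more training steps on the same network; an architectural change means a different network entirely, where the old weights don't even fit:

```python
import torch
import torch.nn as nn

# "Tuning weights and biases": the SAME architecture, just more training steps.
old_model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(old_model.parameters(), lr=1e-3)
loss = old_model(torch.randn(8, 64)).pow(2).mean()  # dummy loss for illustration
loss.backward()
optimizer.step()  # only the numbers inside the existing layers change

# An ARCHITECTURAL change: a different network entirely. The old weights
# don't even fit; you redesign, retrain, and re-validate from scratch.
new_model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)
```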


You clearly have no idea how any of this works but keep talking like you're sure you do.






So, we can either look at this like the J3018 document...

Yes- you really need to- since you clearly have no idea what it actually says or what it's actually for.



Not only can I say that about them, but their disengagements are measured in the hundreds of thousands and millions of miles...

MEANWHILE BACK IN REALITY...



AutoX managed 50,100 miles per disengagement for the best performance of any company reporting....and mind you, they only had 50,100 total miles of testing, so you have to take that with a grain of salt since it's a single disengagement. Still missing a 0 or two from your nonsense claims.

Cruise achieved 41,700 miles per disengagement and was 2nd best....and unlike AutoX actually had a legit amount of testing miles- over 20x their disengagement interval.

Waymo only managed 8,000 miles per disengagement- but also had the most, and hardest, testing miles.... which again seems to be a problem for your claims.


In case you're not big on math, that's all a fair bit less than the "hundreds of thousands and millions of miles" between disengagements you said they were doing.
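
If you want to check that math yourself, it's simple division over the numbers above (only AutoX's raw totals are listed; the Cruise and Waymo figures are the reported ratios themselves):

```python
# Miles per disengagement = total autonomous test miles / disengagements,
# using the CA DMV figures as cited above.
reported = {
    "AutoX": 50_100 / 1,  # 50,100 miles, a single disengagement
    "Cruise": 41_700,     # reported ratio
    "Waymo": 8_000,       # reported ratio
}
claimed_floor = 100_000  # the LOW end of "hundreds of thousands ... of miles"

for fleet, mpd in reported.items():
    print(f"{fleet}: {mpd:,.0f} mi/disengagement -> "
          f"{claimed_floor / mpd:.1f}x short of even {claimed_floor:,} mi")
```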



Why ruin your argument by making wrong points based on lack of knowledge?

#projection
 
Not only can I say that about them, but their disengagements are measured in the hundreds of thousands and millions of miles, where Tesla is measured in singles of miles. The fact you don't realize this is a problem. So, we can either look at this like the J3018 document and perhaps accept that you're coming from a position of lack of knowledge and accept that you should go look this up before attempting to debate me, or I can yet again show you're wrong and we no longer have a productive conversation. Why ruin your argument by making wrong points based on lack of knowledge?
I'll skip the ad hominem attacks, as they are childish. My point was always that you have made unsubstantiated claims about Tesla, and that remains, despite your smoke-screen attempts. Your arguments boil down to "these assertions are true because I'm clever" (argument from authority), which is at best weak and at worst disingenuous.

Clearly there is nothing to be gained from continuing this argument. It will be interesting to see which of us is correct over the next 12-18 months.
 
Why don't I clear this up by saying that none of us know what's going on inside the Tesla. No one. No amount of "I've got years of experience in x, y and z", or "I work in this industry" or "I have a PhD in [insert relevant school of thought here]" can give anyone certainty about how these systems operate.

The only people qualified to talk about these issues are those who work at Tesla, directly with the FSD department. Everyone else is just guessing - albeit educated guesses from some people, but guesses nonetheless.

That being said - is there anyone on TMC who works in the FSD department at Tesla and can chime in to help us understand how the system works, why there are struggles with basics such as phantom braking, and how the NNs are designed and how they learn? Otherwise, it's just more sophistry and logical fallacies, backed up with assumptions. :)
 
Why don't I clear this up by saying that none of us know what's going on inside the Tesla. No one. No amount of "I've got years of experience in x, y and z", or "I work in this industry" or "I have a PhD in [insert relevant school of thought here]" can give anyone certainty about how these systems operate.


The other problem is most of those insisting they have X YEARS OF SOFTWARE KNOWLEDGE are talking about traditional coding, which has almost no relevance at all to what Tesla is trying to do with NNs, AI, and machine learning.


It's the same reason you had folks like Bob Lutz, with his DECADES OF CAR BUILDING EXPERIENCE, constantly being 100% wrong about Tesla as a car company.

They were doing things he fundamentally did not understand, but he kept talking like he did because he assumed his specific knowledge applied to anything with 4 wheels.
 
  • Disagree
Reactions: 2101Guy and cwerdna
My point was always that you have made unsubstantiated claims about Tesla,

Except I've substantiated my claims, and I even gave you something to go look up to demonstrate what I was talking about.

despite your smoke-screen attempts.

Smoke screening with facts? That's not smoke screening. But I see you trying to move those goalposts again.

Your arguments boil down to "these assertions are true because I'm clever"

Literally no. My argument boils down to "very smart experts have worked on this for decades, and Tesla, by ignoring decades of safety research, has been unsafe in their rollout of their low-quality software to public roads". In no way am I taking credit for that work, because I'm one of those kooks that defers to actual experts.

Why don't I clear this up by saying that none of us know what's going on inside the Tesla. No one.

Because you haven't cleared anything up, and you've ignored the fact that Tesla insiders have shared what's going on internally.
 
Except I've substantiated my claims

You've done exactly the opposite.

You cited an SAE doc that does not apply to L2 systems but you didn't bother to read it to know that- then lambasted others for not reading it.

You claimed orders of magnitude better disengagement rates from autonomous car companies than their actual rates

You claimed Tesla is using the same NN (singular) they wrote years ago and has just been tweaking the weights- something laughably wrong, as demonstrated by tons of sources.


In short, not a single claim you've made appears remotely connected to reality.



I even gave you something to go look up to demonstrate what I was talking about.

And when anyone did- they discovered you were wrong. Again.
 
I think that this dramatically oversimplifies the issue. I understood when I purchased FSD in 2018 that it was something that wouldn't be "feature complete" for some time, that things would evolve, and that "beta" testing (not a beta test) would occur. I also haven't ever believed it would be L5 - L4 at best, but likely L3 by design. However, I didn't think it would be "generally unreliable and ever evolving" forever! I thought we would eventually get a workable FSD product that could, under ideal conditions, drive you from your driveway to work, drop you off at your front door, and then go park itself. And I didn't think this because I was stupid, naive, didn't understand the "software development process", or any of that mess that's frequently thrown at critics of FSD progress. I thought this because of the messaging I was getting from Tesla's website and Tesla's CEO, both before and after my purchase. Remember these:
  • Nvidia Conference 2015 - "It's not something I think is very difficult. To do autonomous driving that is to a degree much safer than a person, is much easier than people think... I almost view it like a solved problem," estimating "complete autonomy" by 2018.
  • October 2016 - "all new vehicles come with the necessary sensors and computing hardware for future full self-driving"
  • October 2016 - Elon says "by the end of 2017, a Tesla will be able to drive from New York City to Los Angeles without the driver having to do anything."
  • February 2018 - "The upcoming autonomous coast-to-coast drive will showcase a major leap forward for our self-driving technology"
  • February 2019 - Elon says Tesla's Full Self Driving capability will be "feature complete" by the end of 2019: "Meaning the car will be able to find you in a parking lot, pick you up and take you all the way to your destination without an intervention. This year. I would say I am certain of that, that is not a question mark." Sometime after that, Tesla's order page for "Full Self-Driving Capability" stated "Coming later this year".
  • Autonomy Day 2019 - Musk says Full Level 5 Autonomy by end of 2019 and estimated that by the middle of 2020, Tesla’s autonomous system will have improved to the point where drivers will not have to pay attention to the road. “We will have more than one million robotaxis on the road,” Musk said. “A year from now, we’ll have over a million cars with full self-driving, software... everything... These cars will be Level 5 autonomy with no geofence, which is a fancy way of saying they will be capable of driving themselves anywhere on the planet, under all possible conditions, with no limitations."
  • etc., etc., etc.
I think there is definitely at least one more camp of people who understood the software would initially be "beta" but would eventually culminate in delivery of an at least L3 system in some reasonable timeframe. That was the expectation set by Tesla and Elon, and that is what we haven't seen.
Excellent summary of the last few years. Thank you for posting that.
 
  • Like
Reactions: DocRob
Being an engineer and having been way over enthusiastically out over my skis on many occasions, I'm willing to forgive a lot of Elon's hype.

Being an engineer and having been way over enthusiastically out over my skis on many occasions, I never once considered FSD on any of my or my wife's S purchases.

Elon sells it because he truly wants it to work and people buy it because they truly want it to work. I may buy it when it works. Seems simple to me and I only have myself to blame if I am disappointed.
Nice to see you Lola. Used to discuss at the erstwhile Tesla forums. 🙂
 
FSD has become a catastrophe, and a dangerous one at that. While most of us wait in the dark, Elon tweets about 10.12 being a game changer, yet 90% of the beta fleet has not received 10.11 and remains two upgrades behind, with zero indication of why... could it be that none of those problems, especially phantom braking, are fixed in 10.11 or 10.11.1? Highly likely. I am on the verge of filing an NTSB complaint if Tesla does not come clean here, and I never thought I would see such a day. I have always patiently supported the product, but enough is enough, and the most recent Elon tweets have pushed me nearer the edge.

I could go on about the dangers and regressions we have seen with the beta, 10.10.2 in particular, but they are well documented in this forum and elsewhere. So if you are driving yourself nuts with safety scores thinking you want the beta, be advised that the reason it is not coming is that the existing releases are not just bad, they are catastrophically bad, and dangerous as hell.

In the midst of this, Elon tweets about possibly starting a new social media platform due to Twitter "echo chamber" misinformation, but how are his comments that ignore the obvious elephant in the room any different? The Emperor has no clothes.

I hope I am proven wrong in the weeks to come, but my trust is quickly eroding.
The fact that to get FSD Beta you need a 90+ safety score is indicative that the Tesla AI wants to be trained by the safest drivers.
 
  • Funny
Reactions: cwerdna and KArnold
Gonna go out on a limb and suggest none of your software experience is with neural networks or machine learning based on that remark.






To be clear, the product sold for $12k only promises 7 specific things.

6 of them you get immediately upon paying for FSD.

Only 1 of them, autosteer on city streets, an explicitly L2 feature, remains undelivered... (and is the thing being tested in the FSD Beta program currently)

Even with NNs you can have non-ML unit & feature tests to confirm performance. E.g. you don't just have to trust the ML test data set; that can be followed up with more traditional tests.

Additionally there are a bunch of regressions that don’t seem to be ML based. Spotify not loading. Heaters not turning on. Poor contrasts choices in some of the UI. Laggy backup camera launches. Hangs. …

I get that some of the problem is "hard", and some is hard to test, but it's pretty clear it would be possible to do a better job of nailing down the reliability of existing features/metrics with a good testing methodology.
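
A minimal sketch of what I mean (hypothetical names, pytest-style, with a trivial stand-in for the real NN planner): fixed, hand-built scenarios with hard pass/fail assertions, completely independent of any ML data set:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Scenario:
    ego_speed_mps: float
    lead_distance_m: Optional[float] = None  # None = empty road ahead
    lead_speed_mps: float = 0.0

@dataclass
class PlannerOutput:
    fcw_active: bool      # forward collision warning
    brake_command: float  # 0.0 (none) .. 1.0 (full)

def planner_step(s: Scenario) -> PlannerOutput:
    """Stand-in for the real (NN-based) planner under test."""
    if s.lead_distance_m is None:
        return PlannerOutput(fcw_active=False, brake_command=0.0)
    closing = s.ego_speed_mps - s.lead_speed_mps
    ttc = s.lead_distance_m / closing if closing > 0 else float("inf")
    danger = ttc < 2.0  # warn under 2 s time-to-collision
    return PlannerOutput(fcw_active=danger, brake_command=1.0 if danger else 0.0)

def test_fcw_fires_on_stopped_lead_car():
    out = planner_step(Scenario(ego_speed_mps=25, lead_distance_m=30))
    assert out.fcw_active  # time-to-collision is only 1.2 s here

def test_no_phantom_braking_on_empty_road():
    out = planner_step(Scenario(ego_speed_mps=30))
    assert out.brake_command == 0.0  # must not brake with nothing ahead
```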

It's one of the things that disturbs me about Tesla. Here's an area that's obviously a problem and could be improved on, and yet not much seems to happen. I know... they sell every car they can make. And even with these problems I would still buy the car today.
 
  • Like
Reactions: B@ndit and BrerBear
Even with NNs you can have non-ML unit & feature tests to confirm performance. E.g. you don't just have to trust the ML test data set; that can be followed up with more traditional tests.

And they do test before releasing to any "regular" people. But the sheer number of possible inputs to the NNs makes it impossible to fully test this way. That limit is why they're doing a beta with tens of thousands of drivers instead of a tiny test team.


Additionally there are a bunch of regressions that don’t seem to be ML based. Spotify not loading. Heaters not turning on. Poor contrasts choices in some of the UI. Laggy backup camera launches. Hangs. …

Everything you listed is run on the media computer, not the FSD computer. Entirely different computers, sets of code, and teams.... (though the FSD computer DOES feed some data TO the media computer, so you can sort of partly blame them for the backup camera one at least)

If you wanna argue the infotainment/UI team is pretty terrible, they absolutely are

But they have not much of anything to do with FSD behavior.


It's one of the things that disturbs me about Tesla. Here's an area that's obviously a problem and could be improved on, and yet not much seems to happen. I know... they sell every car they can make. And even with these problems I would still buy the car today.

Yup.

There's no shortage of things they could do better- honestly I think I'd put communication (both internal and external- they're terrible at both) at the top. But I'd still buy another Tesla if I needed a new car.
 
So when it comes to AP and FSD you always hear things like, "in this update the phantom braking seems a little better", and then a few weeks later in another update, "phantom braking is back and now it is worse than ever", then "the AP acceleration when traffic starts to move is too slow now, creating too big a gap and people getting annoyed", then "the car brakes too late and too hard when it sees traffic ahead, makes me nervous".

I am not dogging on Tesla, but my concern is: how can you realistically build an AI system that is going to drive just the way everyone likes? Most people have a particular style of driving that is more comfortable to them. As humans we can anticipate situations better. For example, while FSD may have access to all these cameras and a 360-degree view, it doesn't know that in order to make a left at the light ahead we need to start getting over two lanes to the left much sooner than the FSD system would. I mean, I can see that there is a bus ahead that is going to stop to pick up some people, so I anticipate and get around the bus much sooner- will FSD do such things? That is where I struggle with this system. Until it actually drives as well as or better than I do, it is just more work for me to babysit. Maybe it will meet my expectations at some point in my lifetime, but I am skeptical.

When it comes to the car itself though, big fan. Very pleased with the ownership the past 3 years.
 
  • Like
Reactions: Ramphex and DocRob
Tesla / Elon can learn from companies that offer a satisfaction guarantee, like Costco.

I like the subscription model for FSD for this reason. Someone (at Tesla) will be able to data-mine attach & attrition rates, etc. It should be pretty clear whether people are valuing the feature or not, in a much shorter timeframe than bundling it with the car sale.
 
  • Like
Reactions: Terminator857
And they do test before releasing to any "regular" people. But the sheer number of possible inputs to the NNs makes it impossible to fully test this way. That limit is why they're doing a beta with tens of thousands of drivers instead of a tiny test team.




Everything you listed is run on the media computer, not the FSD computer. Entirely different computers, sets of code, and teams.... (though the FSD computer DOES feed some data TO the media computer, so you can sort of partly blame them for the backup camera one at least)

If you wanna argue the infotainment/UI team is pretty terrible, they absolutely are

But they have not much of anything to do with FSD behavior.




Yup.

There's no shortage of things they could do better- honestly I think I'd put communication (both internal and external- they're terrible at both) at the top. But I'd still buy another Tesla if I needed a new car.

I purposely stayed away from FSD there because I think it points to a more obvious (and possibly systemic) problem.

For FSD they could have "traditional" tests that run in a simulation and determine if forward collision warnings work, could score "jerkiness", and could make sure turns are smooth. Any number of performance metrics could be invented, scored, and tested for in a serious number of conditions on a larger cluster, assuming there's a good simulation setup. (Turning into the proper lane. Stopping/starting at the right time/speed. Stopping in the right spot. Turning at an appropriate speed. ... add in edge cases: stopped cars, emergency vehicles, obstructed roadway, ...)
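
A toy version of one such invented metric (all numbers and thresholds here are mine, not Tesla's pipeline): compute peak jerk, the rate of change of acceleration, from a simulated speed trace and pass/fail it against a comfort bound:

```python
def peak_jerk(speeds_mps, dt_s):
    """Peak absolute jerk (m/s^3) from a speed trace sampled every dt_s seconds."""
    accel = [(b - a) / dt_s for a, b in zip(speeds_mps, speeds_mps[1:])]
    jerk = [(b - a) / dt_s for a, b in zip(accel, accel[1:])]
    return max(abs(j) for j in jerk)

COMFORT_LIMIT = 2.0  # m/s^3, an assumed comfort bound

# Two simulated stops: smooth constant braking vs. a late brake stab.
smooth_stop = [15.0, 13.5, 12.0, 10.5, 9.0, 7.5, 6.0, 4.5, 3.0, 1.5, 0.0]
late_stab = [15.0, 15.0, 15.0, 15.0, 15.0, 14.0, 11.0, 7.0, 3.0, 0.5, 0.0]

for name, trace in [("smooth stop", smooth_stop), ("late brake stab", late_stab)]:
    j = peak_jerk(trace, dt_s=0.5)
    verdict = "PASS" if j <= COMFORT_LIMIT else "FAIL"
    print(f"{name}: peak jerk {j:.1f} m/s^3 -> {verdict}")
```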

The non-FSD regressions show they don't really do this, at least for the non-FSD parts. Since that's an easier problem, I find it doubtful that they do much on the harder FSD problem either.
 
I am starting to believe the entire "beta" is a bit of a sham, bordering on totally contrived. I've had FSD and a safety score of 99 to 98 for weeks. No beta. Today a car pulls in front of me, I'm further away than Autopilot would be, and the forward collision warning goes off. Now my score is 97. It's virtually impossible to maintain a perfect score, even while driving almost too conservatively. I now believe it's rigged so that only a few will get the beta, and the company dangles the beta to keep a firestorm at bay. Love the car, but the FSD "beta" is a farce, and I now believe the company is playing on our trust and our love of the unrealized but promoted and monetized potential.
 
So when it comes to AP and FSD you always hear things like, "in this update the phantom braking seems a little better", and then a few weeks later in another update, "phantom braking is back and now it is worse than ever", then "the AP acceleration when traffic starts to move is too slow now, creating too big a gap and people getting annoyed", then "the car brakes too late and too hard when it sees traffic ahead, makes me nervous".

I am not dogging on Tesla, but my concern is: how can you realistically build an AI system that is going to drive just the way everyone likes? Most people have a particular style of driving that is more comfortable to them. As humans we can anticipate situations better. For example, while FSD may have access to all these cameras and a 360-degree view, it doesn't know that in order to make a left at the light ahead we need to start getting over two lanes to the left much sooner than the FSD system would. I mean, I can see that there is a bus ahead that is going to stop to pick up some people, so I anticipate and get around the bus much sooner- will FSD do such things? That is where I struggle with this system. Until it actually drives as well as or better than I do, it is just more work for me to babysit. Maybe it will meet my expectations at some point in my lifetime, but I am skeptical.

When it comes to the car itself though, big fan. Very pleased with the ownership the past 3 years.
Here is where I think a lot of people go awry and get frustrated. I think people hear "AI" and want to believe the car is kind of like a person, and so they think it should learn and behave like a person would. The problem is it's not going to learn like a human would in the slightest lol. When it learns something new, that new skill might cause unexpected regressions in other skills for any number of reasons, giving it the appearance that it "forgot" how to do something basic, when the truth is far more complex than that.
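
A toy example of the mechanism (hypothetical, nothing to do with Tesla's actual stack): fine-tune a tiny network on a new "skill" and watch the old one regress, even though its training data never changed:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(256, 2)
old_y = x.sum(dim=1, keepdim=True)  # "old skill": y = x0 + x1
new_y = x[:, :1] - x[:, 1:]         # "new skill": y = x0 - x1

def train(target, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), target).backward()
        opt.step()

train(old_y)
print(f"old-skill error after learning it: {loss_fn(net(x), old_y).item():.4f}")
train(new_y)  # learn the new skill, with no rehearsal of the old one
print(f"old-skill error after the update:  {loss_fn(net(x), old_y).item():.4f}")
# The old skill regresses even though nobody touched its training data.
```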

The issue is then further compounded by the fact that they have to focus on training the core skills of the car and, to an extent, have to disregard things that "feel" wrong but technically worked. With something like this I'd imagine it would "feel" very rough right up until the end. It would only be in the final stages of development that they would have the ability to focus on things like comfort, so we'll see an FSD Beta that feels just kind of OK but technically works some of the time, with incremental improvements until they reach a kind of critical mass, and then it will get very good very quickly in terms of comfort (in theory lol).

It's like when I'm coding something and I'm not sure if what I want to do is possible: I first focus on proof of concept, that it CAN actually work, no matter how ugly that is. Only once it's working can I then go back and optimize it, clean it up, and make it pretty. By comparison the latter is vastly less complicated.
 
  • Like
Reactions: Ramphex