
The Major Problem With FSD That Tesla Won’t Acknowledge


TL;DR - The Apple team wanted the iPhone to be plastic because it would be dropped often and would break if made of glass. But plastic picks up micro-scratches over time. Steve Jobs said, “If it’s plastic and it gets scratched, it’s our fault. If it’s glass and it breaks, it’s their fault. They will accept that more.”

Now imagine where “fault” can mean someone dies.

Elon talks about how FSD is x times safer than a human driver. Even if you set aside all of the controversy about what actually constitutes those miles, that fundamentally doesn’t matter in the end. What matters is “will Tesla feel confident enough to take responsibility?” When you drive a car and get into a crash, whether you’re distracted or drunk or whatever, that’s your fault. If you die because of something wrong with a driverless car, that’s the car’s fault. It takes control and culpability away from the customer, which brings a whole different level of scrutiny.

For comparison, per mile traveled you have a far higher statistical chance of dying while walking on a sidewalk than while flying on a commercial airliner - yet it’s the plane crash that draws the headlines, precisely because passengers surrender control.

For years Tesla talked about vision as the core problem. Now it has openly admitted to moving decision making into neural nets as well. Eventually the talk will be about the March of 9s. But at the end of that road sits the question: “Is Tesla willing to take legal liability for a robotaxi?”

It’s hard to imagine someone surrendering their car to a robotaxi fleet even if a crash occurs only every 50,000 miles, let alone at the roughly 50 miles between interventions we see today. Imagine if the owner had to take legal liability for every crash. I wouldn’t sign my Model 3 up for that. So Tesla has to assume it.
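To put rough numbers on that gap (using only the figures from this thread, not official data): going from an intervention every ~50 miles to a crash every ~50,000 miles is a 1,000x reliability improvement, i.e. three more “nines” of per-mile reliability. A quick sketch:

```python
import math

def nines_of_reliability(miles_between_failures: float) -> float:
    """Per-mile success rate expressed as a count of 'nines'.
    E.g. one failure per 1,000 miles ~ 99.9% per mile -> 3 nines."""
    return math.log10(miles_between_failures)

today = 50        # rough miles between interventions cited in this thread
target = 50_000   # hypothetical robotaxi-grade miles between crashes

print(f"today:  {nines_of_reliability(today):.1f} nines")   # ~1.7
print(f"target: {nines_of_reliability(target):.1f} nines")  # ~4.7
print(f"required improvement: {target / today:.0f}x")       # 1000x
```

That is what the “March of 9s” means in practice: each additional nine is another 10x.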

The day Tesla has that discussion, I will know there’s a serious timetable we can put on FSD. Until then, it’s “at least 3 years away” in perpetuity.

Counter-argument: people die in Ubers every year. True, and I’m sure there are legal preparations for those scenarios, but fundamentally those crashes are still attributable to a human being at direct fault, someone to paint as the villain. With an AI machine owned by a polarizing CEO, the media will take a much different view.

Just throwing this out there because it’s not mentioned enough here, but on any non-Tesla self-driving group it’s considered one of the most important milestones.
 
I agree 100%. I’ve always thought that until ALL vehicles are connected through a neural network, and the highways themselves are part of it too, I would quote Ralph Nader’s “Unsafe at Any Speed”. There are just far too many variables, affecting far too few constants.

Just my 2¢. I respect the folks who swear by it and enjoy it, so please don’t slag me over this. It’s just not for me.

Peace out - JP
 

“What matters is ‘will Tesla feel confident enough to take responsibility?’ … Is Tesla willing to take legal liability for a robotaxi?” @Xepa77
OK, I’ll bite.
First, the clickbait thread title - really?
It’s not a major problem at all, because FSD is only L2.
Second, FSD is a Level 2 system, which will never require Tesla to accept liability for anything, especially now that they have updated the marketing to say exactly that.
The only people mentioning robotaxis now are in threads like this; it’s certainly not Tesla, in any legally enforceable manner.
So no, Tesla will never take responsibility, and will not need to, because FSD is just L2.
Anything else is just vague handwaving.
 
I feel that if ADAS can save lives, it's worth it. But can society accept AVs killing people? Like you said, if a human kills someone in an accident, there is someone to vilify and direct anger and grief towards. When a computer kills someone, people may be more outraged, even given the axiom of the needs of the many outweighing the needs of the few.
 
Where are you getting “L2 only” from?

“Musk has been promising that Tesla is going to make all its vehicles built since 2016 ‘full self-driving’ through software updates. The CEO went as far as mentioning ‘Level 5 SAE self-driving.’”
 
“…until ALL vehicles are connected through a neural network… There are just far too many variables, affecting far too few constants.”
Tell me that you don’t know what a neural network is without saying ‘I don’t know what a neural network is’.

Or a constant for that matter.
 
"What matters is “will Tesla feel confident enough to take responsibility?”" @Xepa77

^^^
Why would you want or expect Tesla to take responsibility for your driving?

It appears you are confusing full autonomy, which doesn’t exist, with a degree of semi-autonomous/enhanced driver aids.

Don't start with "Waymo...." or other companies who limit their deployment to specific geographic areas and weather.
 
Tesla sells insurance for its own cars, which I’m guessing is a tiny part of its business for now, but as it grows, FSD improvements should show up as reduced claim costs for Tesla. So all the incentives are aligned for Tesla to make as many safety improvements as possible. If they roll out, say, an improved version of AEB, they should be able to see it reflected in accident claims. You could attach a dollar cost savings to every feature shipped.
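As a rough sketch of how that accounting could look (every figure below is invented for illustration; none of it is Tesla data):

```python
# Back-of-envelope: claim savings from shipping one safety feature.
# All inputs are hypothetical assumptions, not Tesla figures.
insured_fleet = 500_000          # cars on Tesla Insurance (assumed)
claims_per_car_year = 0.05       # baseline claim frequency (assumed)
avg_claim_cost = 12_000          # average payout per claim, USD (assumed)
claim_reduction = 0.08           # 8% fewer claims after improved AEB (assumed)

baseline_cost = insured_fleet * claims_per_car_year * avg_claim_cost
savings = baseline_cost * claim_reduction
print(f"annual claim cost:    ${baseline_cost:,.0f}")  # $300,000,000
print(f"savings from feature: ${savings:,.0f}")        # $24,000,000
```

With an insured fleet, every shipped improvement gets a measurable dollar value like this.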
 
Here’s the deal. Everyone is up in arms about how AI will take over. That’s just not true, and I’ll tell you why in a second. Tesla’s FSD won’t be full self-driving for at least 10 years, maybe longer. Just like the AI art and GPT chat tools, it can only go so far.

Let me explain. This all started back in the ’50s with Marvin Minsky at MIT and his group, who began working on AI in earnest. Specific languages were soon developed for programming AI, chief among them Lisp and, later, Prolog. The simple way to explain them is that they made it easy to write very long, fancy chains of if-then rules. Successful implementations came to be called Expert Systems.
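In spirit, an expert system is a large pile of hand-written rules. A toy sketch in Python (the rules here are invented purely for illustration):

```python
# Toy "expert system": knowledge encoded as hand-written if-then rules.
# These driving rules are invented for illustration only.
def decide(observation: dict) -> str:
    if observation.get("pedestrian_ahead"):
        return "yield"
    if observation.get("light") == "red":
        return "stop"
    if observation.get("light") == "green":
        return "proceed"
    return "slow down"  # default when no rule fires

print(decide({"light": "red"}))            # stop
print(decide({"pedestrian_ahead": True}))  # yield
```

The catch, which doomed many 1980s expert systems, is that humans have to anticipate and hand-write every rule.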

Neural nets were created as software tools that loosely mirror how the human brain processes information. Unfortunately, they require massive parallel processing, and there were no good, affordable systems for that until super high-end GPUs like Nvidia’s came out. Only then could people take advantage of deep learning models using affordable computational power.
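For contrast with the rule-based sketch above, here is the core building block of a neural net: behavior comes from learned weights rather than hand-written rules (the weights below are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# One dense layer: y = relu(W @ x + b). In a real network the weights
# come from training on data; here they are random, for illustration.
W = rng.normal(size=(4, 3))     # 3 inputs -> 4 hidden units
b = np.zeros(4)
x = np.array([0.5, -1.2, 0.3])  # a made-up input vector

y = np.maximum(W @ x + b, 0.0)  # ReLU activation
print(y)
```

Stack enough of these layers and run them on a GPU, and you get the deep learning models the post describes.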

But still, if you look at any of these AI tools, including Tesla’s, you’ll see that they are programmed by humans. And humans are responsible for continually refining them as well. I’m not talking about some of the batch work they can automate, but the specific strategic direction.

And until AIs can reliably program themselves, there’s not a lot more that can be done beyond some very cool magic tricks. You will see AI infiltrating just about every avenue of life and business using the same deep learning/pattern matching models. So the hockey stick curve everyone expects AI to be on is currently leveling out, because humans are still required to build and program the AIs.

Once the AIs figure out how to *RELIABLY* program themselves, then we will see huge improvements, not only in driving but all avenues of life. I don't see this happening any sooner than 10 years. Of course, then, it may be time to start worrying. 😉
 
I just got my Tesla and have been using the free version of self driving on highways and in areas I’m very familiar with that have stretches with no lights or stop signs. It scares the sht out of me, though so far it works very well. I am alert, hands on the wheel, and ready to brake. I still don’t trust it, but I will use it and stay cautious.
 
“I just got my Tesla… It scares the sht out of me. So far it works very well. I still don’t trust it but will use it and be cautious.”
You need experience with the self driving tools to get the most out of them. Once you understand what they do and can predict how they’re going to behave, it gets less stressful and is genuinely far easier than driving yourself.
 
But there will be a day when a sizable FSD fleet is on the road and statistics are available to the general public and decision makers. Some people will start to ask why we even allow those stupid humans to drive if they check their phones behind the wheel. Then it will be regulated the same as ABS or traction control: a safety feature required on newly delivered cars.

Until then, Tesla (or any other brand) will have a hard time. Regardless of how long it takes, it is good that Tesla is pushing the boundaries and some folks are giving $15k of their hard-earned money “for the cause”.
 
I consider this absolutely ridiculous given that you are still in control of the vehicle and that it requires a driver at all times. As others have said, it’s a moot point, because this is truly semi-autonomous driving, not autonomous driving. The people from the Super Bowl commercial are a bunch of scumbags simply trying to damage Tesla; they bring nothing useful to the conversation because they are anti-self-driving in any capacity. The assumption that any Level 2 system will perform in such a manner that it never requires any driver intervention at all is utterly ridiculous.
 
“So no, Tesla will never take responsibility, and will not need to, because FSD is just L2. Anything else is just vague handwaving.”
Are you implying that FSD will never advance beyond L2? I would argue that, in reality, FSD is more like L3.4. After all, my Basic Autopilot is L2 - both its lane keeping and its adaptive cruise control. FSD obviously still requires human intervention often enough, but it still outperforms Basic AP. Hence, FSD is not merely L2.
 
“Tesla sells insurance for its own cars… You could attach a dollar cost savings to every feature shipped.”
And everyone knows that lawsuits and payouts are part of every insurance company’s MO. They build those in as simple costs of doing business.
So if Tesla can use all the data they collect, and it shows they could pay, say, $1 million to every family of someone killed by an FSD Tesla and still make lots more money, then they could do it.
But of course they aren’t going to volunteer that. They will pay off politicians to pass laws so they can avoid blame altogether.
Just look at what the Cincinnati Bengals are trying to get passed in Ohio: trying to get the workers’ compensation laws changed so they don’t have to pay them.
Greed is everywhere. It’s in the human DNA. And it can’t be legislated out of society.
Sorry, I won’t start ranting. Not the place. Off topic.
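Purely as arithmetic, that expected-cost argument looks like this (every number below is invented; the fatality rate chosen is roughly the ballpark of US human driving, about 1 per 100 million miles):

```python
# Grim back-of-envelope: expected payouts at a given fatality rate.
# All inputs are hypothetical, for illustration only.
fleet_miles_per_year = 10_000_000_000  # assumed robotaxi fleet miles/year
fatalities_per_mile = 1 / 100_000_000  # ~human-driver ballpark rate
payout_per_fatality = 1_000_000        # the $1M figure from the post above

expected_payout = fleet_miles_per_year * fatalities_per_mile * payout_per_fatality
print(f"expected annual payouts: ${expected_payout:,.0f}")  # $100,000,000
```

Whether any insurer or court would actually settle at a flat figure like that is a different question entirely.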

 
Like many, I believed we’d have true full self-driving by now, from Waymo if not Tesla. Now I wonder if we’ll have L5 driving, from either a technical or a legal standpoint, within 30 years.

I don’t believe car makers will ever assume liability for crashes, however relatively uncommon crashes may become. I’ll be surprised if we reach L4 everywhere (all roads and conditions), and even that would be with the owner/driver being responsible, not the car maker.

As for the technical side: between all the variations in ever-changing road conditions and markings (not even standardized or enforced), rain, snow, and ice, and completely irrational humans on foot, bikes, and motor vehicles, it’s beyond my comprehension how software will safely navigate it all. If all vehicles suddenly became L4, with no human drivers, we’d have a better chance, but there is no way to avoid the interim period when self-driving vehicles have to contend with stupid humans. It all becomes a little easier without the humans...
 
“Are you implying that FSD will never advance beyond L2? I would argue that, in reality, FSD is more like L3.4…”

You don’t apply the SAE levels subjectively based on how well the car would do if you hypothetically stopped paying attention. FSD requires the driver to pay attention; by definition, that makes it L2. There are no wishy-washy fractional levels like L3.4.

FSD can be L3 tomorrow if Tesla decides on an ODD where the driver can tune out. Whether they'll ever do that is unknowable. I personally consider it unlikely. They will keep FSD L2 while constantly improving its capabilities.
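For reference, a rough paraphrase of the SAE J3016 levels and who is responsible at each (simplified; see the standard for the exact definitions):

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
# Simplified for illustration; not the standard's exact wording.
SAE_LEVELS = {
    0: "No automation: human does everything (warnings at most)",
    1: "Driver assistance: steering OR speed assisted; human monitors",
    2: "Partial automation: steering AND speed assisted; human monitors",
    3: "Conditional: system drives within its ODD; human takes over on request",
    4: "High: system drives within its ODD; no human fallback needed",
    5: "Full: system drives everywhere, in all conditions",
}

for level, meaning in SAE_LEVELS.items():
    print(f"L{level}: {meaning}")
```

The line that matters for the liability debate is between L2 and L3: at L3 and above, it is the system, not the human, that is monitoring the road.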
 