
The Major Problem With FSD That Tesla Won’t Acknowledge


TL;DR - Apple's team wanted the iPhone to be plastic because it would be dropped often and would break if made of glass. But plastic picks up micro-scratches over time. Steve Jobs said, "If it's plastic and gets scratched, it's our fault. If it's glass and breaks, it's their fault. They will accept that more."

Now imagine where “fault” can mean someone dies.

Elon talks about how FSD is x times safer than a human driver. Even if you set aside all the controversy about what constitutes those miles, that fundamentally doesn't matter in the end. What matters is "will Tesla feel confident enough to take responsibility?" When you drive a car and get into a crash, whether you're distracted or drunk or something else, that's your fault. If you die because of something wrong with a driverless car, that's the car's fault. It takes control and culpability away from the customer, which brings a whole different level of scrutiny.

For comparison, you have a statistically higher chance of dying while walking on a sidewalk than while flying on an airplane.

Tesla for years has talked about vision as the core problem. Now it has openly admitted to moving toward decision making through neural nets. Then eventually the talk will be about the March of 9s. But at the end of the road, there's "is Tesla willing to take legal liability for a robotaxi?"
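To put the March of 9s in numbers: each additional nine of per-mile reliability multiplies the mean miles between failures by ten. A rough back-of-the-envelope sketch in Python, where every figure is illustrative rather than anything Tesla has published:

Code:
# "March of 9s" back of the envelope. Each extra nine of per-mile
# reliability multiplies the mean miles between failures by 10.
# All figures are illustrative; none of this is Tesla data.
for nines in range(3, 9):
    failure_rate = 10 ** -nines              # failures per mile
    miles_between = 1 / failure_rate         # mean miles between failures
    reliability = 100 * (1 - failure_rate)   # per-mile success rate, percent
    print(f"{reliability:.{nines - 2}f}% -> ~{miles_between:,.0f} miles between failures")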

It's hard to imagine someone surrendering their car to a robotaxi if a crash could occur every 50,000 miles, let alone at the roughly 50 miles between interventions we see today. Imagine if someone had to take legal liability for a crash. I wouldn't sign my Model 3 up for that. So Tesla has to assume it.
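Rough math on why that gap matters, using the 50 and 50,000-mile figures above plus an assumed 12,000 miles per car per year (a round placeholder, not a measured number):

Code:
# Expected incidents per car-year at two reliability levels. The 50 and
# 50,000 mile figures come from the post above; annual mileage is an
# assumed round number.
annual_miles = 12_000

for label, miles_per_incident in [("today, ~1 intervention per 50 mi", 50),
                                  ("hypothetical 1 crash per 50,000 mi", 50_000)]:
    per_year = annual_miles / miles_per_incident
    print(f"{label}: ~{per_year:,.1f} incidents per car-year")

That works out to roughly 240 interventions per car per year versus one crash every four years or so. Nobody accepts personal legal liability at the former rate.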

The day Tesla has that discussion, I will know there's a serious timetable we can put on FSD. Until then, it's "at least 3 years away" in perpetuity.

Counter argument: people die in Ubers every year. True, and I'm sure there are legal preparations around those scenarios, but fundamentally those crashes are still attributable to a human being at direct fault who can be painted as the villain. With an AI machine owned by a polarizing CEO, the media will take a much different view.

Just throwing this out there because it's not mentioned enough here, but in any non-Tesla self-driving group it's considered one of the most important milestones.
 
There are several ways for Tesla to combat this from a liability standpoint:

  1. Statistically speaking, FSD will be much safer than human drivers. If your robotaxi is in an accident and you are not at fault, then the other person's insurance pays. That one is easy.
  2. The accidents FSD could possibly get into when the car is at fault would likely cause minimal harm to occupants. The car is not going to crash while driving over the speed limit, and it has the safest crash-test rating. A crash would most likely happen at normal driving speeds, and the majority of crashes at normal speeds are easily survivable.
  3. If the data proves that accidents with the computer driving are rarer and milder, with very little collateral damage, then any insurance company will insure the FSD computer. It doesn't have to be just Tesla (see the sketch after this list).
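If insurers really did price the FSD computer like a driver, the premium would just be expected claims plus overhead. A toy sketch, where every input is a made-up placeholder rather than actuarial data:

Code:
# Toy expected-cost premium model for insuring an autonomous system.
# All inputs are hypothetical placeholders, not actuarial data.
def annual_premium(miles_per_year: float,
                   at_fault_crashes_per_mile: float,
                   avg_claim_cost: float,
                   overhead_and_margin: float = 1.3) -> float:
    """Expected claims per year, grossed up for overhead and profit."""
    expected_claims = miles_per_year * at_fault_crashes_per_mile * avg_claim_cost
    return expected_claims * overhead_and_margin

# Hypothetical robotaxi: 50k miles/year, one at-fault crash per 500k
# miles, $15k average claim -> about $1,950/year.
print(f"${annual_premium(50_000, 1 / 500_000, 15_000):,.0f}")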

If your car is used for robotaxi purposes and you're a paying customer, I'd imagine a waiver would be required of each occupant, stating that they understand that in the event of an accident where the vehicle is at fault, they are entitled to at most X amount.

Keep in mind that if one occupant so much as takes a seatbelt off, the car would likely be programmed to slow down and pull over safely, or it won't even start the journey. How many of us, when a passenger isn't wearing a seatbelt, go to that extra length to pull over?
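Purely as speculation about how that policy could be encoded, the gate itself would be trivial. All the names and the pull-over call below are invented for illustration, not taken from Tesla's actual software:

Code:
# Speculative sketch of a robotaxi seatbelt policy. All names are
# invented for illustration; nothing here reflects Tesla's software.
from dataclasses import dataclass

@dataclass
class CabinState:
    occupied_seats: set[int]   # seat indices with an occupant detected
    belted_seats: set[int]     # seat indices with the belt latched

def ride_may_start(cabin: CabinState) -> bool:
    """Refuse to begin the trip unless every occupant is belted."""
    return cabin.occupied_seats <= cabin.belted_seats

def on_belt_state_change(cabin: CabinState, controller) -> None:
    """Mid-trip: slow down and pull over if anyone unbelts."""
    if not ride_may_start(cabin):
        controller.request_safe_pullover()   # hypothetical vehicle API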

That is speculation, not data. How can FSD be statistically safer when it only exists in beta and, in the case of my MX, would cause an accident within minutes if not for intervention?
 

I just watched a V11 video; the guy goes ~20 miles with no intervention. It seemed very smooth and humanlike, so it is possible. Not saying it is reliable yet, but after seeing V11 I feel much more confident in its ability in ideal weather conditions.

Regardless, my initial bullet points still stand. I know a few drivers who would scare me and drive far less safely over those 20 miles than the FSD computer just demonstrated in the above video.
 
Concur… around here, I get FSDb disablements either DURING driving or when trying to enable FSD, even when it's only what a Midwesterner would call "spitting" rain. Sometimes no rain is FALLING, but rain droplets ON the windscreen are enough to block engagement.
FSD disengagements due to rain may be more related to preprogrammed thresholds than to a lack of ability to continue performing. It is possible that Tesla hasn't put in the effort to address rain performance yet, so it chose very conservative levels for now. I've seen many notices that FSD beta 'might be degraded' in rainy weather, yet I've never noticed any change in performance.
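If that's right, the gating could be as crude as a fixed cutoff on perceived rain, something like this sketch (signal names, units, and the cutoff value are all invented):

Code:
# Hypothetical conservative rain threshold for engagement. Signal
# names, units, and the cutoff are invented to illustrate the idea,
# not taken from Tesla's software.
RAIN_INTENSITY_CUTOFF = 0.2    # arbitrary units, deliberately low

def fsd_may_engage(estimated_rain_intensity: float,
                   droplets_on_glass: bool) -> bool:
    """Block engagement whenever perceived rain exceeds a fixed cutoff."""
    if droplets_on_glass:
        return False   # matches the report above: droplets alone block it
    return estimated_rain_intensity < RAIN_INTENSITY_CUTOFF

A conservative hard-coded cutoff like that would explain both the 'might be degraded' notices and the refusals to engage on a merely wet windscreen.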
 
A little rain can make it unavailable, or degraded, or bounce between Autopilot and FSDb. Happens to me every time it rains.
 
That's assuming it is cheaper than improving FSD. If, on the other hand, improving FSD will make more money, then FSD will continue improving.
Small incremental improvements aren't going to sell many more cars. With the EV market growing every year, I don't see buyers weighing FSD now vs. a bit better later as a big factor in their decision. If they want it, they'll get it now and not wait until it's a little better.
But if it takes a major leap, where Tesla can win the race to autonomous taxis, that's where they can really profit by selling fleets to Uber or cab companies.
 

I guess I needed to quote more... You have completely missed the point of my remark; the original post was about insurance, not sales. So let me rephrase: as long as improving FSD reduces the cost of your insurance business by more than the expense of the improvement, there's an incentive to improve it.
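As a one-line break-even check, with every number a hypothetical placeholder: the improvement pays for itself on the insurance side whenever the claims saved exceed the engineering spend.

Code:
# Break-even test for the insurance incentive above. All numbers
# are hypothetical placeholders.
def improvement_pays_off(claims_before: float,
                         claims_after: float,
                         improvement_cost: float) -> bool:
    """True if the reduction in claim payouts exceeds the improvement cost."""
    return (claims_before - claims_after) > improvement_cost

# e.g. $400M in annual claims cut to $250M by a $100M effort:
print(improvement_pays_off(400e6, 250e6, 100e6))   # True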
 
One problem that cannot be solved by FSD is the human belief that "I'm an excellent driver"! Why should I stay home when my need for a fresh six-pack supersedes adverse conditions, a travel advisory, and road closures? FSD will never be able to do "everything a human can", because humans are allowed to use bad judgement (what the kids call "f00k around and find out").

I know vision-based FSD can't drive in whiteout conditions, but neither can you. At least the computer is smart enough to stay home lol
 
That is speculation, not data. How can FSD be statistically safer when it only exists in beta and, in the case of my MX, would cause an accident within minutes if not for intervention?
He was offering ideas on dealing with liability once FSD is at L4/robotaxi level. Obviously there is no data for this since FSD beta is not even close...
 
It's currently really bad. I've tried it several times this winter on roads completely covered in snow, and it is dangerous.

I find that otherwise FSDb performs fine most of the time, but still needs work in some situations.
I doubt Tesla has spent much effort trying to train it for snow at present; getting it feature-complete and major-bug-free for daytime, moderate-weather driving has to be the priority. Even then, how good are human drivers on roads completely covered in snow?
 
I think there's a lot of truth to this. The hockey-stick curve isn't coming anytime soon, and until then it's just trial and error: fixing one thing breaks another. It's like building anything; the first 80% goes quickly but the last 20% takes forever. Just like charging a Tesla battery, lol. In FSD's case, it is not safe, nor is it even that advantageous to use, until it works 100%. And we have a ways to go until then. It will be a monumental task and an incredible achievement when it happens, but I'd be shocked if we have 100% fully autonomous driving, where I can game on my Tesla screen while it drives me around, within the next 10 years.