The catastrophe of FSD and erosion of trust in Tesla

I bought 2 Teslas with FSD, without paying any surcharge for it, speculating that once it's "released" - not beta - it will be worth an extra $2k or so if I ever have to sell them. I see it as a Bang & Olufsen sound-system upgrade for a vehicle. I'm actually surprised so many people paid so much for something non-existent and undefined. I guess that suits some people just fine, but I predict disappointment in the end for a lot of those buyers. From what I read, FSD will not get beyond L2 and will probably not be ready for another 2 to 3 years at least. Beta doesn't count. Musk's latest comments point to a 10x increase in beta users by the end of the year. That's progress, but nowhere near the finish line. With phantom braking persisting in my weekly drives, every week, along with new phantom braking events, I've limited my use of AP to purely straight, flat highway drives. I see the same future for final FSD: useful in some cases, but the majority of the time it's more stressful to monitor the road and how the car is reacting instead of just monitoring the road and driving myself.
 
I wonder if anyone has studied, with quantitative results, the time difference and maybe the physiological effects of:

1. No active-follow cruise control or lane keeping (ordinary driving):
Monitor road, react, physical input to steering, accelerator, braking.

2. Active-follow cruise control with lane keeping (AP):
Monitor road, monitor vehicle reactions, react, physical input to steering, accelerator, braking.

And in number 2, even if no reaction is necessary, there are still two monitoring functions your brain has to perform. That's what I find stressful with AP unless I'm on a well-known, straight road where I am somewhat confident of the vehicle's behavior.
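As a very rough way to frame that comparison, here's a toy sketch that just sums hypothetical stage times for each loop. Every number is an invented placeholder, not a measured value; the only point it illustrates is that the AP loop adds a second monitoring stage.

```python
# Toy comparison of the two monitoring loops described above.
# All durations are invented placeholders, not measurements.

STAGES_MANUAL = {            # 1. ordinary driving, no assist
    "monitor_road_ms": 300,
    "react_ms": 250,
    "control_input_ms": 150,   # steering / accelerator / braking
}

STAGES_AP = {                # 2. active-follow cruise + lane keeping
    "monitor_road_ms": 300,
    "monitor_vehicle_behavior_ms": 300,  # the extra task the post points out
    "react_ms": 250,
    "control_input_ms": 150,
}

def loop_time_ms(stages):
    """Total time for one pass through a monitor/react loop, in milliseconds."""
    return sum(stages.values())

print("manual loop:", loop_time_ms(STAGES_MANUAL), "ms")
print("AP loop:    ", loop_time_ms(STAGES_AP), "ms")
```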
 
From my perspective, it doesn't matter whether they use lidar, radar, sonar, particle beams, magnetic waves, ultrasound, etc. The problem is they're never in our lifetimes going to make software competent to handle all the rare and unpredictable occurrences that the human brain can handle. E.g.: Something falls on the road in front of you and you don't have time to stop, what do you do? The human brain will try to figure out what it is, how dangerous it would be to drive through it, how dangerous it would be to swerve, etc. and make a good decision. The Tesla will be programmed: "Object ahead; Brake immediately."

Full self driving will never happen until we have software capable of passing the Turing test. It's not going to happen in our lifetimes.
 
It's quite possible that lidar-quality environment data can reliably and consistently be constructed from pure vision. The problem is that it takes hundreds of milliseconds to do so, whereas with lidar you have the 3D data almost instantly. That's a huge latency difference, and it's why Tesla is constantly working to shave off milliseconds here and there. Perhaps HW4 or HW5 will cut this to tens of milliseconds, which would close the lidar gap considerably. I generally agree with Tesla's overall approach, but I think Elon's timeline ("safer than a human by the end of the year") is hopelessly optimistic. (Oxymoron notwithstanding.)
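To put rough numbers on that latency argument, the snippet below computes how far the car travels while the perception stack is still processing a frame. The latency values are illustrative assumptions, not figures from Tesla or any lidar vendor.

```python
# Rough arithmetic: distance covered during perception latency at highway speed.
# The latency figures below are assumptions for illustration only.

def blind_distance_m(speed_kph, latency_ms):
    """Meters traveled while the perception pipeline is still processing a frame."""
    speed_mps = speed_kph * 1000.0 / 3600.0
    return speed_mps * latency_ms / 1000.0

for latency_ms in (300, 100, 30):  # hypothetical: vision today, improved, future hardware
    d = blind_distance_m(speed_kph=120, latency_ms=latency_ms)
    print(f"{latency_ms:>3} ms latency at 120 km/h ~ {d:.1f} m traveled before the 3D scene is ready")
```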

The main thing wrong with Tesla's approach is they sold something incapable of doing what was promised. Not just what was originally promised, but Elon has consistently over the years oversold the promise of what the hardware could do.

This has resulted in a ton of owners being invested in either what we have now or what we feel is realistic for Tesla to upgrade on our cars.

The most popular of those (in my own head) is the idea that the HW4 computer plus upgraded vision sensors will somehow give the car the hardware sensor suite necessary to accomplish full self driving. It will certainly improve it, but it likely won't fulfill the original promise. I don't question that Tesla can close the gap with faster computer plus better vision sensors, but there will still be the gap inherent in how the two technologies work.

The sensor world is also constantly evolving with new technologies and approaches, but we're stuck. We're anchored to a 2016 promise, and to HW that doesn't meet the needs of AP, let alone L4 FSD. Our cars love telling us that the sensors are blinded or that they need to reduce speed because of a little rain.

We're so stuck on the old HW that there are lots of us with 2018 vehicles who aren't taking advantage of the current market situation to get a newer car for not that much more money (relatively speaking). I haven't, because what I have now changes on a fairly constant basis. That would be really enjoyable if the manufacturer had good customer engagement to make sure those were positive changes, but they don't, so a lot of those changes are negative. Things like the autolights no longer turning on in the rain during the day like they used to.

If I'm really being honest with myself, the only reason I still have my Tesla is the pandemic. From a surviving-the-pandemic point of view Tesla's approach was spectacular, and the sales numbers show this. They were also able to delay the introduction of new HW without much consequence, and they had additional time to work on FSD while their customer base was stuck.

In 2023 I wouldn't be the least bit surprised if Tesla changed the sensor suite with HW4 to such a degree that it made HW3 obsolete and impossible to upgrade. They likely already know the wall they're going to hit with HW3. It will be the right call, but it will piss off a lot of existing customers.
 
Full self driving will never happen until we have software capable of passing the Turing test. It's not going to happen in our lifetimes.
Current robotaxi systems (Waymo and Cruise) get around this by having remote assistance, though the car still must recognize what it can't handle, which isn't an easy problem either.
In Waymo's safety data they did not report hitting road debris in 6 million miles of driving. They did get rear-ended 11 times, so that could be a reason why...
 
From my perspective, it doesn't matter whether they use lidar, radar, sonar, particle beams, magnetic waves, ultrasound, etc. The problem is they're never in our lifetimes going to make software competent to handle all the rare and unpredictable occurrences that the human brain can handle. E.g.: Something falls on the road in front of you and you don't have time to stop, what do you do? The human brain will try to figure out what it is, how dangerous it would be to drive through it, how dangerous it would be to swerve, etc. and make a good decision. The Tesla will be programmed: "Object ahead; Brake immediately."

Full self driving will never happen until we have software capable of passing the Turing test. It's not going to happen in our lifetimes.

You're greatly exaggerating human capability. A properly equipped car can not only see the object on the road, but can plan avoidance before the human brain can even react.

What computers can't do well is to deal with unknowns like idiot human drivers doing dumb things, and all the rule breaking that humans do. Plus all the ridiculous things weather does to punish us for our sins.

L4 FSD allows us to limit what the autonomous car has to deal with, and this is realistic for autonomous cars to do. We already know they can do this because they already do.

I wish ALL the focus would be on L4 because L4 is the sweet spot.

L2 isn't scalable.
L3 has a hand-off issue and doesn't achieve what autonomous driving can really offer.
L5 is a pipe dream; humans only count as L5 because we lower the safety threshold in bad situations. We also exaggerate our capabilities - it's like me saying I'm L5 when I would suffer an anxiety attack if I drove in Paris.

What I like is L4 with a licensed human driver who can take over in situations where the human knows the machine might struggle. Where it either tells me ahead of time that I have to take over before point X or it's going to pull into a rest stop, or where I simply don't feel comfortable with how it handles certain situations. This act of taking over is a very human thing to do. We likely have family members we're comfortable with driving in some situations, but not all situations.
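For what it's worth, that "tell me before point X or pull into a rest stop" behavior is easy to sketch as logic, even if building the real thing is hard. The names and thresholds below are hypothetical, not taken from any shipping L3/L4 system.

```python
# Minimal sketch of an L4 hand-off policy: warn the driver well before the edge
# of the operational design domain ("point X"), and fall back to a safe stop if
# they never take over. All thresholds are hypothetical.

from dataclasses import dataclass

WARN_LEAD_S = 60.0      # assumed: how early to request a takeover
MIN_FALLBACK_S = 10.0   # assumed: last chance to divert to a rest stop

@dataclass
class DriveState:
    speed_mps: float
    dist_to_odd_exit_m: float   # distance to "point X" where L4 support ends
    driver_confirmed: bool      # has the driver acknowledged the takeover request?

def plan_handoff(s: DriveState) -> str:
    time_to_exit_s = s.dist_to_odd_exit_m / max(s.speed_mps, 0.1)
    if s.driver_confirmed:
        return "hand control to driver"
    if time_to_exit_s <= MIN_FALLBACK_S:
        return "divert to rest stop / minimal-risk stop"
    if time_to_exit_s <= WARN_LEAD_S:
        return "request takeover (audible + visual)"
    return "continue L4 driving"

# 30 m/s (~67 mph), 1.2 km before the ODD boundary, driver hasn't responded yet:
print(plan_handoff(DriveState(speed_mps=30, dist_to_odd_exit_m=1200, driver_confirmed=False)))
```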
 
I wish ALL the focus would be on L4 because L4 is the sweet spot.
Yeah that's all good information and interesting to readers like me but in regard to Tesla "FSD" I don't read anywhere that's going to be a reality. From what I understand "FSD" will fall into the definition of L2. At the same time Tesla never promised "L" - anything. They only said "FSD", define as you wish.
 
Yeah that's all good information and interesting to readers like me but in regard to Tesla "FSD" I don't read anywhere that's going to be a reality. From what I understand "FSD" will fall into the definition of L2.

FSD as sold to customers prior to ~March 2019 was explicitly (at least) L4 by definition of capability. You could possibly argue L5.

FSD sold after that date was never promised by Tesla as more than L2.

This has been discussed, explained, and sourced pretty exhaustively in this and other threads, if you're unclear or unsure that the above are facts.
 
In 2023 I wouldn't be the least bit surprised if Tesla changed the sensor suite with HW4 to such a degree that it made HW3 obsolete and impossible to upgrade. They likely already know the wall they're going to hit with HW3. It will be the right call, but it will piss off a lot of existing customers.
For consistently reliable L4, I expect they will need at least the following improvements:

1. A method of automatically cleaning the glass in front of the cameras. (Lasers, air jets, mini-wipers, something.)
2. More setback of the camera elements from the glass, and/or movable articulating cameras, to minimize the impact of individual raindrops or dirt spots.
3. Higher-res cameras, and/or an articulating telephoto camera with OIS to resolve specific details at a distance and read small print on road or street signs (a rough resolution estimate is sketched below).
4. A lot more compute, and faster compute paths to reduce latency and improve reaction time.
5. Modeless voice interaction with the driver to resolve non-immediate ambiguities. ("Shall I look for a parking spot here?")

Some of this could probably be retrofitted onto HW3 cars, though it runs the risk of creating intermediate Franken-configurations that might be hard to support. But I expect there will probably be at least "HW3+" (retrofitted) and "HW4" flavors.
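On point 3 in that list, here's a quick back-of-the-envelope check of why resolution matters for reading signs at distance. The camera parameters are illustrative guesses, not the actual specs of any Tesla camera.

```python
# Approximate horizontal pixels landing on a road sign at a given distance,
# for assumed field-of-view / resolution combinations (illustrative only).

import math

def pixels_on_target(target_width_m, distance_m, horizontal_fov_deg, horizontal_pixels):
    """Rough horizontal pixel count covering a target of the given width."""
    angle_deg = math.degrees(2 * math.atan(target_width_m / (2 * distance_m)))
    return angle_deg / horizontal_fov_deg * horizontal_pixels

# Hypothetical 1280-px narrow camera (35 deg FOV) vs. a 4K telephoto (15 deg FOV).
for label, fov_deg, px in (("narrow 1.2 MP", 35, 1280), ("telephoto 4K", 15, 3840)):
    p = pixels_on_target(target_width_m=0.75, distance_m=150,
                         horizontal_fov_deg=fov_deg, horizontal_pixels=px)
    print(f"{label}: ~{p:.0f} px across a 75 cm sign at 150 m")
```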
 
I see the same future for final FSD: useful in some cases, but the majority of the time it's more stressful to monitor the road and how the car is reacting instead of just monitoring the road and driving myself.
This is where I am. I have disabled AP totally as I find it much more stressful/tiring to be on a constant hair-trigger to take over. The human brain is not wired to maintain this kind of constant alert level.

I love using cruise control and still have to deal with a lot of phantom braking even with that. I wish there was a "dumb" cruise control option you could set.
 
This is where I am. I have disabled AP totally as I find it much more stressful/tiring to be on a constant hair-trigger to take over. The human brain is not wired to maintain this kind of constant alert level.

I love using cruise control and still have to deal with a lot of phantom braking even with that. I wish there was a "dumb" cruise control option you could set.
In my case, I find using AP (NOA) on highways to be quite enjoyable with virtually no need to take over control except in a few cases when NOA is active on a two-lane high speed road.

I disagree with your claim that the brain is not suited to being alert while riding. If you are driving manually, you must be just as alert, in addition to having to manage the controls of the vehicle. Riding as the safety driver, you are relieved of operating the controls except when you disagree with what the car is doing. So you actually have less of a workload.
 
In my case, I find using AP (NOA) on highways to be quite enjoyable with virtually no need to take over control except in a few cases when NOA is active on a two-lane high speed road.

I disagree with your claim that the brain is not suited to being alert while riding. If you are driving manually, you must be just as alert, in addition to having to manage the controls of the vehicle. Riding as the safety driver, you are relieved of operating the controls except when you disagree with what the car is doing. So you actually have less of a workload.

The reduction of cognitive load is night and day when your regular commute is bumper to bumper traffic. I had that same commute in my old ICE car, and I would be grumpy and tired getting to work. Big change once I had my Model 3 and let AP worry about all the stop and go decisions. Stuck in a slow lane? Who cares.

And when I drive my wife's old S85 without AP, I keep wanting to use it.
 
The main thing wrong with Tesla's approach is they sold something incapable of doing what was promised. Not just what was originally promised, but Elon has consistently over the years oversold the promise of what the hardware could do.
Given enough time and effort it is possible the hardware could meet the goals, although it's very unlikely - perhaps with 20 years of intense research. It's obvious to me there is a need to upgrade the hardware, which Tesla has done in the past, and I expect (perhaps wrongly) will do again in the future.
The most popular of those (in my own head) is the idea that the HW4 computer plus upgraded vision sensors will somehow give the car the hardware sensor suite necessary to accomplish full self driving.
Agreed, and HW5 won't achieve full self-driving either, but hopefully HW4 will manage L4 to some degree.
 
Something falls on the road in front of you and you don't have time to stop, what do you do? The human brain will try to figure out what it is, how dangerous it would be to drive through it, how dangerous it would be to swerve, etc. and make a good decision. The Tesla will be programmed: "Object ahead; Brake immediately."

You are hugely underestimating ML. In this particular case, I am sure ML will quickly become superior to any human: assessing the size, weight, and trajectory of a fallen object (or 100 fallen objects) and planning the best course of action. ML is already pretty good at such tasks. The key thing is that ML has a totally superior reaction time and a wider context (where the other cars, obstacles, road, ... around it are) compared to any human.

There probably are tasks at which self-driving cars will remain inferior to humans indefinitely, like two-way negotiation with other drivers, pedestrians, and construction personnel through facial expressions and hand gestures.
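To make the "brake vs. drive through vs. swerve" argument concrete, here's a toy expected-cost planner. The object classes, probabilities, and cost numbers are invented for illustration; a real stack would get them from perception and prediction models, and this kind of scoring is exactly what replaces the caricatured "object ahead, brake" rule.

```python
# Toy maneuver selection for the "fallen object" scenario discussed above:
# pick the maneuver with the lowest expected cost given the classifier's
# beliefs about what the object is. All numbers are invented for illustration.

HYPOTHETICAL_COSTS = {
    # (maneuver, object_class) -> estimated risk score (lower is better)
    ("drive_through", "cardboard_box"): 1,
    ("drive_through", "concrete_block"): 100,
    ("hard_brake",    "cardboard_box"): 20,   # risk of being rear-ended
    ("hard_brake",    "concrete_block"): 20,
    ("swerve",        "cardboard_box"): 40,   # risk of losing control / adjacent traffic
    ("swerve",        "concrete_block"): 40,
}

def best_maneuver(class_probs):
    """Return the maneuver with the lowest expected cost over the class probabilities."""
    maneuvers = {m for m, _ in HYPOTHETICAL_COSTS}
    expected = {
        m: sum(p * HYPOTHETICAL_COSTS[(m, c)] for c, p in class_probs.items())
        for m in maneuvers
    }
    return min(expected, key=expected.get)

# Perception is 90% sure the object is an empty box: driving through scores cheapest here.
print(best_maneuver({"cardboard_box": 0.9, "concrete_block": 0.1}))
```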
 
To the naysayers who routinely say it cannot be done, my question is this: then why is anyone trying? What's the point of FSD (L2), or Waymo (L4 and L5), Cruise, etc.?

Sixty years ago, we knew nothing about space travel - we had barely gotten satellites into orbit - and a US president said we'd land a man on the moon within 10 years. Many thought it would never happen - but it did, in under ten years! People laughed at the Wright brothers when they said they could build a flying machine - they started research in 1899 and had the first flight in 1903 (4 years).

Those who say it will take 20+ years - how was our technology 20 years ago (the year 2000)? Push-button phones and early cell phones (Nokia and Motorola were kings), box computers with CRT monitors. CPUs had just hit 1 GHz. USB flash drives had just been released, with an amazing 128 megabytes of storage.

Based on how fast technology is evolving, I doubt it'll take that long to get FSD working.
 
Yeah that's all good information and interesting to readers like me but in regard to Tesla "FSD" I don't read anywhere that's going to be a reality. From what I understand "FSD" will fall into the definition of L2. At the same time Tesla never promised "L" - anything. They only said "FSD", define as you wish.

When it was introduced, the promise was robotaxis, and you can't have robotaxis unless it's L4/L5. For quite a long time Elon kept saying how people could make money with their vehicles, and how the car was an appreciating asset.

Then in 2019 Tesla shuffled around the EAP/FSD feature sets:

Basic AP was added to the standard configuration of the vehicle and vehicle price went up by $2K
EAP was removed, and the remaining EAP features were moved to FSD
The wording on the website was watered down
FSD pricing was set at $5K

All in all it gave the appearance that Tesla was walking away from any promise of unsupervised autonomous driving; that the best we could hope for was FSD L2 (unified stack FSD Beta), and just maybe an apology from Tesla for teasing us with that whole autonomous driving promise thing.

The problem with FSD L2 is it simply asks too much from a human being. We can't take all the responsibility for the driving while also not doing the driving task. We simply aren't equipped to do that.

FSD L2 is a diabolical workaround for the biggest obstacle autonomous driving faces. That obstacle is that humans don't want to give kill allowances to robots. The workaround is to blame the humans behind the wheel for those deaths.

The workaround holds up as long as no one takes a close look, but the NHTSA has put a big microscope on it.

There is very much a feeling that L2 is just begging to have limits placed on it, the way the Europeans have done.