Opinion: what I think Tesla should do to "solve" FSD

First, I want to acknowledge all the hard work of the Tesla FSD team. Tesla has spent years building a sophisticated vision-only system, and the perception part is very advanced. I am not saying that Tesla Vision is perfect; there are still gaps in perception. But I feel like Tesla has built a pretty good foundation for FSD. I am not suggesting Tesla start from scratch. On the contrary, I think Tesla should continue to build on that vision-only foundation.

But here are 3 things that I think Tesla should do in order to deploy a more reliable and more robust FSD system.

TLDR Tesla should copy Mobileye.

1) Crowdsourced maps
Tesla has a big fleet of vehicles on the road. It could leverage the vision system in every car to crowdsource detailed maps, similar to what Mobileye is doing. With such a large fleet, Tesla could map every road in the US relatively quickly, and it could keep the maps fresh easily too, since there would almost always be a Tesla somewhere re-checking a given spot. A lot of the errors that FSD Beta makes seem to be due to poor map data, and crowdsourcing could really help with those. Detailed maps would also make FSD more robust: only the first car would need to drive a road mapless, and every car that encounters the road later would have the benefit of the map as a prior. Maps can also provide useful non-visual information, like the need to slow for a bend you can't yet see because of obstructions, or a preferred traffic speed that differs from the posted limit.
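
To make the idea concrete, here is a rough sketch of how observations from many cars could be fused into a map prior. This is purely illustrative (made-up function names and thresholds, not anything Tesla or Mobileye actually runs):

Code:
from statistics import median

def fuse_feature(observations):
    """Fuse noisy per-car observations of one map feature (say, a stop line)
    into a single crowdsourced estimate. Each observation is a (lat, lon) pair."""
    estimate = (median(o[0] for o in observations),
                median(o[1] for o in observations))
    # Simple confidence score: the more cars have confirmed the feature,
    # the more the planner can lean on the map as a prior.
    confidence = min(1.0, len(observations) / 50)
    return estimate, confidence

# Only the first car drives the road "mapless"; every later pass adds an
# observation and sharpens the prior.
passes = [(37.40012, -122.03021), (37.40018, -122.03019), (37.40015, -122.03020)]
print(fuse_feature(passes))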

2) Driving Policy
Tesla has done a lot of work on perception, but one area where FSD Beta is very weak, IMO, is driving policy. For example, FSD Beta is poor at knowing when to change lanes in dense traffic to avoid missing an exit; it can wait too long and then lose its chance to merge. It can also be overly cautious at intersections when there is no traffic at all, too hesitant pulling away from a stop sign, or too aggressive making unprotected left turns. These are issues that better driving policy would help with: it would improve the car's decisions and make for a safer, smoother ride. Mobileye has its RSS (Responsibility-Sensitive Safety) model, a formal set of rules for safe distances and proper responses that keeps the car from causing a collision. I think Tesla needs to focus more on driving policy, and FSD Beta would benefit greatly from it.
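
For anyone wondering what RSS actually is: at its core it is a set of formulas that bound what the planner is allowed to do, such as a minimum safe following distance. Here is a sketch based on the published RSS paper; the parameter values are just illustrative defaults, not what Mobileye actually ships:

Code:
def rss_min_following_gap(v_rear, v_front, rho=0.5,
                          a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe gap in meters between a rear car at v_rear and a lead car
    at v_front (both in m/s). Worst case assumed by RSS: during the response
    time rho the rear car keeps accelerating at a_accel_max, then brakes at
    only a_brake_min, while the lead car brakes as hard as a_brake_max."""
    v_after_response = v_rear + rho * a_accel_max
    gap = (v_rear * rho
           + 0.5 * a_accel_max * rho ** 2
           + v_after_response ** 2 / (2 * a_brake_min)
           - v_front ** 2 / (2 * a_brake_max))
    return max(gap, 0.0)

# Both cars doing 25 m/s (about 56 mph):
print(round(rss_min_following_gap(25.0, 25.0), 1), "m")

A policy built on hard constraints like this can never close the gap to the lead car below that distance, which is the kind of guardrail I would like to see layered on top of FSD Beta's decisions.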

3) Sensor redundancy
I think Tesla is smart to focus on vision-only; it is the right foundation for perception, and I think vision-only will work great for L2 "door to door". So what I am proposing is that Tesla continue with vision-only for L2, but also work on a lidar-radar subsystem that could be added on top of the existing vision-only FSD system to provide extra reliability and redundancy and help get the system to "eyes off". This is essentially what Mobileye is doing, and I think it is smart. Vision-only is fine for L2, but having radar and lidar as a back-up is crucial for "eyes off", because you really need to be able to trust the system to be super reliable in all conditions. With vision-only, if the cameras fail, the entire system fails or has to pull over. With cameras, radar, and lidar, the system is much less likely to fail when the cameras do. I think having extra sensors as a back-up will really help reach that extra reliability.
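
To put rough numbers on the redundancy argument: if the camera subsystem and a radar/lidar subsystem fail independently (a big simplifying assumption, since things like power loss or heavy weather can hit both), the chance of losing perception entirely drops multiplicatively. The failure rates below are made up purely for illustration:

Code:
def perception_outage_probability(p_camera_fail, p_backup_fail):
    """Probability that both perception subsystems are down at the same time,
    assuming independent failures (illustrative numbers only)."""
    return p_camera_fail * p_backup_fail

p_camera = 1e-4   # vision-only subsystem unavailable (made-up rate)
p_backup = 1e-4   # radar/lidar subsystem unavailable (made-up rate)
print(perception_outage_probability(p_camera, p_backup))  # 1e-08 vs 1e-04 for vision alone

That difference of several orders of magnitude is what I mean by the extra reliability needed for "eyes off".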

[Image: "Full Self Driving Tesla" by rulenumberone2, licensed under CC BY 2.0]
 
TLDR except for the first post. Just wanted to chime in with my input in case it wasn't already covered by another post in this thread. I was under the impression that the fancy neural system would learn from mistakes and make improvements. That is NOT happening at all in my experience.

Every day at the exact same intersection, my M3LR scares me by swerving hard to the right. I catch it every time, and nowadays I just hold the steering wheel tight as I go through. The car will try to swerve to the right and break free of EAP because I won't let it. Once past, I have to reengage EAP. Why is Tesla not seeing these repeated errors and fixing their Autopilot accordingly? Why can't it just freakin' go straight? HW2.5 w/ USS. I don't get it.

One thing I would LOVE added to FSD/EAP/AP and/or the mapping software: speed hump/bump locations so the car knows to slow down for them. Even if I need to mark them myself, like with the Corvette (for the lift system). Because I don't plan to keep this car much longer and am actively shopping for a newer car, I've stopped disengaging EAP and just let the car smash and fly off the speed humps on my commute at speed, if there isn't already another car in front to slow mine down first.

-Paul
 
I was under the impression that the fancy neural system would learn from mistakes and make improvements. That is NOT happening at all in my experience.
The car doesn't and will never learn on its own.

You keep saying EAP, do you mean FSD? You understand they are 2 different systems, right?

My car does see speed bumps and rough roads and lifts for them... It's really rare, but it's programmed in at FSD Beta 11.4.6. It also likes to ramp them and railroad tracks... so it's a bit like Russian roulette, but FSD Beta is still pretty early in development at this point.
 
My car does see speed bumps and rough roads and lifts for them... It's really rare, but it's programmed in at FSD Beta 11.4.6.
Speed bumps are hit and miss, even at the same locations.

What do you mean it lifts for them? You mean the suspension?
 
I was under the impression that the fancy neural system would learn from mistakes and make improvements. That is NOT happening at all in my experience.

This is a common misconception. Our individual cars do not learn. What happens is that Tesla collects data from the fleet and uses it to train the overall NN, so our cars get an improved NN over time through software updates. If Tesla happens to collect data on a problem you are experiencing, it is possible that a future software version will handle it better. So there can be "learning" over the course of many software versions, but it comes from Tesla improving the software over time, not from your car learning on its own.
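
A toy way to picture it (conceptual only, not Tesla's actual pipeline): the "learning" happens centrally, and an individual car only changes when a software update lands.

Code:
class Car:
    """Stand-in for a fleet vehicle: it records clips and installs updates,
    but it never retrains its own network."""
    def __init__(self, model_version=1):
        self.model_version = model_version
        self.clips = []

    def record_disengagement(self, clip):
        self.clips.append(clip)            # data goes back to Tesla

    def install_update(self, new_version):
        self.model_version = new_version   # the only way the car "improves"

def central_training_cycle(fleet, current_version):
    """Collect fleet data, retrain centrally (stubbed out here), then push the
    same new model to every car via an over-the-air update."""
    fleet_data = [clip for car in fleet for clip in car.clips]
    new_version = current_version + 1      # stand-in for retraining on fleet_data
    for car in fleet:
        car.install_update(new_version)
    return new_version

fleet = [Car() for _ in range(3)]
fleet[0].record_disengagement("swerves right at the same intersection")
print(central_training_cycle(fleet, current_version=1))  # every car jumps to 2 together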
 
Wasn’t that the whole selling point of FSD and why Tesla’s implementation was supposed to be superior to others? All this talk about neural network this and AI that? Each car was supposed to contribute to fleet-wide learning that benefits every other car.

I think the idea was that Tesla would get more data than the competition since they have a much bigger fleet of cars on the road. The thought was that big data would allow Tesla to solve FSD faster. But I don't think the claim was ever that individual cars would learn on their own and share that learning with the rest of the fleet.
 
Right, it was specifically said during AI day that the cars do not learn on their own and they do not have the capacity to ever do so.
 
I was under the impression that the fancy neural system would learn from mistakes and make improvements.

Why is Tesla not seeing these repeated errors at the same location every day for many months and fixing their Autopilot accordingly?
Above edited for brevity
This is a common misconception. Our individual cars do not learn. What happens is that Tesla collects data from the fleet and uses it to train the overall NN, so our cars get an improved NN over time through software updates.

Correct, that is my understanding as well and what I meant. I didn't mean for my car to individually learn on its own and only apply the correction to my car. I was under the impression that the errors would be sent to Tesla's neural system to advance the AP software, and then updates would be sent out to all cars so that this error doesn't happen again at this intersection.

That has not happened at all. Makes me think that it's all hype and lies. I would love to be proven wrong, but until I see that my car stops trying to crash at the same spot every day, I think Tesla is lying to us, or misleading us. I really want AP/EAP/FSD to work and meet the promises that were made.

-Paul
 
I was under the impression that the errors would be sent to Tesla's neural system to advance the AP software, and then updates would be sent out to all cars so that this error doesn't happen again at this intersection.
It's more complicated than that, and things aren't changed until updates roll out.
 
I was under the impression that the errors would be sent to Tesla's neural system to advance the AP software, and then updates would be sent out to all cars so that this error doesn't happen again at this intersection.

I think the issue is that, up to now, Tesla did not have a supercomputer powerful enough to process all the data from the entire fleet. Not every single error from the fleet was automatically fed into NN training; Tesla was likely only processing a small fraction of the data, which is why you are not seeing a fix for your issue. It is possible that your error was never collected. Tesla hopes to change that with its new, much more powerful supercomputer, Dojo. With Dojo, we might see the fleet "learn" faster. But still, with such a large fleet, the amount of data is insanely big; it might take even Dojo a long time to train on all of it.
 
To add to that, some issues are edge cases that an encompassing rule won't fix. There are millions of variables, to the point where Tesla can't just enter a line of code saying "don't do this" and have it solved without breaking a list of other things.
 
Every day at the exact same intersection, my M3LR scares me by swerving hard to the right.
Oh, that’s heart-pounding scary. So is city driving, though. I want Tesla to improve summon, come to me and smart park
 
I want Tesla to improve summon, come to me and smart park
It's wild to me that Tesla still sells Enhanced Autopilot, this very second, and lists a feature that it unequivocally does not have (screenshot below, from a moment ago). At least with FSD there is that silly verbiage about regulators, but nothing within Enhanced Autopilot readily tells a person that they are buying something that does not exist.

[Screenshot: Enhanced Autopilot feature list]
 
Every day at the exact same intersection, my M3LR scares me by swerving hard to the right. Why is Tesla not seeing these repeated errors and fixing their Autopilot accordingly?
I've had a very favorable experience with FSD over the last couple of years. However, there are locations where the vehicle behaves erratically. I usually disengage for those areas and don't normally try again until the next update (improvements, even from the neural system, only come with updates, btw). Those iffy areas have gradually disappeared, and most of my drives are now intervention free. Just got v12.3.3 today, and even in this rain I see significant improvements in driving smoothness and comfort. I would like to see more aggression though, especially when driving in NYC.

What version are you at?
 
What version are you at?
12.3.1 and then 12.3.3 - The last time it drove itself between two new lanes at a stop light, the steering wheel weaved back and forth more slowly as it tried to decide which lane to use, before I took over and picked a lane. It is indeed an improvement over 12.3.1, where it was jerking the wheel back and forth quite violently. However, this should not be happening at all. It should be able to pick a lane on its own and not waffle back and forth while centering itself between two lanes.
 
I want Tesla to improve summon, come to me and smart park
I want summon BACK. I had it in my first two Teslas and I'm pissed I don't have it in my newest one. Even if it was just "Actually Stupid Summon" just so that I can properly park all the way into my garage and retrieve it daily. No fancy steering or anything. Just forwards and backwards. I don't understand why they couldn't provide this to the current hardware cars while they sort out/figure out "Actually Smart Summon" and get it released.
 
It should be able to pick a lane on its own and not waffle back and forth while centering itself between two lanes.
Improvements are coming very quickly now. Hopefully, your issue in that area will be fixed soon.
 