Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.
I've been testing FSD since Jan 2022. The software improved from Jan 2022 through roughly Nov 2022. Since then it's been one disaster after another; often the newer release merely fixed a problem created by the prior one. The latest is no exception. Here's a summary of the latest disasters:

11.4.7.3 New problems
  • Defeats lane-change-on-command requests
  • Couldn't follow its own route. For the first time in 21 months, my Model 3 entered the 5 freeway, didn't attempt to merge, and was forced to exit the freeway. Traffic was light.
11.4.7.2 New problems
  • Screen blacked out twice while car was in motion.
  • Waffling back and forth between two left-turn lanes
  • Moved into median turn lanes when the navigation said to go straight
  • To make a left turn, the car moves from the left lane to the right lane and then back left, often missing the turn, all within 0.3 miles.
11.4.7 New problems
  • Stopped at a crosswalk when no one was there
  • Right turn signal on a left turn
  • Accelerated to 50 mph between two slow lanes on a city street
  • Occupied two left lanes at a red light because it couldn't decide which left-turn lane to take
Problems still not fixed on earlier versions
  • False turn signals
  • Unnecessary lane changes in the city, even with light traffic. Right, left, right, left, and then back to the original right lane, all within one mile.
  • Prematurely entered the wrong left-turn lanes
  • Excess turn radius that overshoots left-turn guide lanes
  • Excessive offset to the left prior to a right turn; excessive offset to the right prior to a left turn
  • The need to get out of the rightmost lane
  • Veered into left turn cut outs
  • False chimes
  • Phantom braking


"Full Self Driving Tesla" by rulenumberone2 is licensed under CC BY 2.0.
 
Karpathy leaves and a few months later it goes to *sugar*. Karpathy is a great scientist.

Ashok Elluswamy seems to me more of an Elon-style booster.

And Elon's public decompensation, his five-days-a-week in-office mandate, and the declining stock price are not at all helpful for attracting and retaining these kinds of 1% employees, who are stunningly talented in a hot field and aggressively recruited by employers who are nicer and pay much more.
 
Tesla is focused on v12. It’s obvious. They even announced they were changing course and now pursuing the end-to-end approach.

I’d rather they focus on that and get us v12 sooner than spend development resources on a version which will die with the next release anyway.

For those who bought FSD without understanding what they were getting: we are pushing new frontiers here. Nobody in the world has an autonomous vehicle yet. Nobody. And definitely nobody that takes a generalized, non-geofenced approach. If you're not prepared that FSD will be an evolving program over time and that there will be changes in direction, you probably didn't research your purchase well enough.
 
If you’re not prepared that FSD will be an evolving program over time and that there will be changes in direction, you probably didn’t research your purchase well enough.
I'm not being glib. I am genuinely wondering if FSD is evolving at all. It's definitely changing, but is it getting better? In some threads, those who have it report that it's no better at certain basic tasks than it was years ago.

I really wonder if, in five years, somebody could take everything @Eddie123 wrote in his first post, replace version 11 with version 16, replace 2022 with 2027, and the entire post would remain accurate.
 
Tesla is focused on v12. It’s obvious. They even announced they were changing course and now pursuing the end-to-end approach.

I’d rather they focus on that and get us v12 sooner than spend development resources on a version which will die with the next release anyway.

V11 was supposed to be the new-technology version they focused on for deployment, abandoning V10. I remember this same discussion happening before. And non-FSD Autopilot (the one most people are using) was supposed to be merged a while ago, and that isn't happening. V11 FSDb is now decent enough on divided freeways, except for lane selection before an exit in traffic (it's too optimistic that it can merge into the proper lane at the last moment).

There are many fundamental technical unsolved problems with the V12 end-to-end type of system, particularly on controllability (making it actually obey rules when you don't hand write those rules in).

For those who bought FSD without understanding what they were getting: we are pushing new frontiers here. Nobody in the world has an autonomous vehicle yet. Nobody. And definitely nobody that takes a generalized, non-geofenced approach.
There's a reason the autonomous vehicle makers geofence, and it's no longer because they have fixed ultra-HD maps for localization with lidar (they do all machine learning with occupancy nets or the like now); it's to contain the consequences of mapping errors in their databases, which are apparently common. Tesla doesn't bother, so you get poor performance wherever there's a mapping error, and that hurts our performance all the time, everywhere.

The other top vendors could turn off the geofence and still get good performance, with some errors. That level of error is not acceptable for L4 as a commercial product. Tesla will need to do the same thing when it gets close too.

If you’re not prepared that FSD will be an evolving program over time and that there will be changes in direction, you probably didn’t research your purchase well enough.
I knew it would evolve over time. One expects the evolution to be 75% improvement; it is not. The OP's complaint that there have been many regressions in basic driving behavior is not wrong; I have also experienced no sustained improvement in a year. It should be a 'beta', not the 'alpha' or 'dev build' that it is now.

I think it's not sufficiently well managed and suspect there is internal turmoil and insufficient investment.

The current car fleet will never be L4. They should design and deploy a product with a well-defined scope and real movement toward reliability in what it actually can do (i.e. high L2 to highway L3), keep internal R&D on getting closer to L4 autonomy (stereoscopic high-resolution cameras, new-gen compute, and a few imaging-radar sets), and gather fleet data to take a first crack at fixing mapping errors.

Realistically, Tesla's FSDb is at the same operating parameters as Mobileye SuperVision: Mobileye SuperVision™ | The Bridge from ADAS to Consumer AVs

And that is going into more cars soon. And it will be a more reliable and consistent product that gives value to drivers, and it won't cost $12,000.
 
There are many fundamental technical unsolved problems with the V12 end-to-end type of system, particularly on controllability (making it actually obey rules when you don't hand write those rules in).
Good point. Troubleshooting and correcting specific outliers in an end-to-end neural net is not the same level of effort as, for example, changing a couple of parameters in a C++ following distance subroutine.
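To make the contrast concrete, here's a minimal Python sketch of a deterministic following-distance rule (all names and values are invented, not Tesla code): adjusting behavior means editing one constant, whereas an end-to-end net would need new demonstrations and a retrain.

```python
# Hypothetical sketch of a hand-written following-distance rule, to show
# why a deterministic planner is easy to adjust: one parameter edit
# changes behavior predictably. Names and values are invented.

FOLLOW_TIME_S = 2.0  # hand-tuned parameter: desired time gap behind the lead car

def min_follow_gap_m(ego_speed_mps: float) -> float:
    """Minimum allowed gap to the lead vehicle, in meters."""
    return FOLLOW_TIME_S * ego_speed_mps

def should_brake(gap_m: float, ego_speed_mps: float) -> bool:
    """Brake when the actual gap falls below the rule's threshold."""
    return gap_m < min_follow_gap_m(ego_speed_mps)
```

At 30 m/s the threshold is 60 m; bumping `FOLLOW_TIME_S` to 2.5 moves it to 75 m with a one-line edit. The equivalent change to an end-to-end policy means curating new training data and retraining.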
 
The parallels between 2016-2019 and today are nearly identical: "Autonomous driving this year."

My fault for believing this shyster when I bought six years ago. Why would ANYONE believe him now? Even a little bit? Elon has zero credibility. And since he is personally the "communications department", the company has an equal share in the lies.

People buy Teslas for many valid reasons, but ADAS/FSD by Tesla is a fool's product. I keep hoping I'm wrong, but there is zero evidence otherwise, especially considering the competition, which is much further ahead.

FSD from Tesla in any comparable form including Level 4/5 remains years away. I'll likely never see it.
 
Good point. Troubleshooting and correcting specific outliers in an end-to-end neural net is not the same level of effort as, for example, changing a couple of parameters in a C++ following distance subroutine.
There's nothing that says they can't layer rules on top of the E2E solution that override its control output. Tesla can still identify objects in the scene and act on that if they want to.
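As a thought experiment, a rules-on-top guard could look something like the sketch below. All names are hypothetical, and it assumes a separate, trusted object detector runs alongside the end-to-end policy, which is exactly the point of contention here.

```python
# Hypothetical sketch of a deterministic guard layer over a black-box
# end-to-end policy: the policy proposes controls, and hand-written
# rules can veto or clamp them. Every name and number is invented.

from dataclasses import dataclass

@dataclass
class Control:
    accel_mps2: float   # positive = throttle, negative = brake
    steer_rad: float

def guard(proposed: Control, obstacle_gap_m: float, ego_speed_mps: float) -> Control:
    """Override the learned policy's output when a separately detected
    obstacle is closer than a crude stopping distance."""
    stopping_dist_m = ego_speed_mps ** 2 / (2 * 6.0)  # assume 6 m/s^2 max braking
    if obstacle_gap_m < stopping_dist_m:
        return Control(accel_mps2=-6.0, steer_rad=proposed.steer_rad)
    return proposed
```

Whether this is practical in the real system depends on having reliable object detections and spare compute next to the E2E net, which the reply below disputes.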

Fact is, nobody has solved autonomy. And I'm pretty sure nobody here is an expert. So declaring that something will happen or will not happen is just an opinion. Lots of people said SpaceX would never reliably land rockets on a ship in the ocean either.
 
The parallels between 2016-2019 and today are nearly identical: "Autonomous driving this year."

My fault for believing this shyster when I bought six years ago. Why would ANYONE believe him now? Even a little bit? Elon has zero credibility. And since he is personally the "communications department", the company has an equal share in the lies.

People buy Teslas for many valid reasons, but ADAS/FSD by Tesla is a fool's product. I keep hoping I'm wrong, but there is zero evidence otherwise, especially considering the competition, which is much further ahead.

FSD from Tesla in any comparable form including Level 4/5 remains years away. I'll likely never see it.
I hear you, and in many respects I share your skepticism and frustration - and this is coming from a relatively new Tesla owner (3/30/2023). The "FSD this year" mantra has been strong every year, it seems. Musk stated that v12 will be the first non-beta FSD version. When asked why it has taken so long to get FSD to market, he essentially stated that every time they thought they were getting closer, they'd hit a wall that required up-leveling the entire system and codebase to account for an altogether different approach to solving certain problems.

This seems to be exactly what has occurred again with v12 - they are throwing out the 300k lines of code in 11.4.x and starting fresh with an E2E AI-based system, which looks like another significant up-leveling that will introduce new uncertainties and unknowns, given it's for the most part an altogether new approach and a new FSD system (and IIRC this new system was originally written for Optimus - so they are adapting the bot codebase for EVs). I don't have high hopes that we're really going to see a true production version in 2024, given we're again dealing with a new FSD codebase/system for all intents and purposes.

This increasingly feels like a moment where a hybrid of the two systems will be needed - kinda like the Three Laws of Robotics, but adapted to AI-based vehicles. It's likely that some laws will be immutable - perhaps these are already hardcoded into the much smaller codestream - no one knows for sure.

It has made little sense to me why Tesla is pouring huge amounts of money and resources into chasing L4/L5. They should be working on getting a system that provides real-world L2/L3 capabilities into production in 2024/2025, and then focus on figuring out how to move to a true L4/L5 system. This approach is why their TACC/AP systems aren't as good as they could be - and why they continue to attract increasing legal and regulatory scrutiny that otherwise would not exist, or at least not to the current extent.
 
I went to search, plugged in FSD with an "older than" date of three years ago, and came up with this.


This is why so many people have given up on any kind of timeline. I still believe it will happen in my life (I hopefully have a few decades left), but next year? Or the year after? I'd bet my retirement that it won't.
 
There's nothing that says they can't layer rules on top of the E2E solution that override its control output. Tesla can still identify objects in the scene and act on that if they want to.
Actually, that's really, really difficult. With end-to-end training there isn't the ability to easily identify objects and make rules. And there isn't the computational budget to run two entire full-featured ML models simultaneously; they already do heroic tuning to reduce latency and time jitter.
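Rough arithmetic illustrates the budget concern. The numbers below are assumed for illustration (not Tesla's actual frame rate or model latencies): at an assumed 36 fps there are only about 28 ms per frame, and models sharing one accelerator add their latencies.

```python
# Back-of-envelope sketch (assumed numbers, not Tesla's) of why running
# two full models on one accelerator strains a fixed per-frame budget.

CAMERA_FPS = 36                        # assumed camera frame rate
FRAME_BUDGET_MS = 1000 / CAMERA_FPS    # ~27.8 ms available per frame

def fits_in_budget(model_latencies_ms):
    """If models share the same accelerator, their latencies add up;
    the total must fit inside one frame period."""
    return sum(model_latencies_ms) <= FRAME_BUDGET_MS
```

Under these assumptions, one 20 ms model fits comfortably, but two such models together blow the budget.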

Fact is, nobody has solved autonomy. And I'm pretty sure nobody here is an expert. So declaring that something will happen or will not happen is just an opinion. Lots of people said SpaceX would never reliably land rockets on a ship in the ocean either.
Many more organizations are working on autonomy than on rockets. SpaceX was rapidly improving its rockets and was leading early in Falcon development. Tesla FSDb is not improving rapidly, especially compared to competitors.
 

It has made little sense to me why Tesla is pouring huge amounts of money and resources into chasing L4/L5. They should be working on getting a system that provides real-world L2/L3 capabilities into production in 2024/2025, and then focus on figuring out how to move to a true L4/L5 system. This approach is why their TACC/AP systems aren't as good as they could be - and why they continue to attract increasing legal and regulatory scrutiny that otherwise would not exist, or at least not to the current extent.
This
 
With end-to-end training there isn't the ability to easily identify objects and make rules.
I'd love to better understand this. Sounds like Asimov's Three Laws of Robotics cannot be implemented in a neural net E2E AI engine. If that's true (although you said "hard", not "impossible") that would be disappointing.

Let's say that Dojo watches 10B people making a right turn at an open stop sign, but only 4 of them came to a full stop. Would every FSD instance roll that stop? Most likely it's very safe, but clearly the NHTSA will not permit that for an autonomous vehicle.

Or speed limit? Would the driver/passenger no longer have options for speed limit offset? Is that similarly difficult?

Heck, ideally there would be a low/medium/high level of personalized aggressiveness - speed, follow distance, lane changes, etc. Even better, adjust it based on my real-life driving style, as long as it's consistent with my preferences. It could probably build such a profile while the cameras calibrate, with continuous updates after that.

Bottom line - there will have to be a way to implement rules based on human intelligence without AI override. Maybe hard but it doesn't feel like reality otherwise. I hope they find an answer.
 
I'd love to better understand this. Sounds like Asimov's Three Laws of Robotics cannot be implemented in a neural net E2E AI engine. If that's true (although you said "hard", not "impossible") that would be disappointing.
That's science fiction. The translation from linguistic generalities to policy in a complex perceptual world is decades away.

Let's say that Dojo watches 10B people making a right turn at an open stop sign, but only 4 of them came to a full stop. Would every FSD instance roll that stop? Most likely it's very safe, but clearly the NHTSA will not permit that for an autonomous vehicle.

With end-to-end training you don't even have a defined internal representation for 'stop sign' assigned by humans. The network will generate something you can post hoc identify in some examples as a stop sign, but without lots of explicitly labeled image pieces (the Karpathy solution they're moving away from) you don't even know which of the millions of video clips contains a stop sign and people stopping. You could build a filter with ML and some human-labeled seed data, but even then it won't be accurate and rules-based.
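The imitation-learning worry in the stop-sign question above can be made concrete with a deliberately crude toy: a policy that just reproduces the most common demonstrated behavior will roll the stop if nearly all demonstrators did. The data below is invented, purely for illustration.

```python
# Toy illustration of behavior cloning's core risk: the policy inherits
# whatever most human demonstrators do, with no rule saying otherwise.
# All data here is invented.

from collections import Counter

def cloned_action(demonstrations):
    """Crudest possible behavior cloning: pick the modal action."""
    return Counter(demonstrations).most_common(1)[0][0]

# If 9,996 of 10,000 demonstrated right turns are rolling stops,
# the cloned policy rolls the stop too.
demos = ["roll"] * 9996 + ["full_stop"] * 4
```

A real end-to-end net is vastly more complex than a mode over labels, but the same pull toward the demonstrated majority applies, which is why "make it always stop" is not a simple edit.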

Or speed limit? Would the driver/passenger no longer have options for speed limit offset? Is that similarly difficult?
That might be a little easier, since external speed limits come in through maps, and you can modify that value with deterministic, non-neural computer code before it is inserted.
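A sketch of why that path stays tractable: the mapped limit is an explicit number, so ordinary deterministic code (hypothetical names below, not any real system's API) can adjust it before any neural component sees it.

```python
# Hypothetical sketch: the speed limit arrives from map data as a plain
# number, so a driver-chosen offset is just arithmetic applied before
# the value reaches the driving policy. Names and the cap are invented.

def effective_speed_limit(map_limit_mph: float, user_offset_mph: float,
                          hard_cap_mph: float = 85.0) -> float:
    """Apply the driver's offset to the mapped limit, with a hard cap."""
    return min(map_limit_mph + user_offset_mph, hard_cap_mph)
```

Contrast this with follow distance or lane-change aggressiveness, which have no single explicit number flowing into an end-to-end net that outside code could tweak.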


Heck, ideally there would be a low/medium/high level of personalized aggressiveness - speed, follow distance, lane changes, etc. Even better, adjust it based on my real-life driving style, as long as it's consistent with my preferences. It could probably build such a profile while the cameras calibrate, with continuous updates after that.
And all of that is very, very difficult with end-to-end training of perception plus policy, because there is no natural place to feed those inputs in. End-to-end training does not necessarily generate representations that external computer code can reach in and modify reliably for speed, follow distance, or lane changes.

A human driver might follow some verbal directions, but even then you can't guarantee they will behave exactly as you want, and there's certainly no way to twiddle their driving neurons to produce the right behavior.

Bottom line - there will have to be a way to implement rules based on human intelligence without AI override.
Yeah, that's the current V11 system, where ML is mostly on the perception side and the planning is tons and tons of hand-tweaked rules plus a classic mathematical-optimizer setup.
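That architecture can be caricatured in a few lines: perception feeds a classical cost function with hand-tuned weights, and the planner picks the cheapest candidate maneuver. Everything below is an invented illustration of the pattern, not Tesla's code.

```python
# Sketch of a V11-style classical planner: candidate maneuvers are
# scored by a cost function whose weights are hand-tuned rules an
# engineer can edit directly. All names and numbers are invented.

# Hand-tuned weights -- the "tons of tweaked rules" layer.
W_COMFORT, W_PROGRESS, W_CLEARANCE = 1.0, 2.0, 5.0

def cost(comfort_penalty: float, progress: float, clearance_penalty: float) -> float:
    """Lower is better: reward progress, penalize discomfort and
    getting close to other road users."""
    return (W_COMFORT * comfort_penalty
            - W_PROGRESS * progress
            + W_CLEARANCE * clearance_penalty)

def pick_maneuver(candidates):
    """Choose the candidate maneuver with the lowest total cost.
    `candidates` maps a name to (comfort_penalty, progress, clearance_penalty)."""
    return min(candidates, key=lambda name: cost(*candidates[name]))
```

When such a planner misbehaves, the fix is often just retuning a weight; the thread's point is that end-to-end training removes these directly editable knobs.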

If you go to end-to-end training, you will gain quite a bit in natural behavior right away (from using human drivers), at great cost to interpretability and controllability, which will be very hard to win back.
Maybe hard but it doesn't feel like reality otherwise. I hope they find an answer.
It's harder than most people imagine. Everything in autonomous driving is proving to be much more difficult than even machine-learning practitioners thought. Modern machine learning is smashing through Go, text summarization, translation across many language pairs, law and medical licensing exams, and image generation. But driving is substantially harder---and, most of all, the allowable error rates are so much smaller.
 
I expect that V12 will be even more of a headache. V11 uses human-written code to control the vehicle; when issues crop up, they reprogram to try to correct them. With V12 and end-to-end AI, they can't reprogram: they have to retrain, which will take much more time and effort to correct problems. It's possible that some of the persistent problems exist because they are already controlled partially by AI and they haven't been able to retrain them away.
 
Actually, that's really, really difficult. With end-to-end training there isn't the ability to easily identify objects and make rules. And there isn't the computational budget to run two entire full-featured ML models simultaneously; they already do heroic tuning to reduce latency and time jitter.
Tesla is identifying objects with their neural networks and applying rules *right now*. There's no reason they can't run any of those networks in parallel with an e2e network.

I'm curious how you think you know what the computational budget is, and how close Tesla is to the limit of that budget. How many clock cycles were freed up by eliminating all that C++ code? How many other networks no longer need to run *in series* in an end-to-end solution? Do you have a working e2e network yourself? If not, you're just making stuff up.

Tesla has done things to reduce latency—one example is bypassing the signal processing of the image data coming from the camera sensors—but that is to reduce the overall delay from photons in to processing and to remove unnecessary delays that neural networks don’t need. Processing adds noise relative to the raw signal input, so there’s a benefit too.

But you have zero evidence whatsoever that they will be at the limit of their computational budget and will be unable to have any layers running on top of the e2e network.
 