One of the probable changes in FSD's progression is that v12 will address a range of cases that Tesla has pretty clearly avoided to date, or kept low on the priority list. This is both good and bad.

----------
TLDR: with v12, Tesla may have a breakthrough, but the very things that allow it to work surprisingly well with little human-generated code also represent a reduction in Tesla's ability to control how it decides to behave. Tesla's past ability to choose when and how much to emphasize certain scenarios, like school zones, is now possibly unavailable or much more difficult to exercise. It may prove necessary to provide the ML network with useful data beyond the navigation map info, giving it a chance to perform better in scenarios that Tesla can no longer easily bypass or forbid it from attempting.
----------

Consider, for example, the whole issue of stopping for school buses and behavior in school zones. This has been discussed off and on (here is a 2022 thread), but my conclusion is that Tesla has avoided addressing it because of the unfavorable risk-to-reward ratio. If they attempt to deal with this complex problem and mostly but not completely succeed, it gives FSD drivers a false impression that they can probably let the car handle it - with particularly harsh consequences if it doesn't.

Of course, this false-confidence problem exists in nearly all aspects of FSD operation. We could point to any number of other driving cases that are presumably on the to-do list, but school bus / school zone behavior is a particularly complex and sensitive example.

With the prior major versions of FSD, there was a clear opportunity for the Autopilot team to make choices, coding whatever level of partial effort or stopgap band-aids they deemed necessary to keep these issues on the shelf while they worked on the rest of the problem.

But if we take Elon and Ashok's explanations as a marker for the continuing nature of v12 end-to-end FSD, they won't be writing a module to recognize school buses or school zones at all, whether for the purpose of improving them or deliberately ignoring them. It would seem that the training set will include them, and the learning network will try to mimic what it sees.

I think this could work reasonably well for the problem of school buses with flashing lights and flip-out stop signs. As noted in prior discussions, it's more challenging to deal with school zones that command altered behavior "When Children Are Present" or "When School Is in Session". To me, the latter suggests that the map data should include this information in a form that's available and understandable to the neural network. As a counter-argument, perhaps the system will essentially teach itself to read signs and conclude when school is in session by looking at school parking lots and traffic patterns.

Again, this is only an example. The question for v12 edge cases is whether simply providing an enormous quantity of training data, covering many past examples over many calendar dates, will prove sufficient, or whether Tesla needs to provide more inputs about the local community environment, giving the system at least a chance to draw the correct conclusions despite its lack of general real-world knowledge.
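To make that concrete, here's a minimal sketch, in PyTorch, of how a few non-visual context signals could be fused with camera features in an end-to-end policy network. Everything in it - the class, the feature names, the dimensions - is my own hypothetical illustration, not Tesla's actual architecture:

```python
import torch
import torch.nn as nn

class PolicyWithContext(nn.Module):
    """Toy end-to-end driving policy that fuses vision features with a
    small vector of non-visual context. All names are hypothetical."""

    def __init__(self, vision_dim=512, context_dim=8, control_dim=3):
        super().__init__()
        # Stand-in for the camera backbone (the real thing would be a
        # large video network, not one Linear layer).
        self.vision_encoder = nn.Linear(vision_dim, 256)
        # Encodes auxiliary context, e.g. [school_in_session,
        # is_school_day, sin(local_hour), cos(local_hour), ...].
        self.context_encoder = nn.Linear(context_dim, 32)
        # Fused features -> control outputs (steer, accel, brake).
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(256 + 32, control_dim))

    def forward(self, vision_feats, context):
        v = self.vision_encoder(vision_feats)
        c = self.context_encoder(context)
        # Concatenate and map to controls; without the context input,
        # the network has no channel through which "school is in
        # session" could ever influence behavior.
        return self.head(torch.cat([v, c], dim=-1))
```

The only point of the sketch is that a signal like school-in-session has to be wired in as an input alongside the cameras before the network can learn to use it.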

And even supposing Tesla were to make a developmental policy decision to deprioritize school zones for now, how could they do that?
By auto-curating the training clips to remove school-in-session scenarios (a toy version of such a filter is sketched after this list)? That seems counterproductive and possibly dangerous.
By writing special-case code to bypass the issue and let the L2 driver handle it, like prior versions? That runs quite counter to the philosophy described, and becomes a slippery slope back to tens or hundreds of thousands of lines of code to handle special cases and operational guardrails.
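For illustration only, here is what that kind of auto-curation might look like as a data-pipeline step. The clip format and tag names are invented for the example; this is not Tesla's pipeline:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """Hypothetical training clip with scenario tags attached by an
    auto-labeling stage (tag names invented for this sketch)."""
    clip_id: str
    scenario_tags: frozenset  # e.g. frozenset({"school_zone", "rain"})

EXCLUDED = frozenset({"school_zone", "school_bus"})

def curate(clips, excluded=EXCLUDED):
    # Drop every clip that touches an excluded scenario. This is the
    # danger noted above: the network then never sees the situation
    # at all, rather than learning to handle it cautiously.
    return [c for c in clips if not (c.scenario_tags & excluded)]
```

The one-liner makes the trade-off obvious: tag-based exclusion is all-or-nothing, which is exactly why it seems counterproductive and possibly dangerous.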

I maintain my optimism for the v12 approach, but as I continue to think about it, I do think it needs more attention to the availability of information that humans absorb just by living in society: the weather report, the school schedule, awareness of scheduled or impromptu public events, and so on.

Most of us agree that it's too much to expect high-level AGI to emerge from under the glove box. I'm just saying that the system may need more sources of information to draw upon than just the nav route and the camera inputs. And I'm saying that I think Tesla will find it challenging to pick and choose scenarios that they want to deprioritize in the meantime. They already noted the problem of the system learning human behavior at stop signs - behavior that displeases NHTSA and runs counter to what high school Drivers Ed videos teach.

In the coming months, I expect a number of tweets from Elon about wonderful and surprisingly good behavior emerging from the system training. The question is, how much unacceptably and surprisingly bad behavior could come about because the system doesn't know, and presently has no way of being told, information that human citizens take for granted? And further, in what ways could non-real-time, non-visual information be made available, enabling it to become more intelligent, even if not "generally" intelligent?

Finally, I note that any such information, if added to the FSD computer inputs during operation, must also be included in the set of training data around each clipped scenario - otherwise the system can't learn to associate it and use it in conjunction with the camera data. Hopefully, Tesla's data and telemetry infrastructure is flexible enough to allow experimentation with these kinds of concepts. Ashok mentioned that they were looking at the system being able to take verbal suggestions or directives from the human operator/passenger. That makes me optimistic that they could extend the data and telemetry set to include other, yet-to-be-determined forms of information.
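As a rough illustration of that requirement - again with invented field names, not Tesla's actual telemetry schema - every logged training sample would have to carry the same auxiliary fields the onboard network receives at inference time:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingSample:
    """Hypothetical logged sample. Any auxiliary signal the car's
    network gets at inference time must also be recorded here, or
    the model can never learn to associate it with the cameras."""
    camera_frames: List[bytes]   # encoded video from the clipped scenario
    nav_route_segment: str       # existing navigation/map input
    school_in_session: bool      # example added auxiliary signal
    local_time_utc: float        # example added auxiliary signal
    driver_controls: List[float] = field(default_factory=list)  # training labels
```

If a field like school_in_session exists only in the car and not in the logged clips, there is nothing for the training process to correlate it with.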
 
Something similar happened to me today while ramping onto the highway in the right lane along a tight right-hand curve, although I didn't let it go far enough to see if it would correct itself. I think it was caused by a faded right lane line.

These sorts of lane drifts are very predictable for me though and aren't an issue because I'm paying attention.

There are some FSD versions where I get too frustrated by the issues and avoid using it at all, but 11.4.7.3 has been great for me as an L2 driver assist, so I use it at every opportunity, despite these failures.
It totally ignored a red light after the off-ramp at the end because it thought a stop sign meant for another lane was the relevant thing to watch for... Overreliance on map data got Tesla here - the company that supposedly hates maps.
 
Yes, mentioned in the other thread. Backseat passenger sounded like he nearly needed a change of underwear.

Not an assist. Can’t really imagine this getting magically fixed in v12. But hopefully it will be!

There is just not enough visibility to complete this ULT 100% of the time. The cross traffic moves along far too fast, and the front cams don't see far enough to the left (or to the right, for that matter) to reliably execute this turn.
 
The majority of drivers, if they choose to do this turn.

I think people give human drivers far too little credit for their incredible capability. They rarely make (potentially fatal) mistakes. Of course there are drivers who would never attempt this.
Funny you should say this.

The reason most drivers don't attempt such ULTs is that they are risky. By avoiding such risky turns, humans are safer!
 
There is just not enough visibility to complete this ULT 100% of the time. The cross traffic moves along far too fast, and the front cams don't see far enough to the left (or to the right, for that matter) to reliably execute this turn.
Yet Tesla, who certainly knows exactly how well their cars can see, identify and track approaching traffic, chooses to waste their time and money paying employees to test this intersection.
 
Even the most casual observer who has reviewed Chuck Cook's video library knows that the present sensor mix on the existing Tesla fleet is inadequate to reliably handle that ULT. Placing side-facing cameras in the headlight units might eliminate the blindspots, as would LIDAR and radar. Unfortunately, Elon Sisyphus keeps pushing the same boulder up the same hill only to have it roll back down. The most elegant NN can't predict what it can't sense.
 
Fortunately, Tesla is not the most casual observer.
 
Is there any actual evidence that a V12 exists, beyond someone saying, "When we figure out something that works, we're going to name it V12."

Ashok Elluswamy retweets the Elon FSD v12 livestream with this comment:
This end to end neural network approach will result in the safest, the most competent, the most comfortable, the most efficient, and overall, the best self-driving system ever produced. It’s going to be very hard to beat it with anything else!

Unless you are claiming Ashok, Elon and Tesla are outright lying to investors, please take your FUD elsewhere.

I can understand skepticism regarding autonomy: will it be possible on current hardware? Will it be possible within X years/ever?

But the "fraud" argument is bull*sugar* and comes from people who have themselves achieved nothing and only like to criticise others.
 

The tweet is just marketing. "It’s going to be very hard to beat it with anything else!" Why? Parking solutions with no neural networks but with USS are still beating Vision hands down, a full year after the decision was made to ditch the USS. So far there is no proof that even the trivial task of self-parking is possible with Vision, let alone superior to a decade-old USS solution.
 
I think it's totally fair to be dubious that V12 is, or once ready for public release will be, unbeatably good- esp. given all the previous hyperbole that didn't pan out.

Hell, we're 2.5 years out from the original "who needs radar" vision change, and AP's top speed still isn't back to parity with the old HW2 system.

But that's a far cry from doubting V12 exists.
 
The tweet is just marketing. "It’s going to be very hard to beat it with anything else!" Why? Parking solutions with no neural networks but with USS are still beating Vision hands down, a full year after the decision was made to ditch the USS. So far there is no proof that even the trivial task of self-parking is possible with Vision, let alone superior to a decade-old USS solution.
That was not the discussion and I took no such position.

This tweet is proof that the livestream (during which Ashok was a passenger, by the way) featured an FSD build ("v12") that used end-to-end neural networks. No more, no less.
 
Something similar happened to me today while ramping onto the highway in the right lane along a tight right-hand curve, although I didn't let it go far enough to see if it would correct itself. I think it was caused by a faded right lane line.

These sorts of lane drifts are very predictable for me though and aren't an issue because I'm paying attention.

There are some FSD versions where I get too frustrated by the issues and avoid using it at all, but 11.4.7.3 has been great for me as an L2 driver assist, so I use it at every opportunity, despite these failures.
I find the wipers super annoying; Tesla needs to at least let us turn them off. Lane drift is also annoying - even though I always stop it, I get tired of it. The end result is that I use FSD less. It was getting to the level where I could use it with passengers, but no more.

I also get tired of the auto high beams flashing on and off, and not turning on when appropriate and needed. They used to work better, IIRC.

I guess Tesla has all teams working on V12 instead of these driver-experience quality issues. Hopefully V12 will have these fixed, but I am sure there will be new bugs as well. I hope FSD eventually will be pleasant to use, reducing stress on drivers and passengers - just like HW1 Autopilot was on limited-access divided highways in my 2015 Model S.

GSP
 
Ashok Elluswamy retweets the Elon FSD v12 livestream with this comment:
This end to end neural network approach will result in the safest, the most competent, the most comfortable, the most efficient, and overall, the best self-driving system ever produced. It’s going to be very hard to beat it with anything else!
So they’re beating Waymo now? Doesn’t seem very likely. What about a goal of going 30 city miles per DE (disengagement) instead? That would be a 100% improvement, and it seems more realistic than going from 15 miles to 30,000 miles.
Unless you are claiming Ashok, Elon and Tesla are outright lying to investors, please take your FUD elsewhere.
Have you been living under a rock the last six years? Robotaxi in 6-12 months since 2016(tm). Trust is earned. Show me the DE data from the fleet and get a robotaxi testing permit from the DMV.
 
I make a Chuck-level ULT (6 lanes, ~50mph) less than once a year while driving all over the west coast.

FSDb's main problems are lane selection and lane drift. I'd much rather have 99% reliability in those than ULT reliability.
FSD still can't properly handle roundabouts that 100% of the people here can. Roundabouts are definitely much easier than CULT.

CULT is a failure of urban planning - not an everyday situation FSD should handle.