It's hanging out behind a semi-truck on a 5-lane highway; I don't need another cracked windshield. A manual turn signal doesn't influence the lane change, which is kind of disappointing.
Lane change requests are ignored about 5 to 10% of the time on my car. Perhaps that's what happened. Immediately cancel the request (half press of the stalk) and request it again.
Almost missed my exit because it's too conservative on merging, so I had to take over. I might try the 'Aggressive' behavior setting and see what happens.
The car will sometimes pass a car despite an upcoming exit. If you don't interfere, you'll probably find that it has time to get back in the right lane and make the exit.

The biggest adjustment a new FSD user needs to make is learning that the car doesn't drive like you do, but that's usually okay.
 
What’s nice about where we’re at with FSD is we’re very close to going from the “frequently disengaging” regime to the “rarely disengaging” regime. Once you solve a core set of problems and the march of nines begins, disengagements become rare. This has several implications:

1. As Chuck Cook and others have pointed out, extra vigilance is ironically required now that it drives much better. Human nature dictates that when something is right 99% of the time, you’re less ready for the 1% failure than with a system that is only right 80% of the time. That 19-point gap makes a huge difference: it’s the difference between a system that messes up once every five minutes and a system that messes up once every hour and a half or more (a rough sketch of that arithmetic is at the end of this post).

A decade or more ago I remember hearing that Waymo (Google at the time) chose not to develop their driving system with customers at the wheel because of this very concern of complacency.

2. The utility, comfort, and enjoyment of using FSD as you go from 80% to 99% reliability is not like a 19% improvement. It’s very nonlinear, and one might say the 99% accurate FSD12.4 (or whatever) is several hundred times more useful and enjoyable than 11.4.9. This is why Elon chose around this time to start showcasing FSD to potential customers.
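To put rough numbers on that point about failure intervals, here’s a back-of-the-envelope sketch. The one-safety-relevant-decision-per-minute rate is just my stand-in assumption to make the arithmetic line up, not anything Tesla has published:

```python
# Rough sketch of the "march of nines" arithmetic above.
# Assumption (mine): the system makes about one safety-relevant decision
# per minute, so mean time between mistakes is 1 / (error rate) minutes.

def minutes_between_mistakes(accuracy, decisions_per_minute=1.0):
    """Mean minutes between mistakes for a given per-decision accuracy."""
    error_rate = 1.0 - accuracy
    return 1.0 / (error_rate * decisions_per_minute)

for acc in (0.80, 0.99, 0.999, 0.9999):
    m = minutes_between_mistakes(acc)
    print(f"{acc:.2%} accurate -> a mistake roughly every {m:,.0f} minutes "
          f"({m / 60:.1f} hours)")

# 80.00% accurate -> a mistake roughly every 5 minutes (0.1 hours)
# 99.00% accurate -> a mistake roughly every 100 minutes (1.7 hours)
# 99.90% accurate -> a mistake roughly every 1,000 minutes (16.7 hours)
# 99.99% accurate -> a mistake roughly every 10,000 minutes (166.7 hours)
```

Each added nine multiplies the interval between mistakes by ten, which is why the jump in perceived usefulness is so nonlinear.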
 
The biggest adjustment a new FSD user needs to make is learning that the car doesn't drive like you do, but that's usually okay.
A fair comment. I don't have V12 yet but very little in V11 had me thinking it was usually okay to let the car make the decisions.

One thing that has me hopeful for V12 is the slow speeds. As someone who aims for the speed limit and finds herself forced to go faster at times because of the flow of traffic, I am looking forward to seeing how I feel about V12's speed choices. It could be that what so many of you see as a flaw will be the positive that converts me to using FSD outside of controlled-access, divided highways.

All along, I've thought that the people who post in TMC wanting the AV future to come NOW will be very disappointed when it actually comes and the cars actually follow the traffic laws and safe driving practices like signaling, leaving plenty of space ahead, looking far down the road to plan for smooth stops and starts and traveling the speed limit.

What is unclear is how many of the general Tesla driving population share the "have to get to my destination seconds faster than anyone around me so need to tailgate in case someone slides between me and the car in front" and the "having a limit of 85mph on FSD is BS" attitudes that I frequently see amongst posters on TMC. If they don't, and in fact are very cautious in their driving, that may be why the NN has selected very cautious behaviour, especially with regard to speed.

Which means, the biggest adjustment those vocal FSD users are going to have to make is that they don't like that the car drives like the rest of us.
 
What if this whole "based on millions of video clips of humans driving" thing doesn't work and Tesla has to go back to explicit coding?

The whole "hand coding" thing is a strawman because nobody is arguing for 100% hand coding as a solution to FSD. So no, Tesla is never going back to explicit coding everything. And e2e does "work" since it is able to produce neural nets that can drive the car from A to B. The only real question is how safe/reliable can the current FSD hardware and e2e get and in what ODD? Can it get to be safe enough to remove driver supervision? Maybe Tesla needs better cameras or more cameras or radar, or maybe Tesla needs to add some small coding to "tweak" the e2e to handle some special edge case or maybe Tesla needs some new ML technique to actually get to eyes-off. But regardless, Tesla is never going to back to explicit coding for everything.
 
A fair comment. I don't have V12 yet but very little in V11 had me thinking it was usually okay to let the car make the decisions.

One thing that has me hopeful for V12 is the slow speeds. As someone who aims for the speed limit and finds herself forced to go faster at times because of the flow of traffic, I am looking forward to seeing how I feel about V12's speed choices. It could be that what so many of you see as a flaw will be the positive that converts me to using FSD outside of controlled-access, divided highways.

All along, I've thought that the people who post in TMC wanting the AV future to come NOW will be very disappointed when it actually comes and the cars actually follow the traffic laws and safe driving practices like signaling, leaving plenty of space ahead, looking far down the road to plan for smooth stops and starts and traveling the speed limit.

What is unclear is how many of the general Tesla driving population share the "have to get to my destination seconds faster than anyone around me so need to tailgate in case someone slides between me and the car in front" and the "having a limit of 85mph on FSD is BS" attitudes that I frequently see amongst posters on TMC. If they don't, and in fact are very cautious in their driving, that may be why the NN has selected very cautious behaviour, especially with regard to speed.

Which means, the biggest adjustment those vocal FSD users are going to have to make is that they don't like that the car drives like the rest of us.

I thought about this as I was using v12 for the first time this morning. One thing that made me uncomfortable in past releases was the car speeding down residential streets at speeds that were technically legal but felt too fast based on my surroundings. I didn't experience any of that this morning. Separately, there were a few times where the car didn't drive exactly like I would have driven, but it was technically fine, because in those moments (and there were only a few, surprisingly), the car's driving still felt natural, just different.
 
Driving on FSD Beta, on the roads that FSD Beta is typically driven on, with the disengagements that drivers have been making, is safer than driving a Tesla without active safety features, on the roads those Teslas are typically driven on, with the attributes of the drivers who go without those features.
The split data that you shared earlier (Autopilot vs FSD Beta vs no active safety) came from 2022, so there's some bias in who engaged these systems and when. In particular, back then it was 10.x, so potentially a larger share of that audience was hyper-vigilant enthusiasts accumulating miles; now there are many reports of typical owners newly trying out 12.x and keeping it engaged for whole trips, where previously they turned it back off in less than a mile. Similarly, even now FSD Beta warns when it detects poor weather, so that probably results in different types of usage, and in different drivers who would even make the trip in, say, snow, let alone choose to engage FSD Beta.

I do think it is likely that, even focusing on the various situations FSD Beta has been engaged in, say clear weather on straight city roads, the same drivers manually driving would have had more accidents than if they had engaged FSD Beta. But that's impossible data to collect, since that specific driver and situation happen only once, with FSD either engaged or not.

But even lacking that directly comparable data, it seems likely 12.x will result in much higher usage and hopefully much lower accident rate.
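To make the selection-bias point concrete, here's a toy calculation with completely made-up numbers. Even if FSD were exactly as safe as the same driver, condition for condition, concentrating its miles in easy conditions makes its headline per-mile crash rate look far better:

```python
# Toy illustration (all numbers invented) of selection bias in per-mile
# crash rates: identical per-condition safety, very different headlines.

# crashes per million miles by condition -- the same for FSD and manual,
# i.e. FSD is assumed exactly as safe as the driver in each condition
crash_rate = {"easy": 1.0, "hard": 10.0}

# where the miles are driven (millions of miles): FSD mostly engaged on
# easy roads in clear weather, manual driving covering everything else
fsd_miles = {"easy": 9.0, "hard": 1.0}
manual_miles = {"easy": 5.0, "hard": 5.0}

def headline_rate(miles):
    """Fleet-wide crashes per million miles, ignoring conditions."""
    crashes = sum(miles[c] * crash_rate[c] for c in miles)
    return crashes / sum(miles.values())

print(f"FSD headline rate:    {headline_rate(fsd_miles):.1f} per M miles")     # 1.9
print(f"Manual headline rate: {headline_rate(manual_miles):.1f} per M miles")  # 5.5
```

That's the confound: the headline numbers differ by roughly 3x purely because of where each mode gets used.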
 
Looks like Elon expects all cars with HW3/4 to be able to:
"3 million cars will be able to drive themselves with no one" "if not this year, no later than next year"

So looks like Robotaxis are coming for sure sometime in 2026. Then 5 million cars, then 10 million. :eek:

Sorry guys, that I was too pessimistic about this. Guess I was wrong. In retrospect I should have expected that the ChatGPT moment would get us there.

I was too simple minded to understand the enormous power of AI. Mea culpa.

This should all speed the transition to sustainable energy though. So we are all winners here.

And I apologize for misreading the Tesla safety data too. In the end, what is important is that they are on the path to safety.
 
What if this whole "based on millions of video clips of humans driving" thing doesn't work and Tesla has to go back to explicit coding?
Then it's never going to work.
There isn't enough storage or compute in the universe to code the proper response to every possible combination of inputs.
The only reason any approach could be feasible is that things can be generalized. Rather than deal directly with every combination of pixels that can be a car, the NN outputs show how "carish" an object is. Then the next layer(s) takes those approximations and acts on them.

For hand coding to work, it needs accurate and complete source data (categorization). Even if those categorization layers are working, there are still too many variable combinations to hand code, so more approximation is needed.

NN training is slower and 'fuzzier' than saying "if x>0.567", but what if the code needed to be "if x>0.571"? Apply that across all values and it's beyond human ability in a useful amount of time, but a big block of training compute can tweak all the parameters simultaneously while checking for regression.
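Here's a minimal, self-contained sketch of that idea (my own toy example, nothing to do with Tesla's actual stack): instead of a human guessing the cutoff, training nudges it by minimizing a loss over labeled examples, and the same machinery scales to millions of parameters adjusted at once.

```python
# Toy example: learn a decision threshold from data instead of hand-coding it.
# The "true" boundary (0.571) is unknown to the programmer; gradient descent
# on a smooth version of "if x > t" finds it automatically.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)   # synthetic inputs
y = (x > 0.571).astype(float)          # labels from the unknown boundary

t = 0.5       # learnable threshold, deliberately wrong to start
lr = 0.05     # learning rate
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-50.0 * (x - t)))             # soft "x > t"
    grad = np.mean(2.0 * (p - y) * p * (1 - p) * -50.0)   # d(MSE)/dt
    t -= lr * grad

print(f"learned threshold: {t:.3f}")  # lands near 0.571, no hand-tuning needed
```

Scale that from one threshold to hundreds of millions of weights and you get why a training cluster can do what no team of programmers tweaking if-statements ever could.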
 
1. As Chuck Cook and others have pointed out, extra vigilance is ironically required now that it drives much better. Human nature dictates that when something is right 99% of the time, you’re less ready for the 1% failure than with a system that is only right 80% of the time. That 19-point gap makes a huge difference. It’s the difference between a system that messes up once every five minutes and a system that messes up once every hour and a half or more.

A decade or more ago I remember hearing that Waymo (Google at the time) chose not to develop their driving system with customers at the wheel because of this very concern of complacency.
Chuck Cook and 40-50 years of research ;)

When FSD gets 10x better from here (and still is 100x less safe than the average human), it will likely kill people if this research is correct (and I believe it is, based on observations from other sectors).

Phil Koopman has a good text on this:

Chris Urmson's TED talk:
 
Chuck Cook and 40-50 years of research ;)

When FSD gets 10x better from here (and still is 100x less safe than the average human), it will likely kill people if this research is correct (and I believe it is, based on observations from other sectors).

Phil Koopman has a good text on this:

Chris Urmson's TED talk:
Waymo was able to deploy robotaxis without major safety problems during the testing phase. Maybe Tesla will have to get more strict about kicking people out of the beta program to make it work.
 
Kinda says something, don’t you think?

Edit: I’ve noted this before, but Alan really seems to obsess over small nuances of driving style, and with that seems to have difficulty appreciating that there is a range of normal driving and that a given style, while different from his own preference, may still be completely acceptable.

Now I will readily admit that there seems to be a fair amount of variability in FSD behavior between cars, locations, etc., so it is possible that his car actually is behaving abnormally, but from the totality of his posts I tend to think it’s more perceptual on his part.
The feds have identified around 30,000 driving styles so take your pick.
 
When FSD gets 10x better from here (and still is 100x less safe than the average human), it will likely kill people if this research is correct (and I believe it is, based on observations from other sectors).
So many assumptions:
FSD being 10x better than now and also 100x worse than a generalized driver. What's the Pareto distribution of driver crash probability? If 10% of drivers cause 80% of accidents, it may be statistically impossible to be better than the "average" driver, since the average driver doesn't crash. Thus, compare crash rates per mile driven (there's a toy illustration at the end of this post).

FSD's failures result in fatalities. To not have fatalities: don't drive into (or off) stationary objects, and don't drive into the path of moving objects. That's a lizard-brain-level system.

L5 is a pipe dream. There will always be an ODD for an autonomous system.
Then it's L4 with a large ODD, which is the same thing. The ODD just carves out what the system still fails at (while simultaneously being expected to detect those situations ahead of time... somehow...)
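Here's the toy illustration I mentioned above (all numbers invented): when a small fraction of drivers causes most of the crashes, the median driver has zero crashes, so "better than the average driver" is ambiguous, and per-mile crash rates are the more meaningful comparison.

```python
# Toy model of a skewed crash distribution: 10% of drivers cause ~80% of
# crashes, yet the median driver has zero crashes over the period.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

risky = rng.random(n) < 0.10              # 10% of drivers are high-risk
expected = np.where(risky, 0.72, 0.02)    # expected crashes per driver
crashes = rng.poisson(expected)

print("median crashes per driver:", np.median(crashes))       # 0.0
print("mean crashes per driver:  ", crashes.mean().round(3))  # ~0.09
print("share caused by riskiest 10%:",
      round(crashes[risky].sum() / crashes.sum(), 2))         # ~0.8
```

So "drives better than the average driver" can be true of nearly everyone or nearly no one depending on which average you mean; per-mile rates avoid the ambiguity.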
 
Looks like Elon expects all cars with HW3/4 to be able to:


So looks like Robotaxis are coming for sure sometime in 2026. Then 5 million cars, then 10 million. :eek:

Sorry guys, that I was too pessimistic about this. Guess I was wrong. In retrospect I should have expected that the ChatGPT moment would get us there.

I was too simple minded to understand the enormous power of AI. Mea culpa.

This should all speed the transition to sustainable energy though. So we are all winners here.

And I apologize for misreading the Tesla safety data too. In the end, what is important is that they are on the path to safety.
Wut