Can someone explain the end of trip UI to me?

FSD takes me to a point. It says before we get there I can press the accelerator to continue my trip. I don't want to go anywhere else so I let it bring the car to a stop. There doesn't seem to be any way to exit FSD at this point that doesn't trigger a why did you disengage prompt.

I've tried just waiting a minute, but nothing happens.

I've tried pressing the right pedal (long skinny one on the right) and that still gets me into a why did you disengage as soon as I turn the wheel.

I've tried hitting the brake pedal. Instant why did you disengage.

I've looked for something to press on the screen and the only button that looks like it might be it is "End Trip" but no joy, I still end up at why did you disengage after that.

Am I missing a step or is there no way to avoid the why did you disengage at the end of a FSD trip?
Nope. Just ignore it.

Previous versions wouldn’t give you the prompt if you were close to your destination, but they changed it. My guess is it was so they could get feedback on issues that occurred right at the end of the trip.
 
About speed limit sign reading: it does not read (or interpret) signs like "End of 25 Speed Limit".

We drove from Santa Rosa, out to Jenner, and up to Gualala via the twisty-turny Hwy 1. Through Jenner the speed signs say 25, which FSD read and obeyed. (Auto Speed Setting is off.) Leaving town there is an "End of 25" speed limit sign, but the car continues to display the limit as 25 for many miles.

My understanding is that here in Calif, the end of speed limit sign means that the limit is 55, the default on open roads unless posted higher.

Scrolling up to 55 is easy and worked fine, but the car needs better sign understanding.
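
If the car applied that rule, the missing logic would look something like this toy sketch (my own illustration only, not anything Tesla has documented; the function, the sign encoding, and the 55 mph open-road default are all my assumptions):

```python
# Toy sketch only: how an "End of NN Speed Limit" sign could be interpreted.
# OPEN_ROAD_DEFAULT_MPH reflects my reading of the California rule, not Tesla logic.
OPEN_ROAD_DEFAULT_MPH = 55

def update_displayed_limit(current_limit_mph, sign_kind, sign_value_mph):
    """Return the new displayed limit after passing a sign.

    sign_kind is either "LIMIT" ("Speed Limit 25") or
    "END_OF_LIMIT" ("End of 25 Speed Limit").
    """
    if sign_kind == "LIMIT":
        return sign_value_mph
    if sign_kind == "END_OF_LIMIT" and current_limit_mph == sign_value_mph:
        return OPEN_ROAD_DEFAULT_MPH  # revert to the default instead of holding 25
    return current_limit_mph

print(update_displayed_limit(25, "END_OF_LIMIT", 25))  # 55, not 25 for many miles
```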

Again, I did not try the auto speed setting option.

FSD 12.3.6 handled the drive along the Russian River and up the coast very nicely, slowing for the hairpins and accelerating for the straight stretches. Quite amazing. I did drive manually for some of it to fit in better with other drivers, and 'cause Hwy 1 is fun to drive!
 
The slower speed issue has bothered me less lately, in part because I've learned how to deal with it. The key is to tap the accelerator pedal whenever it's driving below your MAX setting, but be ready for it to drive too fast around upcoming turns. Often it will seem to take the turn too fast, but if you keep an eye on it, it's tolerable (although perhaps a little scary).
That’s what I’ve been doing, but I didn’t have to do that on v11; it went to my max speed setting instantly.
 
My opinion about all of the above: Tesla development is madly working on getting 12.x/Robotaxi fully functional. Sweating poor visibility, beyond making sure that users transition more-or-less gracefully into them doing the driving, is probably not their highest priority. Once the March of 9's truly begins (and, yeah, there are posters here who think that that's never going to happen), there'll be staff freed up to handle poor visibility... which is something that I think NNs can handle without too much trouble, just like we handle poor visibility without too much trouble.
If Robotaxi is to be completed on anything remotely approaching Elon's timeframe, the team had better be furiously working right now on ALL aspects of the problem: poor visibility as well as driving correctness. In fact, the former is arguably more urgent, because it directly informs the sensor suite requirements for Robotaxi, and that decision must be made at least a year or two before the control aspect of the software would need to be market-ready. (Even if it turns out that pure vision with 8 cameras is sufficient, they will need to prove this by example.) The last thing Tesla can afford to do is to put Robotaxi into production with a sensor suite that isn't up to the task. That's also why all other autonomy-focused manufacturers are overbuilding their sensor suites, rather than underbuilding them. Software is much more easily retrofittable than hardware.

Fortunately, with the E2E approach, solving for weather (or at least understanding the sensor suite's limitations) amounts to gathering more training data. Synthetic data may not work; I'm not sure photorealistic water-blurred camera feeds could be synthesized accurately enough to model how it "really looks", and there are endless ways for real-world image quality to be compromised. But if it turns out that e.g. A-pillar cameras are essential for L4 superhuman driving ability, let alone lidar/radar, Tesla will first need to build these sensors into a large fleet (e.g. HW5), then gather real-world training data from such equipped cars, in order to solve poor weather issues for similarly-equipped Robotaxis. That's why I think Robotaxi is still several iterations and several years away.

FWIW, we handle poor visibility by sitting far back from the windshield and moving our heads, to minimize the impact of any particular raindrop or dirt splotch. Fixed cameras pressed up against the glass don't have either of those advantages. That's a big part of why I think a different approach may ultimately be needed to match skilled human driving adaptability, let alone exceed it.
 
Doubt it. 8/8 is an announcement of something coming, not a release of anything.
Oh, of course; no one is expecting a working (L4-demonstration-capable) Robotaxi prototype on 8/8, just a design prototype and preview. By "Elon's timeframe", I meant his chronically overoptimistic predictions that e.g. the FSD fleet will be driving at superhuman levels by the end of the year, and so forth. I'm not sure when he actually expects to be able to flip the switch on L4 (either for Robotaxi or the rest of the fleet), but if it's to be any sooner than 2030, they need to be tackling the problem from all sides yesterday. They don't have the luxury of figuring out e.g. control first and bad weather second.

From Elon's public statements and optimism, I would guess he expects the switch to be flipped by 2026-2027, and that's what I had in mind for "Elon's timeframe". But from the rate of progress I've seen, and my own knowledge of the complexity and difficulty of the tasks they face, I think success by 2027 (usable L4 in the field, by any nontrivial percentage of the fleet) is extremely unlikely. Sure, they may front-load a pilot trial in a carefully curated geographic area, with a few carefully maintained cars, under carefully restricted environmental conditions, but I don't realistically expect Robotaxi ridership to broaden and make a meaningful contribution to Tesla's bottom line until at least 2030, probably 2032-2034. And if that's how it plays out, then my fundamental concern is whether their non-Robotaxi product roadmap will be sufficient to tide them over until then, and at that point what fraction of the customer-owned fleet will inherit Robotaxi capability, and what will be left behind.
 
It would have driven 38mph most of the time and there would be nothing you could do.

Another win for manual mode.
At 38 mph through the hairpin turns and cliffs over the ocean, well, let's just say I would probably have been unable to post the story. I'm kidding, of course. I assume the automatic speed setting would not cause redwood tree collisions or attempted cliff launches, but that road is not a great setting to see if it gets the speed wrong.

Our original 2021 Enhanced Autopilot worked pretty well on highways, and we became comfortable with the auto (blinker-initiated) lane changes and scroll-wheel-adjusted speed. So with FSD, Minimal Lane Changes and manual speed adjustments feel pretty "natural". Testing Auto Speed Setting and what we might call Excessive Lane Changes is a bit stressful because it is hard to anticipate the car's behavior, hard to double-check that the lanes are clear, and hard to understand why it chooses the speed it does. Add in not wanting to antagonize other drivers, and I just don't spend much time using those features. Just enough to report the problems, and then check for improvements with each release.
 
TeslaFi says there are over 6,000 on 12.3.6, and only 600 on 12.3.4, plus smaller numbers on earlier 12 variants. There are still nearly 1,000 on the 2023.44.30.x 11.4.9 versions, and I wonder why. Perhaps not ready to chance version 12?

I am curious about the "free trial" folks. Did they just get 11.4.9 enabled, or did they receive (or wait for) one of the V12 variants?

I guess I'm wondering when FSD 12 will replace 11.4.9 in the production branch, and whether the V11 highway version will be replaced with V12 code.
 
Know a number of the free trial folks, including the SO and her 2021 MY. They all ended up with the current shipping versions of 12.3.x, getting updated with everybody else as per usual. They all started with Tesla loads that had 11.4.x loads built in, but not enabled, from 2023.
 
If Robotaxi is to be completed on anything remotely approaching Elon's timeframe, the team had better be furiously working right now on ALL aspects of the problem: poor visibility as well as driving correctness. In fact, the former is arguably more urgent, because it directly informs the sensor suite requirements for Robotaxi, and that decision must be made at least a year or two before the control aspect of the software would need to be market-ready. (Even if it turns out that pure vision with 8 cameras is sufficient, they will need to prove this by example.) The last thing Tesla can afford to do is to put Robotaxi into production with a sensor suite that isn't up to the task. That's also why all other autonomy-focused manufacturers are overbuilding their sensor suites, rather than underbuilding them. Software is much more easily retrofittable than hardware.

Fortunately, with the E2E approach, solving for weather (or at least understanding the sensor suite's limitations) amounts to gathering more training data. Synthetic data may not work; I'm not sure photorealistic water-blurred camera feeds could be synthesized accurately enough to model how it "really looks", and there are endless ways for real-world image quality to be compromised. But if it turns out that e.g. A-pillar cameras are essential for L4 superhuman driving ability, let alone lidar/radar, Tesla will first need to build these sensors into a large fleet (e.g. HW5), then gather real-world training data from such equipped cars, in order to solve poor weather issues for similarly-equipped Robotaxis. That's why I think Robotaxi is still several iterations and several years away.

FWIW, we handle poor visibility by sitting far back from the windshield and moving our heads, to minimize the impact of any particular raindrop or dirt splotch. Fixed cameras pressed up against the glass don't have either of those advantages. That's a big part of why I think a different approach may ultimately be needed to match skilled human driving adaptability, let alone exceed it.
I think I’m going to disagree with you, here. Image recognition with impairments is more like detecting a signal in the presence of random noise than it is, I dunno, calculating the precise impairment of droplets of water, then subtracting that impairment out of the image to get the underlying image.

As it happens, I’m very aware that, inherently, NNs are a tool that extracts underlying image information from a noisy image exceedingly well. Kind of like how double Markov chaining of noisy auditory data allows for speaker-independent voice recognition; the differences between different speakers (different vocal tracts, size, age, pitch) are all modeled as noise, and out pops the diphthongs!

Famously, NNs trained on finding giraffes by showing the NNs pictures of giraffes right side up, backwards, forwards, facing left, right, and upside down are known for finding giraffes 90% obscured by brush, trees, high grass, and what-all. Without specific training on the obscuring vegetation. The right tool for the right job; this is what NNs are good at.
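
To make that concrete, the usual trick is just training on lots of flipped, rotated, and partially erased versions of the images, something like this toy sketch (purely illustrative; the torchvision transforms and the tiny classifier are stand-ins, nothing Tesla-specific):

```python
# Toy sketch: augmentation-driven robustness. Train on many orientations and
# partial occlusions so the net learns the underlying object, not one clean view.
import torch
import torch.nn as nn
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),               # the "upside down" giraffes
    T.RandomRotation(degrees=90),
    T.RandomErasing(p=0.5, scale=(0.1, 0.6)),  # crude stand-in for occluding brush
])

# Tiny stand-in classifier; real detectors are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                          # giraffe / no giraffe
)

x = torch.rand(8, 3, 64, 64)                   # fake batch of camera images
x_aug = torch.stack([augment(img) for img in x])
print(model(x_aug).shape)                      # torch.Size([8, 2])
```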

I strongly suspect that whatever vision recognition stuff Tesla has in there now probably didn’t need much work to get to its current level. As I said, rather than continuing to work on that, the engineers probably bailed on it to work on the more critical stuff, like not killing cyclists and handling unprotected left turns. Once driving in clear weather is under control, my guess is that they’ll circle back to improve obscured, noisy vision.

Asking for everything ready Right This Second is a stance, I suppose.
 
I think I’m going to disagree with you, here. Image recognition with impairments is more like detecting a signal in the presence of random noise than it is, I dunno, calculating the precise impairment of droplets of water, then subtracting that impairment out of the image to get the underlying image.
The question is whether the "random noise" is strong enough to disrupt the ability of the system to infer what it needs to in order to accomplish the driving task with extreme reliability: not only _what_ it's looking at, but more or less precisely _where_ as well. Spatial distortion due to refraction can affect the _where_ much more strongly than, say, blur caused by offgassing residue on the glass. Can the neural net compensate well enough to still extremely reliably solve the driving task, for all such image degradations encountered in real-world environmental conditions? I don't know. Maybe not. Tesla had better find out for sure before they lock in the Robotaxi sensor specifications.
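
A toy way to see the distinction I'm worried about (a made-up one-dimensional "image", nothing to do with Tesla's actual pipeline): blur degrades the signal but leaves its apparent position intact, while a refraction-like shift moves the position itself.

```python
import numpy as np

# Toy 1-D "image" with one bright object at column 100 (all numbers made up).
signal = np.zeros(200)
signal[100] = 1.0

blurred = np.convolve(signal, np.ones(9) / 9, mode="same")  # residue/blur spreads energy
shifted = np.roll(signal, 7)                                # droplet refraction moves it

def centroid(x):
    """Apparent object position: intensity-weighted mean column."""
    return float(np.sum(np.arange(x.size) * x) / np.sum(x))

print(centroid(blurred))  # ~100: the "what" is degraded but the "where" survives
print(centroid(shifted))  # ~107: the "where" itself is now wrong
```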
As it happens, I’m very aware that, inherently, NNs are a tool that extracts underlying image information from a noisy image exceedingly well. Kind of like how double Markov chaining of noisy auditory data allows for speaker-independent voice recognition; the differences between different speakers (different vocal tracts, size, age, pitch) are all modeled as noise, and out pops the diphthongs!

Famously, NNs trained on finding giraffes by showing the NNs pictures of giraffes right side up, backwards, forwards, facing left, right, and upside down are known for finding giraffes 90% obscured by brush, trees, high grass, and what-all. Without specific training on the obscuring vegetation. The right tool for the right job; this is what NNs are good at.
They will probably remain quite good at detecting the 'what', but the 'where' is much tougher when there's significant refraction involved. NNs are a great tool for the job, but again the question is whether the image degradation will be stronger than the NN can overcome in order to perform with near-perfect reliability. Finding 90% or even 99% of the noise-obscured giraffes (or, say, red lights) is impressive, but not nearly good enough. Miss one per million miles and that's a showstopper.
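
Rough numbers, just to show how high the bar is (the five signal encounters per urban mile is a figure I'm making up purely for illustration):

```python
# Back-of-the-envelope miss math; every input here is an assumption, not data.
lights_per_mile = 5            # assumed signal encounters per urban mile
miles = 1_000_000

for detection_rate in (0.90, 0.99, 0.9999, 0.999999):
    missed = (1 - detection_rate) * lights_per_mile * miles
    print(f"{detection_rate:.6f} detection -> ~{missed:,.0f} missed lights in {miles:,} miles")
```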
I strongly suspect that whatever vision recognition stuff Tesla has in there now probably didn’t need much work to get to its current level. As I said, rather than continuing to work on that, the engineers probably bailed on it to work on the more critical stuff, like not killing cyclists and handling unprotected left turns. Once driving in clear weather is under control, my guess is that they’ll circle back to improve obscured, noisy vision.

Asking for everything ready Right This Second is a stance, I suppose. Um. Do you want a pony, too? 😁
Er, I'm not asking for "everything ready Right This Second". Only that the two complementary tasks should be worked on in parallel [by the existing very large team], not serially, which I think is essential if Elon's extremely optimistic timeframe (L4 by 2026-2027 or so, or even by 2030) is to have any hope of being met. That's not an unreasonable position, I don't think. Looked at another way, Elon is asking his team for "everything ready Right This Decade", and I don't think he's going to get it, particularly not if his team takes the slower serial approach. Elon did say "balls to the wall", not one ball at a time. 🙃
 
Can someone explain the end of trip UI to me?

FSD takes me to a point. It says before we get there I can press the accelerator to continue my trip. I don't want to go anywhere else so I let it bring the car to a stop. There doesn't seem to be any way to exit FSD at this point that doesn't trigger a why did you disengage prompt.

I've tried just waiting a minute, but nothing happens.

I've tried pressing the right pedal (long skinny one on the right) and that still gets me into a why did you disengage as soon as I turn the wheel.

I've tried hitting the brake pedal. Instant why did you disengage.

I've looked for something to press on the screen and the only button that looks like it might be it is "End Trip" but no joy, I still end up at why did you disengage after that.

Am I missing a step or is there no way to avoid the why did you disengage at the end of a FSD trip?
Just ignore the prompt, the same as when you disengage for other reasons unrelated to safety.
 
Nice find. How did you figure that out? By accident? It's not documented, is it?
Found it by accident. Not sure they are documenting all capabilities anymore.

On 12.3.6 nothing mentioned the autopark improvement in showing all the spots. It wasn't until I pulled into a parking lot that I saw they now displayed. Didn't see that on 12.3.4, and I didn't see it mentioned in the release notes (though I may have missed it).
 
TeslaFi says there are over 6,000 on 12.3.6, and only 600 on 12.3.4, plus smaller numbers on earlier 12 variants. There are still nearly 1,000 on the 2023.44.30.x 11.4.9 versions, and I wonder why. Perhaps not ready to chance version 12?
If my wife's M3 were being counted by TeslaFi (if it's something one needs to sign up for, she hasn't), she'd be in that last category.