FSD Beta 10.69

I went back and looked at the terms of the bet and it's unclear if this current 10.69 beta should count. It has known issues.
Haha, that’s a lot of levels of beta. I guess for our next bet we’ll have to be more clear on the version. There was never any ambiguity about how this would roll out though. You had to take that into account when making the bet. Sad.
 
Haha, that’s a lot of levels of beta. I guess for our next bet we’ll have to be more clear on the version. There was never any ambiguity about how this would roll out though. You had to take that into account when making the bet. Sad.
What I've learned is that it's very difficult to set the terms of a bet about FSD beta. It's hard to even define what FSD beta is.
 
What I've learned is that it's very difficult to set the terms of a bet about FSD beta. It's hard to even define what FSD beta is.
I think on 10.69.2, or whatever goes to wide-ish release next, it will still be below 90% on Chuck’s turn. However, for this 90%, this time I would not count scenarios where no traffic shows on the visualization in the relevant directions during the turn. It could do that before and it can do it now.

I just think, as any FSD Beta user knows, FSD is so interestingly inconsistent in how it drives that we’ll see stutter steps, weird stopping in the wrong place, etc. even with the final version. I am not 100% sure Tesla knows why the car does this. It may be too far under the hood.

If they were programming the car to make the turn, presumably this would not happen. But it doesn’t appear that they are doing that.
 
  • Like
Reactions: edseloh
The task of developing a fully autonomous vehicle (no steering wheel or pedals) is almost infinite. Everyone is looking for a vehicle that drives exactly as they do and we all drive differently. In addition there will always be that new edge case that no one has encountered before. This will also be the case for insurance based on driving performance. Yesterday while driving in full manual mode I was hit with several major warnings as I drove through a very convoluted construction area at 10 mph. The route involved getting close to cones and barriers and crossing double lines multiple times. I know Elon is committed to FSD and it will eventually get better than the average human driver, but I think his aggressive commitments are hurting his loyal customer base. Asking people who just paid $60K for a vehicle to pay another $12K for a promise is a tough pill to swallow.
 
I think on 10.69.2, or whatever goes to wide-ish release next, it will still be below 90% on Chuck’s turn. However, for this 90%, this time I would not count scenarios where no traffic shows on the visualization in the relevant directions during the turn. It could do that before and it can do it now.

I just think, as any FSD Beta user knows, FSD is so interestingly inconsistent in how it drives that we’ll see stutter steps, weird stopping in the wrong place, etc. even with the final version. I am not 100% sure Tesla knows why the car does this. It may be too far under the hood.

If they were programming the car to make the turn, presumably this would not happen. But it doesn’t appear that they are doing that.
Ok, it's a bet!
I wonder if the randomness is there to achieve the stated goal of zero-intervention drives. If you do the same thing every time then you may fail every time, but if you do a random thing every time then sometimes you will succeed.
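Purely as a toy illustration of that point (the per-attempt success rate below is invented, not anything Tesla has published):

```python
# Toy illustration of the randomness idea above. A deterministic policy that
# fails a given scenario fails it on every attempt; a randomized policy with
# even a modest per-attempt success rate produces some zero-intervention runs.
# The 30% figure is made up purely for illustration.
p_success = 0.3                      # hypothetical per-attempt success rate
attempts = 10

p_at_least_one = 1 - (1 - p_success) ** attempts
print(f"P(at least one clean run in {attempts} tries): {p_at_least_one:.1%}")
# -> 97.2%, even though any single attempt fails 70% of the time
```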
 
  • Funny
Reactions: AlanSubie4Life
One of the 10.69 videos published so far (don't remember which one, as usual 😅) has confirmed that this has not been fixed. Of course, that doesn't mean 10.69.1 won't fix it, but at least in their latest published code, it's not.

It is annoying - when FSD is on and I see a warning sign saying the limit is about to go from 80 to 60, I usually end up manually reducing the max right there, so that by the time it reaches the new lower limit, I'm not in speeding-ticket range...

Maybe reporting this more frequently would bump up the priority of this fix? It must be a trivial fix...
I used to report it every time it happened over the past year. Now it's just another circumstance that I don't bother using FSDb for.
There doesn't seem to be a point to driving the same road twice a day and reporting the same stupid moves over and over - it's just easier to leave FSDb disengaged. So my mental map of where to use FSDb only gets updated on new releases. It has been slowly shrinking as I get less tolerant of its foibles.
 
  • Like
Reactions: VanFriscia
I think that as FSD Beta develops, this understanding of what makes the human driver nervous will become more and more important. This is because as the system gets better, the most dangerous thing won't be bad behavior by FSD that requires a driver intervention; it will be technically safe behavior from FSD that elicits a disengagement from a nervous driver in a dangerous way.

And I think we're already starting to see this. Take for example this article from Electrek, "We tested Tesla Full Self-Driving Beta in the Blue Ridge Mountains, and it was scary," where he describes a "scary disengagement." But if you frame-advance through the video, you can see that this is the point where FSD disengaged (far before it even entered the turn):

View attachment 844942

And on the previous frame, you can clearly make out his foot on the brake:

View attachment 844944

So while I cannot say with certainty that it happened this way, I think it's a possibility that he was nervous going into the turn at 23 MPH, subconsciously tapped the brakes, and disengaged FSD himself.
I remember seeing another video of someone driving through the same section of Blue Ridge mountains and had a great experience with Autopilot.
 
The task of developing a fully autonomous vehicle (no steering wheel or pedals) is almost infinite. Everyone is looking for a vehicle that drives exactly as they do and we all drive differently. In addition there will always be that new edge case that no one has encountered before. This will also be the case for insurance based on driving performance. Yesterday while driving in full manual mode I was hit with several major warnings as I drove through a very convoluted construction area at 10 mph. The route involved getting close to cones and barriers and crossing double lines multiple times. I know Elon is committed to FSD and it will eventually get better than the average human driver, but I think his aggressive commitments are hurting his loyal customer base. Asking people who just paid $60K for a vehicle to pay another $12K for a promise is a tough pill to swallow.
For me what it really comes down to is: what would make you comfortable with taking your eyes off the road and trusting the system to not plow you into something?

I don't mean just glancing away from the windshield but actually taking your focus off the road and committing your mental resources to something else. I can't speak for anyone else, but doing this just feels inherently unsafe, even when you're on an open straight road and merely have cruise control locked in. Heck, you can be looking forward and still be thrown for a loop if an ADAS does something you don't expect; I experienced this when trying out the lane keeping in a Ford F-150 last year, and it would veer to the right when approaching merge lanes with worn-down lane lines.

All it takes is one boo-boo and you're gone; this will require an extreme level of trust. Put other people in the vehicle with you and the trust requirement increases.
 
For me what it really comes down to is: what would make you comfortable with taking your eyes off the road and trusting the system to not plow you into something?
I would be perfectly comfortable in a Waymo or Cruise robotaxi. Waymo has published safety data that suggests reasonable safety, with Cruise I'd just be trusting that their fear of massive lawsuits means they're pretty sure it's safe. When Tesla says that I can sit in the backseat I'll be comfortable with that too.
 
  • Like
Reactions: AlanSubie4Life
I remember seeing another video of someone driving through the same section of Blue Ridge mountains and had a great experience with Autopilot.

OK but the real question is why the heck you’d be using FSD on the Blue Ridge Parkway… talk about a driver’s road! The last time I had the M3P on a road through those mountains (501 near Natural Bridge), I noticed a bunch of motorcycles zooming down the road, then when they caught up to someone, pulling off at one of the small parking areas on the side and waiting for traffic to clear before pulling out again and zooming ahead. I have to say… I joined them. :)
 
OK but the real question is why the heck you’d be using FSD on the Blue Ridge Parkway… talk about a driver’s road! The last time I had the M3P on a road through those mountains (501 near Natural Bridge), I noticed a bunch of motorcycles zooming down the road, then when they caught up to someone, pulling off at one of the small parking areas on the side and waiting for traffic to clear before pulling out again and zooming ahead. I have to say… I joined them. :)
In fact there is a widely seen video of a Tesla going up Pikes Peak on Autopilot. No issues at all.
 
The navigation logic is still deficient in Teslas. It invariably takes routes that make no sense, especially preferring back roads to highways. Focusing on Chuck's ULT is a great exercise for the AD team, but ignoring the flaws in routing is an example of not seeing the forest for the trees. I get it, the ULT "challenge" is S3XY and fixing navigation is boring.

At least let us either alter the chosen route and save the modified route for future use or, if navigation is turned off, allow the use of turn signals to guide FSD Beta at intersections. It's been months since Tesla added navigation waypoints, but they never committed to having them actually work properly. I find that most of my forced disengagements are due to the route being counterintuitive. Sure, I can disengage and then let the car reroute, but that's not autonomous driving.
 
  • Like
Reactions: Jeff N
And now for a reality check with 10.69 doing poorly. This is what happens when you have really old road layouts in a small New England town/city, Newport, RI. I live near a similar road-layout city (Lowell, Mass.) where certain sections cannot be handled by FSD (10.12). It will be a long time until FSD can handle these edge-case cities/towns in the old northeast part of the country. And once you have snow piles it gets a whole lot worse. I wonder if Elon has ever driven in areas like this?
And Europe is an even bigger challenge...
 
The tools still use SPICE format for netlists but there have been much faster simulators for decades. One nice thing about FSD simulations is that they can run in realtime (or way faster if you're not including the perception stack) whereas circuit simulations are millions to billions of times slower than real life.

I'm agreeing in general but not for this case. I think what makes FSD simulations difficult is the modeling of actions and reactions of other drivers but that is not the case in Chuck's ULT.
Speaking as a practicing EE who spends 'way too much time mucking with SPICE.. Yes, people still use it. One uses a simulator that, hopefully, is matched to the kind of work one is doing. If one's doing transmission lines, s-parameter design, stripline design, antenna design, and (for lots and lots and lots of fun) mixed-signal VHDL simulations of multi-megagate ASICs, there are simulators for all of those.

The problem, generally, with any simulation tool is the accuracy of the models. Sometimes, one can take the first-order characteristics of, say, an inductor, and emulate it with a simplistic ideal R in series with an ideal inductor, both in parallel with some kind of small, but ideal capacitor. Now, try and do that at, say, 500 MHz; people who know about inductors will now fall over laughing. Inductors are distributed circuit elements that are, no kidding, very difficult to model mathematically. Capacitors tend to be just Evil. Resistors stop resisting above a GHz or two and become distributed devices as well.
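To put rough numbers on that, here's a minimal sketch; the component values are invented, not measured data. Even the simple lumped model self-resonates around 500 MHz with these values, and above that frequency the "inductor" looks capacitive (and in reality the distributed behavior diverges from this model well before that):

```python
import numpy as np

# First-order "real inductor" model from the post above: ideal R in series
# with an ideal L, both in parallel with a small parasitic C. Values invented.
R, L, C = 0.5, 100e-9, 1e-12        # 0.5 ohm, 100 nH, 1 pF

for f in (1e6, 10e6, 100e6, 500e6, 1e9):
    w = 2 * np.pi * f
    z_rl = R + 1j * w * L           # series R-L branch
    z_c = 1 / (1j * w * C)          # parasitic-capacitance branch
    z = z_rl * z_c / (z_rl + z_c)   # parallel combination
    kind = "inductive" if z.imag > 0 else "capacitive"
    print(f"{f/1e6:6.0f} MHz: |Z| = {abs(z):10.1f} ohm, {kind}")

srf = 1 / (2 * np.pi * np.sqrt(L * C))
print(f"self-resonance near {srf/1e6:.0f} MHz")  # ~503 MHz for these values
```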

S-parameters (and similar) attempt to measure some $RANDOM device; then, if one simulates using the measured S-parameter description of that device, hooked in with other devices, one might get somewhere. Or not.

Thing is, simulation of circuits can be useful. Below 10 MHz or so the first and second order descriptors of how a component might work will actually yield useful results, and it's a heck of a lot faster to modify a simulation (especially when one is playing with electrical transmission lines and distributions on a circuit board) than to play cut-and-try, 1940's style. But, it's all about GIGO: Feed Garbage In to a Simulator, and expect Garbage Out. What they pay those of us who do this kind of work for is, well, not knowing, but suspecting where the garbage might lie.

And if that's not enough to give designers ulcers, then there's the cross-talk problem. One might get one's spiffy new circuit to work beautifully in sim, and even on that nifty test board that the manufacturer helpfully sells one, and it all looks golden. Put it in with a zillion other circuits with signals that radiate out the wazoo and one's spiffy new circuit falls on its face. I think it was last year that, in the space of three months, I found a half-dozen of these kinds of problems. If somebody had run around with a 'scope or had been properly paranoid, most of these problems would have been detected, early. But training up to the right level of paranoia isn't something easy to do.

So, as a general rule: Simulation has its place. But the best design practice is to run back and forth between simulation and the real world, improving the simulations as one goes, and finding bugs in the hardware that Nobody Would Have Expected. Major point: Mathematical models, CS or otherwise, are Not The Real World. One ignores that at one's peril.

Not a joke: When there's a hundred independent variables, one has to consider not just one variable at a time, but what all these variables do when they line up/don't line up and so on. This makes simulating large, multi-signal ASICs an interesting trip; and they pay Smart People to dream up simulations that actually exercise all the different features. Add CPUs to all of this, neural and otherwise, with variable processing times from data input to data output, and one's life just gets more difficult.
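A tiny, contrived sketch of that "lining up" problem (the design, inputs, and bug here are all invented): a one-at-a-time sweep never finds a fault that needs three inputs to coincide, while combined random vectors hit it routinely.

```python
import random

# Contrived stand-in for a design with 20 boolean inputs and a bug that only
# fires when three particular inputs are high simultaneously.
N = 20
def bug_fires(inputs):
    return inputs[3] and inputs[7] and inputs[12]

# One-at-a-time sweep from the all-low state: 20 tests, bug never seen.
found = any(bug_fires([i == k for i in range(N)]) for k in range(N))
print("one-at-a-time sweep found it:", found)                # False

# Random combined vectors: each trial drives all 20 inputs at once.
random.seed(1)
hits = sum(bug_fires([random.random() < 0.5 for _ in range(N)])
           for _ in range(2000))
print(f"random vectors hit it {hits} times in 2000 trials")  # ~250 expected
```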

Come to think of it: It's a wonder that these two-legged, two-armed, two-eyed ambulatory creatures with a complex neural net on top don't fall over all the time. Ha. They do, don't they?

One actual example of all this: Boeing and the ULA or whomever were trying to get a space capsule up to the ISS a year or so ago. They darn near lost the capsule, and people are glad that that capsule got nowhere near the ISS. Why? Software that was firing the attitude thrusters was on the fritz, firing the wrong thrusters at the wrong time. And there was a time-of-day fault as well. Why did this all occur? The builders simulated everything, in batches, and never ran a full wet-dress rehearsal. Presumably to save costs. I know people in Aero: They were jumping up and down, screaming: Are they idiots!?! Of course you run full dress rehearsals, with hardware as close to flight hardware as one can get!!!

So, I'm not surprised that Tesla showed up in force in Texas and monitored that ULT like mad. They're not talking, but I'll bet a plugged nickel that, Dojo or no Dojo, they had simulations of that guy's failures that showed Success! a lot more than the guy was getting. And a guaranteed failure mode of a complex system is worth its weight in gold: Monitor the heck out of it, figure what's going wrong, and go back with Real Data.

Simulation will get one somewhere, but it's very positively not the end-all and be-all.

For fun: See Stanislaw Lem's The Cyberiad, where the protagonists, Trurl and Klapaucius, independently build whole-universe simulators to get Answers to Questions that they want answered. Good read, good flight of fancy.
 
they had simulations of that guy's failures that showed Success!
That would be a major problem and a common one. When I encounter such a problem my design flow is to fix the simulation so that it matches real life.
People seem to be getting the idea that I'm saying simulation will solve self-driving. That is not the case. I'm saying that I don't think it's possible to solve self-driving (with current technology) without being able to reproduce and detect failures in simulation after you've found them in real life. This is necessary both to test many variations of the same failure mode and ensure that future versions don't reintroduce old failure modes.
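A minimal sketch of what that workflow implies in code, with every name hypothetical: each real-world failure becomes a permanent scenario, and a new build doesn't ship until the whole library passes again.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the failure-regression workflow described above.
@dataclass
class Scenario:
    name: str      # e.g. "ult_left_gap_3s_truck_55mph" (invented)
    setup: dict    # initial conditions reconstructed from the real failure

def regression_failures(scenarios: list[Scenario],
                        simulate: Callable[[dict], bool]) -> list[str]:
    """Re-run every recorded failure; return names the new build still fails."""
    return [s.name for s in scenarios if not simulate(s.setup)]

# Usage sketch: block the release if any previously-fixed failure reappears.
# failed = regression_failures(scenario_library, new_build.simulate)
# assert not failed, f"reintroduced failure modes: {failed}"
```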
 
The task of developing a fully autonomous vehicle (no steering wheel or pedals) is almost infinite. Everyone is looking for a vehicle that drives exactly as they do and we all drive differently.
Perhaps, but I'm not sold on that thought. Consider the new "creep line". On Chuck's 2nd video (I think) he said something like "um, ah, that feels closer than I want to be to traffic". His natural reaction was that he was much too close. But assuming the vehicle is accurate when creeping to that line (which does appear suspect on occasion), I'd suspect that Chuck (and I) would be more comfortable with the car creeping to the stop line. In other words, the driver's perception adjusted to FSD, not FSD to the driver's perception. In retrospect, on blind UPLs the car has crept past my "comfort zone" and I intervened, but it could very well have been a safe maneuver. I have zero desire for a car that drives as I do - clearly that's not possible. Even today there are times where in my opinion FSD is either "too fast" or "too slow". I can't change that other than trying to gain consensus on the specifics of the situation and hoping the engineers pay attention. But the car will *never* drive like I do. If someone expects that, they have already lost the game.

Having said that, personally I'd love to have a "green light chime" kind of signal when the car decides it's "go time". It's hard for me to tell when it's still creeping and when it has (perhaps mistakenly) decided to go forward. It's like a "minimum descent altitude" - let autopilot do its thing on a non-precision approach, but it sure better let you know if you descend below minimums. Forgive me if that's a bad analogy - I haven't piloted in a lot of years.
 
What they pay those of us who do this kind of work for is, well, not knowing, but suspecting where the garbage might lie.
Great read, Tronguy. As someone who both used simulation and designed simulation tools from my arrival at Intel in 1974 until decades later, I'd like to expand on the point I quoted.

It seems to me that more people have at least a guess at the sources of error in simulation than realize how vulnerable success is to whether the right question is posed to the simulator, and whether the user of the output actually notices a problem which was accurately simulated.

To others reading this (not to Tronguy), if either of those points seems trivial, you truly don't have a clue as to the problem.
 
  • Like
Reactions: wknickless
That would be a major problem and a common one. When I encounter such a problem my design flow is to fix the simulation so that it matches real life.
People seem to be getting the idea that I'm saying simulation will solve self-driving. That is not the case. I'm saying that I don't think it's possible to solve self-driving (with current technology) without being able to reproduce and detect failures in simulation after you've found them in real life. This is necessary both to test many variations of the same failure mode and ensure that future versions don't reintroduce old failure modes.
Agreed. This fits the "simulate, test the real thing, fix the #$%& simulation, simulate some more, test" lather-rinse-repeat method of design.

All I was pointing out: Simulation will get you so far. Then you have to get out and walk. And: It's still faster to simulate, walk, simulate, walk, than to just walk.

As a rule: Management wants to see the walk. But it's a combination of sim & walk that gives one confidence that the design won't break.
 
With regards to AV simulations I'm only going by industry presentations. They talk about simulation of scenarios like Chuck's ULT with millions of variations. It's hard for me to imagine that they would bother with all that if it didn't work. Nobody seems to be able to explain what about Chuck's ULT is uniquely hard to simulate. It's really not a highly complex and dynamic system, since if the maneuver is done correctly there is no interaction with other agents (at least in light-traffic situations, where we still see failures).
I'm sure they DID do simulations .. but when they passed muster it was time to take the real thing out to verify it .. which is what Chuck observed them doing. Just like you .. once your circuit simulations pass muster and you DO commit to a chip, I'm sure you don't ship the chip to customers before doing testing on the actual device :)
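For what "millions of variations" of a single scenario might mean mechanically, here's a hedged sketch; every parameter name and range below is invented, not anything from Tesla's presentations. The idea is to hold the intersection geometry fixed and randomize the traffic around it, feeding each sample to the simulator before anyone drives the real turn:

```python
import random

# Invented parameterization of a single unprotected-left scenario. Holding
# the road fixed and sampling the traffic is what turns one turn into
# millions of test cases.
def sample_ult_variation(rng):
    return {
        "left_gap_s":  rng.uniform(1.0, 12.0),   # gap in traffic from the left
        "right_gap_s": rng.uniform(1.0, 12.0),   # gap in traffic from the right
        "traffic_mph": rng.uniform(40.0, 70.0),  # cross-traffic speed
        "occlusion_m": rng.uniform(0.0, 5.0),    # how far foliage blocks the view
        "n_vehicles":  rng.randint(0, 20),
    }

rng = random.Random(42)
variations = (sample_ult_variation(rng) for _ in range(1_000_000))  # lazy stream
print(next(variations))   # one sampled scenario, ready to hand to a simulator
```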