
FSD Beta 10.69

I used to report it every time it happened over the past year. Now it's just another circumstance where I don't bother using FSDb.
There doesn't seem to be a point to driving the same road twice a day and reporting the same stupid moves over and over - it's just easier to leave FSDb disengaged. So my mental map of where to use FSDb only gets updated on new releases. It has been slowly shrinking as I get less tolerant of its foibles.
I may be in the minority, but I prefer to keep FSD on and just scroll down the max speed for these occasions, rather than turning FSD off altogether. When the max speed changes again, it will pick it up automatically, so it's just a one-off action. I can live with it until they fix it.

I guess I too will report this going forward, in the hope that it will boost the priority.
 
The tools still use the SPICE format for netlists, but there have been much faster simulators for decades. One nice thing about FSD simulations is that they can run in real time (or way faster if you're not including the perception stack), whereas circuit simulations are millions to billions of times slower than real life.
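For a rough sense of that gap, here's a toy back-of-envelope script; every number in it is an illustrative assumption, not a measurement:

```python
# Toy comparison of simulation speed vs. real time.
# All numbers below are illustrative assumptions, not measurements.

def realtime_factor(simulated_seconds: float, wall_clock_seconds: float) -> float:
    """How many seconds of simulated time elapse per second of wall-clock time."""
    return simulated_seconds / wall_clock_seconds

# A driving simulator that skips the perception stack might churn through
# an hour of driving in a few seconds of compute:
driving = realtime_factor(simulated_seconds=3600, wall_clock_seconds=5)

# A transistor-level SPICE run might need hours of wall-clock time to cover
# a few microseconds of circuit behavior:
spice = realtime_factor(simulated_seconds=5e-6, wall_clock_seconds=3 * 3600)

print(f"driving sim: {driving:.0f}x real time")   # ~720x faster than life
print(f"SPICE:       {spice:.1e}x real time")     # ~5e-10, i.e. roughly 2 billion x slower
print(f"ratio:       {driving / spice:.1e}")      # the 'millions to billions' gap
```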

I agree in general, but not for this case. I think what makes FSD simulations difficult is modeling the actions and reactions of other drivers, but that is not the case in Chuck's ULT.
I think a better analogy would be logic verification (RTL Verilog or VHDL) and fault coverage when testing. The adage goes: "If you don't test it, it is broken." There isn't enough time to make sure every node on the chip is not stuck high or low, and you can't get every combination of patterns through the circuits. So you do your best and hope some screwy unique case doesn't cause issues (like Intel's Pentium FDIV floating-point bug in the '90s, which cost the company hundreds of millions of dollars).
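To make the fault-coverage idea concrete, here's a minimal single-stuck-at fault simulator for a toy two-gate circuit; it's a sketch of the concept, not how production ATPG tools actually work:

```python
from itertools import product

# Minimal single-stuck-at fault simulation for a toy circuit:
#   n1 = a AND b;  out = n1 OR c
# A fault is "covered" if some test pattern makes the faulty circuit's
# output differ from the good circuit's output.

NODES = ["a", "b", "c", "n1", "out"]

def simulate(a, b, c, stuck=None):
    """Evaluate the circuit, optionally forcing one node stuck at 0 or 1."""
    def v(name, value):
        return stuck[1] if stuck and stuck[0] == name else value
    a, b, c = v("a", a), v("b", b), v("c", c)
    n1 = v("n1", a & b)
    return v("out", n1 | c)

patterns = list(product([0, 1], repeat=3))   # exhaustive only because it's tiny
faults = [(n, bit) for n in NODES for bit in (0, 1)]

covered = {
    f for f in faults
    if any(simulate(*p) != simulate(*p, stuck=f) for p in patterns)
}

# In this tiny circuit every fault is detectable; on a real chip it isn't,
# and you can't afford exhaustive patterns anyway.
print(f"fault coverage: {len(covered)}/{len(faults)}")
for f in sorted(set(faults) - covered):
    print("not detectable:", f)
```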
 
I think a better analogy would be logic verification (RTL Verilog or VHDL) and fault coverage when testing.
And, since all of us simulation zanies are dogpiling this discussion, one more comment from me before I disappear over the horizon.

Synchronous state machines, where there's a finite number of inputs and they're known to be stable once the synchronous trigger event occurs, are (shudder) relatively easy to deal with, so long as they're not too large.
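A minimal sketch of why the synchronous case is tractable (a toy sequence detector, not any particular piece of hardware): inputs are sampled only on the clock edge, so the next state is a pure function of (state, input) and the whole behavior can be enumerated.

```python
# Toy Moore machine: asserts its output after seeing the input pattern 1,1.
# Because inputs are sampled only on the clock edge, next_state is a pure
# function of (state, input) and the state space can be explored exhaustively.

TRANSITIONS = {
    ("idle", 0): "idle",
    ("idle", 1): "one_seen",
    ("one_seen", 0): "idle",
    ("one_seen", 1): "matched",
    ("matched", 0): "idle",
    ("matched", 1): "matched",
}
OUTPUT = {"idle": 0, "one_seen": 0, "matched": 1}

def run(inputs, state="idle"):
    for bit in inputs:              # one iteration == one clock edge
        state = TRANSITIONS[(state, bit)]
        yield state, OUTPUT[state]

for state, out in run([1, 0, 1, 1, 1]):
    print(state, out)
```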

Asynchronous state machines are another matter: anything can happen at any time, and there are lots of different state machines running at cross-purposes with signals passing between them at varying processing delays (think: neural networks, and no, I'm not joking). People who play with PCs are using von Neumann architectures, but the general idea is that there are lots of interrupts: DMA cycles, keyboard clicks, storage units NAK/ACKing stuff, Ethernet port interrupts, and, by gum, the system timer interrupt, and they can all occur at any time, in any place.

One would like to think that a CPU can be emulated. Ha. Certain things can, but it's very much not like, say, simulating a linear-component bandpass filter. I know people who run simulations on system software. It's sure not like playing with VHDL/Verilog, which have their own foibles.

I stand in a certain amount of awe at Tesla's ability to run ridiculously monster simulations on the Dojo setup, which is so far past what we consider a supercomputer that somebody needs to come up with a different name: superultracomputer? XtremeSuperComputer? Dunno. Because even with that (and there's a lot of that in Dojo), there are just Too Darn Many Moving Parts in the driving computer for a complete solution. This is where I'm showing my age: I suspect there may be no deterministic solution for the set of problems the driving computers are set to solve.

Having said that: engineering with issues like this falls into the "nines" category. Every improvement hopefully adds a 9. At some point the error rate gets low enough that, across a population of millions of cars, there might be one error every third year.
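To put rough numbers on that (every figure below is an assumption chosen just to show the arithmetic):

```python
# Back-of-envelope "nines" arithmetic. Every number here is an assumption
# chosen only to illustrate the scale, not a real fleet statistic.

fleet_size = 5_000_000            # cars
miles_per_car_per_year = 12_000
years_per_error = 3               # one fleet-wide error every third year

fleet_miles_per_error = fleet_size * miles_per_car_per_year * years_per_error
error_rate_per_mile = 1 / fleet_miles_per_error

print(f"{fleet_miles_per_error:.1e} miles per error")   # 1.8e11 miles
print(f"{error_rate_per_mile:.1e} errors per mile")     # ~5.6e-12, about eleven nines
```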

And we use stuff every day that has error rates like that. Long-haul transmission systems use forward error correction, where, with a 1% raw error rate (or greater!) on the received data, the FEC can bring the probability of an actual data error affecting customer communications down to, like, once every dozen years or so. CDs, DVDs, and spinning hard drives all use technology like that.
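The FEC mechanism is easy to sanity-check with a binomial model; this sketch assumes a hypothetical code that corrects up to t errors per n-bit block, whereas real long-haul codes are considerably more involved:

```python
from math import comb

# Residual error rate of a hypothetical t-error-correcting block code under
# a binomial channel model: a block is lost only if more than t of its n
# bits are flipped. Parameters are illustrative, not any real standard.

def block_failure_prob(n: int, t: int, raw_ber: float) -> float:
    """P(more than t bit errors in an n-bit block)."""
    ok = sum(comb(n, k) * raw_ber**k * (1 - raw_ber)**(n - k) for k in range(t + 1))
    return 1 - ok

n, t, raw_ber = 1000, 30, 0.01   # 1% raw error rate, correct up to 30 errors/block

p_fail = block_failure_prob(n, t, raw_ber)
print(f"raw BER: {raw_ber:.0%}, residual block failure: {p_fail:.2e}")
```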

And this is why Musk says that, at some point, people would be idiots not to use a self-driving car. When the car's error rates make it a hundred times safer than an unenhanced human driver, why would we allow humans to drive?
 
Looks like a regression with the new occupancy network causing phantom swerving for speed bumps detected as an obstacle in the road:
[Attached image: phantom swerve occupancy speed bump.jpg]


Seems like a tricky problem, as visually a speed bump does look like an object in the road, except it's designed to be driven over. I suppose that's another reason the existing static-object prediction needs to be kept around: to provide special meaning to certain objects.
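As a sketch of what that "special meaning" could look like in code - hypothetical names and thresholds, emphatically not Tesla's actual pipeline - a semantic label can override the occupancy verdict for known-traversable classes:

```python
from dataclasses import dataclass

# Hypothetical fusion of an occupancy verdict with a semantic class label.
# The occupancy network only knows "something is sticking up out of the
# road"; the semantic layer knows some of those somethings (speed bumps,
# manhole covers) are meant to be driven over.

TRAVERSABLE = {"speed_bump", "manhole_cover", "expansion_joint"}

@dataclass
class Detection:
    occupied: bool          # occupancy network says a volume is filled
    semantic_class: str     # classifier's best guess, e.g. "speed_bump"
    height_m: float

def should_avoid(d: Detection, max_traversable_height_m: float = 0.12) -> bool:
    """Swerve/brake only for occupied volumes that are not known-traversable."""
    if not d.occupied:
        return False
    if d.semantic_class in TRAVERSABLE and d.height_m <= max_traversable_height_m:
        return False        # slow down for comfort instead of swerving
    return True

print(should_avoid(Detection(True, "speed_bump", 0.08)))   # False: drive over it
print(should_avoid(Detection(True, "debris", 0.08)))       # True: avoid
```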
 
Looks like a regression with the new occupancy network causing phantom swerving for speed bumps detected as an obstacle in the road:

Oops, if they used my vehicle's data to train the network, that's my bad. I always drive around speed bumps whenever I can.

But in all seriousness, I wonder what it would do when faced with a large speed bump and oncoming traffic. Come to a full stop and wait for the oncoming traffic to pass, then try to navigate around the speed bump?
 
The task of developing a fully autonomous vehicle (no steering wheel or pedals) is almost infinite. Everyone is looking for a vehicle that drives exactly as they do, and we all drive differently. In addition, there will always be that new edge case no one has encountered before. This will also be the case for insurance based on driving performance. Yesterday, while driving in full manual mode, I was hit with several major warnings as I drove through a very convoluted construction area at 10 mph. The route involved getting close to cones and barriers and crossing double lines multiple times.

I know Elon is committed to FSD and that it will eventually get better than the average human driver, but I think his aggressive commitments are hurting his loyal customer base. Asking people who just paid $60K for a vehicle to pay another $12K for a promise is a tough pill to swallow.
I realize this is not a popular perspective but it's going to be a long time before FSD can handle 100% of driving without driver assistance.
I continue to believe the greater revenue driver for the average owner is not driverless cars but simply the capability for FSD to handle 99.5% of driving, with handoff of edge cases to the driver in a non-emergency manner. In other words, instantaneous takeover is not required. For example, you come up on a construction project with flagmen or police officers handling traffic, or you get rerouted because of a traffic accident. So let's assume the driver has 30 seconds to respond. Tesla could then ask regulators to allow drivers to text, watch videos, or otherwise be distracted until and unless they must take over. No sleeping.

The first carmaker who provides this will not be able to meet demand. Sure, robotaxi is important, but not for the average owner. I would much rather see Tesla approach FSD as a phased implementation. I'm not looking for the holy grail.
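As a sketch of what that non-emergency handoff could look like as logic, here's a hypothetical takeover state machine using the 30-second budget above; it's illustrative, not any shipping system:

```python
import enum

# Hypothetical L3-style handoff state machine: when the planner flags an
# upcoming scenario it can't handle (construction flagger, police directing
# traffic), it requests a takeover with a 30 s budget instead of demanding
# an instantaneous one. Purely illustrative.

class Mode(enum.Enum):
    AUTONOMOUS = enum.auto()
    TAKEOVER_REQUESTED = enum.auto()
    MANUAL = enum.auto()
    MINIMAL_RISK_STOP = enum.auto()   # driver never responded: pull over

TAKEOVER_BUDGET_S = 30.0

def step(mode, driver_has_taken_over, seconds_since_request, edge_case_ahead):
    if mode is Mode.AUTONOMOUS and edge_case_ahead:
        return Mode.TAKEOVER_REQUESTED
    if mode is Mode.TAKEOVER_REQUESTED:
        if driver_has_taken_over:
            return Mode.MANUAL
        if seconds_since_request > TAKEOVER_BUDGET_S:
            return Mode.MINIMAL_RISK_STOP
    return mode

mode = step(Mode.AUTONOMOUS, False, 0.0, edge_case_ahead=True)
print(mode)                            # TAKEOVER_REQUESTED
print(step(mode, False, 31.0, True))   # MINIMAL_RISK_STOP
print(step(mode, True, 12.0, True))    # MANUAL
```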
 
I continue to believe the greater revenue driver for the average owner is not driverless cars but simply the capability for FSD to handle 99.5% of driving, with handoff of edge cases to the driver in a non-emergency manner.
Fully agree, although I think it will be a very long time before it gets to even 90%.
The current method of approaching everything as if it's the first time, while ignoring most road signs, means that almost everything is a surprise, which all but guarantees an emergency takeover.
It's a great technical achievement as far as it's gotten, but I doubt HW3 will ever achieve anything beyond driver assistance, with the driver paying full attention and ready for an instant handoff/hot potato.
 
Maybe you could explain your statistic (90%)? Is that miles driven, time driven, or situations handled safely?
 
It hardly matters .. most of the "it can't/won't do X/Y/Z" posts here are pure speculation based on guesswork. Look back a few years here and you will see people crafting arguments to "prove" it would not be able to see traffic signals (it can), would not be able to drive at night (it can), etc. I'm not saying that Tesla can achieve their goals, just that at this point in time no one knows either way .. including Tesla themselves :)
 
Everyone is looking for a vehicle that drives exactly as they do and we all drive differently

I have zero desire for a car that drives as I do - clearly that's not possible. Even today there are times where, in my opinion, FSD is either "too fast" or "too slow". I can't change that other than by trying to build consensus on the specifics of the situation and hoping the engineers pay attention. But the car will *never* drive like I do. If someone expects that, they have already lost the game.
This is starting to drive me batty. Why are we diminishing the ability of a driving computer in this fashion?

I’ve been very clear: I want a vehicle which drives far better and more consistently than I do, and it’s also fine if it is configurable for different levels of assertiveness to meet personal preferences with no appreciable difference in safety.

I don’t think that’s too much to ask. If it has excellent vision, understands what it cannot see, and can measure distances accurately up to a few hundred yards, there is no reason it should not be excellent.

We’re not talking about driving styles here. We’re talking about quantifiable measures of human comfort which have been extensively studied when implementing vehicle control systems, apparently for decades.

This “it will never drive like you” (or similar) argument was used recently to try to quash claims that FSD Beta drove in a jerky manner (which was somehow controversial, I guess?). Next thing we know, Tesla rolls a bunch of updates into 10.69 to fix glaring control-loop latency issues. It was broken! And in 10.69 it probably still is broken - just not as broken; it might be starting from a better place now.

Instead of deciding that our future self-driving cars are going to snap our necks at every turn, let’s always remember that a computer driving a car can be smoother and more consistent than we can ever be at performing the basic control functions of the vehicle. (Notably, this impeccable control does not exclude it crashing into things for no good reason - that’s a different issue; computer vision is tricky.) And let’s realize that to the extent it is not excellent, it is broken.
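For what “smoother and more consistent” can mean mechanically, here's a toy jerk limiter of the kind comfort-tuned controllers use; the limits and tick rate are made-up values:

```python
# Toy jerk-limited longitudinal command filter: clamp how fast commanded
# acceleration may change per control tick. Real comfort tuning bounds
# acceleration and jerk (and their lateral equivalents); values here are
# illustrative only.

MAX_JERK = 2.0      # m/s^3, allowed change in accel per second
DT = 0.05           # s, 20 Hz control tick

def smooth_accel(current_accel: float, target_accel: float) -> float:
    """Move toward target_accel without exceeding the jerk limit."""
    max_step = MAX_JERK * DT
    delta = max(-max_step, min(max_step, target_accel - current_accel))
    return current_accel + delta

accel, history = 0.0, []
for _ in range(20):                # planner suddenly wants hard braking
    accel = smooth_accel(accel, target_accel=-3.0)
    history.append(round(accel, 2))
print(history)                     # ramps down 0.1 m/s^2 per tick, no neck snap
```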

For the time being, Tesla is not prioritizing this polish, which is presumably why it is terrible (by terrible I mean that passengers will invariably ask you to turn it the f*** off).

Anyway, none of this is to suggest that there will be any wide release any time soon - as long as such shortcomings exist, there won’t be one. People simply won’t use or pay for FSD if their passengers insist that it be turned off. It’s not like Autopilot, which has some real utility even if it is kind of annoying in occasional circumstances. For city streets, drivers and passengers aren’t going to put up with something that is not extremely polished (it certainly isn’t going to be driving people from point A to point B without intervention as a trade-off for that lack of polish!). There’s just no value there if you have to take over at every corner to keep your head from being snapped around, or whatever the issue may be on that particular drive.
After this all settles out we’ll see I guess. Going to be an interesting few years.
 
I continue to believe the greater revenue driver for the average owner is not driverless cars but simply the capability for FSD to handle 99.5% of driving, with handoff of edge cases to the driver in a non-emergency manner.
I agree. Driving "assist" is all I want from FSD! I intend to test out FSD city-driving assist on my usual routes around my neighborhood. If it reduces the effort of driving those routes (even with known disengagements), I will be happy.
 
I continue to believe the greater revenue driver for the average owner is not driverless cars but simply the capability for FSD to handle 99.5% of driving, with handoff of edge cases to the driver in a non-emergency manner.
Agreed. A long way to go. Even if it wasn’t L3, as you suggest, it would still be a long way from being actually useful.
 
Looks like a regression with the new occupancy network causing phantom swerving for speed bumps detected as an obstacle in the road.
One way to help solve this issue would be for FSD to recognize the yellow speed-bump warning signs.

[Attached image: speed bump yellow warning sign, circled]
 
I doubt HW3 will ever achieve anything beyond driver assistance, with the driver paying full attention and ready for an instant handoff/hot potato.
The only real hurdle is accurately recreating the world in vectorspace at a decent frame rate.

And they’ve demonstrated by now that HW3 can do that. It’s not quite there yet, but that just takes a bunch more NN targeting, training, and tuning.

After that there is a TON of work still to do to add the heuristics behind key information detection and decision-making, but this is essentially a mountain of relatively trivial work.

And by trivial, I don’t mean it doesn’t take a lot of smarts and won’t have mistake/decision setbacks. But the implementation process for these is trivial.

Once you can detect a “no left turns during school days while mercury is in retrograde” sign, the decision and implementation of what the car needs to do on that is trivially easy from an architectural point of view.

It just takes a LOT of work. Not a lot of compute thankfully. (The vectorspace modeling is what takes the real compute.)
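As a sketch of why the per-sign logic is architecturally trivial once detection works - a hypothetical rule table riffing on the sign example above, with invented names throughout:

```python
import datetime

# Hypothetical rule table: once perception can emit a structured sign
# detection, mapping it to a planner constraint is a lookup plus a
# predicate. The hard part is the detection, not this.

def school_day(now: datetime.datetime) -> bool:
    return now.weekday() < 5          # toy stand-in for a real school calendar

SIGN_RULES = {
    # detected sign type -> (maneuver to forbid, condition under which it applies)
    "no_left_school_days": ("left_turn", school_day),
    "no_right_on_red":     ("right_on_red", lambda now: True),
}

def forbidden_maneuvers(detected_signs, now):
    out = set()
    for sign in detected_signs:
        rule = SIGN_RULES.get(sign)
        if rule is not None:
            maneuver, applies = rule
            if applies(now):
                out.add(maneuver)
    return out

now = datetime.datetime(2022, 9, 7, 8, 30)    # a Wednesday morning
print(forbidden_maneuvers(["no_left_school_days"], now))   # {'left_turn'}
```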

Anybody thinking FSD Beta’s current driving ability is a bad sign of where it will end up on the exact path it is currently on (hardware, architectural approach, etc.) is just wildly, and I mean WILDLY, misunderstanding its development process.

This isn’t like testing Mk1 of a spacecraft and finding its faults to go back and tweak designs. This is more like the tank hop tests where the nose cone is not planned to be put on for many months.
 
There doesn't seem to be a point to driving the same road twice a day and reporting the same stupid moves over and over - it's just easier to leave FSDb disengaged.
There's a tar strip in the middle of a briefly wide lane on my way to work. (Think very long bus stop.) Given the angle of the sun in the morning, the tar looks lighter than the asphalt. FSD treats that lighter tar strip like a lane marker. Thus it swerves into the wrong "lane" and has to swerve again when the road narrows a hundred yards later.

But in the afternoon the tar looks darker so FSD ignores it.

Forcing a disconnect by holding the steering wheel to reject the first swerve, then hitting the report button, gives Tesla the training data to reject tar strips in more lighting conditions.

Doing it every day gives them a bunch of similarly geolocated disconnects. (That's a signature that can be mined out of a sea of big data.) Doing it every day also provides subtle variations in the captured video data with respect to lighting, weather, other vehicles, etc.
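That mining could be as simple as bucketing disengagements into coarse location cells and counting repeats; this sketch uses made-up coordinates, and Tesla's actual pipeline is unknown to us:

```python
from collections import Counter

# Sketch of mining geolocated disengagements out of "a sea of big data":
# snap each event to a coarse lat/lon grid cell and count repeats. Cells
# with many hits across days and drivers are candidate problem spots, like
# the tar strip above. All data below is made up.

GRID = 0.001   # roughly 100 m cells at mid latitudes

def cell(lat: float, lon: float) -> tuple[int, int]:
    return (int(lat / GRID), int(lon / GRID))

disengagements = [
    (37.4275, -122.1697),   # same tar strip, day 1
    (37.4276, -122.1697),   # same tar strip, day 2
    (37.4275, -122.1698),   # same tar strip, day 3
    (40.7128, -74.0060),    # unrelated one-off
]

hotspots = Counter(cell(lat, lon) for lat, lon in disengagements)
for c, n in hotspots.most_common():
    if n >= 3:
        print(f"cell {c}: {n} disengagements -> pull video clips for review")
```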