FSD Beta 10.69

Not sure if it's a software development issue, configuration management, NN training, etc., but these builds are having a hell of a time maintaining past improvements as well as eliminating bugs. It almost never pays to rush something to delivery, but the performance of the last three builds is a head-scratcher. The responsiveness isn't uniform: it's slow in some benign scenarios (stopping in oncoming lanes) but old-school laggy in others. That's not confidence-inspiring for the driver.

Maybe the design is close to an optimal balance between responsiveness and phantom braking, with little room left for new features.
 
I think this is indeed the case. In my area there are four-lane roads where the average speed is 60 mph. For an unprotected right on red at a traffic light, the cameras have to look across a wide intersection and then work out the velocity of cars approaching the light, about 100 m away. With 10.12, at least, it seems not to see the traffic at all: when the road is 100% clear, it pauses a good 15 seconds and then makes a timid commit to the turn. When there is approaching traffic, it does the same thing, except now there is traffic and I have to disengage or slam on the accelerator / change lanes. Curious to see whether 10.69 is any better on this type of turn.
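Some back-of-envelope timing shows how tight that judgment is. Here's a quick sketch in Python, using only the 60 mph and ~100 m figures from the post above:

```python
# Rough gap timing for the unprotected right turn described above.
MPH_TO_MPS = 0.44704

speed_mps = 60 * MPH_TO_MPS          # ~26.8 m/s
sight_distance_m = 100.0             # range at which cross traffic appears

time_to_arrival_s = sight_distance_m / speed_mps
print(f"A car 100 m out at 60 mph arrives in ~{time_to_arrival_s:.1f} s")
# ~3.7 s to detect, estimate velocity, decide, and clear the lane.
# A 15 s pause means the gap it evaluated no longer exists by the
# time it finally commits.
```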

Phantom braking is probably similar: at 1280x960 you are calculating the risk of hitting fuzzy pixelated blobs at 50 meters. AI can do amazing things, but it will be limited by the cameras. If the cameras were sharper, there would be much better distinction between shadows/lighting and a concrete object in the road at distance. And yeah, the compute for higher-res cameras would be way too much.
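To put a number on those fuzzy blobs, here's a minimal sketch assuming a roughly 50° horizontal field of view for the main camera (my assumption for illustration, not a published spec):

```python
import math

H_RES_PX = 1280        # horizontal resolution from the post above
H_FOV_DEG = 50.0       # assumed horizontal field of view (illustrative)
CAR_WIDTH_M = 1.8      # typical passenger-car width

def pixels_on_target(distance_m: float) -> float:
    """Approximate horizontal pixels a car-width object occupies."""
    angle_rad = 2 * math.atan(CAR_WIDTH_M / (2 * distance_m))
    px_per_rad = H_RES_PX / math.radians(H_FOV_DEG)
    return angle_rad * px_per_rad

for d in (50, 100):
    print(f"{d:>4} m: ~{pixels_on_target(d):.0f} px wide")
# ~53 px at 50 m, ~26 px at 100 m. Doubling the sensor resolution
# doubles those counts, but quadruples the pixels the compute
# pipeline has to chew through every frame.
```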
You need to watch this. Great explanation of how the cameras' current limitation is the software, not the hardware.
 
Maybe she shouldn't be typing a letter while driving through pedestrian areas…
If I were a safety regulator looking at this, I'd be wondering why FSD isn't throwing any alerts as the vehicle haphazardly navigates this busy road with a driver whose attention is divided. If her behavior is problematic, we're going to see much worse when this goes wide release.

But the overcautiousness is probably there for exactly these reasons, and people already complain about the nags.


They probably literally have a slider in the dev tools to adjust sensitivity to these things.
 
Your criticisms are a great indicator of FSD Beta's progress. If sporadic slowdowns are all you can pick on in a video, then things are going pretty well.
Not when single drives still have severe safety disengagements. We are not talking about the 1,000th drive after a software update.

Secondly, excessive phantom braking is literally identified by NHTSA as a safety defect and has historically triggered recalls.

Thirdly, of course Tesla's software will make progress; they have been working on this for over seven years. The issue is the rate of progress, which is snail-slow compared to what Elon fraudulently proclaims every year.
 
Hmmm. Same PB issues.
I'm not seeing "phantom braking." What I get in the areas where I had issues on .1, and still have on .2, is what I would call phantom regen, meaning the car lets off the accelerator but does not apply the brakes. Same effect: the car slows for about one or two seconds, but not enough to freak out anyone behind me. Improvement still needed.
 
I know something about DSP (in audio), but not nearly to the depth that you seem to have. But I still think higher resolution cameras could help a lot in two ways. The most obvious is that pixels can be combined to form a higher quality image that is at the same resolution as those provided by the current cameras. The same processing with better quality images should yield real benefits.

But when we drive, we are not constantly looking at what is happening a quarter mile away. We look occasionally and retain what we have observed until we get the chance to look again. Tesla could do something similar. Think of it as two threads of processing, one a very high frame rate of low-res images (formed by pixel binning), the other a much lower frame rate of native high-res images.
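A minimal numpy sketch of that two-thread idea, with an invented frame schedule and a simple 2x2 mean as the binning step (real sensors bin on-chip, but the arithmetic is the same):

```python
import numpy as np

def bin2x2(frame: np.ndarray) -> np.ndarray:
    """Mean-bin each 2x2 pixel block into one pixel: half the
    resolution, and sensor noise averages down in the process."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Hypothetical dual-stream schedule (all numbers illustrative):
#   fast path: every frame, binned low-res  -> motion, nearby objects
#   slow path: every Nth frame, native res  -> distant detail
NATIVE_SHAPE = (1920, 2560)   # assumed higher-res sensor, 2x current
SLOW_EVERY_N = 6              # high-res frames arrive at 1/6 the rate

for i in range(12):
    frame = np.random.rand(*NATIVE_SHAPE)   # stand-in for a capture
    fast_view = bin2x2(frame)               # 960x1280: cheap per-frame work
    if i % SLOW_EVERY_N == 0:
        detail_view = frame                 # full res, processed rarely
```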
No real argument in what you're saying, but there's this thing with Shannon.

For those of you who don't know, Shannon is known as the Father of Information Theory. Amongst other things, he related bit flipping in a channel to entropy, as in thermodynamics, and the 2nd law of thermo, which says that in a closed system things proceed to a disordered state, with no going back. (As I remember, the three laws of thermodynamics are: 1) you can't win, 2) you can't break even, and 3) the game is rigged.)

How this relates to moving information: there's this plot, somewhere, that says that if one has a certain signal-to-noise ratio in a channel of fixed bandwidth, then there's a maximum data rate in that channel. It's the Shannon–Hartley theorem. What's interesting about this theorem, and the plot, is that it gives an upper limit that is truly hard and fast: you can't transmit data faster than that line on the plot.
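For reference, the theorem has a compact statement. With C the capacity in bits per second, B the channel bandwidth in Hz, and S/N the signal-to-noise power ratio:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

No coding scheme, however clever, gets a reliable data rate past that C.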

What it doesn't say is how to get to that limit 😁. Put the world's largest collection of forward error correction codes on the data one is launching into that noisy channel? One will approach that limit, but won't pass it. Apply one's earth-shaking algorithm that nobody's ever thought of before? Great! One has moved things forward, and one is closer to that limit, but one won't pass it. It's that kind of thermodynamics.

This idea in information processing (and, yeah, that includes image recognition, moving objects, etc.) applies everywhere. So, if one has a high-bandwidth, non-noisy channel, it's just amazing how much information can get passed and used. A high-bandwidth, noisy channel: I'm a-thinking that that describes what Tesla's trying to do.
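To make the clean-versus-noisy contrast concrete, here's a quick numeric sketch plugging two invented SNRs into the same formula (the bandwidth figure is arbitrary too):

```python
import math

def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B_HZ = 1e6  # 1 MHz of bandwidth, an arbitrary example figure
for label, snr_db in (("clean channel", 40), ("noisy channel", 3)):
    snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    print(f"{label} ({snr_db} dB): ~{capacity_bps(B_HZ, snr) / 1e6:.2f} Mbit/s")
# clean (40 dB): ~13.29 Mbit/s
# noisy  (3 dB):  ~1.58 Mbit/s: same bandwidth, roughly 8x less information
```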

Just so we're clear: this isn't saying that Tesla can't get autonomous driving down pat. We wetware types do it all the time. But the algorithms we use to do this are opaque and are part of current research. And, naturally, good old evolution and cut-and-try has Done Things in eliminating Things That Don't Work (Didn't spot that tiger? Now you can check on your failures from the inside. Didn't spot that tasty fruit? Now you get to starve. Etcetera.) What actually works... well, people do get into accidents all the time, so there's a decent idea that we aren't, collectively, quite up against any Shannon limits, at least with the algorithms that we're running inside.

Musk has been pretty adamant that Tesla is going to come up with a self-driving car that is an order of magnitude safer than a human driver, or better. I strongly suspect that these statements aren't being made in a vacuum: very likely, somebody has done enough of the theoretical work to show that the Shannon limit is big enough to do the job. The tricky bit is getting algorithms in silicon and neural networking hardware that surpass what our wetware can do.

This whole argument about Shannon limits and information processing is why I tend to disbelieve the occasional poster who shows up and says, without much evidence other than Gut Feeling, that it'll be years or never before self-driving cars roam the highways. That's a human-centric view that Humans are Best At Driving, Period. I don't think so: there's plenty of hardware and other living entities out there that are better at various tasks than we are, so where's the proof that driving is a task only humans can do? (Strawman argument, I realize, but still.)
 
Mmmm... Beer.
@AlanSubie4Life Sunday video > 90% success no problem!
Yo Daniel. You and Alan are going to have to up the stakes next time. Maybe a case of wine or something. You guys will spend more money driving to deliver that beer than the beer is worth. Ok, I guess it is worth the satisfaction that one of you is right :)
 
You need to watch this. Great explanation of how the cameras' current limitation is the software, not the hardware.
If you look at this video around 6:25, you can see that the car it "almost" hit was no longer blue (because he disengaged), and the drone footage shows it past or even with Chuck's car, but the display shows it to the left of Chuck's car.
 
Here's the full YouTube video where she's complaining about why FSD Beta takes so long at a stop sign and wants to creep so much into the intersection:

The current moving-object network for predicting vehicles needs to see quite a bit of a vehicle to classify it correctly, whereas a human could see just the top of a vehicle and know the rest is occluded by the fence (notice there is no visualized crossing vehicle below):
[Screenshot: kim why creep.jpg]


Even with the new video occupancy network, there would need to be some classification to understand the larger-picture behavior, that these are vehicles taking turns at a stop sign, instead of just "here are some moving blobs."

Arguably, FSD Beta is being extra cautious in its creeping behavior because there could be a very short vehicle, like a kid on a bike, that the natural human assumption (any vehicle is tall enough to be seen over the fence) would accidentally miss.
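Nobody outside Tesla knows the actual gating logic, but a toy sketch of the trade-off described above, with every threshold invented for illustration, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Sightline:
    visible_range_m: float   # how far we can see down the cross street
    occluded: bool           # fence/parked cars blocking part of the view

def creep_speed_mps(sight: Sightline, required_range_m: float = 100.0) -> float:
    """Toy creep policy: the less we can see, the slower we inch forward.
    Thresholds are invented, not Tesla's values."""
    if sight.visible_range_m >= required_range_m and not sight.occluded:
        return 0.0   # full visibility: no creep needed, just go or wait
    if sight.occluded:
        return 0.3   # crawl: a bike or short vehicle could hide behind the fence
    return 0.8       # partial view: creep faster to open the sightline

# Example: fence blocks the view, only 40 m visible -> crawl at 0.3 m/s.
print(creep_speed_mps(Sightline(visible_range_m=40.0, occluded=True)))
```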
 
Anyone ever get a musty smell coming from the vents? If so, how did you fix it?
Kool-It and new filters fixed this for me. Learned about it from various TMC threads, e.g.:


SW
 