
FSD Beta 10.69

Yep, should be not a problem to replicate. Just need the sensors.
There are two approaches: mechanical and image processing. The brain pathways I describe are mechanical. Many devices (most smartphones, for example) actually use image processing to help stabilize an image. It may be that that is actually a better and/or more reliable solution for this application.
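
As a rough illustration of the image-processing approach: electronic stabilization often estimates the frame-to-frame shift and then cancels it. A minimal phase-correlation sketch on synthetic data (not any phone's actual pipeline):

```python
import numpy as np

def estimate_shift(prev_frame, curr_frame):
    """Estimate the (row, col) translation of curr_frame relative to
    prev_frame via phase correlation; a stabilizer would then shift the
    current frame by the opposite amount to cancel the motion."""
    f1 = np.fft.fft2(prev_frame)
    f2 = np.fft.fft2(curr_frame)
    cross_power = f2 * np.conj(f1)               # phase encodes the shift
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrap-around peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Demo: a synthetic frame "shaken" down 3 px and right 5 px is recovered
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shaken = np.roll(frame, (3, 5), axis=(0, 1))
print(estimate_shift(frame, shaken))  # → (3, 5)
```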
 
Boring video with less weaving all over the road (there are two clear instances).

But I said I would post it, so I am. The exciting thing is we will have an external drone cam this time, eventually. It will be another post from @Daniel in SD, who is helping validate FSD Beta externally, since he unwisely forgot to buy the revolutionary hardware for himself.

One intervention! This version has a tendency to abort on right turns from main roads onto residential/commercial minor side roads. Just the way it is.

By my count I would have intervened about 8 times if I had been driving normally (once every 45 seconds on average). So doing pretty well.

At the end it looked like the path planner punted.

On a side note, I used to ride dirt bikes out there when the Miramar area wasn't so developed.
 

Thanks for posting... I noticed you were headed to an establishment that serves adult beverages... is it safe to assume you won more free beers 🍻?
 
No. The bet on 10.69.3 was inconclusive (insufficient data: you need more than 10 attempts, and I wasn't going to go through all of Chuck's ULTs to figure it out). The count is in a specific video somewhere above. One failure, so I was on a path to potential success, but it could have gone either way.

@Daniel in SD seemed reluctant to make any more bets on the single stack release or whatever release is coming (regardless, fully expected to be 🔥). I don’t see any reason to change my 90% threshold any time soon. Failures are very common. But I am fine sticking with my winnings.

We are still waiting on that first 9, then the march can begin!!! Remarkable what has been accomplished in the last couple years, so hopefully the next 4-5 years will get us to maybe the first and second 9s.
 
There are automatic neural feedback pathways in the brain that move your eyes to counter head movements.
It's awesome having an MD on the forums!

My opinion on the return of radar...

"It turns out the full self drive problem ended up being far harder than I imagined." -Elon Musk
"It turns out that FSD using vision only ended up being far harder than I imagined." Elon Musk, probably

The most recent AI presentation focused a lot on occupancy networks. IMO, HD radar (depending on the TX/RX hardware used) is a much better *all-around* sensor for filling out occupancy maps of the surrounding environment than LIDAR. Yes, I know each has its own individual strengths, but since radar is much better at piercing through most low-visibility conditions than LIDAR, it takes the OVERALL crown. Especially when it isn't being used as the primary sensor for filling out the occupancy map; Tesla Vision has that responsibility. Confused? Good, let me explain... :D

Let's go back to an old problem the military used to face: how would a sniper estimate the distance to his target? The old method used an estimate based on the size of an average human being. You'd put the target into preset size-vs-distance brackets and get a very rough estimate of the target's distance. This is analogous to today's Tesla Vision. So what's the state of the art in figuring out sniper range-to-target? Lasers, NOT vision. Kinda/sorta similar to LIDAR. So technology wins vs. plain old vision.

Now apply the sniper situation to Tesla Vision. How far away is that dog that the object recognition system has labeled "dog"? Well, shoot. What breed is the dog? How can we use average canine size to determine the distance to "dog" if it's a Pomeranian when "average dog" size is set to poodle? How well can vision alone solve this distance-to-target problem? Not very well. HD radar has entered the chat.
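
The size-prior problem can be made concrete with a toy pinhole-camera model (focal length and dog sizes are made-up numbers, not anything from Tesla's stack):

```python
# Toy pinhole model: pixel height = focal_px * real_height_m / distance_m,
# so distance inferred from apparent size scales directly with the assumed
# real size -- a wrong breed prior means a proportionally wrong distance.
FOCAL_PX = 1000  # hypothetical focal length, in pixels

def distance_from_size(assumed_height_m, observed_height_px):
    return FOCAL_PX * assumed_height_m / observed_height_px

observed_px = 50                                   # dog spans 50 px in the image
true_dist = distance_from_size(0.25, observed_px)  # Pomeranian, ~0.25 m tall
est_dist = distance_from_size(0.60, observed_px)   # "average dog" prior, 0.60 m
print(true_dist, est_dist)  # → 5.0 12.0 (meters): a 2.4x overestimate
```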

Tesla Vision can initially be used to fill out the occupancy map, and HD radar can be used to fill in any distance-to-object questions that the Vision system isn't able to adequately resolve.

So in my opinion, they are using it to answer any "hey, I'm not sure about the distance to this particular object" situations that Tesla Vision might have.

HD radar can also help resolve any speed and direction questions Tesla Vision might have, too.

So it's not really a matter of "fusing sensor data," it's a matter of the primary system saying, "hey, back me up here, I'm having a hard time resolving the distance to this object with xx% confidence, little help?" So in instances where Tesla Vision is able to resolve all objects to the required confidence level, HD radar wouldn't be used at all.
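
That arbitration could be sketched like this (the threshold, names, and numbers are hypothetical, purely to illustrate the idea):

```python
# Confidence-gated fallback: use vision's distance when it is confident
# enough; otherwise ask the radar. (Hypothetical threshold and values.)
CONFIDENCE_THRESHOLD = 0.95

def resolve_distance(vision_dist_m, vision_conf, query_radar):
    if vision_conf >= CONFIDENCE_THRESHOLD:
        return vision_dist_m, "vision"   # radar never consulted
    return query_radar(), "radar"        # "back me up here"

# Clear day: vision is confident, radar stays idle
print(resolve_distance(30.0, 0.99, query_radar=lambda: 30.2))  # → (30.0, 'vision')
# Low confidence: radar's range wins
print(resolve_distance(30.0, 0.70, query_radar=lambda: 30.2))  # → (30.2, 'radar')
```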

Again, in my opinion, this would help a lot in the directions where Tesla Vision doesn't have binocular vision to help with its "depth perception", i.e., any direction that isn't forward facing. ;)

Of course, this is all speculation and opinion on my part. But in my admittedly tiny, smooth brain, it makes sense.

Edit: It *could* also be used to help train the Vision system. Vision: "Hey, I think that object is 100 feet in front of us." HD Radar: "Close, it's 99.2 feet." Vision: "Noted. Added for training."
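
That training idea amounts to treating the radar range as a regression label. A minimal sketch, with invented values:

```python
# Radar-as-teacher: vision predicts a distance, radar supplies the label,
# and the squared error is what a training step would minimize.
def depth_loss(vision_pred_m, radar_label_m):
    return (vision_pred_m - radar_label_m) ** 2

# (vision estimate, radar range) pairs, made up for illustration
samples = [(100.0, 99.2), (45.5, 46.0)]
losses = [round(depth_loss(v, r), 4) for v, r in samples]
print(losses)  # → [0.64, 0.25]
```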
 
See 15:25 (t=925) in this video for related examples of HD radar over cameras/vision.

 
I've corrected things on OpenStreetMap, but Tesla still gets them wrong (the correction was 18 months ago and was formally approved by OSM).

You also need to update TomTom.

I see a lot of posts regarding the need to update 3rd-party mapping databases to improve FSD, in this case OpenStreetMap and TomTom.
Does anyone have any proof these changes actually work? And if they do work, why wouldn't Tesla let everyone know that? I know, my last sentence is ridiculous by its very nature. Tesla, communicate something relevant to FSD? LOL.
 
Supposedly Tesla did just that: they used radar to train the vision algorithms and achieved parity. Now, users did not report parity in their experience, so there's a question of how they defined parity, but that was the report.

My guess is that they had very good days on estimating distances or some other specific parameter but the actual integration of that visual data vs the radar data was lacking.

There are other ways to estimate distance besides apparent size. For example, both real and perceived movement relative to other objects also provide data. If you think about it, humans with vision in only one eye function quite well and don't go around bumping into things. Of course we've seen ample examples of how humans can easily do things that computers have a very difficult time with, so just because a human does it doesn't mean they can make the computer do it or do it as well.
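
One of those motion cues can be sketched as motion stereo: with a known camera translation between frames, a static object's pixel shift gives depth with no size prior at all (focal length and numbers are illustrative):

```python
# Motion stereo: one camera at two instants acts like a stereo pair, so
# depth = focal_px * baseline_m / disparity_px, same as binocular vision.
FOCAL_PX = 1000  # hypothetical focal length, in pixels

def depth_from_parallax(baseline_m, disparity_px):
    return FOCAL_PX * baseline_m / disparity_px

# Camera translates 0.5 m between frames; a static object shifts 20 px
print(depth_from_parallax(0.5, 20))  # → 25.0 (meters)
```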
 

IMO, radar (or lidar) augments vision; they do not act as redundant sensors. Meaning: if vision fails completely (like the often-cited foggy/blizzard conditions), relying solely on radar or lidar will not work, because those sensors have their own limitations that prevent safe driving.

The HD radar example in the video also sheds some light onto why Tesla's old radar was pretty much useless. It comes down to how much you can trust the data coming from the radar. With the HD radar, it can identify overpasses, discrete vehicles in a clump, and objects at distances near impossible for cameras. If you can trust that data, then when radar and vision don't align (like the distant object), the car can more confidently rely on the radar and take action. Whereas with Tesla's old radar, there was a huge confidence problem. Do you phantom brake? Or do you cause an accident? Might as well be a tossup.

I think it's a positive sign that Elon believes that vision + HD radar is more reliable. This is great for advancing the capability of the FSD program, but clearly there's a logistical challenge of addressing already-built cars. It remains to be seen how Tesla will handle that. Once upon a time, Musk did say that the justification for the high cost of FSDb was that you'll get the hardware needed for FSD. We've already seen poor execution of this with older S/X FSD cars, of which there aren't that many. Updating all 3/Y FSD cars with the latest hardware seems like it will be much more taxing.
 

The only effect I've noticed was that my speed-limit changes on TomTom were picked up at the next nav update. Nothing else I've done has made any difference.

I also took a closer look at your Gorham fork on OSM, and I didn't see anything out of the ordinary with how that intersection was defined.
 
Yes. The road that I live on had the wrong speed limit and I changed it on both OSM and TT. It took a year, but that and other speed limit changes that I made finally showed up.
 
I can say that I have made around half-a-dozen corrections to TomTom and most of them make it to my Tesla in the next map update. …not sure how we can prove it works.
 
There are other ways to estimate distance besides apparent size.
Yeah, that was just the quickest, easiest (and possibly hackiest?) analogy I could think of.

Maybe the return of radar (albeit HD radar this time) means they might need higher-resolution data, or maybe it'll be used to help train for low-visibility or precipitation conditions?

The next couple of years are going to be exciting to see.
 
What IS HD radar?

Radar is radar. I'm assuming there are some improvements to processing on the back end, and maybe a higher frequency / narrower beamwidth to improve the resolution cell?

The term gets tossed around quite a bit but I've not yet seen it defined. Any industry folks that can help out?
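
For what it's worth, the two axes of a radar's resolution cell are set by different knobs: range resolution by signal bandwidth, cross-range by beamwidth. A back-of-the-envelope sketch (the numbers are illustrative, not any specific product's spec):

```python
import math

C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    # Two targets closer than c / (2B) in range merge into one return
    return C / (2 * bandwidth_hz)

def cross_range_width_m(beamwidth_deg, range_m):
    # Footprint of the beam at a given range: everything inside is one cell
    return 2 * range_m * math.tan(math.radians(beamwidth_deg) / 2)

print(range_resolution_m(1e9))                 # 1 GHz bandwidth → 0.15 m
print(round(cross_range_width_m(10, 100), 1))  # 10° beam at 100 m → 17.5 m wide
```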
 
Here's an article that talks about some advances in HD and 4D radar:

 
Various sources indicate "HD" means digital vs. analog radar:

Digital vs. analog radar: The traditional analog radar, like old smokey used to use to hand out tickets, operates by sending out radio waves and measuring the reflected signal. If there's a difference in velocity between the radar unit and the object, there will be a Doppler shift in the reflected wave.

Digital radar uses a similar radio signal, but each transmitter sends out a 77-GHz signal with a unique digital code modulated on it that allows the receiver to distinguish each individual signal from among any other 77-GHz waves bouncing around the area. This technique, known as digital code modulation, allows the system to sense instantaneous position as well as velocity, with a claimed 30 times the contrast of traditional analog radar.

16x resolution: Typical integrated analog radar chip solutions pair three transmitters with four receivers, but Uhnder's 4D digital-imaging radar chip features 12 transmitters and 16 receivers, for 192 virtual radar channels, each capable of determining the speed and distance of a distant object. That vastly richer point cloud results in images with 16 times the resolution afforded by a standard analog radar unit.
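
The Doppler relationship the analog description relies on is easy to put numbers on (the carrier and speed here are just illustrative):

```python
# Doppler shift for a monostatic radar: f_d = 2 * v * f0 / c, where v is
# the closing speed. Analog radar reads speed from this shift directly.
C = 3e8    # speed of light, m/s
F0 = 77e9  # 77 GHz automotive radar carrier

def doppler_shift_hz(closing_speed_mps):
    return 2 * closing_speed_mps * F0 / C

print(doppler_shift_hz(30.0))  # 30 m/s (~67 mph) closing → 15400.0 Hz
```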

----------------------------------------
CES 2022 INNOVATION AWARD PRODUCT
Magna ICON Digital Radar (appears to be Uhnder's too, per their YT channel videos)

WHAT: 4D digital radar
INNOVATION: ICON 4D Radar is a software-defined digital imaging radar, achieving performance levels that open the door for the future of autonomous driving. Scanning the environment in four dimensions with 16 times better resolution and 30 times better contrast than analog, ICON Radar can see a stalled car inside a dark tunnel or detect a child running into traffic from behind a truck that a human driver is likely to miss.

“Analog radar uses frequency modulation,” Reddy tells us. “Phase modulation in digital provides interference robustness.”

Every radar has one of a quintillion unique codes embedded into its signal. That means every signal sent from the unit is unique. The radar is not only looking for the return signal, but also checking whether the inbound signal carries its unique code.

In an intersection with various cars, for example, each car would otherwise have to figure out which radar returns are its own and which came from opposing vehicles. With digital radar, that confusion is eliminated by the unique identifier.
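
A toy sketch of that code separation (random ±1 codes standing in for whatever waveform Uhnder actually uses): correlating the received mix against your own code recovers your echo, while the other car's code averages toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 4096  # code length (illustrative)

my_code = rng.choice([-1, 1], size=N)      # this car's unique code
other_code = rng.choice([-1, 1], size=N)   # opposing vehicle's code

# Both radars' signals arrive at our receiver superimposed
received = my_code + other_code

own_echo = np.dot(received, my_code) / N   # correlates strongly (~1.0)
leakage = np.dot(my_code, other_code) / N  # uncorrelated codes (~0.0)
print(abs(own_echo - 1.0) < 0.1, abs(leakage) < 0.1)  # → True True
```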

There aren’t any sacrifices going to digital, either. We’re told that this new radar technology has a range of greater than 300 meters. In comparison, our long-term Model 3 has a 200 meter radar unit, and most analog radar units work from 150 to 200 meters.

Additionally, the digital radar has a range of 150 meters for pedestrian detection. Digital radar's resolution allows it to pick up things never before distinguished by radar. For example, it can pick up a tire lying in the road and warn the driver. Magna's U.S. offices are in Michigan, and that's totally a Michigan thing.

The Icon Radar can also pick up stopped vehicles inside a tunnel, a pedestrian beside a guardrail as two separate items, and safe pathways on a multilane highway.
 
@AlanSubie4Life
I have the braking bar in my car with proof:
(attached image)
 
He got a strike, so for sure they are not. One wonders if he also used a buddy device when not filming, since the circumstances matched the pattern. And it seems hard to think you would get a strike on the highway with the button method, assuming they haven't defeated it (which it looks like they have not, based on the evidence).

Hoping it is not a buddy device, though (which presumably he would immediately figure out, unlike some users here, who oddly do not), and that somehow he just gets strikes, and they'll keep accumulating. Though maybe he'll have them removed with V11; we'll see (I think that is doubtful). A strikeout would arguably be a blessing to us all.

Analysis contorted by speculations of behavioral compliance with concepts based on no reality at all only helps with flapping wings, just as obfuscation creates its own wind... its own logical fallacy...