Welcome to Tesla Motors Club

Wiki MASTER THREAD: Actual FSD Beta downloads and experiences

I can't say for certain whether or not you are wrong about California, but usually when I see a "no turn on red" sign it is because of the visibility of the intersection (sometimes it might be about the amount of traffic typically present, and then there is often a time limit tied to the sign). Outside of those scenarios, in Indiana, we definitely have plenty of intersections with left green arrows for one direction of traffic that don't have "no turn on red" signs for the other.
I can say for certain I know the California Vehicle Code.
 
On my drive today, I was impressed with the way it handled an unprotected right.

FSD waited for the car to pass and soon after took the right, no hesitation at all.

Looks to me like the planner needs some "time to set up" for turns, and that is the root of the hesitation at turns. Humans start preparing for a turn before the actual intersection arrives, and thus have a good plan for how to execute the turn before it comes. Maybe the planner also needs to do some "prep" (apart from the actual lane change)?
 
After reading the comments about some of the unexplained behaviors by Teslas with FSD, I began to wonder how a human would behave if we saw things the way the car does. What if we had one eye in the middle of our forehead, an eye where each ear is, one eye on each cheek looking rearward, and one eye on the back of the head looking to the rear, and none of the eyes could rotate up or down, left or right? I get a sore neck just thinking about it.
But aren’t there THREE eyes in the middle of the forehead that look LEFT, RIGHT and CENTER? Or more specifically, CENTER, WIDE and NARROW, with varying capabilities and range. The wide one is more of a 150 degree (where IS that symbol on a keyboard?) camera, which is pretty near a human's 135-degree field of view per eye. If I had eyes on the side of my head, well, that would be a game changer.

As for Tesla, I’m more interested in what they can/should do with the REAR camera and FSD. Certainly options that a typical human wouldn’t do.
 
After reading the comments about some of the unexplained behaviors by Teslas with FSD, I began to wonder how a human would behave if we saw things the way the car does. What if we had one eye in the middle of our forehead, an eye where each ear is, one eye on each cheek looking rearward, and one eye on the back of the head looking to the rear, and none of the eyes could rotate up or down, left or right? I get a sore neck just thinking about it.

It sounds weird if you try to imagine it that way, but Tesla knits the 8 camera views into a single 360° bird's-eye view. So from the car's point of view, it is just one 360° view around the car.
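For anyone curious what the pixel-level version of that kind of stitching looks like: a common classical technique is inverse perspective mapping, where each camera image is warped onto the ground plane with a homography and the warps are pasted into one top-down canvas. Here is a minimal single-camera sketch in NumPy; the homography here is just the identity for illustration, not anything resembling Tesla's actual calibration:

```python
import numpy as np

def warp_to_ground(image, H, out_shape):
    """Inverse perspective mapping: for each pixel of the top-down
    canvas, use the homography H (ground plane -> image) to look up
    the source pixel. Nearest-neighbour sampling keeps the sketch short."""
    h_out, w_out = out_shape
    canvas = np.zeros((h_out, w_out), dtype=image.dtype)
    for y in range(h_out):
        for x in range(w_out):
            p = H @ np.array([x, y, 1.0])    # project ground point into the image
            u, v = p[0] / p[2], p[1] / p[2]  # perspective divide
            ui, vi = int(round(u)), int(round(v))
            if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
                canvas[y, x] = image[vi, ui]
    return canvas

# Toy 4x4 "camera image"; with an identity homography the warp is a no-op.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
bev = warp_to_ground(img, np.eye(3), (4, 4))
```

A real multi-camera stitch would run this once per camera with each camera's calibrated homography and blend the overlaps.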
 
On my drive today - I was impressed with the way it handled unprotected right.

FSD waited for the car to pass and soon after took the right, no hesitation at all.

Looks to me like the planner needs some "time to set up" for turns, and that is the root of the hesitation at turns. Humans start preparing for a turn before the actual intersection arrives, and thus have a good plan for how to execute the turn before it comes. Maybe the planner also needs to do some "prep" (apart from the actual lane change)?
Good observation. We also know that FSD treats every intersection as new, whether it's the first time or the 100th time. What improves how FSD handles the intersection is the collective fleet data sent back to Tesla, resulting in programming/neural network improvements in the next release.
 
Good observation. We also know that FSD treats every intersection as new, whether it's the first time or the 100th time. What improves how FSD handles the intersection is the collective fleet data sent back to Tesla, resulting in programming/neural network improvements in the next release.
This brings me back to the question I asked sometime back.

Apparently there was some reference during AI Day to "memory" that the CNN uses in each car. Does anyone remember roughly what time in the AI Day presentation that was?

BTW, there was some speculation about it in past years, with some of us thinking they store some kind of per-car configuration that they use as input to the CNN.
 
@diplomat33 If true, why don’t they give us this “view”? Would be very helpful, actually.
That is an interesting question, and it would be an interesting view of the world. I was also thinking: would it not be better to mount the cameras on the covers of the side mirrors? I think aesthetically and functionally it would be better than the door post and fender. On the mirror cover (which is in a fixed position), the side camera would be farther forward to better see traffic left and right, much like the human driver, and the rearward-facing camera would also be higher, like the human driver. I am far from an expert in this field, just spitballing.
 
That is an interesting question, and it would be an interesting view of the world. I was also thinking: would it not be better to mount the cameras on the covers of the side mirrors? I think aesthetically and functionally it would be better than the door post and fender. On the mirror cover (which is in a fixed position), the side camera would be farther forward to better see traffic left and right, much like the human driver, and the rearward-facing camera would also be higher, like the human driver. I am far from an expert in this field, just spitballing.

Yep. Mobileye does something very close to that.

[image: Mobileye camera placement]
 
@diplomat33 If true, why don’t they give us this “view”? Would be very helpful, actually.
Every time I’m driving my friend’s Taycan I get jealous of the front camera down low. I can’t stand parking spaces with the cement blocks where you pull in. Give us 360 cams, Elon! I know they can’t add a dedicated front camera down low, but we know the car can extrapolate as you drive closer and give you a good idea.
 
@diplomat33 If true, why don’t they give us this “view”? Would be very helpful, actually.

The neural network isn't knitting all of the camera views together in a way that would let you see the actual 360 footage (warping and stitching camera outputs together). They're using all camera views collectively as inputs to the neural network that's outputting the top-down vector interpretation of the world. The output is what you're seeing in the FSD visualization.
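A toy way to see the distinction this post is making: all the camera views go in together, and a top-down interpretation comes out, with no intermediate stitched panorama anywhere to display. Everything below is a stand-in (random values, made-up sizes, a single matrix in place of a real network), just to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight toy "camera images" (8 cameras, 16x16 grayscale). The values
# are random stand-ins, not real footage.
cameras = rng.random((8, 16, 16))

# A stand-in for "the network": all views are flattened jointly and
# projected to a small top-down grid. A real system would use learned
# convolutions and cross-camera fusion; this only shows shapes.
W = rng.random((8 * 16 * 16, 10 * 10))
topdown = (cameras.reshape(-1) @ W).reshape(10, 10)

# The output is a 10x10 interpretation of the scene, not a panorama:
# there is no per-pixel mapping back to any one camera's footage.
```

That's why "just show us the 360 video" isn't a free feature: the displayable intermediate simply doesn't exist in this pipeline.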
 
The neural network isn't knitting all of the camera views together in a way that would let you see the actual 360 footage (warping and stitching camera outputs together). They're using all camera views collectively as inputs to the neural network that's outputting the top-down vector interpretation of the world. The output is what you're seeing in the FSD visualization.
If it's not human interpretable then how do they label it?
 
Just had my car steer around a bit of road debris for the first time. Nice!!! It didn't seem to be represented in the visualization.

This was on a neighborhood street, and the debris was toward the left side of my lane. The car smoothly arced its path to the right side of my lane to completely avoid it. No braking at all. Had my eyes been closed, I wouldn't have even noticed the car was doing it.
I was going to post this exact thing. A water main had popped, and the crew had a square cutout of asphalt in the right lane, with dirt piled up. I was wishing my wife was with me so she could take a picture of the FSD visualization for it. It showed a perfect square of "non-driveable space," bordered with a thick red line.
Have you guys experienced the lag in deceleration when you reduce the speed with the scroll wheel? I'll tell it to reduce speed and it takes forever to drop. I noticed the same for reducing speed when it passes a lower speed limit sign. Example: there is an area that goes from 45 to 35 to 25 in a fairly short distance; I get into the 25 zone and I'm still doing 40.

I haven't experienced this at all. It reacts immediately and usually smoothly for me.
I've experienced the delayed deceleration, starting with the latest release, but it's situational. The highway stack still responds nicely to scroll wheel induced speed changes, whereas FSD Beta seems to take its own sweet time... sometimes.
So random. Good drive one day, and abysmal the next - same route, similar conditions. Doesn’t instill any confidence at all. Looking forward to seeing if 10.4 is any better, repeatability wise.
Yup. I've been repeatedly driving an area around me where FSD chooses... poorly. I'm planning on driving this area over and over again with each release, just to see if it really does help.
Going to work every day, I come up Wabash Ave and turn right on Rose-Hulman Rd. The speed limit on Wabash Ave is 45 mph. The speed limit on Rose-Hulman Rd is 15 mph. When I am driving manually, it is very easy: I come to a complete stop at the intersection (there are traffic lights not shown in this pic) and then make a slow 90-degree right turn at about 8-10 mph when it is clear. There is a speed limit sign after the turn that clearly says 15, but FSD Beta reads it as 25 mph. Weird, because it is a normal, plainly visible sign. If I manually change the TACC speed to 10 mph, it handles Rose-Hulman Rd pretty well, even the tight bends. But it does not know to do that, since it thinks the speed limit is 25 mph.

[image: the intersection described above]
I really appreciate how you include pictures with your scenarios; it makes it so much easier to understand. Although it does make me feel lazy. ;)
I will try coming from a different direction. But I will point out that once you complete the turn, the front cameras have a clear view of the speed limit sign. So the car should immediately slow after the turn, as soon as the front cameras see the sign, but it continues to drive past it at the same speed. I specifically noticed that the speed limit on the screen does not update after the car passes the sign.
I'm having this same problem with a certain set of speed limit signs that the highway stack won't recognize. And they're big honking signs out in the middle of nowhere. No clutter to obscure them, nada.
Ok, so something I figured out last night and have tested a little that has made my experience with Beta sooo much better: I force it through turns now. Not every turn, but if there are other cars around and it hesitates at all, I push the accelerator. Not tap it; I push it as if I were driving, and the car has done great on every single turn. It's almost like it has a confidence problem, like someone jumping out of a plane, and if you just shove it, the car is like "oh *sugar*, here we go, I better just make this work," and it has, every single time. It hardly ever even jerks the steering wheel. They are the smoothest, most frustration-free turns I have had.

I used to disengage all the time; now I just force the car through the turn and it's like magic. I am still very cautious if there are vehicles in the oncoming lane on the road I'm turning onto, because I worry about it overshooting the turn since I am controlling the accelerator, but so far it's been magic. If I don't force it through turns, it will sit there and jerk the wheel left and right over and over before it decides to go, like someone at a thermostat going back and forth between 67.7 and 67.8 when in the end it doesn't matter; just set it and walk away.

Just something I have learned and thought I would share for the other people who have said they disengage a lot at intersections. (Obviously, since you are overriding the system and making it go, make sure there isn't a reason it's not going.)

Edit: I should specify I don't live in a big city environment, more rural so this has been helpful at stop signs and rural roads turns. Large intersections with traffic lights don't seem to really be a problem.
That's great advice, but would like to add one word of caution...

When FSD is being timid with its creep, there's a fine line between telling it "hey, creep a bit faster" and "Boss says it's OK to go, so let's go!" So if you're just telling it to creep a bit faster, BE CAREFUL that you don't inadvertently tell it to just go. :)
@diplomat33 If true, why don’t they give us this “view”? Would be very helpful, actually.
Exactly.

Even my brother's Ford truck does this... shows him a complete top-down "bird's eye" view around his truck. Doesn't seem like it would be that hard to implement.
 
FSD beta is remarkable, terrifying, and exhausting.

It’s currently not a usable commercial product (obviously) and is truly a “beta” in a way that autosteer isn’t.

Keeping an eye on FSD is much more fatiguing than driving by hand so I set up an additional driving profile with FSD and disabled the feature on my main profile.

All that said, when it gets something really right it’s incredible. The system has done some truly remarkable maneuvers in my car, the most noteworthy was when it tracked a little boy running from the sidewalk in front of a car and into traffic. I didn’t realize any of this was happening (the parked car occluded my view) but FSD braked and swerved just as he popped out into moving traffic. Wild stuff.
 
That is what they are "showing" in the visualization.

Anyway, how do you show 360-degree video on a 2D screen? For most of us it would be a disorienting view (360-degree video is typically stored in a 360x180 equirectangular format).
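For reference, the disorienting part comes from the projection itself: an equirectangular frame maps the yaw and pitch of the viewing sphere linearly to x and y, which is why straight lines bend near the top and bottom. A small sketch of that direction-to-pixel mapping (the 4K frame size is a made-up example):

```python
def direction_to_equirect(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw: -180..180 deg, pitch: -90..90 deg)
    to pixel coordinates in an equirectangular frame. Yaw spans the
    full width linearly; pitch spans the full height linearly."""
    x = (yaw_deg + 180.0) / 360.0 * (width - 1)
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)
    return x, y

# Straight ahead (yaw 0, pitch 0) lands in the centre of the frame.
x, y = direction_to_equirect(0, 0, 3840, 1920)
```

Viewers for this format re-project a small window of the sphere back to a normal perspective view, which is roughly what a 360-cam parking display would have to do per camera.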



Check out how Ford does it in their trucks. It shows a top-down view of the truck, with a nicely done stitching job of the cameras' output. I'm sure there's an example of it on Ford's website.