Welcome to Tesla Motors Club

Brief FSD beta 11.3.6 report

This (Autopilot suddenly dropping speed on the freeway) happened to me yesterday on my first drive using 11.3.6. I don’t remember it happening before on this same stretch of freeway. Is there a setting somewhere that might mitigate this situation (for example, keeping the set speed regardless of the maximum speed on a particular section)?
I’ve been getting that on both 11.3 and .6. Many times it’s just a few mph, but it disturbs the flow of the drive and is annoying. And if someone is tailgating, you might have to join the long waiting list for body repair.
 
Radar gives speed immediately.
May want to check your engineering on this one. Radar is not instantaneous. It's calculated. The radar module does respond with speed, but it takes some math to determine. The Doppler shift is derived from the frequency of the returning waves. A lot of math goes into filtering out noise before a reading is taken, and the reading depends on measurements across a large number of waves. And since we are talking 3-D radar here, there's a heck of a lot more processing needed.
Think of the classic radar that you see in movies. It's able to detect a reflection in one direction. It has to wait for the radar dish to rotate around (hence the line) before it can show the results. And those results don't show altitude.
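For a rough sense of the math involved, the core Doppler relation itself is simple even if the signal processing around it isn't. This is a toy sketch; the 77 GHz carrier is a typical automotive radar band, not a spec for any particular Tesla part:

```python
# Hedged sketch: relative speed from a Doppler frequency shift.
# Carrier frequency is an assumption (generic 77 GHz automotive radar).
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed carrier frequency, Hz

def doppler_speed(freq_shift_hz: float, carrier_hz: float = F_CARRIER) -> float:
    """Radial speed of the reflector in m/s.

    For a monostatic radar the wave travels out and back, so the
    observed shift is twice the one-way Doppler shift:
        delta_f = 2 * v * f0 / c   =>   v = delta_f * c / (2 * f0)
    """
    return freq_shift_hz * C / (2.0 * carrier_hz)

# A car closing at 30 m/s (~67 mph) shifts a 77 GHz carrier by ~15.4 kHz.
print(round(doppler_speed(15400.0), 1))  # 30.0
```

The single division is trivial; the expensive part is everything before it (FFTs across many samples and channels to isolate that frequency shift per target), which is the point being made above.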

If Tesla were to put a "vision module" on the car, similar to radar, it could just as easily return speed immediately (to the untrained user) as well.
And I expect that these boxes will indeed be available in a number of years. The radar box that Tesla was using only took about 80 years to develop. The "vision module" probably won't take anywhere near that long.
 
The time to obtain and process a Doppler measurement is much lower than the time to estimate speed from vision calculations over multiple frames, so relatively speaking it's instantaneous.

I don't know what you mean by 'vision module'---lidar?
 

Vision module = all the stuff that Tesla is using for vision wrapped up into a single box, similar to the radar. Yes, I know it's really the cameras; I'm just talking logically.

But instantaneous wasn't really your point. You were "assuming" that just because the radar spits out a data stream that includes the distance, it's different from vision.
My point is that it's just as easy to create a vision box that spits out the same information. And internal to the Tesla software, I'm pretty sure that's what happens.

The bigger thing is the object resolver. If you just looked at the raw output of a 3-D radar, the car would never be able to move, because something is always too close to it. Something has to look at the data that is returned and "paint the picture" of what it is really saying. There's an object 200 m straight ahead? Does that mean I have to stop? No, the road turns before then.

Think about that point: 3-D radar data is practically useless on its own. You have to pass it through object resolvers to figure out what the objects are and then decide what to do about them.

Radar is not the panacea that everyone thinks that it is.
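The resolver idea above can be sketched in a few lines. This is a toy 2-D model with made-up geometry (constant-curvature path, arbitrary half-width), not anything from Tesla's actual stack:

```python
# Hedged sketch of an "object resolver": a raw radar return only matters
# if it lies on the path the car will actually travel. Toy 2-D geometry.
import math

def on_planned_path(return_range_m: float, return_azimuth_rad: float,
                    path_curvature_1pm: float, half_width_m: float = 1.8) -> bool:
    """True if a return falls within half_width_m of a constant-curvature
    path (curvature = 1/radius, small-angle arc approximation). A return
    '200 m straight ahead' on a curving road fails this test, so it
    should not trigger braking."""
    x = return_range_m * math.sin(return_azimuth_rad)   # lateral position
    y = return_range_m * math.cos(return_azimuth_rad)   # longitudinal position
    # Lateral offset of the planned path at longitudinal distance y
    path_x = 0.5 * path_curvature_1pm * y * y
    return abs(x - path_x) <= half_width_m

# Straight road: a return dead ahead at 200 m is on our path.
print(on_planned_path(200.0, 0.0, 0.0))      # True
# Road curving away (radius 500 m): the same return is ~40 m off the path.
print(on_planned_path(200.0, 0.0, 1 / 500))  # False
```

Real resolvers also have to cluster returns, track them over time, and reject ghosts from multipath, which is where most of the complexity lives.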
 
I think it's not at all as easy to create such a "vision box" if you're at a distance beyond useful parallax (and so far Tesla doesn't have widely spaced stereoscopic cameras anyway, unlike e.g. Subaru; I think it should, which would give almost ground-truth short distances). It would have to be something downstream of various layers of the vision neural networks, integrated deeply into the computation: it matches objects across multiple frames, estimates the change in size compared to that expected from vehicle movement via odometry, and translates that into a speed estimate. Estimating the change in sizes accurately requires that the object span quite a number of pixels, because a 1-pixel difference is the minimum reliable size change, and then going back to the original issue the insufficient resolution of their vision cameras now makes this difficult at sufficient distance. Your fovea sees things substantially better than the cameras do.

So, with the cameras, if you need to detect something at distance in the roadway (you've computed where that should be), and it's small across the image sensor, and it temporarily seems to have a velocity significantly different from your own, then that's a danger signal and can cause phantom braking. The system also doesn't know whether it's a large object far away or a smaller one closer up; remember, there are no physical distance estimates.
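To see why a 1-pixel size difference matters so much, here's a toy calculation. The pinhole model is standard; the focal length, car width, and pixel counts are made-up illustrative numbers, not camera specs:

```python
# Hedged sketch: estimating closing speed from apparent-size change
# across frames. All numbers are illustrative assumptions.

def range_from_width(true_width_m: float, focal_px: float, width_px: float) -> float:
    """Pinhole model: apparent width w_px = f * W / Z, so Z = f * W / w_px."""
    return focal_px * true_width_m / width_px

def closing_speed(w1_px: float, w2_px: float, dt_s: float,
                  true_width_m: float = 1.8, focal_px: float = 1000.0) -> float:
    """Speed from the change in estimated range between two frames.
    Assumes the object's true width is known (1.8 m, a typical car),
    which in practice a network must infer; a 1 px quantization error
    in w_px dominates when the object is small on the sensor."""
    z1 = range_from_width(true_width_m, focal_px, w1_px)
    z2 = range_from_width(true_width_m, focal_px, w2_px)
    return (z1 - z2) / dt_s   # positive = closing

# Car grows from 20 px to 21 px over 0.1 s: estimated range drops from
# 90.0 m to ~85.7 m, i.e. a single pixel of change reads as ~43 m/s closing.
print(round(closing_speed(20.0, 21.0, 0.1), 1))  # 42.9
```

That single-pixel jump producing a huge apparent velocity is exactly the phantom-braking danger signal described above.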

If you had been able to correlate that object with a radar return, otoh, that return would have given you distance and velocity (from the Doppler effect), computed by the hardware and its firmware in much less than a vision frame's time. That velocity is less likely to be inaccurate at distance, and is more likely to come from a real vehicle with metal in it than from a mirage.

Radar is not the panacea that everyone thinks that it is.
No, it's another input, but it is a useful one if resolution is good enough.
 

You are really missing my point.
But in response:
As I said, look at it as a logical box. For the sake of argument, though, make it a physical box with inputs for cameras spaced around the car. In other words, take the computers in the car, put them in a box, and call it a "vision box".

Now the rest of your justification isn't valid, because Tesla seems to be doing a good job of it now. The car is measuring distance using the parallax of the forward cameras.

You've got to stop thinking like a computer when looking at the system and start thinking like a driver.
You see a car in front of you. How far away from you is it in feet? What? You don't know? That's because it doesn't matter when driving. Everyone is putting so much emphasis on something that really doesn't matter. Think about it: what do you look at when driving? You don't measure distances. You don't look everywhere at once. You don't really see much in your peripheral vision. (You do swivel your head periodically, sometimes.)
You do start to measure rough distances as cars get closer (and the parallax equations get more accurate).
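For reference, the parallax math in question is just the stereo depth formula. The baseline and focal length here are made-up numbers for illustration, not Tesla's actual camera geometry (its forward cameras sit close together in one housing):

```python
# Hedged sketch of stereo parallax ranging (pinhole model).
# Baseline and focal length are assumptions, not real camera specs.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance Z = f * B / d. As disparity shrinks toward a single
    pixel, a +/-1 px measurement error swamps the estimate, which is
    why narrow-baseline parallax is only accurate at short range."""
    return focal_px * baseline_m / disparity_px

f_px, base = 1000.0, 0.2   # assumed: 1000 px focal length, 20 cm baseline
print(depth_from_disparity(f_px, base, 10.0))  # 20.0 (meters)
print(depth_from_disparity(f_px, base, 1.0))   # 200.0 (a 1 px error halves or doubles this)
```

This matches the point above: the estimate is usable up close and degrades rapidly with distance for a narrow baseline.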

But the main part that most everyone seems to miss is that Tesla is doing a pretty great job on the vision portion of driving. The problems are much more closely aligned with the decision system. It's not the vision system that hesitates in an intersection, it's the decision system. That can easily be demonstrated by just pressing the accelerator: the car will go where it needs to.
 
11.3.6 occasionally tries to swerve into an exit lane when I'm not intending to exit the highway. The system needs to be able to differentiate between one lane splitting into two and an exit lane. Visually they're nearly identical other than the signs. Perhaps it's time Tesla learned to read standard road signs. After all, humans are required to read and understand the signs in order to pass a driving test.
 
Is this something the car has done for years, or something new?

Because it's been doing that for years, but this release's notes indicated they tried to reduce it, which for me seems to have maybe helped.
 
Except they aren't.

We have not yet seen a successful demonstration of an unprotected left turn!!! Remarkable, but true.

and then going back to the original issue the insufficient resolution of their vision cameras now makes this difficult at sufficient distance.

The car is measuring distance using the parallax of the forward cameras.
Forward looking is not the only place you need to do distance estimates (time really is what matters of course!!!). So parallax is not used in at least some cases. Humans of course do not need to use parallax when driving unless we’re talking about picking up the phone.
Interesting possible case of resolution limits, causing limits of perception, causing possible phantom vehicle at 5:50 in this video. (The only technical failure in this video I think, under relatively easy conditions…note the other regression is stopping way too far back.) Watch the visualization. Or is it just a bug?

Still not a single success on Chuck’s UPL. 😞 Maybe 11.4 will do it (note: with large error bars)!

Discuss!



It's not the vision system that hesitates in an intersection, it's the decision system.
But why? Is it due to lack of confidence in the vision? Or “just” a bug?

Personally, I think there are plenty of examples of the vision system being very adequate and apparently limited by decisions, and many examples of insufficient vision (it's not clear to me what limits it, but I certainly think that increased resolution, with proper increases in processing power and techniques to use the resolution selectively, would not hurt).

I haven’t seen reading signs at distance being brought up (EDIT: I see this was brought up recently; I didn’t read all the recent posts), but I suppose the counterargument would be that it won’t need to match human capabilities there for reasons.
 
Forward looking is not the only place you need to do distance estimates (time really is what matters of course!!!). So parallax is not used in at least some cases. Humans of course do not need to use parallax when driving unless we’re talking about picking up the phone.
Interesting possible case of resolution limits, causing limits of perception, causing possible phantom vehicle at 5:50 in this video. (The only technical failure in this video I think, under relatively easy conditions…note the other regression is stopping way too far back.) Watch the visualization. Or is it just a bug?


Humans use parallax a LOT. It's just a natural thing to do with two eyes. Cover an eye and move around and you'll find that you have indeed lost some depth perception.

I don't believe that the hesitation is due to a lack of vision; there are just a lot of things to consider. From the instances I've had, the car eventually does it. But it's basically like a shy kid learning to drive with a parent watching and complaining about every little thing they do. The safety protocols are just winning the fight.

I just watched a minute or so of the video and it looks like a successful left turn into a divided busy intersection. It's the safety being set high.

And Tesla has every right in the world to keep the safety set really high right now. One significant accident will cause a big PR problem.
 
Humans use parallax a LOT.
Yes. That is what I suggested in my post!

I just watched a minute or so of the video and it looks like a successful left turn into a divided busy intersection
Maybe you missed the time stamp?

Did you see how it waited for a very long time and detected a close car when there apparently was not one (I am not sure why)?

I mean, it is true that it's for safety: the system detected an (imaginary) car, so it didn't go. Which is good. But that's not really the point here.
 
Ummmm... what? You mean the ones my car has done dozens of times each month? Or all those ones Chuck Cook has shown in his many videos that address this very thing? Are we on the same planet?
I’d be interested to see a turn where FSDb has never been disengaged (those would be candidates for success). Chuck’s turn has not had this occur yet (though of course a new release can wipe the slate).
 
So "a demonstration of a successful left turn" is one where the car makes the turn 100% of the time over some number of attempts? If you are going to set specific goals please make them clear, and provide some justification for those specifics.
 
So "a demonstration of a successful left turn" is one where the car makes the turn 100% of the time over some number of attempts? If you are going to set specific goals please make them clear, and provide some justification for those specifics.

This has been discussed a great deal elsewhere as you probably know.

Not 100%. It seems reasonable to expect success rates similar to human success rates. The justification seems apparent: there's not a lot of utility in a system that fails to safely accomplish such a turn 10% of the time. And it's quite hazardous.

I think most FSDb users are familiar with this dynamic now.

We can quibble about the exact requirement for a useful L2 system, but the current performance clearly does not cut it.

Personally, for my ULT it is completely useless. It takes forever, is really slow, and usually performs the turn incorrectly if there is no traffic. With traffic I typically have to intervene.
 
Perhaps, but I don't think any of this translates to "We have not yet seen a successful demonstration of an unprotected left turn!!! Remarkable, but true.", which was your original claim (and which, with the use of the singular, reads as if the car were incapable of doing a ULT at all). Perhaps you should consider a career in journalism? :)
 
I stand by that statement. A successful demonstration obviously does not just mean one time! That would be possibly promising, but useless.

I actually think it is truly remarkable that we haven’t gotten lucky and strung together 20 turns in a row or something that makes it “look” like things are working.

But it turns out it is so far from human-like performance this is very unlikely. So maybe not remarkable.
 
Is this something the car has done for years, or something new?

Because it's been doing that for years, but this release's notes indicated they tried to reduce it, which for me seems to have maybe helped.

It's new to my car. With prior versions the car would generally keep left. I think Tesla is trying to move to the slower, right lane when appropriate, but the logic is not quite all there yet, so it moves right into exit lanes, thinking the number of lanes continuing down the highway has increased.
 
Mine is better described as instantly lunging to the right exit lane, or worse, right into a short merging lane just past the intersection, sometimes with acceleration.
 