
Brief FSD beta 11.3.6 report

You are assuming that higher resolution equals further distance. That's not really the case. If so, my new iPhone should be able to see a few miles further than the original iPhone. I'm pretty sure that some of the first satellite observation birds had relatively low resolution, but they could see from hundreds of miles up.

And a car is a LOT bigger than the single lens of an 8-inch traffic light. The system should be able to recognize the situation from further away.

But specifically, with the latest update, have you seen the issue? I haven't had the opportunity to experience an 85 mph to 0 situation yet.
I'm not a graphics designer or image expert at all, so I may be completely wrong. That being said, the higher the resolution of the image, the further out the system can see detail, such as a car versus the background, when looking at individual pixels.

It's not in English, but this might help illustrate the point:
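As a rough illustration of that point, here's a back-of-the-envelope sketch; the resolution, field of view, and car width below are illustrative assumptions, not Tesla specs:

```python
import math

# How many pixels wide a car appears at various distances.
# All camera parameters are illustrative assumptions, not Tesla specs.
H_PIXELS = 1280     # horizontal resolution of a ~1.2 MP sensor (assumed)
FOV_DEG = 50        # horizontal field of view of a narrow camera (assumed)
CAR_WIDTH_M = 1.9   # typical car width

pixels_per_degree = H_PIXELS / FOV_DEG

for distance_m in (50, 100, 200, 400):
    # Angular width of the car, then its size on the sensor in pixels.
    angle_deg = math.degrees(2 * math.atan(CAR_WIDTH_M / (2 * distance_m)))
    print(f"{distance_m:4d} m: car spans ~{angle_deg * pixels_per_degree:.0f} px")
```

Doubling the horizontal resolution doubles the pixels on target at a given distance, which is really what both sides here are debating: whether those extra pixels are needed for the driving decision.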

 
FSD was a way to get cash in 2016-2019 when the company's future was in doubt and they needed bridge financing until their production lines were running well. There was a lot of hype about self-driving cars, and Tesla was the only company that allowed ordinary people to spend money to get a piece of the pie. So we did, in droves.

Anyone at the time could have told you it wasn't about self-driving: the car doesn't have the needed sensors or compute. 1.2 MP cameras that fog, with tiny sensors placed too close to the windshield, are inadequate. Even 2014 Subarus had 1.7 MP cameras that were placed further from the windshield (away from raindrops), far enough apart to use parallax to judge distance well.

I didn’t catch on until after I had opted for the option in both of my 2020 vehicles.

That said, 11.3.6 is driving better in Seattle than previous versions.
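The parallax point above can be made quantitative: stereo depth error grows with the square of distance and shrinks with baseline and matching accuracy. A minimal sketch, where the focal length, baseline, and matching accuracy are my guesses at EyeSight-style numbers, not published specs:

```python
# Stereo depth uncertainty: error ~ Z^2 * disparity_error / (focal * baseline).
# All numbers are illustrative assumptions, not measured Subaru specs.
F_PIXELS = 1400          # focal length expressed in pixels (assumed)
BASELINE_M = 0.35        # separation between the two cameras (assumed)
DISPARITY_ERR_PX = 0.25  # sub-pixel stereo matching accuracy (assumed)

for z_m in (20, 50, 100):
    err_m = (z_m ** 2) * DISPARITY_ERR_PX / (F_PIXELS * BASELINE_M)
    print(f"at {z_m:3d} m: depth uncertainty ~ {err_m:.2f} m")
```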
Much appreciated follow-up, ewoodrick, thanks. 👍

I gave FSD two tries today with 11.3.6 and it flat-out choked both times. I should have videoed it, but it slipped my mind.
Try 1 - Engaged FSD after making a right turn onto a 25 mph residential road. Coming up to a T intersection with no stop sign for me or oncoming traffic. Just across the intersection, parked at the curb on my right, was a guy pulling stuff out of a delivery truck. The car slowed quickly to a stop before the intersection and FSD quit. WTF?

Try 2 - Engaged FSD about 200 feet before a four-way stop and the car stopped properly at the intersection. I waved the guy to my right on, and he went and turned right. Just then the driver in the left-turn lane facing me decided to go as well. As he was starting to head into the intersection and was turning left, my car crossed over the limit line and seemed intent on driving through the intersection, despite the car now directly in front of me. I hit the brakes.

Frustrating. Not one bit better than the previous version from what I can tell.
Agreed, and I wish I could revert to an older version. After several problems, I stopped using FSD starting with 11.3.5, and things didn't change with 11.3.6. In addition, starting with 11.3.5, the speed control system has real problems. I was driving 65 on a highway and the system dropped the speed setting to 45 for no reason. When changing speed zones, the system does not react: it notes the new speed, but the Max setting and actual speed are unchanged.
 
This is my experience on the streets as well.

But on a rainy highway, adaptive cruise acceleration and deceleration are still worse than radar Autopilot. Back in those days, I drove in a torrential downpour and adaptive cruise still worked. Today it will refuse to keep up with traffic.
 
I'm not a graphics designer or image expert at all, so I may be completely wrong. That being said, the higher the resolution of the image, the further out the system can see detail, such as a car versus the background, when looking at individual pixels.

It's not in English, but this might help illustrate the point:

I think that the screen that is shown is evidence in my favor. At ALL resolutions, even the open back door is easily visible. I don't have to recognize 10 pt text at 30 ft. And if I'm not mistaken, the forward cameras effectively have multiple resolutions: there's the wide camera and the narrow camera. I believe that the physical camera resolutions are the same, but the field of view is different.

There are a number of other things, such as focus and field of view, that come into play that make things not so cut and dried.

But the biggest one is that the more resolution there is, the more computer time is needed to process the signal. That's a killer. Ever noticed how computers have problems driving one or more 4K monitors? That's because of the processor power needed to decode and display the images.

But to the point, what's a real-life example of the car not being able to see far enough?

It's real easy to say that the car needs more resolution, but why does it? As you increase resolution, the processing cost grows with the pixel count (double the linear resolution and you quadruple the pixels). Processor power is not unlimited, and you have to remember that the car has to process ALL of its cameras 20-60 times a second. That's roughly 500 images each second.
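To put that arithmetic in concrete terms (the camera count, frame rate, and resolution here are assumptions based on the post, not confirmed specs):

```python
# Pixel throughput behind the "roughly 500 images each second" point.
CAMERAS = 8        # assumed camera count
FPS = 36           # assumed frame rate; 8 x 36 = 288 frames/s, 8 x 60 = 480
MEGAPIXELS = 1.2   # assumed per-camera resolution

frames_per_sec = CAMERAS * FPS
print(f"{frames_per_sec} frames/s, "
      f"{frames_per_sec * MEGAPIXELS / 1000:.2f} gigapixels/s")

# Roughly quadrupling pixel count (1.2 MP -> 5 MP) quadruples the per-frame
# work before any neural network even runs on the images.
print(f"at 5 MP: {frames_per_sec * 5 / 1000:.2f} gigapixels/s")
```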
 
I think that the screen that is shown is evidence in my favor. At ALL resolutions, even the open back door is easily visible. I don't have to recognize 10 pt text at 30 ft. And if I'm not mistaken, the forward cameras effectively have multiple resolutions: there's the wide camera and the narrow camera. I believe that the physical camera resolutions are the same, but the field of view is different.

There are a number of other things, such as focus and field of view, that come into play that make things not so cut and dried.

But the biggest one is that the more resolution there is, the more computer time is needed to process the signal. That's a killer. Ever noticed how computers have problems driving one or more 4K monitors? That's because of the processor power needed to decode and display the images.

But to the point, what's a real-life example of the car not being able to see far enough?

It's real easy to say that the car needs more resolution, but why does it? As you increase resolution, the processing cost grows with the pixel count (double the linear resolution and you quadruple the pixels). Processor power is not unlimited, and you have to remember that the car has to process ALL of its cameras 20-60 times a second. That's roughly 500 images each second.
Hence HW4, with a new FSD computer that increases compute power.

You don't seem to grasp how increased resolution allows the system to see and discern objects farther away, so I'll stop trying to explain the process and its benefit.
 
Hence HW4, with a new FSD computer that increases compute power.

You don't seem to grasp how increased resolution allows the system to see and discern objects farther away, so I'll stop trying to explain the process and its benefit.
Hence, no. Hence Elon's promise that HW3 would be able to do FSD.

I totally understand the ability to see objects far away. What you're not understanding is that there isn't necessarily a NEED to see objects that far away.

Do you need to make out the headlights on a car to determine that it is a car?
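That question can at least be bounded with simple stopping-distance arithmetic (the deceleration and reaction time below are textbook-style assumptions, not Tesla numbers):

```python
# How far ahead does the car need to see to brake from speed to zero?
MPS_PER_MPH = 0.44704
DECEL = 7.0        # m/s^2, hard braking on dry pavement (assumed)
REACTION_S = 0.5   # system reaction time (assumed)

for mph in (45, 65, 85):
    v = mph * MPS_PER_MPH
    stop_m = v * REACTION_S + v ** 2 / (2 * DECEL)
    print(f"{mph} mph: needs ~{stop_m:.0f} m to stop")
```

By the earlier pixels-on-target sketch, a car at the ~120 m an 85 mph stop requires would still span a couple of dozen pixels on an assumed 1.2 MP narrow camera, which is exactly the crux of this disagreement: whether that is enough.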
 
Just tried other previously reported failure scenarios, including not exiting a single-white-line HOV lane to take a planned exit. 11.3.6 is just as big a failure as 11.3.3 for my area, except that it's more timid.
 
Hence my query of "Is it happening"


There seems to be a large number of people who believe that they are experts in autos, vision, and AI here. Just like the millions of armchair coaches for sports.
In a certain strict sense there are NO experts on self-driving cars today, because the whole field is so new and being trail-blazed as we speak. That, of course, is why the ratio of speculation to fact is so poor, even for those trying to stay rational. You will find people here who claim LIDAR is a must-have, or RADAR, or higher-resolution cameras, or MORE cameras, or (insert pet theory here). But Tesla continue to make significant progress, though at a pace that is too slow for some (though quite how they can define “too slow” for a new field of research is not clear to me). And, at present, that’s about all that can be said with any certainty.
 
FSD was a way to get cash in 2016-2019 when the company's future was in doubt and they needed bridge financing until their production lines were running well. There was a lot of hype about self-driving cars, and Tesla was the only company that allowed ordinary people to spend money to get a piece of the pie. So we did, in droves.

Anyone at the time could have told you it wasn't about self-driving: the car doesn't have the needed sensors or compute. 1.2 MP cameras that fog, with tiny sensors placed too close to the windshield, are inadequate. Even 2014 Subarus had 1.7 MP cameras that were placed further from the windshield (away from raindrops), far enough apart to use parallax to judge distance well.

I didn’t catch on until after I had opted for the option in both of my 2020 vehicles.

That said, 11.3.6 is driving better in Seattle than previous versions.
Hey, Musk said self driving was a solved problem….
 
Hence HW4, with a new FSD computer that increases compute power.

You don't seem to grasp how increased resolution allows the system to see and discern objects farther away, so I'll stop trying to explain the process and its benefit.
Your point does seem obvious, but I don't think FSD is limited by perception now; I think it's limited by logic…

Maybe higher-resolution cameras will make the logic easier, maybe not. Will it help at night? I think the only way to improve night vision is to increase the sensor's physical size….
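The sensor-size intuition follows directly from pixel area: at a fixed pixel count, a physically larger sensor means bigger pixels, and each pixel collects light in proportion to its area. A quick sketch (the sensor dimensions are illustrative, not the actual camera parts):

```python
# Light gathered per pixel scales with pixel area.
def pixel_area_um2(width_mm, height_mm, h_px, v_px):
    """Area of one pixel in square micrometres."""
    return (width_mm * 1000 / h_px) * (height_mm * 1000 / v_px)

small = pixel_area_um2(4.8, 3.6, 1280, 960)  # small 1/3"-class sensor (assumed)
large = pixel_area_um2(7.2, 5.4, 1280, 960)  # 1.5x larger sensor, same pixels

print(f"small: {small:.1f} um^2/px, large: {large:.1f} um^2/px, "
      f"light ratio: {large / small:.2f}x")
```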
 
I'm not a graphics designer or image expert at all, so I may be completely wrong. That being said, the higher the resolution of the image, the further out the system can see detail, such as a car versus the background, when looking at individual pixels.

It's not in English, but this might help illustrate the point:

Moreover, it's not enough to detect a car; the system needs to estimate its speed from multiple frames. And that needs still higher resolution than detection alone. Radar gives speed immediately.

I think it should have at least 4-8 MP dual front cameras with sufficient separation to give good parallax, as well as a good sensor size for low-light performance. This costs money, and Elon doesn't want to spend money even if it's technically desirable. If he won't pay for a $10 rain sensor (which has definitively better performance than any hack they've tried so far), he won't pay for a much more expensive camera system and the computation needed to ingest that larger data size.

I hope JB gets to be CEO of Tesla soon and starts to make realistic, engineering-focused decisions and listens to people who know better.

It's unclear how much FSD is limited by perception vs logic. Certainly it's better on the perception side (where Karpathy worked), but machine learning cannot make up for insufficient physical data. Estimating speed from vision is one of those problems.
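The speed-estimation point is easy to quantify: the frame-to-frame change in a lead car's apparent size is tiny at range, so vision has to integrate over many frames, where radar reads radial speed in a single Doppler measurement. A minimal sketch (the focal length, frame rate, and distances are my assumptions):

```python
# Frame-to-frame pixel-size change of a lead car at 100 m, per closing speed.
F_PX = 1400        # focal length in pixels (assumed)
CAR_WIDTH_M = 1.9
FPS = 36           # frame rate (assumed)

def width_px(distance_m):
    """Apparent width of the car on the sensor, in pixels."""
    return F_PX * CAR_WIDTH_M / distance_m

for closing_mps in (5, 15, 30):
    z0 = 100.0                   # lead car 100 m ahead
    z1 = z0 - closing_mps / FPS  # its distance one frame later
    print(f"closing at {closing_mps:2d} m/s: "
          f"size change ~{width_px(z1) - width_px(z0):.3f} px/frame")
```

At these ranges the per-frame change is well under a pixel, which is why estimating closing speed from vision demands either many frames or more resolution.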
 
Agreed, and I wish I could revert to an older version. After several problems, I stopped using FSD starting with 11.3.5, and things didn't change with 11.3.6. In addition, starting with 11.3.5, the speed control system has real problems. I was driving 65 on a highway and the system dropped the speed setting to 45 for no reason. When changing speed zones, the system does not react: it notes the new speed, but the Max setting and actual speed are unchanged.
This (Autopilot suddenly dropping speed on the freeway) happened to me yesterday on my first drive using 11.3.6. I don't remember it happening before on this same stretch of freeway. Is there a setting somewhere that might mitigate this (for example, keeping the set speed regardless of the posted maximum on a particular section)?