Tesla replacing ultrasonic sensors with Tesla Vision

Big update here. Have FSDb 10.69.2.3. Was driving somewhere, NAP not activated, TACC not activated. Just driving in plain car mode. Stopped at a red light. 10 seconds later I hear a ding and the traffic light on the touch screen flickered back and forth between red and green. This code is getting stupider.
That is not Beta code. That is legacy Red Light courtesy alert code that probably hasn't been updated in the couple of years since release. It has always had some trouble with multiple lights in view. BUT most important: What does this "Big Update" that is NOT an update have to do with USS removal?
 
  • Like
Reactions: Yelobird
That is not Beta code. That is legacy Red Light courtesy alert code that probably hasn't been updated in the couple of years since release. It has always had some trouble with multiple lights in view. BUT most important: What does this "Big Update" that is NOT an update have to do with USS removal?
Never worked before I had FSDb. Never dinged AT ALL. And didn't they supposedly move red/green light ding to Autopilot? That's the problem with all these mini releases. You can't keep things straight. USS in? USS out? Radar works? Radar doesn't work. Inconsistent and confusing. As a former coder and systems programmer, you did not release code in minor updates. You made major strides and then impressed the customer. They didn't want to hear, "yeah, we've been working on that for some time, it's coming next year."
 
  • Like
Reactions: Boza
As another mentioned, the current system just looks up the most similar object it has in its database (in this case a motorcyclist) and assigns that. So it's not unusual at all for it to falsely identify an area as a vehicle (for example, I have cabinets in front of my car when I pull in, and it'll show a semi truck in front of me).

The occupancy network instead only has the task of determining if the given area in a space is occupied or not (it shows blocks). At most it'll label the blocks a different color, but it won't be trying to put a predetermined object model there.

Anyways, read the links I posted. The Occupancy Network is an entirely new system that does not work like the previous one, so what you observe in the old system is irrelevant to determining how the new system works.
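To make the difference concrete, here's a toy Python sketch (my own made-up classes and numbers, nothing from Tesla's actual code): a fixed-class detector has to map everything onto its closest known label, while an occupancy-style output only says whether space is filled.

```python
import numpy as np

# Toy illustration: a fixed-class detector must pick *something* from its label
# set, so an unknown object (e.g. garage cabinets) gets mapped to whichever
# known class scores highest.
CLASSES = ["car", "truck", "motorcycle", "pedestrian"]

def classify(scores):
    """Pick the best-matching known class, even if nothing really fits."""
    return CLASSES[int(np.argmax(scores))]

# Cabinets produce weak, ambiguous scores, but the detector still answers "truck".
cabinet_scores = np.array([0.18, 0.31, 0.27, 0.05])
print(classify(cabinet_scores))  # -> "truck" (the semi in the visualization)

# An occupancy-style output skips the label question entirely: it only says
# whether each cell of space is occupied, so unknown objects are still "solid".
occupancy_grid = np.zeros((10, 10), dtype=bool)
occupancy_grid[2:4, 5:8] = True   # the cabinets occupy these cells, label unknown
print(occupancy_grid.any())       # -> True: something is there, avoid it
```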
So, the old system recognizes objects (albeit not always correctly) and the new system just determines if the space is occupied or not. How come just coming up with blocks is superior to actually recognizing the objects?!
 
So, the old system recognizes objects (albeit not always correctly) and the new system just determines if the space is occupied or not. How come just coming up with blocks is superior to actually recognizing the objects?!

Because if the conventional system doesn't have a sufficiently high confidence that there is even a recognized object in a given location, it will crash into it, and because conventional systems, at least as seen in academia (image segmentation & recognition), are essentially all 2-d, looking for the presence or absence of a certain class of item within various rectangular subsets of the image. More technically, there are neural networks with a 'comb' of potential outputs over various subrectangles of the image, and each of these emits numbers tending towards '1' for the identified category and '0' for all the other categories. These categories are preset during model training.

One category is inevitably 'I don't know' or 'undetermined background' for anything which wasn't labeled in the training set, because in common use the outputs are normalized to sum to 1. In order to be sure there is something there, each category has to have some numerical threshold, set empirically, which triggers a sufficiently certain identification. When you see something on the visualization flashing in and out of existence, that's probably the certainty on the image detection network output fluctuating above and below that threshold.
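As a rough illustration of that thresholding behavior (a sketch with assumed numbers, not anything from Tesla's stack), frame-to-frame jitter in a softmax score around a fixed cutoff is enough to make an object blink in and out:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

THRESHOLD = 0.6  # assumed value; in practice tuned per class on validation data

def detections(logits, classes):
    """Keep only classes whose normalized score clears the threshold."""
    probs = softmax(logits)
    return [(c, p) for c, p in zip(classes, probs) if p >= THRESHOLD]

classes = ["car", "pedestrian", "background"]
# Two consecutive "frames" with slightly different raw scores for the same car:
for logits in (np.array([2.0, 0.1, 0.4]),   # car prob ~0.74 -> shown
               np.array([1.2, 0.3, 0.9])):  # car prob ~0.47 -> vanishes
    print(detections(logits, classes))
```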

Previous Tesla presentations on labeling all looked like they were dealing with a 2-d representation, with a secondary estimate of the distance to the already-identified 'thing' obtained by other means.

The newer vision systems are pretty advanced from what I saw at the last AI Day. They attempt to reconstruct an underlying 3-dimensional representation, and don't need a confident full classification ("what specific thing is this?") at any spatial location in order to estimate some notion of physical presence ("there is something here but I'm not sure what yet") in 3-d. It's not so clear to me how they work; it's probably quite a bit of novel proprietary discovery.

Classification of the specific category of item is harder statistically (though the newer ML system is much more sophisticated and harder to code), and you don't want to ask the ML system to do that just to detect 'is something solid here or not', which is what's needed to avoid crashing. Of course you do need the classification in addition for semantic prediction (who will be driving/walking where?), but separating the task of 3-d interpretation from specific categorization is a win.
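A minimal sketch of that separation of tasks, under my own simplified framing (the voxel grid, thresholds, and labels below are all made up): collision avoidance consults only "is this region likely occupied?", while the harder "what is it?" question is answered separately.

```python
import numpy as np

# 3-D occupancy volume: per-voxel probability that something solid is there.
occupancy = np.zeros((40, 40, 8), dtype=np.float32)  # x, y, z
occupancy[20:22, 10:12, 0:3] = 0.9                   # something solid ahead

def path_is_clear(occupancy, path_voxels, occupied_threshold=0.3):
    """Avoidance needs only 'is this voxel likely occupied?', no labels."""
    return all(occupancy[v] < occupied_threshold for v in path_voxels)

def semantic_label(voxel):
    """Separate (and statistically harder) question, used for predicting behavior.
    Placeholder: a real system would run a classification head here."""
    return "unknown"

planned_path = [(20, y, 1) for y in range(8, 14)]
print(path_is_clear(occupancy, planned_path))   # False: brake/steer regardless of label
print(semantic_label((20, 10, 1)))              # classification can lag behind safely
```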
 
Last edited:
Never worked before I had FSDb. Never dinged AT ALL. And didn't they supposedly move red/green light ding to Autopilot.....
It was and is right there in the Autopilot menu, and has been for years. Not sure how you never noticed it, but you must have never turned it on. Here is a pic I took back in 2020 (old UI).

[Attached screenshot: Screen Shot 2022-10-19 at 6.21.51 AM.png]
 
Last edited:
  • Like
Reactions: DrGriz
I have given below my observations with version 2022.36.4 production autopilot (2022 Model Y - Australia). If anyone else is using this version: is it possible that they have already moved the occupancy network to production autopilot? I have found this version to be more comfortable and smoother than any other Autopilot version I have used over the last 3 years with a Model 3 and Model Y.

During a short drive with 2022.36.4, it seems to show more detailed visualization than version 2022.36.2.

Examples are some of the detailed visualizations seen by FSD Beta testers, as mentioned in the following notes:

https://www.notateslaapp.com/tesla-reference/636/all-tesla-fsd-visualizations-and-what-they-mean

https://www.notateslaapp.com/software-updates/upcoming-features/id/1008/tesla-may-release-more-detailed-vehicle-visualizations-soon

Only the following features are yet to be shown with 2022.36.4 production autopilot:

- Bird's-eye view of intersections with multiple lanes and traffic
- Unidentified objects (although the occupancy network seems to be active)
- Vector based lanes
- Creep visualization
- Blue vehicle
- Brake Lights

If the planned FSD Beta release 10.69.3 (a major release expected this week) is based on the 2022.36 branch, it may be another indication that production AP and FSD Beta have already started to merge, and the final single-stack version may come with the 2022.44 or 2022.48 branch this year and include some of the missing features listed above.
 
... is it possible that they have already moved occupancy network to production autopilot.....During a short drive with 2022.36.4, it seems to show more detailed visualization than version 2022.36.2......
Highly unlikely since the AP Stack is legacy code now. Doubt we see Occupancy Network used on highway until the Single Stack/V11. Then after that you will likely see it moved over to standard AP.

Don't think the Occupancy Network has a lot to do directly with the UI seen in the car. The UI is about identifying things and displaying them in a HUMAN-relatable way. It may be improved somewhat indirectly by the Occupancy Network, but I bet a separate UI team is responsible for matching, rendering, and providing a database lookup table of objects for the UI.
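Speculating about what such a UI-side lookup might look like (the asset paths and class names below are invented for illustration, not anything from Tesla), the rendering layer would just map each classified detection to a predefined model, which is also why an unclassified, occupancy-only obstacle can be avoided without ever appearing on screen:

```python
# Hypothetical class-to-asset lookup table used by a rendering layer.
UI_ASSETS = {
    "car": "mesh/sedan_generic.obj",
    "truck": "mesh/semi_truck.obj",
    "motorcycle": "mesh/motorcycle.obj",
    "pedestrian": "mesh/person.obj",
}

def render_entry(detection):
    """Return what the UI would draw for one perception detection."""
    asset = UI_ASSETS.get(detection["class"])   # unknown classes: nothing to draw
    return {"asset": asset, "position": detection["position"]}

print(render_entry({"class": "motorcycle", "position": (3.0, 1.5)}))
# An occupancy-only obstacle has no class, so this UI path has nothing to show,
# even though the planner can still avoid it.
print(render_entry({"class": None, "position": (5.0, 0.0)}))
```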

EDIT: Just to add, Beta will now avoid objects or conditions that are not displayed at all on the UI.
 
Last edited:
  • Like
Reactions: momo3605 and mongo
So, the old system recognizes objects (albeit not always correctly) and the new system just determines if the space is occupied or not. How come just coming up with blocks is superior to actually recognizing the objects?!
It does both, not just occupancy.
The new system uses both the occupancy network for avoidance of things where they are now, and classification for avoidance of where they will be in the future (along with applying correct behavior based on lights, signs, and such).
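To sketch those two roles with a toy grid example (my own simplification, with an assumed per-class motion model, not Tesla's planner): occupancy answers "don't hit what is there now", while the classification feeds a projection of where the object will be next.

```python
def collision_now(occupied_cells, planned_cells):
    """Current-time check: does the planned path overlap occupied space?"""
    return bool(occupied_cells & planned_cells)

# Assumed per-class motion model: how far ahead (in cells) to project the object.
PREDICTION_HORIZON = {"pedestrian": 1, "cyclist": 2, "car": 4}

def predicted_cells(obj):
    """Crude straight-line projection based on the object's class."""
    x, y = obj["cell"]
    reach = PREDICTION_HORIZON.get(obj["class"], 0)
    return {(x + dx, y) for dx in range(reach + 1)}

occupied = {(4, 2)}                           # occupancy network: something is at (4, 2)
tracked = [{"class": "car", "cell": (4, 2)}]  # classifier: it's a car, so project it forward

plan = {(6, 2), (7, 2)}
print(collision_now(occupied, plan))                                  # False: clear right now
print(any(collision_now(predicted_cells(o), plan) for o in tracked))  # True: it will be there
```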
 
Doubt we see Occupancy Network used on highway until the Single Stack/V11. Then after that you will likely see it moved over to standard AP.
I agree, that was my initial expectation as well. However, after seeing details that were not shown with versions 2022.28.x.x and older, I was not sure whether they had already moved part of the occupancy network over (similar to how Tesla Vision was moved from FSD Beta to production AP). Thanks.

Meanwhile, we are hoping that Tesla will start deploying FSD Beta outside USA/Canada soon (especially Australia).
 
I have given below my observations with version 2022.36.4 production autopilot (2022 Model Y - Australia). If anyone else is using this version: is it possible that they have already moved the occupancy network to production autopilot? I have found this version to be more comfortable and smoother than any other Autopilot version I have used over the last 3 years with a Model 3 and Model Y.

I think it's unlikely. There would probably be FSDb releases for the beta testers only which move to single stack on highway and streets before pushing to production AP, and there will be performance regressions on the highway side for a while which would need to be trained/tweaked away.

I think it's more likely that your conventional AP has been retrained and is more caught up with the US left-side-driver versions. The AP failure reports I read seem to have been significantly worse in Europe, and particularly in right-side-driver areas, than my experiences in California (where I expect AP to work the best). Machine learning systems often key in on background data correlations and make assumptions that humans know aren't important, so they perform poorly outside their training data. Signs, streets & markings look different in different countries. Even things like the different sizes of license plates might confuse an ML system, as it could easily be looking for a rectangle of a certain geometry to firmly identify something as a 'car' and then use knowledge of that size to estimate distances.
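To illustrate that last point with some toy pinhole-camera arithmetic (the focal length and pixel width are my own assumed numbers; only the plate dimensions are real-world approximations): if the model assumes the wrong physical plate width for a region, every distance derived from it comes out wrong.

```python
def distance_from_width(real_width_m, pixel_width, focal_length_px):
    """Pinhole model: distance = f * W / w."""
    return focal_length_px * real_width_m / pixel_width

FOCAL_PX = 1400          # assumed camera focal length in pixels
pixel_width = 40         # how wide the plate appears in the image

us_plate_width = 0.305   # ~12 in US plate (training-region assumption)
au_plate_width = 0.372   # typical Australian plate width

print(distance_from_width(us_plate_width, pixel_width, FOCAL_PX))  # ~10.7 m (what the model assumes)
print(distance_from_width(au_plate_width, pixel_width, FOCAL_PX))  # ~13.0 m (actual geometry)
```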
 
I agree, that was my initial expectation as well. However, after seeing details that were not shown with versions 2022.28.x.x and older, I was not sure whether they had already moved part of the occupancy network over (similar to how Tesla Vision was moved from FSD Beta to production AP). Thanks.

Meanwhile, we are hoping that Tesla will start deploying FSD Beta outside USA/Canada soon (especially Australia).
Can you post pictures of the "details" you are seeing that are new to production AP? I have not read anything that suggests they're making changes to production AP.
 
Can you post pictures of the "details" you are seeing that are new to production AP?
The shapes of vehicles and trucks are more detailed (similar to what FSDb images show). I can also see vehicles far away, and around 3 to 4 vehicles ahead and behind in my lane and the adjacent lane; this was never possible with earlier AP visualizations. Currently my 2022 Model Y is downloading a new version, 2022.36.5.
 
The shapes of vehicles and trucks are more detailed (similar to what FSDb images show). I can also see vehicles far away, and around 3 to 4 vehicles ahead and behind in my lane and the adjacent lane; this was never possible with earlier AP visualizations. Currently my 2022 Model Y is downloading a new version, 2022.36.5.
The shapes were updated in 2022.16, as you posted yourself. And seeing vehicles 3 to 4 away in the adjacent lane has always been possible. Doesn't sound like anything different. I think you're looking for something that doesn't exist.
 
My HOUSE was replaced by a motorcycle? Now I know why the AI guy quit earlier this year. Elon has a Ukraine peace plan, he counseled Kanye West about insulting Jews, he spoke/did not speak with Putin, he's going broke with Starlink in Ukraine, he's gonna fix Twitter, which has basically become a sewer. He's 50, which means he was born in 1972. Just checked, Apartheid ended in 1994, so I think he learned a thing or two from massa whitey. btw I'm white

My car thinks my motorcycle in the garage is a bus, a person or a trash can. It’s fun to see what it’ll be each morning.
 
The shapes of vehicles and trucks are more detailed (similar to what FSDb images show). I can also see vehicles far away, and around 3 to 4 vehicles ahead and behind in my lane and the adjacent lane; this was never possible with earlier AP visualizations. Currently my 2022 Model Y is downloading a new version, 2022.36.5.

The shapes were updated in 2022.16, as you posted yourself. And seeing vehicles 3 to 4 away in the adjacent lane has always been possible. Doesn't sound like anything different. I think you're looking for something that doesn't exist.

I think it's possible Australia/RHD is updated later than other areas.
 
Never worked before I had FSDb......

I believe that's true. No traffic light dinging until I bought FSD.



You are conflating the FSD Capability package (what you buy) with FSD Beta (aka Autosteer on City Streets). The Traffic Light Chime is a feature included in the FSD Capability package (I think it may be standard now). BUT it has nothing to do with FSD Beta.

The Traffic Light Chime was a feature in the FSD Capability package (like Summon or NoA) before FSD Beta became available for us to apply for and test.

So YES, you did have it before you got FSD Beta; you got it when you bought the FSD Capability package.
 
Last edited: