
Radar getting turned off on Model 3/Y with 2022.20.9

The notion that FSD could ever work in a vision-only system is just laughable. This is what happens when an egomaniacal CEO lets the tail wag the corporate dog...

I'm sure the engineers know it, and must be pulling their hair out trying to deal with the inherent limitations.

Love the cars, but the current approach to AP/FSD will be the death of the brand unless Musk gets out of the way and lets it be done right.
Laughable? Yet they are doing it, and progressing nicely.
 
I wonder if the AI guy Karpathy got fed up and left because he was tired of beating a dead horse.
 
What does this have to do with the quality of the sensors?! If the camera cannot see through the fog, all the AI behind it is useless.
Two things: Tesla has developed software that uses photons rather than processed images, for a substantial increase in camera distance penetration in low or obscured light. Just like a human, the car cannot drive in conditions where radar alone can see.

The car will adjust speed to visibility just like a human would, but it will actually see better because of photon parsing.
 
What are you talking about?

Can you explain what you mean when you say Tesla is using "photon parsing" with regular cameras? I have no idea what that means.

Are you referring to Tesla somehow using raw data from the focal-plane arrays in their cameras to measure distance to objects? If so, how?

One doesn't usually refer to cameras as being able to "parse photons". Maybe the term would mean something with LIDAR/LADAR, but with cameras? Not really sure what that means in the context of cameras.
 
Could you please elaborate? All visible-light cameras use photons to create images. Cars with radar have the advantage of adding another spectrum with different characteristics, which increases the information available for decision-making, regardless of which algorithm does the deciding. If the sensor array does not receive information (due to obstacles, fog, snow, a lack of photons bouncing back, etc.), then no algorithm can help you.
Currently, my radar-equipped MS can see things that I cannot see (which is awesome, BTW). If the new “upgrade” reduces it to only what I can see, how is that an “upgrade”?!
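To make that concrete, here is a toy sketch (plain Python; obviously nothing like any actual AP stack, and the scene fields are made up for illustration) of why a second, independent modality adds information:

```python
# Toy illustration (not Tesla's code): why a second sensing modality adds
# information. Each sensor either returns a detection or nothing, depending
# on conditions it can handle. Fusion succeeds if ANY sensor sees the object.

def camera_detects(scene):
    # Cameras need photons to make it back to the sensor.
    return scene["visibility_m"] > scene["object_distance_m"]

def radar_detects(scene):
    # Radar is largely indifferent to fog/darkness but has its own limits
    # (e.g., poor returns from stationary, low-reflectivity objects).
    return scene["object_radar_reflective"]

def fused_detects(scene):
    # With two independent modalities you only miss the object when BOTH
    # fail; with one modality, a single failure is a total miss.
    return camera_detects(scene) or radar_detects(scene)

foggy_night = {
    "visibility_m": 30,
    "object_distance_m": 120,
    "object_radar_reflective": True,
}

print(camera_detects(foggy_night))  # False - not enough photons get through
print(fused_detects(foggy_night))   # True  - radar still sees it
```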
 
It all sounds like a bunch of malarkey to me.
 
Using photons directly from the camera sensors instead of images processed by the camera... It's simple, and it can gather far more useful information than processed images in low-visibility conditions. It's been discussed in several broadcasts from Tesla engineers. Some of the videos showing results showed an amazing difference in vector-space output in fog, snow, and nighttime conditions.
 
That sounds very strange. They cannot use photons directly from the camera; they need a sensor array to convert the photons' energy into electrical charge (Einstein got his Nobel Prize for explaining that effect). It is possible that they use the raw image data from the camera, but there is nothing new about that - it is common practice, and I have not heard of anyone doing AI post-processing inside the camera itself; it just doesn't make sense. Still, if there are no photons, then no AI can help.
I would have to agree with @ElectricIAC - it sounds like a bunch of BS marketing to cover simple cost-cutting measures.
Do you have any links for this “photon parsing” and how it is better than simply using radar?
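For reference, the physics is easy to check: a visible photon carries only a couple of electron-volts, and a silicon photosite frees at most about one electron per absorbed visible photon, so a sensor has to integrate charge from many photons per exposure. A quick back-of-envelope in Python (standard constants only, nothing Tesla-specific):

```python
# Back-of-envelope: energy of a single visible photon (photoelectric effect).
# Standard physical constants; nothing Tesla-specific here.

PLANCK = 6.626e-34      # J*s
LIGHT_SPEED = 2.998e8   # m/s
EV = 1.602e-19          # J per electron-volt

wavelength_m = 550e-9   # green light, mid-visible

energy_j = PLANCK * LIGHT_SPEED / wavelength_m
energy_ev = energy_j / EV

print(f"One 550 nm photon carries {energy_j:.2e} J ({energy_ev:.2f} eV)")
# ~3.6e-19 J, i.e. ~2.25 eV: enough to free at most one electron in silicon,
# which is why a sensor integrates charge from many photons per exposure
# instead of handling photons one at a time.
```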
 
I just traded our 25-month-old 2020 LR AWD Y for a 2022 PMY. I'm guessing the P doesn't have radar. I only have 286 miles on it (200 highway) and haven't experienced any issues or differences. What should I be on the lookout for?
I have a 2020 MSLR+, so it has radar. I did not install 2022.20.8, so I don't know exactly what the difference is (I am still holding off on finding out), but if I have to speculate, you should look at performance in low-light/adverse conditions, e.g. fog, snow, heavy rain, obstacles, etc.
 
The problem is that I won't have any point of reference (a side-by-side comparison) when that “event” occurs; for all I know, a car with radar would do exactly the same thing. No two scenarios are 100% identical.
 
If you have roads that you travel frequently under different conditions, then you may be able to compare. For example, there was an overpass on one of my regular commutes where, in the late afternoon, I always got phantom braking. About a year ago, after one of the updates, I stopped getting those PBs.
 
It's definitely a YMMV thing. My 2018 M3 has had PBs at the same overpasses for years with every revision, including after changing to FSD Beta a few months ago. I'm not super surprised, since they seem to be devoting very little time to classic AP relative to FSD, and the freeway part of FSD Beta is essentially still Autopilot. It's not totally identical, but my experience with PBs hasn't changed.
 
Using photons directly from the camera sensors instead of images processed by the camera... It's simple, and it can gather far more useful information than processed images in low-visibility conditions. It's been discussed in several broadcasts from Tesla engineers. Some of the videos showing results showed an amazing difference in vector-space output in fog, snow, and nighttime conditions.
I call BS on that. Let's see the links to these amazing broadcasts.

The way cameras work is by exposing the individual cells in the sensor to light, which changes the charge in the cells. At regular intervals (60, 120, etc. times per second) the cells are read out to get the total amount of light energy received during the exposure period. Working with individual photons in an ordinarily lit scene would require nanosecond-scale readout and very sensitive cells, not to mention very fast processing.
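To put rough numbers on that (the illuminance and pixel pitch below are assumptions for a ballpark, and lens/quantum-efficiency losses are ignored):

```python
# Rough estimate: photons arriving at one camera pixel per video frame in
# daylight. Illuminance and pixel-pitch values are assumptions for the
# ballpark; optics and quantum-efficiency losses are ignored.

PHOTON_ENERGY_J = 3.6e-19   # one 550 nm photon
LUMENS_PER_WATT = 683.0     # luminous efficacy at 555 nm

illuminance_lux = 10_000    # overcast daylight (assumed)
pixel_pitch_m = 3e-6        # typical small-sensor pixel (assumed)
frame_rate_hz = 60

irradiance_w_m2 = illuminance_lux / LUMENS_PER_WATT   # ~14.6 W/m^2
photon_flux = irradiance_w_m2 / PHOTON_ENERGY_J       # photons per m^2 per s
pixel_area_m2 = pixel_pitch_m ** 2

photons_per_frame = photon_flux * pixel_area_m2 / frame_rate_hz
print(f"~{photons_per_frame:.1e} photons per pixel per frame")
# On the order of millions per pixel per frame; even with heavy optics losses
# you are integrating huge photon counts, not "parsing" individual photons.
```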
 
Our MCU2 can't even deal with the web browser being open without crashing; I sincerely doubt there's any magic photon wizardry going on.