ElectricIAC
Read you loud and clear. I'm on 2022.20.8; does that mean I'm still on radar?
> The notion FSD could ever work in a Vision-only system is just laughable. What happens when an egomaniacal CEO tail wags the corporate dog...
> I'm sure the engineers know it, and must be pulling their hair out trying to deal with the inherent limitations.
> Love the cars, but the current approach to AP/FSD will be the death of the brand unless Musk gets out of the way and lets it be done right.

Yet they are doing it and progressing nicely.
> Yet they are doing it and progressing nicely.

Phantom braking says otherwise.
> The notion FSD could ever work in a Vision-only system is just laughable. What happens when an egomaniacal CEO tail wags the corporate dog...
> I'm sure the engineers know it, and must be pulling their hair out trying to deal with the inherent limitations.
> Love the cars, but the current approach to AP/FSD will be the death of the brand unless Musk gets out of the way and lets it be done right.

I wonder if the AI guy Karpathy got fed up and left because he got tired of beating the dead horse.
> Vision skeptics need to watch this video. The Occupancy network will come to basic Autopilot for "don't crash ever".

What does this have to do with the quality of the sensors?! If the camera cannot see through the fog, all the AI behind it is useless.
> What does this have to do with the quality of the sensors?! If the camera cannot see through the fog, all the AI behind it is useless.

Two things: Tesla has developed software that uses photons rather than images, for a substantial increase in camera distance penetration in low or obscured light. And, just like a human, the car cannot drive in conditions where radar alone can see.
> Two things: Tesla has developed software that uses photons rather than images, for a substantial increase in camera distance penetration in low or obscured light. And, just like a human, the car cannot drive in conditions where radar alone can see.
> The car will adjust speed to visibility just like a human would, but will actually see better because of photon parsing.

What are you talking about?
> Two things: Tesla has developed software that uses photons rather than images, for a substantial increase in camera distance penetration in low or obscured light. And, just like a human, the car cannot drive in conditions where radar alone can see.
> The car will adjust speed to visibility just like a human would, but will actually see better because of photon parsing.

Could you please elaborate? All visual-range cameras use photons to create images. Cars with radar have the advantage of adding another spectrum with different characteristics, which increases the information available for decision-making, regardless of which algorithm does it. If the sensor array receives no information (due to obstacles, fog, snow, a lack of photons bouncing back, etc.), then no algorithm can help you.
> Could you please elaborate? All visual-range cameras use photons to create images. Cars with radar have the advantage of adding another spectrum with different characteristics, which increases the information available for decision-making, regardless of which algorithm does it. If the sensor array receives no information (due to obstacles, fog, snow, a lack of photons bouncing back, etc.), then no algorithm can help you.
> Currently, my radar-equipped MS can see things that I cannot see (which is awesome, BTW). If the new “upgrade” will reduce it to only what I can see, how is that an “upgrade”?!

It all sounds like a bunch of malarkey to me.
> Using photons directly from camera sensors instead of images processed by the camera... It's simple and can gather far more useful information than images in low-visibility conditions. It's been discussed in several broadcasts from Tesla engineers. Some of the videos showing results showed an amazing difference in vector-space output in fog, snow, and nighttime conditions.

That sounds very weird. They cannot use photons directly from the camera; they need a sensor array to convert the photon energy to electricity (Einstein got a Nobel Prize for that). It is possible that they use the raw image from the camera, but there is nothing new about that: it is common practice, and I have not heard of anyone doing post-processing for AI purposes inside the camera - it just doesn't make sense. Still, if there are no photons, then no AI can help.
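To put rough numbers on the photoelectric point above: a back-of-the-envelope check with standard physics constants (nothing here is Tesla-specific) shows a visible photon carries more energy than silicon's ~1.1 eV band gap, which is why each photon can free an electron in a camera sensor cell at all.

```python
# Energy of a single visible photon vs. the silicon band gap.
# Standard physical constants; purely illustrative, not Tesla-specific.
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
J_PER_EV = 1.602e-19   # joules per electronvolt
SI_BAND_GAP_EV = 1.12  # silicon band gap at room temperature, eV

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of one photon at the given wavelength, in eV (E = hc/lambda)."""
    return H * C / (wavelength_nm * 1e-9) / J_PER_EV

for nm in (450, 550, 700):  # roughly blue, green, red
    e = photon_energy_ev(nm)
    print(f"{nm} nm photon: {e:.2f} eV "
          f"({'above' if e > SI_BAND_GAP_EV else 'below'} the Si band gap)")
```

Even a deep-red 700 nm photon comes out around 1.8 eV, above the band gap, so "using photons" is simply how every visible-light camera already works.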
> I just traded our 25-month-old 2020 LR AWD Y for a 2022 PMY. Am guessing the P doesn't have RADAR. I only have 286 miles on it (200 highway) and haven't experienced any issues or differences. What should I be on the lookout for?

I have a 2020 MSLR+, so it has radar. I did not install 2022.20.8, so I don't know exactly what the difference is (I am still holding off on finding out), but if I had to speculate, you should watch for performance in low-light/adverse conditions, e.g. fog, snow, heavy rain, obstacles, etc.
> I have a 2020 MSLR+, so it has radar. I did not install 2022.20.8, so I don't know exactly what the difference is (I am still holding off on finding out), but if I had to speculate, you should watch for performance in low-light/adverse conditions, e.g. fog, snow, heavy rain, obstacles, etc.

The problem is I won't have any point of reference (side-by-side comparison) when that "event" occurs; for all I know, a car with radar would do exactly the same thing. No two scenarios are 100% identical.
> If you have roads that you travel frequently under different conditions, then you may be able to compare. For example, there was an overpass on one of my regular commutes where, in the late afternoon, I always got phantom braking. About a year ago, after one of the updates, I stopped getting those PBs.

It's definitely YMMV. My 2018 M3 has been having PBs at the same overpasses for years with every revision, including after changing to FSD Beta a few months ago. I'm not super surprised, since they seem to be devoting very little time to classic AP relative to FSD, and the freeway FSD Beta is essentially still Autopilot. It's not totally identical, but my experience with PBs hasn't changed.
> Using photons directly from camera sensors instead of images processed by the camera... It's simple and can gather far more useful information than images in low-visibility conditions. It's been discussed in several broadcasts from Tesla engineers. Some of the videos showing results showed an amazing difference in vector-space output in fog, snow, and nighttime conditions.

I call BS on that. Let's see the links to these amazing broadcasts.
> I call BS on that. Let's see the links to these amazing broadcasts.

Our MCU2 can't even deal with the web browser being open without crashing; I sincerely doubt there's any magic photon wizardry going on.
The way cameras work is by exposing the individual cells in the sensor to light, which changes the charge in the cells. At regular intervals (60, 120, etc. times per second) the cells are read to get the total amount of light energy received during the exposure period. Working with individual photons in an ordinarily lit scene would require nanosecond scanning and very sensitive cells, not to mention very fast processing.
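The integrate-then-read scheme described above can be sketched in a few lines. This is a toy model, not a real camera spec (the 120 Hz rate and photon counts are made up for illustration): photons arriving within one exposure window just add to a cell's accumulated charge, and only that total survives the readout, so per-photon timing information is lost.

```python
# Toy model of frame-based sensor readout: charge integrates over the
# exposure window; the readout is a single total per frame.
# Illustrative numbers only, not actual camera specifications.
import random

FRAME_RATE_HZ = 120                   # hypothetical readout rate
FRAME_PERIOD_S = 1.0 / FRAME_RATE_HZ  # ~8.3 ms exposure window

def read_cell(photon_arrival_times_s):
    """One readout cycle: the cell reports only its accumulated charge
    (modeled as a photon count); individual arrival times are discarded."""
    return sum(1 for t in photon_arrival_times_s if 0.0 <= t < FRAME_PERIOD_S)

rng = random.Random(0)
# Simulated photon arrival times within a single frame, in seconds
arrivals = [rng.uniform(0.0, FRAME_PERIOD_S * 0.99) for _ in range(1000)]
charge = read_cell(arrivals)
print(f"After a {FRAME_PERIOD_S * 1e3:.1f} ms exposure: {charge} counts")
# Two photons a few nanoseconds apart are indistinguishable in this output;
# resolving them would require nanosecond-scale per-photon sampling instead.
```

Any claim of "using photons directly" would have to beat this readout bottleneck, which is the point being made about nanosecond scanning.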
> Our MCU2 can't even deal with the web browser being open without crashing; I sincerely doubt there's any magic photon wizardry going on.

The MCU isn't involved in AP/FSD operation, other than displaying the visualizations.