The photo you see is created by processing raw data from photons striking the camera sensor. Words can have different meanings in context. Before you criticize Tesla's use of "photons", do a deep dive into their programming. I guess you can't, because they won't share proprietary details; they have only summarized what they are doing in public releases. In one of those releases, they did show the difference in low visibility between the output from conventionally processed camera images and their raw-photon-data-to-vector-space processing. It was pretty remarkable.
Reinventing the wheel is not something to be proud of, especially when coupled with clear misuse of terminology.
Raw image processing is foundational for pretty much any kind of image recognition. It is surprising that Tesla was not doing that from the beginning. Calling it "photon counting" (btw: I could not find an official Tesla reference to that term) takes the amateurism to the next level. As mentioned above, that is a very well-defined concept in quantum physics. To count individual photons you need very, very specialized equipment, and, frankly, for the purpose of autonomous driving it does not make any sense.
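To illustrate the distinction being made here, below is a toy simulation (my own sketch, not anything from Tesla; the noise figures are invented for illustration). A conventional camera pixel integrates Poisson-distributed photon arrivals into one analog charge value and adds read noise on top, so individual photons are never resolved. True photon counting, in the quantum-physics sense, means a detector (e.g. a SPAD) that registers each photon discretely.

```python
import random
import math

def poisson(lam, rng):
    """Sample a Poisson-distributed photon count (Knuth's algorithm).
    Models random photon arrivals during one exposure."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def conventional_pixel(mean_photons, rng, read_noise_e=2.0):
    """A normal CMOS pixel: integrates charge, then adds analog read
    noise. Only a noisy total comes out; photons are not resolved.
    (read_noise_e is an invented, illustrative figure.)"""
    arrived = poisson(mean_photons, rng)
    return arrived + rng.gauss(0.0, read_noise_e)

def photon_counting_pixel(mean_photons, rng):
    """An idealized single-photon detector: reports the exact integer
    number of detected photons, with no read noise."""
    return poisson(mean_photons, rng)

rng = random.Random(42)
mean = 5.0  # very low light: ~5 photons per pixel per exposure
conv = [conventional_pixel(mean, rng) for _ in range(10)]
spad = [photon_counting_pixel(mean, rng) for _ in range(10)]
print("conventional (noisy analog):", [round(v, 2) for v in conv])
print("photon counting (integers): ", spad)
```

In very low light the read noise of a conventional pixel swamps the handful of photons that arrive, which is exactly the regime where real photon-counting hardware matters; a standard automotive camera sensor simply does not operate this way.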
Since 2022.20.9 came out I have spent many hours researching the subject, which increased my knowledge, but I also have a life. The car for me is a tool, and it is ridiculous to have to do research every time Tesla decides to drastically change something that I own.
Unfortunately, I see the same telltale signs as with the comical V11 efforts: ignoring a significant body of knowledge on the matter, coming up with weird arguments, arguing from an unfounded "we know best" position, etc.
There is no evidence that the radarless car performs better: it has a lower speed limit, it has a longer following distance, it needs additional help (high beams, wipers), etc. Additionally, no one outside of Tesla is either convinced by or pursuing vision-only autonomous driving, let alone vision-only adaptive cruise control. I challenge you to find an independent study on the matter that concludes camera-only is at least equivalent to a combined sensor array.
It all points to:
- Tesla had challenges obtaining radar modules due to the global chip supply issues. They decided to remove the module, save cost, and accelerate Tesla Vision (interestingly, Tesla filed for FCC approval of their own radar module this year).
- As another cost-saving measure they want to maintain a single code base (naturally), thus removing radar support.
All this "photon counting", "camera-only is better than combined sensors", etc. is marketing BS.