What resolution of camera would be needed to deliver this 10cm 3D map at a distance of say 100m?

Depends on the focal length. Speaking in 35mm ("full frame") equivalents, on a 24mm wide-angle it would need to be super hi-res; on a 400mm I'd say HD (1080p) should be enough. I don't know the 35mm-equivalent focal lengths or resolutions of the three front cameras, though.
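For a rough back-of-the-envelope check (my own numbers, not anything official): assuming a rectilinear lens on a full-frame 36mm-wide sensor, and that a feature needs about 2 pixels across it to be resolved, a quick Python sketch:

[CODE]
# Sketch: horizontal pixel count needed to resolve a 10 cm feature at 100 m,
# as a function of 35mm-equivalent focal length. Assumptions are mine:
# full-frame sensor width (36 mm) and ~2 px across the feature (Nyquist-ish).
import math

SENSOR_WIDTH_MM = 36.0   # 35mm "full frame" sensor width
FEATURE_M = 0.10         # 10 cm target feature
DISTANCE_M = 100.0       # at 100 m
PX_PER_FEATURE = 2       # ~2 px across the feature to call it "resolved"

def required_h_pixels(focal_mm: float) -> float:
    # Horizontal field of view of a rectilinear lens.
    hfov = 2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_mm))
    # Angular size of the feature at that distance (small-angle approx., ~1 mrad).
    feature_angle = FEATURE_M / DISTANCE_M
    return hfov / feature_angle * PX_PER_FEATURE

for f in (24, 50, 400):
    print(f"{f}mm equiv: ~{required_h_pixels(f):,.0f} px wide")
[/CODE]

That gives roughly 2,600 horizontal pixels for a 24mm equivalent but only a couple hundred for a 400mm equivalent, which lines up with the "super hi-res vs. HD is enough" gut feeling above.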
The question is: what do you need 10cm 3D resolution for at 100m?
3D isn't necessarily needed for object detection; 2D can do that.
Judging from these collages from the cameras, and assuming the tele camera is just as high-resolution as the midrange and wide ones, it should be possible with the installed cameras to identify objects of 10cm size at 100m or more. But that's just eyeballing.
Lidar has most of the same issues and drawbacks, plus significantly higher processing requirements.
All Tesla is talking about in terms of HW3 is an increase in processing power and running the Tesla Vision NN "on bare metal".
What can be done with just cameras was recently demonstrated by Mobileye, with a setup very similar to Tesla's: just cameras.
They also use a Roadbook, which is essentially a map of localized landmarks, to localize the vehicle on the road down to a precision of 2-3 cm. It's not as compute-intensive as classic SLAM, because far fewer tracking points are needed.
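A minimal sketch of the underlying idea (my own simplification, not Mobileye's actual REM implementation): with just a handful of map landmarks matched to the same landmarks observed from the car, the pose drops out of a small least-squares alignment instead of a full SLAM pipeline. The landmark coordinates below are made up for illustration.

[CODE]
# Landmark-based localization sketch: solve the 2D rigid transform
# (rotation + translation) aligning observed landmarks to map landmarks.
import numpy as np

def localize_2d(map_pts: np.ndarray, obs_pts: np.ndarray):
    """Least-squares 2D pose (Kabsch): find R, t with map ~ R @ obs + t."""
    mu_m, mu_o = map_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T          # vehicle -> map rotation
    t = mu_m - R @ mu_o                         # vehicle position in map frame
    return R, t

# Made-up example: 4 signposts from the "Roadbook" map...
landmarks_map = np.array([[10., 2.], [14., -1.], [20., 3.], [25., 0.]])
# ...as seen from a car at (3.0, 1.5) with a small heading angle.
theta, pos = 0.05, np.array([3.0, 1.5])
R_v = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
landmarks_obs = (landmarks_map - pos) @ R_v     # landmarks in vehicle frame

R, t = localize_2d(landmarks_map, landmarks_obs)
print("estimated position:", t)                 # ~ [3.0, 1.5]
[/CODE]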
Not really on topic though. My SLAM remark was intended as a reply to the original "one-eye-vision" post and why depth perception worked once in motion.
SLAM is not even necessary. You can already calculate a pretty decent image difference map by simply comparing two subsequent frames from the camera to get outlines, and calculate movement vectors from those (see the sketch below). Combine that with the existing image recognition and you don't need full-blown SLAM.

Another big problem with SLAM based on vision alone is moving objects. It works well for mapping stationary objects, but if the things you're mapping are moving while you're moving, you can't really separate their movement from your movement. It also requires a fair amount of processing horsepower, which Tesla is also stretched thin on.
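A minimal sketch of the difference-map-plus-motion-vectors idea from above, assuming OpenCV and two consecutive grayscale frames; the file names are placeholders:

[CODE]
import cv2

# Two consecutive frames from the camera (placeholder file names).
prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Difference map: highlights the outlines of anything that moved
# between the two frames.
diff = cv2.absdiff(prev, curr)
_, outlines = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Dense movement vectors via Farneback optical flow -- no SLAM needed.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
# flow[y, x] holds the (dx, dy) displacement of pixel (x, y) between frames.
[/CODE]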
Basically Tesla is completely screwed when it comes to doing any of the sophisticated things that the big boys (the ones with real sensor suites and processing power) can do.
And considering the image quality that decent cell phone cameras already offered two years ago, and that you can nowadays run pre-trained image recognition in real time on said phones, I remain optimistic.
I still fail to see the "you need centimeter resolution at kilometer distance" Lidar argument, especially considering the massive cost and processing-power demands, and the near-identical susceptibility to environmental influences.
But this is off-topic here, as it's not really V9-related and belongs in the camera capabilities thread. So sorry to everyone who read this entire post and didn't learn anything new about V9.