> [Ultrasonic] These sensors are useful for detecting nearby cars, especially when they encroach on your lane...

Lanes are like 8 ft wide anyway, right? So they would work wonders for the stated purpose of detecting other cars that encroach on you.
I would love to get some input from the sleuths in this thread about Tesla’s camera specs. Two-part question: 1) what specs are needed to avoid motion blur and other visual artifacts while driving at high speeds, and 2) do Tesla’s Hardware 2 cameras meet those specs?
> Lanes are like 8 ft wide anyway, right? So they would work wonders for the stated purpose of detecting other cars that encroach on you.

Yeah, I think it would be effective for the purpose of avoiding collisions, but even 8 feet is a lot for comfort on highways. IMO, suddenly moving even outside the 8-foot bubble around a car can make other drivers freak out.
> Instead of asking the people in this thread to repeat information that has already been discussed, I would recommend that you use the search feature on TMC to answer your questions.
> You were using the wrong search terms. Hardware 2 is a term that practically nobody uses in discussion. At most you see HW2, but people typically just say AP2.0.

The paper I'm referring to was published just last month, and my post is the first one on the TMC forums to mention the title. My post is also the first to mention "motion blur" in the context of Hardware 2 cameras or autonomy (the rest are about motion-blurred photos of the Model 3, etc.). To my knowledge, this information has never been discussed here before.
I tried to search for the exact model of camera used in Hardware 2, but it is not easy info to find. Searching "what cameras does Tesla use", "Hardware 2 cameras", "camera model", etc. doesn't get you anywhere. Thankfully, verygreen was kind enough to let me know it's the ON Semiconductor AR0132. (Hopefully now if someone searches those same terms, this post will come up.)
You were using the wrong search terms. Hardware 2 is a term that practically nobody uses in discussion. At most you see HW2, but people typically just say AP2.0.
We have a long thread dedicated to AP2.0 cameras right here:
AP2.0 Cameras: Capabilities and Limitations?
"ap2.0 camera" turns up the thread, but not the model."AP 2.0 camera model" still doesn't turn up the answer. Try searching yourself and see if the answer comes up.
I am probably ten or so pages into the AP2.0 Cameras thread. But I think it's reasonable to ask for information that isn't easily searchable rather than having to read a 48-page thread in the hope of finding information that might not be in there.
Interesting...
ON Semiconductor spends $400M buying Aptina image sensor supplier
Any references to camera model/product numbers?
Image Sensors World: ON Semi-Aptina Early Samples Automotive BSI Sensor
AR something perhaps?
AR0136AT: 1.2 MP 1/3" CMOS Image Sensor
All cameras, except the rearview cam, use the Aptina / ON Semiconductor AR0132. The rearview cam uses the OmniVision OV10635.
Trent, there are thousands of posts about the cameras on the Tesla Model S and Model X. Instead of asking the people in this thread to repeat information that has already been discussed, I would recommend that you use the search feature on TMC to answer your questions.
> I heard from a reliable source who has professional experience in computer vision that 60 fps and a global shutter should be enough.

As pretty much anybody serious about videography would tell you, fps does not really matter; it's the actual shutter speed that matters, since nothing prevents you from having 30 fps video where the exposure for every frame is 1/120 s or less. Lots of cameras internally use a shorter exposure time when the light permits it (or rather, requires it, I guess) to maintain correct exposure.
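To put rough numbers on the shutter-speed point: blur is set by exposure time, not frame rate. A minimal back-of-the-envelope sketch in Python; the speed, distance, and focal length here are invented for illustration and are not Tesla's actual camera parameters:

```python
# Back-of-the-envelope motion blur estimate (hypothetical numbers,
# not Tesla's actual camera parameters).
# Blur in pixels ~= angular speed of the target (rad/s)
#                   * exposure time (s) * focal length (pixels).

def blur_pixels(rel_speed_mps, distance_m, exposure_s, focal_px):
    angular_speed = rel_speed_mps / distance_m   # rad/s, small-angle approx.
    return angular_speed * exposure_s * focal_px

focal_px = 1000.0                        # assumed focal length in pixels
for exposure in (1/60, 1/120, 1/500):    # 60 fps caps exposure at 1/60 s
    px = blur_pixels(rel_speed_mps=30.0,  # ~108 km/h relative speed
                     distance_m=20.0,
                     exposure_s=exposure,
                     focal_px=focal_px)
    print(f"exposure 1/{round(1/exposure)} s -> ~{px:.1f} px of blur")
```

At 60 fps the exposure can't exceed 1/60 s, but nothing stops the camera from exposing far shorter than that in good light, which is what actually cuts the blur.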
Most CMOS image sensors (to save one transistor per cell compared to a true "snapshot" shutter) use an Electronic Rolling Shutter (ERS). Basically, it implements two pointers to sensor pixels, both proceeding in the same line-scan order across the sensor.
One is the erase pointer, the other the readout pointer. The erase pointer runs ahead, discharging each photosensitive cell, and the readout pointer follows. Each pixel sees (and accumulates) light for the same exposure time (from the moment the erase pointer passes it until it is read out), but that happens at a different time for each row.
If you make an image of a fast-passing car with ERS and a short exposure, it will all be sharp (no blurring), but the car will seem to lean backwards. The roof is captured earlier than the wheels, and this time difference across the frame can be as long as 1/15 of a second for a full frame of the MT9P001 5 MPix sensor (it equals the frame readout time, which is 1 second divided by the frame rate in most circumstances).
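For concreteness, here is that "lean" as a small Python sketch: the sideways shift is just car speed times the slice of the frame readout time spanned by the car in the image. The 1/15 s readout figure is from the explanation above; the other numbers are invented:

```python
# Rolling-shutter skew estimate (illustrative numbers only).
# Rows are read out sequentially, so the bottom of a passing car is
# captured later than the top and appears shifted sideways.

def skew_meters(car_speed_mps, frame_readout_s, car_rows, frame_rows):
    # Time between reading the car's top row and its bottom row.
    dt = frame_readout_s * (car_rows / frame_rows)
    return car_speed_mps * dt

# Example: MT9P001 full frame reads out in ~1/15 s (per the post above).
shift = skew_meters(car_speed_mps=30.0,   # ~108 km/h
                    frame_readout_s=1/15,
                    car_rows=400,          # rows the car spans in the image
                    frame_rows=1944)       # MT9P001 full-frame height
print(f"top-to-bottom shift: ~{shift:.2f} m")  # ~0.41 m of 'lean'
```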
> stopcrazypp, okay, but the actual camera model doesn't appear in the search result. Unless you already knew what to look for, you wouldn't know the answer is on the next page. It would be nice to have this information be more accessible. I don't think asking questions is bad, and I'm personally happy to give people information that can't easily be found.

Just a general tip when searching (I do it a lot): if using specific and narrow search terms doesn't find what you want, broaden them and use synonyms (the search here is not sophisticated enough to do that automatically). Also, searching by phrasing it as a question in human format rarely works, as the extra words are completely irrelevant to the search engine (it would only work if someone worded the question exactly as you did).
> The naysayers about cameras aren't going to be convinced by this argument. I don't think I have seen them question localization in general. Rather, it's about situations with glare, poor weather, and low light.

Anyway, my question wasn't just about the HW2 camera specs. It was also about what specs are required to prevent motion blur and other visual artifacts that might interfere with visual localization and mapping at high speeds. This question has not yet been discussed on this forum, as far as I'm aware.
I heard from a reliable source who has professional experience in computer vision that 60 fps and a global shutter should be enough. I was happy to see Tesla's HW2 cameras are 60 fps. I was a bit concerned to see they have an electronic rolling shutter rather than a global shutter. Motion blur will occur at high speeds with this hardware.
However, motion blur can apparently be counteracted with software. This is the next thing I'm looking into. I want to see how effective this software is.
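For what it's worth, one standard software technique for undoing a known motion blur is Wiener deconvolution with a linear-motion point spread function. A minimal NumPy sketch, purely illustrative; there's no public evidence this is what Tesla's software does:

```python
# A minimal sketch of undoing linear motion blur with Wiener
# deconvolution (one standard approach; whether Tesla does anything
# like this is an open question).
import numpy as np

def motion_psf(length, shape):
    """Horizontal linear-motion point spread function, padded to `shape`."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, noise_power=1e-3):
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + K), applied in the frequency domain.
    F = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))                 # stand-in "scene"
psf = motion_psf(length=9, shape=sharp.shape)  # 9-pixel horizontal smear
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf)
print("RMS error after restore:", np.sqrt(np.mean((restored - sharp) ** 2)))
```

The catch in practice is that the blur kernel has to be estimated (from vehicle speed, exposure time, and scene depth), and deconvolution amplifies noise, which is part of what "how effective is this software" comes down to.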
In case the stakes are not clear here, this is about closing in on hard, quantitative evidence to support Elon Musk's assertion that "you can absolutely be superhuman with just cameras". If a multi-camera system is demonstrably better than or at least as good as humans at localization at low driving speeds, and if there is nothing to stop a multi-camera system from being as accurate at high driving speeds, then this is hard evidence that a multi-camera system (plus GPS) provides all the sensor input a self-driving car needs to be better at driving than humans.
> You can shoot something at 60 fps, but use a slower shutter speed to get more motion blur.

It would be quite a feat to shoot 60 fps with an exposure of 1/30 or longer.
> It would be quite a feat to shoot 60 fps with an exposure of 1/30 or longer.

I knew there was something weird about that sentence; I probably shouldn't be doing edits like that. It should be the other way around: shoot at a faster shutter speed for less motion blur.
The naysayers about cameras aren't going to be convinced by this argument. I don't think I have seen them question localization in general. Rather, it's about situations with glare, poor weather, and low light.
> ...Tesla will have to use imaging alone to build most of the model of the world around itself, but, as I noted above, we don’t yet know how to do that accurately. This means that Tesla is effectively collecting data that no-one today can read (or at least, read well enough to produce a complete solution). Of course, you would have to solve this problem both to collect the data and actually to drive the car, so Tesla is making a big contrarian bet on the speed of computer vision development. Tesla saves time by not waiting for cheap/practical LIDAR (it would be impossible for Tesla to put LIDAR on all of its cars today), but doing without LIDAR means the computer vision software will have to solve harder problems and so could well take longer. And if all the other parts of the software for autonomy - the parts that decide what the car should actually do - take long enough, then LIDAR might get cheap and practical long before autonomy is working anyway, making Tesla’s shortcut irrelevant. We’ll see.
> In lacking LIDAR, [a Tesla car] also lacks the ability to precisely know positions, directions, speeds, even for the objects it can detect. ... Worse still, in lacking LIDAR it also doesn’t have the ability to position itself exactly in the world.
> The paper I found demonstrated visual SLAM with high accuracy, but only at low speeds. If there's no barrier to a similar system achieving high accuracy at high driving speeds, then this objection is overcome.

I think speed-measuring devices based purely on video cameras were at least somewhat common in the past. This is even without taking potential stereo vision into account.
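In principle, a pure-video speed estimate is just optical flow scaled by depth: track how far a feature moves between frames and multiply by distance and frame rate. A toy Python sketch with invented numbers (the depth would have to come from stereo, known object size, or other cues):

```python
# How a pure-video speed estimate can work in principle: track a
# feature's pixel displacement between frames and scale by depth.
# All numbers are made up for illustration.

def speed_from_flow(pixel_shift, focal_px, distance_m, fps):
    # Lateral speed (m/s) implied by pixel_shift pixels of motion
    # between consecutive frames, for a target distance_m away.
    angle_per_frame = pixel_shift / focal_px   # radians, small-angle approx.
    return angle_per_frame * distance_m * fps

v = speed_from_flow(pixel_shift=5.0, focal_px=1000.0,
                    distance_m=20.0, fps=60)
print(f"estimated lateral speed: ~{v:.1f} m/s")  # 6.0 m/s
```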
Comments in code are super common.
In shipping binaries?
Yes, you'll see lots of readable characters. However, those readable characters are not comments. They are strings used by the binary to log messages, open files, request URLs, etc. You'll also see strings used to reference symbols in other modules, and to export symbols from the library you're looking at. If you are super lucky, you'll also have debug symbols (like function names for non-exported functions, variables, etc.). I've never seen source code (e.g., comments) included in a binary.
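To see what those readable characters look like, you can pull printable runs out of any binary, which is essentially all the Unix strings utility does. A minimal Python equivalent (file path is whatever binary you point it at):

```python
# A tiny re-implementation of the Unix `strings` utility, to show
# what the "readable characters" in a shipping binary actually are.
import re
import sys

def printable_strings(path, min_len=4):
    with open(path, "rb") as f:
        data = f.read()
    # Runs of printable ASCII at least min_len bytes long.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    for s in printable_strings(sys.argv[1])[:20]:
        print(s)   # typically log messages, paths, URLs, symbol names
```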
I suspect @verygreen was talking about something interpreted, like shell, perl, or python, etc.