it sees it in complete grey-scale.
You mean grayscale with a red channel... because there's a red channel.

If rendering video, you don't have to render the red channel as red... but it's red nonetheless.

Elon Musk on Twitter

k8JYKQ.jpg


As you know, technically Autopilot doesn't see a picture at all; it sees a stream of data, logically a series of 2D matrices. Autopilot can certainly read the luminosity of the red pixels. For video, you can render it any way you want before compression without extra hardware (the hardware on board should be plenty).
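For illustration, here's a toy sketch of that rendering point (my own example, not anything from Tesla's software; frame and red_mask are hypothetical inputs, with red_mask marking the red-filtered pixel positions): the same gray+red stream can be shown either as plain grayscale or with the red pixels tinted, entirely in software before compression.

```python
import numpy as np

# Toy sketch: render a gray+red mosaic frame either as plain grayscale or with
# the red-filtered pixels tinted red. "frame" is a 2D array of pixel values and
# "red_mask" a boolean array marking the red-filtered positions; both names are
# assumptions for illustration, not Autopilot's actual data layout.
def render_preview(frame, red_mask, tint_red=True):
    gray = frame.astype(np.float32) / frame.max()
    rgb = np.stack([gray, gray, gray], axis=-1)
    if tint_red:
        rgb[red_mask, 1] = 0.0  # zero green at red-filtered pixels...
        rgb[red_mask, 2] = 0.0  # ...and blue, so they display as red
    return rgb
```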
 
This will be a somewhat lengthy and technical update on what I found in the snapshot images; it reads more like a blog post. I'm posting it because it partially reveals some information about the cameras and partially corrects some incorrect things I said or showed earlier. I hope it won't bore you too much (and sorry for my English; I hope you won't struggle reading it as much as I did writing it).

I took a look into the "sunset" sets previously, but the images I got from these light conditions didn't look right, and it bothered me. I tried to find information that would help in the camera specification, assuming that this sensor is the Aptina AR0132AT (as mentioned earlier in this thread, and what can be seen in the data seems to confirm that). The documentation, however, at first did not fully add up with what I saw, so looking for hints I ended up decoding the camera chip register data sent as the first two lines of the image (they are mangled a bit due to how the camera transmits pixel data and how that is then stored into the file). I couldn't find the register set reference for the AR0132AT, so I used the documentation for the AR0130 (its register descriptions seem to match the values in the images, so I think they are close enough). I think I finally realized what is going on in those images.

Firstly, those sensors are HDR cameras, with built-in hardware support for multi-exposure HDR capture, which is being used by Autopilot. The cameras are set up to capture three images for each frame, stepping the exposure time between them by a factor of 16. Those images are automatically combined into a single frame by the camera.

Secondly, among the camera registers are:
  • green1_gain
  • blue_gain
  • red_gain
  • green2_gain
Those are digital gain values for the separate color channels of a Bayer pattern. This chip is most likely designed for a couple of different purposes at once, including full-color cameras with a Bayer mosaic. Grayscale sensors and gray-red sensors differ only in the color filter coating applied to them, so in our case the red gain affects the red channel and the rest affect the grayscale pixels. Anyway, those registers are all set to the same value, meaning that no color balancing is set up, at least at the sensor level. Why is this important to me? The red color filter coating prevents some portion of the light from reaching the pixel sensors, which means that the grayscale pixels (without the coating) are much more sensitive than the red ones (this is one of many reasons why you want to use a grayscale camera whenever applicable). When restoring color in previous images I had to multiply the red values by a factor of about 2. The fact that this is not done at the sensor makes me think that Tesla is not yet using the color information, at least for Enhanced Autopilot, but I might be wrong.
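As a rough sketch of that balancing step (my guess at how it could be done in software, given that the sensor is not doing it; red_mask is a hypothetical mask of the red-filtered pixel positions, and the ~2x factor is the empirical value mentioned above):

```python
import numpy as np

# Compensate the red-filtered pixels for the light lost in the red filter
# coating, since the sensor's red_gain register is left equal to the others.
# The factor of ~2 is the empirical value from this post, not a datasheet value.
def balance_red(frame, red_mask, red_factor=2.0):
    out = frame.astype(np.float32)
    out[red_mask] *= red_factor
    return out
```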

If those cameras were capturing a single frame with linear output, all I would have to do to restore the color is multiply the red channel to match it up with the rest. But when multiple frames are combined in HDR mode, all with the red channel off balance, the color relation is a bit more complicated.

This is why I previously thought this relation might not be linear. In the previous images I converted, I had accidentally found the portion of brightness coming from the middle exposure (the most useful one for their light conditions). This is also why I thought the side cameras had a different response. All that was wrong with them was a wrong exposure time setting (this is the only meaningful difference in the register state), making them overexposed. In their case I had accidentally found the portion from the shortest exposure, which was the most useful for their conditions.

This is not yet the end. When working in HDR mode the camera combines the images into the resulting frame. According to the specification it has a 12-bit sensor. When combining three images with x16, x16 exposure ratios it ends up with a 20-bit linear brightness value (E1 + 16*E2 + 16*16*E3). It can then transmit this 20-bit value, or compress the brightness to 14 bits (without data loss, fitting the sum of the 12-bit values into it), or to 12 bits (with some data loss), using this function:
Compression.png


When looking at the image data I was sure the camera was working in 14-bit mode, because I was seeing values up to 15,999 (and the 14-bit range is 2^14 = 16,384). This is where the image data did not match the camera specification for 14-bit mode. After looking into the registers it turned out that the camera is actually working in 12-bit mode. Pixel values in the files are just shifted by two bits (multiplied by 4). Why it is set up like that, I don't know. Maybe they prepared the vision algorithms for 14-bit data and then decided to switch the cameras to 12-bit mode, although I doubt it. Moreover, those two extra bits seem to be occasionally filled with random values, as I saw in the register encoding lines, which is weird and still unexplained to me. Anyway, when taking into account the 12-bit mode of the camera and the 2-bit shift afterwards, things start to make sense. Characteristic points on the image histogram finally lined up with values in the specification. Processing images according to this specification finally works universally for all the images. The colors are still quite off in different images, but this time there are no sharp spots of opposite colors in the same image that previously suggested a nonlinear relation.
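In practice this means the stored values first have to be brought back to the 12-bit domain, something like this (just a sketch of that step; the two noisy low bits are simply dropped):

```python
import numpy as np

# Undo the 2-bit shift seen in the files: stored values are the 12-bit
# compressed-brightness values multiplied by 4, and the two low bits can
# contain random data, so they are discarded here.
def to_12bit(raw):
    v12 = raw >> 2
    assert v12.max() < 4096  # should now fit the true 12-bit range
    return v12
```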

Basically I have to reverse what the camera does when combining the multi-exposure images and compressing the brightness values, split the brightness range back into the three exposure ranges (although it may not be possible to do it perfectly) and amplify the red channel for each range separately. Even when not computing the color, the grayscale image still has to be processed to decompress the brightness range.
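Structurally the processing looks roughly like the sketch below. This is not my actual converter code, just its shape: decompress_12bit stands in for the inverse of the sensor's companding curve (its knee points come from the specification), the split points are derived from the 1:16:256 exposure ratios (they roughly match the 0.34% / 6.19% boundaries I mention later for the tool), and the per-range red gains are placeholders that have to be found empirically.

```python
import numpy as np

def restore_color(raw, red_mask, decompress_12bit, red_gains=(2.0, 2.0, 2.0)):
    """Rough structural sketch, not the actual converter:
    1) undo the 2-bit shift, 2) decompress back to ~20-bit linear brightness,
    3) split the result into the three exposure ranges, 4) amplify the red
    pixels separately within each range."""
    lin = decompress_12bit(raw >> 2).astype(np.float64)
    lin /= lin.max()                      # normalize to 0..1 for the split
    t1, t2 = 1.0 / 273.0, 17.0 / 273.0    # ~0.37% and ~6.2% of full range
    bounds = [(0.0, t1), (t1, t2), (t2, 1.01)]
    out = lin.copy()
    for (lo, hi), gain in zip(bounds, red_gains):
        sel = red_mask & (lin >= lo) & (lin < hi)
        out[sel] = lin[sel] * gain        # per-range red amplification
    return out
```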

For example, previously I uploaded an image with traffic light warning signs that was a bit overexposed. It turns out that in reality it is not as bad:
tcd4-44.jpg

But getting back to the sunset snapshots. Images from these light conditions have a very wide usable brightness range, and it is quite hard to visualize them in a way that does not lie about what Autopilot can actually see. I don't know much about HDR post-processing or tone mapping, and the methods from a quick online search are not producing satisfying results, so I'll try to present it another way. Here are images made out of almost the whole captured brightness range (notice that the traffic lights are not overexposed):
cf.jpg df.jpg bf.jpg

The dark parts of those images of course contain much more information. If I crop the brightness to those dark ranges, this is what can be seen:
c.jpg d.jpg b.jpg

In some parts you can see what I think is ghosting from the HDR multi-exposure in motion.

Below is a combined video of the main camera replays from all the sunset snapshots. This time the quality is really crappy. This is because the h265 compression was applied on top of the brightness range compression, and when decompressing the brightness I'm basically amplifying the h265 compression artifacts through the roof. The replay video is also stored as 10 bits/px, so some quality is lost there too.

sunset main_replay - Streamable

Some other random info that can be found in the register settings:

When in HDR mode the sensor can perform 2D motion detection and compensation, but it is not in use (it is turned off in its register).

The sensor provides automatic exposure, analog gain and digital gain management, but it is not used; all of those parameters are controlled manually by the Autopilot ECU. It can be seen in the snapshots that Autopilot changes the exposure time and gain depending on the lighting conditions. In the video at about 0:31 you can see the result of increasing the gain in the sensor.

If I'm correctly understanding how to calculate the chip temperature from the register values, then the main camera sensor in the sunset snapshot is at about 50°C, the side camera sensor at about 44°C, the main camera in the intersection snapshot at about 40°C, and at the end of the "drivingaround1_day" set the main sensor reached over 60°C. A bit warm, but the maximum allowed ambient temperature for the sensor is 105°C, so the allowed die core temperature should be even higher. I'm bringing this up because some people were checking sensor temperatures with thermal cameras and weren't sure whether the raised temperature was due to the sensor running or a heating element working.
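For what it's worth, my assumption is that the readout is a raw counter that is linear in die temperature, anchored by two factory calibration readings taken at known temperatures; the reference temperatures and calibration values in the sketch below are placeholders, not numbers from the documentation.

```python
# Hedged sketch of a typical on-chip temperature readout: linear interpolation
# between two factory calibration readings. calib_lo/calib_hi and the 55/70 C
# reference points are hypothetical placeholders, not values from the AR0130
# documentation.
def die_temperature_c(raw, calib_lo, calib_hi, t_lo=55.0, t_hi=70.0):
    slope = (t_hi - t_lo) / float(calib_hi - calib_lo)
    return t_lo + (raw - calib_lo) * slope
```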
 
The ghosting might also be from the windshield.

Good point. I hadn't thought about it, and this actually might be it.


@DamianXVI is this SW in any way helpful?

Well... not really. Or at least I can't force it to cooperate. The package you linked directly doesn't contain much (just some pieces of documentation and a register tool that is missing the Qt runtime it needs to work). But the readme points to their DevSuite software. This kind of software, however, is meant to be used when you have the camera connected to the computer and want to interface with it. It can load images from files, but I cannot figure out how to force it to interpret the files from Autopilot correctly. It doesn't even seem to support the Gray+Red sensor pattern.
 
From the AR0130 data sheet, it only supports monochrome and RGB Bayer, which probably explains the lack of RCCC support. I'm guessing you'd need the tool for the actual AR0132.

A Google search for AR0132 software only found this page talking about "Devware", which seems to be part of the same DevSuite software you found.
Help - Detailed - DevSuite - Confluence

The only part where it mentions RCCC is this, not sure if it helps:
"Data Interpretation Page
This page gives some control over the demosaicing algorithms that are used to convert the bayer image into an RGB color image.
For sensors with clear pixels (RCCB, RyyB, RGBC, RCCC), use "RGB (fast-linear)" to convert to a viewable RGB image.
This setting interpolates each of the four channels to get R, G1, G2, and B for each pixel. It then averages G1 and G2 together to get G. The result is RGB for each pixel."
 

Use the "RGB" setting to convert a Bayer-like image to an RGB image. So RGB is the destination format here. That suggests the software needs to know the format of the source image on its own. I think this applies to the use case when you are streaming video live from the camera and the software gets additional information from it. I could not figure out how to set the format of the source data read from a file. I clicked through many of the formats in the settings without much change in the results. I think this software is not really prepared for reading unprocessed sensor data out of a file.
 
How about the forward- and rear-looking side cameras? How much info can we get from them in pitch dark? Maybe some help from the head- and taillights, but probably not much...?
Well, if there's a car behind, it'll illuminate things for us (or blind us?); otherwise there's nothing to see and no useful information if the car is moving forward, right? ;)
Same for the sides. Anything that protrudes enough to hit the car would be illuminated by the headlights first; otherwise it's at a safe distance.
 
I was interested in this too. But from @DamianXVI's analysis, they have an HDR mode that can significantly help in low light (having an RCCC filter already helps by letting in more light than a typical RGB filter).
 
So, I had FCW triggered on my recent trip, and as it turns out this is an event that triggers a snapshot. It only records CAN data and the main camera replay. I uploaded a sample to https://app.box.com/s/bv03mxivwrtr8iau4x7odwc0j3g8s5ml
I wonder what other triggers beyond FCW and crash exist.

Also, I finally drove around a bit at night (also partially raining, 2+ hours after sunset), taking snapshots every 3 minutes (in hindsight I should have taken them more often, since a bunch of totally unlit roads were apparently not captured, but snapshot #12 is me deliberately stopped on an unlit street, with only the headlights on).
Anyway, uploading the 14 resultant snapshots to https://app.box.com/s/jah6ovf6y5eoq9faejeylt7fm2j2od4i
It has a mixed set of highway and country road stuff.

I'll also add a couple of DVR recordings for comparison, and here are a few snapshots that I took so we can look at them here, but the forum is not cooperating, so I am also uploading them to the above-mentioned location.
In particular, IMG20170604-234144F.JPG is the unlit street for comparison.
 
Finally took some time to extract what I was working on into a stand-alone command line tool; I'm attaching it here. I also improved the color computation since last time (aka corrected one of my mistakes). Now lots of images look almost "normal".

If anyone wants to generate some good looking images with it, the key part is finding the brightness range to crop to (argument -range). It operates in the HDR linear range (the first exposure is contained in the range 0% to 0.34%, the middle exposure in 0.34% to 6.19%, and the last exposure in 6.19% to 100%), so ending up with short ranges at low values is nothing unexpected. Usually it takes some experimenting to find the right range. There is an automatic range mode (-auto and -autolo switches), but it doesn't work very well and is only useful as a starting point or quick preview. Also, exporting a bmp with the full range and trying to enhance it in a graphics editor will not work, as precision is lost when saving a bmp with 8 bits per channel. For further tweaking there is also a gamma parameter, and a red factor parameter for color balancing.
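For anyone curious what the range crop amounts to, here is a minimal sketch of the idea (my own illustration, not the tool's code; argument names are mine): a slice of the normalized HDR linear range is stretched to 8-bit output, which is exactly the precision that is already gone if you first export a full-range 8-bit bmp and try to stretch it in an editor.

```python
import numpy as np

# Map a [lo_pct, hi_pct] slice of the normalized HDR linear range to 8-bit
# output, with optional gamma. Illustration only; names are mine, not the tool's.
def crop_range(linear01, lo_pct, hi_pct, gamma=1.0):
    lo, hi = lo_pct / 100.0, hi_pct / 100.0
    x = np.clip((linear01 - lo) / (hi - lo), 0.0, 1.0)
    return np.round(255.0 * np.power(x, 1.0 / gamma)).astype(np.uint8)
```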
 

Attachments

  • ImageConverter.zip
    108.9 KB