As I said earlier - below is a new version of the image conversion tool with a function for converting images from the backup camera. It's accessible with the "-back" switch (with no additional options, this is a straightforward conversion to RGB BMP).
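A minimal example, assuming the invocation follows the same pattern as the other examples later in this thread (the input filename here is just a hypothetical placeholder):

Code:
ImageConverter -back rearcam_image.bin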

I've also been experimenting with this tool since then. Some functions were rewritten with SSE for faster computation (useful when converting replay videos). I also wanted to add the ability to export a full-precision image to an advanced external image editor, for better HDR processing and enhancement, so I added an option to save to TIFF files (the "-tiff" switch). This saves a 32-bit floating point TIFF with the full linear range (without histogram stretching or gamma correction). As far as I understand, it should be possible to load this file in Photoshop or a similar application without any precision loss, but I haven't properly tested that.
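If you don't have Photoshop handy, an HDRI-enabled ImageMagick build should also be able to read a linear 32-bit float TIFF and bring it into a viewable range - a rough sketch, not tested against the tool's output (filenames are placeholders):

Code:
convert full_linear.tiff -gamma 2.2 -depth 8 preview.png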

TIFF export works as expected! Thanks!

BTW, how do you make the converter work on the unpacked HEVC images? Say I unpack one into a raw gray16 file - obviously the register data is no longer correct after the compression/decompression cycle, so the converter produced a hugely overexposed result for me instead (even with TIFF output).
 
@verygreen do you have any idea what triggers the snapshots / videos being taken? Is it completely random, driving-based events, maybe time-based?
There are several sources.
One is that the mothership tells your car a set of triggers (the format was discussed here: AP2.0 Cameras: Capabilities and Limitations?)
and then the car takes snapshots as it was told (I have not looked into modern firmwares to see if any of the parameters have been added).
Another avenue is internal events. FCWs used to generate a limited snapshot with just the main+narrow cameras, for example (but stopped), and if the car thinks it's in an accident, that triggers a snapshot as well (hopefully I never find out which cameras are included ;) ).

The 17.34 firmware reportedly became a lot more chatty too, but I have not gotten my hands on one yet, so I don't know what it is sending.
It could be something as boring as services crashing and sending crashdumps for analysis (that happens every time a part of Autopilot crashes; when my windshield was replaced and the tri-cam cluster was disconnected, that generated quite a bit of traffic).
 
BTW, how do you make the converter work on the unpacked HEVC images? Say I unpack one into a raw gray16 file - obviously the register data is no longer correct after the compression/decompression cycle, so the converter produced a hugely overexposed result for me instead (even with TIFF output).

You mean for converting replay videos? That is quite a hackish multi-step process that I used. From the beginning, step by step:

As I said before, I'm using ffmpeg with rawvideo output. First the replay frame files need to be concatenated (concatenation at the file level, not the video stream level), but starting from the first key frame, which is not necessarily the first file in the snapshot (I just delete a couple of files at the beginning, before the first noticeably larger file). I used an additional C# program for that, but I believe ffmpeg is able to do it too (a simple byte-level concat is also sketched after the command below). Then this concatenated file ("video.h265") is converted to a raw frame stream ("video.bin"):

Code:
ffmpeg -i video.h265 -f rawvideo video.bin
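
For the concatenation step mentioned above, a plain byte-level join should be enough once the files before the first key frame are deleted - a sketch, assuming the remaining frame files sort in playback order (the filenames are hypothetical):

Code:
cat frame_*.h265 > video.h265

On Windows, copy /b frame1.h265+frame2.h265+... video.h265 does the same byte-level join.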

The output from this ffmpeg command, however, has a different format than the still images. It's 10 bit/px (still images are effectively 12 bit, stored as 14 bit), the grayscale and red channels are stored as separate planes, and the register data is practically gone because of compression. So I just hardcoded a procedure for those properties; it is available under a hidden "-vid" switch in the tool.

You should first find a fixed brightness range by experimenting on the still images from the same camera, instead of using auto (in which case the range would be calculated for each frame and the video would flicker). For example, for range 0.01%...1.5% and red factor 1.9:

Code:
ImageConverter -range 0.01 1.5 -rf 1.9 -vid video.bin

This command will create a single BMP for each frame in the same directory (so be prepared for up to 300 files to be created). From there, those BMPs just have to be converted into a video, e.g. as sketched below.
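ffmpeg can assemble the BMPs directly - a sketch, assuming the tool's output filenames sort in frame order (the glob pattern and framerate here are placeholders; Windows ffmpeg builds usually lack glob support, so use a %d sequence pattern there instead):

Code:
ffmpeg -framerate 30 -pattern_type glob -i 'video_*.bmp' -c:v libx264 -pix_fmt yuv420p video.mp4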
 
So, I had an FCW triggered on my recent trip, and as it turns out this is an event that triggers a snapshot. It only records CAN data and the main camera replay. I uploaded a sample to https://app.box.com/s/bv03mxivwrtr8iau4x7odwc0j3g8s5ml
I wonder what other triggers beyond FCW and crash exist.

Also, I finally drove around a bit at night (also partially raining, 2+ hours after sunset), taking snapshots every 3 minutes (in hindsight I should have made it more often, since a bunch of totally unlit roads apparently were not captured, but snapshot #12 is me deliberately stopped on an unlit street - only headlights on).
Anyway, I'm uploading the 14 resulting snapshots to https://app.box.com/s/jah6ovf6y5eoq9faejeylt7fm2j2od4i
It's a mixed set of highway and country road stuff.

I'll also add a couple of DVR recordings for comparison, and here are a few snapshots that I took so we can look at them here, but the forum is not cooperating, so I am also uploading them to the above-mentioned location.
In particular, IMG20170604-234144F.JPG is the unlit street for comparison.
Continuing on from this post, I processed snapshot #12 as you suggested:
HW2.5 capabilities

Using Auto Contrast and "Exposure and Gamma" to convert to 8-bit.
(Order: fisheye, main, narrow, leftpillar, leftrepeater, rightpillar, rightrepeater)
fisheye_2405577247136_eg.jpg main_2405407486272_eg.jpg narrow_2405481286656_eg.jpg leftpillar_2405662935264_eg.jpg leftrepeater_2405856985408_eg.jpg rightpillar_2405756861984_eg.jpg rightrepeater_2405941129472_eg.jpg

Just for reference for others, the snapshot #0 files are here (the posts also describe the process):
HW2.5 capabilities
HW2.5 capabilities

From the images, I think it's fairly obvious the fisheye, main, and narrow cameras have absolutely no problem with illumination because of the headlights. The left/right repeaters have the rear illuminated by the taillights (for example the CRS sign). The only ones where it's not so clear whether they will have proper illumination (in a pitch-black scenario) are the left/right pillars, because there are some lights from the buildings on both sides.
 
@stopcrazypp, for very low light conditions you may consider using grayscale-channel-only images (the "-og" switch). The red channel has lower sensitivity and is amplified before the color computation, which may inject some noise into the image. I don't expect the difference to be big, but just mentioning this.
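e.g., assuming the same invocation pattern as the earlier examples (the still-image filename here is hypothetical):

Code:
ImageConverter -og leftpillar_image.bin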
 
Sorry guys, I'm not trying to threadjack or anything, but I'm just completely obsessed with the rain sensor business and came across some info... If you don't care, well, ignore this post by all means.

So I've been searching around to try to find out exactly how the rain sensor in pre-AP2.0 cars works. Why the weird shape, what does it do, and how does it do its thing? How would it substantially differ from a vision-based rain sensor?

The problem is that the manufacturer (Hella) doesn't make many technical details readily available on the interwebs. The closest I've come is these general brochures: here, here and here. TL;DR version:

Hella RLS.JPG


So I went and found what appears to be Hella's patent for the module from 2011/2013. Glad I did, because it explains a couple of things. The big black disk that we can see from the outside of the windshield seems to be a shared plate for several optical elements on the inside. The optical elements are sensors, or light sources allocated to sensors, designed as diodes. According to the patent drawings and descriptions, it includes one IR transmitter, four IR receivers, solar sensors, ambient light sensors, and temperature and humidity sensors. Why four IR receivers for the rain sensor, you say? Quote: "Since a rain sensor with only one allocated receiving unit can only scan a very small surface, it is provided that the sensor arrangement has at least two, in particular at least four receiving units. These receiving units all preferably exhibit the same distance to the centrally arranged transmitting unit. This ensures that impinging rain can be detected with the highest certainty possible, since the rain sensor detects a larger surface and receiving unit tolerances are compensable among each other." The patent goes on to explain how the sensor determines solar radiation and how surface temperature can be used to detect fogging on the window.

I mocked up some references on top of one of the drawings, like so:

Hella patent drawing with notes.jpg


Judging by the appearance of the so-called HVAC sensor in AP2.0 cars, I'm now beginning to doubt my own theory that it is actually a rain sensor :eek:

PS: Not asking for any particular opinions on this issue (again), just posting some data for future reference o_O
 

(Trying to bring this back on topic): FWIW, I never mentioned it, but one of my parents worked on camera-based rain sensors in the late 90's at an unnamed major automotive supplier. As a kid interested in engineering, I had more to do with that system than in retrospect I should have, and I might even have a couple lines of code in some rain sensors and brushless DC HVAC controllers :D

Based on my experience as a 10-year-old, I would agree: the HVAC sensor in AP2 is simply for A/C vs. heat decisions in borderline temperatures and to compensate A/C intensity for direct vs. indirect sunlight… not for rain sensing, and most likely not even for condensation sensing.

And I would also say, as far as AP2 goes, that the FOV of the front camera is actually fine for rain sensing. The cameras that were used when I was a little kid also did not directly focus on the glass, and were only around 300x200 px resolution. The system looked for periodic pixel diffs as opposed to raindrops themselves. It was eventually canned in favor of an IR-based solution similar to the patent drawings you saw, mainly because of certain corner cases that were not possible to work around with 90s-era technology, including lack of dynamic range, a low framerate resulting in missed raindrops, and even wiper infinite loops caused by the camera seeing the wiper going across.

I believe these problems are easy to address or have largely gone away with modern technology, but it's still a nontrivial amount of work to make this robust enough to take away your manual control and replace it with auto wipers. Note that there are even legal implications of inappropriate auto-wiper activation that Tesla cannot shove onto a supplier anymore.

Bottom line: I don't think anything technical prevents them from achieving auto wipers with camera-based hardware. I think the devil is in the details, and the fact that we haven't seen them yet is more a matter of prioritization.
 
@chillaban perhaps you can give the mothership your code from when you were 10 - we'd all be very grateful, and I'm sure a couple of free beers will make their way to you ;);)

I think older-and-wiser me would write better code if Tesla's hiring, but at the same time, work-life balance has started to matter more, which too often seems like a foreign concept in Silicon Valley :D

Plus, my day job is already overwhelming at times because management figures in a meeting decide to trivialize the effort of writing software: "Oh yes, that's possible, it's just software."
 
Oh, so clearly you already work for Tesla :rolleyes:;)
Haha, what I keep saying is that it's Silicon Valley + Cars. It's kind of amusing too and I appreciate the rest of the world (that's rooted in reality and not delusion like us here) reminding us that we are the crazy ones....

But this whole thing of announcing products before they're ready as a way to motivate engineers, always being oversubscribed and behind schedule due to borderline unrealistic ambition, etc. etc.... is not a foreign concept at all. Maybe to cars, but not to the rest of consumer electronics land. It's just that most devices don't have the same kind of scrutiny (and safety implications, to a certain degree) as Teslas. You guys are fanatics.... but apparently I am too :D
 
I've been thinking about AP2 camera angles and came to the conclusion that the placement in use today
will not allow Level 5 autonomy (no human intervention needed in any situation).
A Tesla is following a truck. One-lane city road. Traffic is slow, the weather is fine. The semi stops and doesn't move any more.
Let's suppose the truck has stopped and switched its hazards on. AP now has to overtake that semi (or just a bulky van).
This is the view (in the city the distance might be even less). What can the FSD program do?
shutterstock_846056.jpg


There is a reason why the driver's eyes are extremely close to the opposing traffic lane and not the center of the vehicle.
The left repeater cam cannot see ahead. The windscreen cameras cannot see ahead either.
But a driver would be able to see MUCH further, especially when leaning towards the driver's side window. This is what we do.

I'm absolutely confident in my findings. I have simulated the situation in real traffic. I see no solution.
And backing up is not an option. Keeping a huge following distance doesn't work in this scenario either.
There must be a camera on the left side looking forward: either in the repeater, the headlight enclosure, or the upper left windshield corner.

And just this one situation not being handled by the FSD package means a Level 4 system.
Just one camera.

Problem 2:
ALL cameras must have a wash + air-blasting function. I do not understand WTF Elon was thinking when he said "Level 5".
The majority of cameras I've seen in rain, slush, and dirty snow are covered with crap, with absolutely no visibility.
This includes the all-around cameras on many vehicles, incl. the Leaf. ABSOLUTELY NO VISIBILITY.
Yes, the 3 central cameras can be cleaned. But the others? If not, this is Level 4 :(

PS: cameras with wash + blow functions are available; this video shows just the wash version:

Sorry if that was already in this topic.
 
Repeaters face backwards, B-pillar cameras face forwards... they are further out than the driver's head can possibly be unless you stick your head out of the window. I'm not seeing any problem.

Cruise Automation demo
2:30
(and this wasn't just because it had lidar)
 