OK, the tool is here: GitHub - verygreen/cccr_raw: CCCR raw convertor/red channel extractor for Tesla EAP images, along with modest instructions in the README

Here's a sample processed shot from the wide-angle camera (the B&W version, not the red channel), plus the Photoshop equalization as per the README:
[Attachment 226122: processed wide-angle sample]
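For anyone without Photoshop, the equalization step is presumably Photoshop's Equalize command, i.e. ordinary histogram equalization: stretching the narrow range the sensor data actually occupies across the full display range. A rough numpy sketch, assuming the converter's B&W output is a single-channel image; the filenames are placeholders:

import numpy as np
from PIL import Image

# "tcd3_bw.png" is only a placeholder name for the converter's B&W output.
img = np.asarray(Image.open("tcd3_bw.png"), dtype=np.float64)

# Histogram equalization: remap each grey level to its cumulative frequency.
hist, bin_edges = np.histogram(img, bins=4096)
cdf = hist.cumsum().astype(np.float64)
cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to 0..1
bin_index = np.digitize(img, bin_edges[1:-1])       # bin of every pixel, 0..4095
equalized = (cdf[bin_index] * 255).astype(np.uint8)

Image.fromarray(equalized).save("tcd3_bw_equalized.png")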
Same image:
I started out just using ImageJ and Lightroom, because ImageJ lets you do nearly everything manually and is great with raw.
tcd3.jpg


You can get ImageJ from the NIH or from the ImageJ site.
 
Not to threadjack, but on the subject of cameras, are the AP 2.0 cameras able to handle a hilly road with lots of undulations? We have quite a few in southeastern PA, and they throw off the AP 1.0 camera quite badly. The undulations get registered as curves, and the car swerves side to side. Does having 3 focal lengths in the front-facing cameras allow something akin to stereo vision that can overcome this?

Back on topic, great discussion of the AP 2.0 cameras and their capabilities. Looking forward to continuing to follow it!

Thanks.

-Ben
 
It is kind of off-topic, but with AP2 I'm already seeing much better behavior on hilly roads. Hills seem to be AP1's Achilles' heel: it does great with even sharp curves on non-hilly roads, but throw in a few hills and it steers confidently but incorrectly. Like that guy at work you always argue with.
 
The grid has more to do with laziness than anything else :p I didn't want to totally strip the red data
So I tried reading the docs for a bit and there was no obvious way to tell the tool to "ignore/do something to every Xth pixel".
How would you easily strip the red data? Does the embedded scripting support something like that?
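For what it's worth, outside ImageJ this is easy with plain numpy slicing, since the red samples sit on a regular grid. A rough sketch, assuming a 1280x964 frame of unpacked 16-bit little-endian samples with the red pixel at the top-left of each 2x2 block (the pattern phase and the filename are assumptions):

import numpy as np

# Placeholder filename; assumes unpacked 16-bit little-endian samples.
raw = np.fromfile("fisheye.raw", dtype="<u2").reshape(964, 1280)

# Assumed RCCC phase: red at the top-left of every 2x2 block.
red = raw[0::2, 0::2]        # 482x640 quarter-resolution red channel
clear = raw.copy()
clear[0::2, 0::2] = 0        # "strip" the red data by blanking those sites

red.tofile("fisheye_red.raw")
clear.tofile("fisheye_clear.raw")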
 
I looked briefly at some of the raws and was also wondering if there are tools that can do the manipulations manually to account for the RCCC filter. Obviously I could write a tool too, but I'm way too lazy to do that.
 
Under Process -> Filters there's one called Convolve; in there you can define any kernel and it will perform the convolution.
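The same fill-in can also be written as a convolution outside ImageJ, which is essentially what the Convolve filter does. A sketch with scipy, under the same assumptions about frame size and pattern phase as in the earlier snippet (filename is a placeholder):

import numpy as np
from scipy.ndimage import convolve

raw = np.fromfile("fisheye.raw", dtype="<u2").reshape(964, 1280).astype(np.float64)

# Cross-shaped kernel: the average of the four clear neighbours of a red site.
kernel = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 1, 0]]) / 4.0
smoothed = convolve(raw, kernel, mode="nearest")

# Keep the original clear pixels, substitute the convolved value at red sites
# (again assuming red is the top-left pixel of each 2x2 block).
filled = raw.copy()
filled[0::2, 0::2] = smoothed[0::2, 0::2]

filled.astype(np.uint16).tofile("fisheye_red_filled.raw")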
 
Tried playing with the Photoshop raw opening a bit more. Interleaved has to be selected, that's for sure.

The problem seems to be a lack of sufficient granularity in the options. I can open a 1280x964 image with 16 bits and 1 channel, or a 1280x964 image with 8 bits and 2 channels (one dark, the other corrupted), or a corrupted 960x482 image with more channels/bits, but none of these options match the exact number of bits per channel the image contains, so what one gets is only partial data/partial dynamic range and/or gibberish.

So, thank you for the tips on other tools; it seems Photoshop alone is not enough for this task. Also, I could not seem to get anywhere with Camera RAW importing at all.
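If Photoshop's raw-open dialog can't express the exact packing, reading the file directly sidesteps the whole problem. A sketch assuming the dumps are unpacked 16-bit little-endian samples at 1280x964 (if they are really packed 10- or 12-bit data, an unpacking step would be needed first; the filename is a placeholder):

import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1280, 964

# One frame of unpacked little-endian 16-bit samples (an assumption).
raw = np.fromfile("fisheye.raw", dtype="<u2", count=WIDTH * HEIGHT).reshape(HEIGHT, WIDTH)

# Stretch whatever dynamic range is actually used up to 8 bits so the
# preview isn't nearly black, then save a quick look.
lo, hi = int(raw.min()), int(raw.max())
scaled = ((raw.astype(np.float64) - lo) / max(hi - lo, 1) * 255).astype(np.uint8)
Image.fromarray(scaled).save("fisheye_preview.png")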
 
It'd be cool if Tesla gave us an interface to record all the captured video, like the self-driving demo did. The internal hardware would do the processing from raw for us.
 
@verygreen can you post a processed version of the entire image set (7 images) for those of us without a C compiler readily available? Just the output files from your tool for the entire image set we have been hashing over.

As raw as possible, so we can process it further if need be...
 
The newest release notes mentioned it could capture short video clips and send them back to Tesla if the end user allows it.
Also, if you have the unfused (read: debug?) hardware, it writes clips to a SATA (read: any USB) device plugged in. The vision task certainly checks whether this dir exists and, if it does, I guess writes there.

# if board is not fused, check if a SSD drive is plugged in, automount to
# "/tmp/external_drive/" and start writing clips/RTDVs to it
if detect-ecu-unfused; then
    echo "Boot $bootcount: Settling udev $(/sbin/uptime-seconds) s after boot"
    # wait for udev to finish so that drive will be mounted if it is present
    # finish early if the external drive is already mounted
    udevadm settle --exit-if-exists=/dev/sda1 --timeout=1
    echo "Boot $bootcount: udev settled $(/sbin/uptime-seconds) s after boot"
    # Check if a SSD drive is present and "/tmp/external_drive" is not already mounted
    if [ ! -d /tmp/external_drive ] && [ -b "/dev/sda1" ]; then
        # Mount the drive and set permissions
        mkdir /tmp/external_drive
        mount -t ext4 -o data=writeback,barrier=0 /dev/sda1 /tmp/external_drive
        if [ $? -ne 0 ]; then
            # Can't mount the drive, remove folder
            rmdir /tmp/external_drive
            echo "Boot $bootcount: External drive detected but not mounted. $(/sbin/uptime-seconds) s after boot"
        else
            chmod 774 /tmp/external_drive
            chown root:autopilot /tmp/external_drive
            # Vision will dump logs into /tmp/external_drive/clips
            ensure_directory "/tmp/external_drive/clips" "1774" "root:log"
            # FIXME - RTDV logs will all go into the same folder across multiple drives.
            # only way to differentiate drives will be the time stamps in the
            # log files themselves
            ensure_directory "/tmp/external_drive/dvlog" "0770" "root:autopilot"
            ensure_directory "/tmp/external_drive/dvlog/staging" "0770" "root:autopilot"
            # Bind the usb-storage kernel thread and IRQ to the Vision CPUs
            for pid in $(pgrep usb-storage); do
                taskset -p 6 ${pid}
            done
            for pid in $(pgrep xhci); do
                taskset -p 6 ${pid}
            done
            echo "Boot $bootcount: External drive mounted $(/sbin/uptime-seconds) s after boot"
        fi
    fi
fi
 
OK @verygreen - I had trouble opening those raw files properly in Photoshop (too dark, loss of dynamic range) but the ImageJ tool mentioned above (or just Google it) works. I used it to get images for Photoshop processing. Obviously I am out of my depth here, so I may just be doing something wrong...

I think the two-channel / red-alpha-channel approach I tried above produces wrong results, to the naked eye at least. Not that it wasn't obvious already from the red trees. :) Here is the fisheye using the method I used in #113:

tcd3_two_channel.jpg


Instead, creating an RGB grayscale image (all channels grey except red) and pasting the red channel onto the red channel created an interesting effect:

tcd3_rgb_with_red_channel.jpg
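For anyone wanting to reproduce that composite outside Photoshop, a rough sketch: use the B&W frame for all three channels, then overwrite the R channel with the (upscaled) red data. The filenames, matching sizes and 8-bit depth are assumptions:

import numpy as np
from PIL import Image

# Placeholder filenames for the converter's two outputs.
grey_img = Image.open("tcd3_bw.png").convert("L")
red_img = Image.open("tcd3_red.png").convert("L").resize(grey_img.size)

grey = np.asarray(grey_img)
red = np.asarray(red_img)

# Grayscale RGB image, with the red data pasted into the R channel only.
rgb = np.dstack([grey, grey, grey])
rgb[..., 0] = red

Image.fromarray(rgb).save("tcd3_rgb_with_red_channel.png")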
 