Do you remember these S and X mules?

@FredLambert

I'm fairly sure this one picture is a Bosch test mule using a pre-AP1 Model S85, where it was outfitted with a Bosch system unrelated to anything Tesla is/was doing. Not sure where the other pictures came from, but they could be Bosch too. Here's a 90-second video of the 1,400 hours of work Bosch put into the car:
 
The datasheet says "The maximum output pixel rate is 74.25 Mp/s, corresponding to a clock rate of 74.25 MHz"
The sensor is 1280x964(*) pixels = 1,233,920 pixels, so the fastest we can read all of those pixels is 60.174 times/sec.
This is suspiciously close to their claimed max fps of 60.

In a way this makes sense, I guess: since there's no physical shutter and no separate image buffer, the faster you read the sensor, the more fps you get.
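Just to make the arithmetic explicit, here's that back-of-the-envelope calculation as a tiny Python sketch (numbers are the datasheet figures quoted above; it ignores blanking, which comes up a bit further down):

```python
pixel_rate_hz = 74.25e6          # maximum output pixel rate from the datasheet
width, height = 1280, 964        # active pixel array
naive_fps = pixel_rate_hz / (width * height)
print(f"{naive_fps:.3f} frames/sec")   # ~60.174, ignoring any blanking
```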
 
Does that mean the shutter speed is 1/60?
Yes, assuming you read the whole chip. If you only read some subset, I imagine you can do it faster and therefore reduce the "shutter speed".

Note there's no shutter in this device. Basically, the way it works is you have an array of registers (the sensor). When you read a register, its value is reset to 0; when photons hit that register in the sensor, some of them are caught and increase the register value. The more photons hit the register, the higher the value the next time you read it.
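Here's a toy sketch of that accumulate-then-read-and-reset behaviour (purely illustrative, not the actual sensor logic): each row keeps integrating photons until it is read out, and reading it zeroes it again, which is also where the rolling-shutter behaviour comes from.

```python
import random

ROWS, COLS = 4, 6                            # tiny toy "sensor"
sensor = [[0] * COLS for _ in range(ROWS)]   # accumulated photon counts per pixel

def expose(max_photons):
    # Photons keep landing on every pixel, bumping its accumulated value.
    for row in sensor:
        for c in range(COLS):
            row[c] += random.randint(0, max_photons)

def read_row(r):
    # Reading a row returns its accumulated values and resets them to zero.
    values = sensor[r][:]
    sensor[r] = [0] * COLS
    return values

frame = []
for r in range(ROWS):
    expose(max_photons=10)     # light keeps arriving while earlier rows are being read...
    frame.append(read_row(r))  # ...so each row covers a slightly shifted exposure window
print(frame)
```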
 
Beat me to it - I was going to make a similar post about the frame rate, but was busy over the weekend.

It actually shows the maximum FPS on the first page:
Frame rate: 45 fps at full resolution, 60 fps at 720p

On page 17 you can see the scheme it uses is traditional rolling shutter, with 370 pixel clocks of horizontal blanking. This horizontal blanking is due to the chip architecture and is mandatory:
"The AR0132AT uses column parallel analog-digital converters; thus short line timing is not possible. The minimum total line time is 1650 columns (horizontal width + horizontal blanking). The minimum horizontal blanking is 370."

So that changes your calculation to 74.25 MHz / (1650 x 964) = 46.68 times/sec, which corresponds to the 45 fps speed at full resolution (there is some vertical blanking at the end that accounts for the exact value).
Excluding the vertical blanking at the end, the rolling-shutter skew (the delay between reading the first and last rows of a frame) would be roughly the inverse of this number, about 21 ms.
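Extending the earlier sketch with the mandatory horizontal blanking (again just the datasheet numbers quoted here, plus a rough readout-skew figure; the real frame rate also depends on the vertical blanking):

```python
pixel_rate_hz = 74.25e6                 # 74.25 MHz pixel clock
active_width, active_height = 1280, 964
h_blank = 370                           # minimum horizontal blanking (pixel clocks)

line_time_s = (active_width + h_blank) / pixel_rate_hz   # 1650 clocks per line
fps_before_vblank = 1 / (line_time_s * active_height)
readout_skew_ms = line_time_s * active_height * 1000     # first row to last row

print(f"~{fps_before_vblank:.2f} fps before vertical blanking")  # ~46.68
print(f"~{readout_skew_ms:.1f} ms top-to-bottom readout skew")   # ~21.4 ms
```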
 
Pardon me for this newbie question. Say I have four integers representing R, C1, C2, B from the raw Bayer data of the AR0231AT - how do I compute G?
I presume you are talking about an RCCB filter (note Tesla uses the RCCC AR0132AT instead). In a regular RGGB filter, the two G1 and G2 values are just averaged together to get a G value (after interpolation/demosaicing).
AP2.0 Cameras: Capabilities and Limitations?

Let's ignore the interpolation right now (and presume those values are already interpolated values) and just focus on getting the color value. The clear pixels give you a clear value. The general idea is the average of the three colors gives you the clear value. Formula (assuming linear values) would be C = R/3 + G/3 + B/3. So you can average the two C1 and C2 to get a C value, and then solve for G (given you know R and B).

The problem is that the encoding scheme is usually not linear, and the response of each pixel type may not be linear either (the clear, red, and blue pixels let in different amounts of light). You would have to play around with the weights to correct for both of these factors.
Grayscale - Wikipedia
Here's also an article on demosaicing RCCC which might be helpful (it also explains the demosaicing process). The formula/weights they use are (Y is luminance, equivalent to C above):
Y = 0.59 * G + 0.3 * R + 0.11 * B
http://www.analog.com/media/en/technical-documentation/application-notes/EE358.pdf
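To make the arithmetic concrete, here's a rough sketch of both the naive equal-weights approach and the weighted one (the weights come from the application note above; the linear-response assumption, the example pixel values, and the RCCC variant with G == B - which is the point made a bit further down - are all simplifications, so treat it as illustrative only):

```python
def g_from_rccb(r, c1, c2, b, weights=(0.3, 0.59, 0.11)):
    """Recover G from an RCCB pixel group, assuming linear pixel values.

    Naive:    C = (R + G + B) / 3          ->  G = 3*C - R - B
    Weighted: C = 0.3*R + 0.59*G + 0.11*B  ->  G = (C - 0.3*R - 0.11*B) / 0.59
    """
    wr, wg, wb = weights
    c = (c1 + c2) / 2                      # average the two clear pixels
    g_naive = 3 * c - r - b
    g_weighted = (c - wr * r - wb * b) / wg
    return g_naive, g_weighted

def gb_from_rccc(r, c, weights=(0.3, 0.59, 0.11)):
    """RCCC (the AR0132AT case) has no blue pixel, so B can't be separated
    from G; assuming G == B and the same weights:
        C = 0.3*R + (0.59 + 0.11)*G  ->  G = B = (C - 0.3*R) / 0.7
    """
    wr, wg, wb = weights
    return (c - wr * r) / (wg + wb)

# Illustrative only - real pixels are neither linear nor perfectly calibrated.
print(g_from_rccb(r=100, c1=120, c2=118, b=60))
print(gb_from_rccc(r=100, c=119))
```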

Here's @DamianXVI's post on calculating color from Tesla's sensor that would help.
AP2.0 Cameras: Capabilities and Limitations?

With RCCC you can't get a true RGB image. You can only get a value where G and B are assumed to be the same (since you don't have either one to solve for the other), so the color palette is as posted by @DamianXVI here:

AP2.0 Cameras: Capabilities and Limitations?
 
Do we actually know the FOVs on all the cameras? According to Tesla's website, the fisheye camera is 120°. But that's it.

Wide FOV.jpg


Luckily, @verygreen continues to be a treasure trove of camera footage. So I thought maybe it's possible to extrapolate the actual FOVs from his images? Like this image, for example:

Narrow FOV.jpg


As you can see, I've superimposed the Narrow camera image on top of the Wide camera image. In order to do this, I had to do a bit of manual resizing of the Narrow image, since both pictures have the same pixel dimensions.

So I count 1280 pixels (Wide cam) versus 511 (Narrow cam) of image width. Knowing that the Wide FOV is 120°, wouldn't the Narrow FOV be ~47.9°? Is my math wrong?
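For reference, this is the (very naive) calculation I did - a quick sketch that assumes the FOV scales linearly with image width, which ignores any lens distortion:

```python
wide_fov_deg = 120.0             # fisheye FOV per Tesla's website
wide_px, narrow_px = 1280, 511   # measured image widths after superimposing

narrow_fov_deg = wide_fov_deg * narrow_px / wide_px
print(f"~{narrow_fov_deg:.1f} degrees")   # ~47.9
```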
 
Electrek reported the following, but it's unknown where they got their numbers (did they measure from the illustration?):
Main Forward Camera: Max distance 150m with 50° field of view
Narrow Forward Camera: Max distance 250m with 35° field of view
Wide Forward Camera: Max distance 60m with 150° field of view
A look at Tesla’s new Autopilot hardware suite: 8 cameras, 1 radar, ultrasonics & new supercomputer

For what it's worth, the picture on the Tesla website with the triple cams measures the following: 120 degrees, 46 degrees, 34 degrees. The forward looking side cameras measure 90 degrees (it also says the same in the description). The rearward looking side cameras measure 75 degrees. The rear camera measures 133 degrees.

I presume these are horizontal fields of view (and not diagonal) given the pictures are top down.
 
@lunitiks picture (and similar pictures posted over the past months in this thread) also helps explain why EAP - which so far uses only the narrower front cameras - really only sees adjacent lane cars (when changing lanes) towards the front, not when they are closer to front corners...

Like @buttershrimp .44 video showed, the car on the adjacent lane only appeared when it was far enough towards the front... that image of the non-wide camera shows clearly why this is.

Thank you for the efforts, guys, very informative.
 
BTW, another notable finding - the wide angle camera shows all the signs of motion blur to me (see that Ford to the left?).

At night time Autopilot will increase the exposure time in the sensors, so naturally motion blur and HDR ghosting become more visible, in all cameras.

When combined with the color calculation, HDR ghosting will often result in blobs of opposite colors around bright places (because red and gray are shifted relative to each other and end up being values from different exposures):
colorghosting.jpg


As for the FOV angle, when experimenting with radar data I ended up (by trial and error) with an angle of 45° as suitable for the main camera (medium range), which I think is within the error range of the 50° stated in the Electrek article.
 
@lunitiks picture (and similar pictures posted over the past months in this thread) also helps explain why EAP - which so far uses only the narrower front cameras - really only sees adjacent lane cars (when changing lanes) towards the front, not when they are closer to front corners...

Like @buttershrimp .44 video showed, the car on the adjacent lane only appeared when it was far enough towards the front... that image of the non-wide camera shows clearly why this is.

Thank you for the efforts, guys, very informative.
I get the impression that it sees the cars behind and in the blind spot but it does not represent them in the display yet. My car won’t autolane change if someone is in the actual blind spot. Though I haven’t tested this rigorously because I don’t want to scare other drivers. It’s the representation on the display that is lagging. It makes me think there is a lot of work in getting cameras and software to hand off vehicles as they move from one FOV to another.
 
Good info @stopcrazypp. So you're saying 46 or 50 degrees for the main camera. That means either my math is wrong, or I superimposed the Main camera (not Narrow) in my picture above, or your data is wrong. Statistically, I should be wrong on one of these accounts.
That is a main camera image you have used.

At night time Autopilot will increase the exposure time in the sensors, so naturally motion blur and HDR ghosting become more visible, in all cameras.
Yes, but when you look at the superimposed image, the wide-angle one is more noticeable (of course the relative angular velocity is also higher there, I guess)
 
Yes, but when you look at the superimposed image, the wide-angle one is more noticeable (of course the relative angular velocity is also higher there, I guess)

Ah, you mean more blur when compared to the main camera. Yes, angular velocity does matter, but it is also possible that the fisheye and main camera were working with different exposure times at the time (those are controlled individually per camera). You can check that by comparing the "Integration time" values from the register dumps of those frames.
 
I get the impression that it sees the cars behind and in the blind spot but it does not represent them in the display yet. My car won’t autolane change if someone is in the actual blind spot. Though I haven’t tested this rigorously because I don’t want to scare other drivers. It’s the representation on the display that is lagging. It makes me think there is a lot of work in getting cameras and software to hand off vehicles as they move from one FOV to another.

I think it currently sees the blind spots with the ultrasonics and thus refuses to go there? It can't show ultrasonic signals as cars on the screen, as they are not necessarily identified as cars...

@verygreen - nothing, I guess, points yet at the side cameras being active, or is there?
 
At night time Autopilot will increase the exposure time in the sensors, so naturally motion blur and HDR ghosting become more visible, in all cameras.

When combined with the color calculation, HDR ghosting will often result in blobs of opposite colors around bright places (because red and gray are shifted relative to each other and end up being values from different exposures):

As for the FOV angle, when experimenting with radar data I ended up (by trial and error) with an angle of 45° as suitable for the main camera (medium range), which I think is within the error range of the 50° stated in the Electrek article.
Yeah, it looks like chromatic aberration (different colors shifted relative to each other) is stronger than the motion blur. I have to get back home to have the tools, but looking at the different color channels can tell the difference between that and regular motion blur.