Hm, I guess this is not really directly related to cameras, and probably a bit of old news since this app has been present for a really long time, but I just found it and need to really play with it during the day.
I guess the clips would not really work on a non-debug build, but hopefully augmented reality will... right? right?

View attachment 226810
View attachment 226811 View attachment 226812 View attachment 226813
Noob question, how do you enable these debug screens?
Custom firmware? A code in the T button?
Via CAN bus? Via API?

Thanks
 
It would be really interesting if it could generate something like this to be displayed on the CID.. :)

giphy.gif
This seems like AR to me :)
 
Please show snapshots of the "Tuning Parameters" screen and others, if you wouldn't mind.

A similar free and open source implementation would be super and could evolve nicely. I wonder if this will become available, eventually, perhaps on a vehicle that is inexpensive and more in the not-for-profit style of the Raspberry Pi Foundation.
 
Please show snapshots of the "Tuning Parameters" screen and others, if you wouldn't mind.
Oh, forgot to take a picture of it.
Basically it had a cryptic list in the form of:
Parameter1
Parameter2 (so just numbered parameters, no actual descriptions) that you can adjust with a slider. Nothing interesting. I'll take one next time I play with it if I don't forget.

Noob question, how do you enable these debug screens?
Custom firmware? A code in the T button?
It's a combination of the factory mode (the code in the T button) + some other stuff.
 
The driver assist app is present on the AP1 cars too, but I only have an old AP1 snapshot and I have not tried running this yet to see how different it is. Certainly video was done differently back then.


No, this is a control app on the CID itself that controls the Autopilot behavior (to some degree).

I also discovered today that it's actually more than that, I just needed to make it full screen.
And you can also enable the instrument cluster Autopilot status app, which is also interesting and shows you a bunch of stuff.
And when Autosteer disengages, it also prints the reason.

View attachment 226841 View attachment 226842

Could not easily get the augmented feed from the APE, so possibly another compile-time feature.
Also in 17.17.4 they took away the internal screenshot feed, it seems, or somehow put it behind some control I have not discovered yet.

View attachment 226843 View attachment 226845 View attachment 226846
View attachment 226844

Driving with this stuff gives you some glimpse into AP operations even in the current crippled non-development state; let me try to upload my video.

Hi Verygreen, just sayin' that you forgot to blur the geohash of your location on one of the screenshots (it has been published on other websites as well... :s). I think Tesla can easily find out who you are, if that was not already the case.

I hope it won't cause any problems for you, since it's really interesting to discover and explore further how they created the architecture of their software (nothing really surprising to a tech guy specialized in maps...).
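
For anyone wondering how much a geohash actually gives away: decoding it is trivial. Here's a minimal Python sketch of the standard geohash decoding algorithm (the example hash at the end is a textbook one, not taken from the screenshots):

```python
# Minimal geohash decoder (standard public algorithm), to illustrate how
# precisely a published geohash pins down a location.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def decode_geohash(gh):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    is_lon = True  # bits alternate: longitude, latitude, longitude, ...
    for ch in gh:
        idx = _BASE32.index(ch)
        for bit in (16, 8, 4, 2, 1):
            if is_lon:
                mid = (lon_lo + lon_hi) / 2
                lon_lo, lon_hi = (mid, lon_hi) if idx & bit else (lon_lo, mid)
            else:
                mid = (lat_lo + lat_hi) / 2
                lat_lo, lat_hi = (mid, lat_hi) if idx & bit else (lat_lo, mid)
            is_lon = not is_lon
    return (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2

# A 9-character geohash already resolves to roughly a 5 m x 5 m cell.
print(decode_geohash("u4pruydqq"))  # ~ (57.649, 10.407)
```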
 
The camera init is done by the AP software and it's possible that only those two get any serious adjustments on the sensor.
Gotta read those dumped registers from the raw frames to see what the difference is, I guess.
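
Once those embedded register rows are parsed out (the format depends on the sensor, and I'm not assuming anything about it here), diffing two dumps is the easy part. A throwaway sketch with hypothetical file names, assuming the dumps end up as plain "address value" hex pairs, one per line:

```python
# Diff two sensor register dumps that have already been extracted into plain
# text, one "address value" hex pair per line. File names are placeholders.
def load_regs(path):
    regs = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                regs[int(parts[0], 16)] = int(parts[1], 16)
    return regs

def diff_regs(a, b):
    # Report every register whose value differs (or that only one dump has).
    for addr in sorted(set(a) | set(b)):
        if a.get(addr) != b.get(addr):
            print(f"0x{addr:04x}: {a.get(addr)} vs {b.get(addr)}")

diff_regs(load_regs("main_cam_regs.txt"), load_regs("pillar_cam_regs.txt"))
```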

Yeah, they have probably spent some time creating a calibration/correction profile for the sensors that are in use at the moment.

Did you have an opportunity to look into the new short video clip data sharing feature in 17.17.4?

IMG_1256.JPG
Quote: "we need to collect short video clips using the car’s external camera to learn how to recognize things like lane lines, street signs and traffic light positions. The more fleet learning of road conditions we are able to do, the better your Tesla’s self-driving ability will become"

It would be interesting to know what events actually trigger the video collection at the moment, which cameras are in use, and the specs of the video clips (resolution, length and data sizes). A sample video clip would be even better :).


I have a feeling they will end up using more cameras for EAP. It simply doesn't make sense to implement the promised EAP features using a more restricted set of cameras when they have the hardware on board to make the problems easier to solve.
While anything is possible, I'm not so sure about this. The 4 cameras vs. 8 cameras split is a great selling point for the options. EAP is basically the AP1 goal eventually completed, so ramp-to-ramp highway driving with better blind-spot detection using the backwards-looking side cameras. I don't see where they would need additional cameras for that scenario - whereas I see clear benefits for them in making additional features available soon for cars with FSD...

Making EAP better than described is not in their interests IMO. I guess one could make the argument they'll need more cameras for some auto lane change scenarios, but they'll still have the blanket of ultrasounds there as well... What I could see Tesla do is add some kind of 8 camera AEB to the entire fleet. That I could see them do... That would be a way to not tie it into EAP.

I have also been thinking about the number of cameras in use for EAP vs FSD. Tesla has announced four active cameras for EAP while FSD 'unlocks' all eight, and I agree that it is a good selling point since it is a clear distinction.
I also agree with chillaban: if it is necessary to implement the announced functionality, Tesla should use any cameras necessary to make the implementation as good and safe as possible.
 
I also agree with chillaban: if it is necessary to implement the announced functionality, Tesla should use any cameras necessary to make the implementation as good and safe as possible.

As I mentioned, they could probably circle around that by making some safety features standard (and those could then use any cameras). That ambiguity would allow the 4 vs. 8 to remain the PR spiel.

I mean, so far it seems Tesla intends to maintain AP1 single camera on AP2 hardware for some early customers as well...

How long these separations might last is another question, of course.
 
Legally, 8 active cams (vs the announced 4) for EAP wouldn't be a problem at all. (At least in Norway.) On the contrary: if 5, 6, 7 or 8 active cameras is what it takes to realize the "autosteer+" and "smart summon" capabilities, they must deliver no matter what. Especially when the HW is in the car already. No judge from hell would determine otherwise.

This thread still has potential. Now let's hear moar about what Tesla's "PX2" has, and what it doesn't have, nitty gritty.

I'm still curious about "lb", fisheye, augmented vision, blindspots/overlaps etc.

Verygreen, bjorn, anx, jeff, chilla, stopp, blade and guys: We're not done
 
Did you have an opportunity to look into the new short video clip data sharing feature in 17.17.4?


Quote: "we need to collect short video clips using the car’s external camera to learn how to recognize things like lane lines, street signs and traffic light positions. The more fleet learning of road conditions we are able to do, the better your Tesla’s self-driving ability will become"

It would be interesting to know what events actually trigger the video collection at the moment, which cameras are in use, and the specs of the video clips (resolution, length and data sizes). A sample video clip would be even better :).

Not super scientific, but I had some observations about this: 17.17.4. I'd love to get the software in place to do a deeper flow analysis on this traffic (though it's likely over a TLS connection; hopefully they don't pin the cert... well, hopefully they do, but I'd still love to know what this is sending).
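
Even with TLS in the way you can at least tally bytes per destination, which would show when the uploads happen and roughly how big they are. A rough scapy sketch, assuming you can see the car's traffic somewhere (e.g. a laptop bridging the hotspot); the interface name is a placeholder:

```python
# Tally bytes per destination IP seen on an interface (needs root).
# Assumes the car's traffic passes through this machine; "eth0" is a placeholder.
from collections import defaultdict
from scapy.all import IP, sniff

byte_count = defaultdict(int)

def tally(pkt):
    if IP in pkt:
        byte_count[pkt[IP].dst] += len(pkt)

sniff(iface="eth0", prn=tally, store=False, timeout=3600)  # capture for an hour

for dst, total in sorted(byte_count.items(), key=lambda kv: -kv[1]):
    print(f"{dst:>15}  {total / 1e6:8.2f} MB")
```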

I'd also love to know more. If I can help in any way I'm happy to.
 
View attachment 227180
Quote: "we need to collect short video clips using the car’s external camera to learn how to recognize things like lane lines, street signs and traffic light positions. The more fleet learning of road conditions we are able to do, the better your Tesla’s self-driving ability will become"

It would be interesting to know what events actually trigger the video collection at the moment, which cameras are in use, and the specs of the video clips (resolution, length and data sizes). A sample video clip would be even better :).
um... and I get to this screen how?
 
Not super scientific, but I had some observations about this: 17.17.4. I'd love to get the software in place to do a deeper flow analysis on this traffic (though it's likely over a TLS connection; hopefully they don't pin the cert... well, hopefully they do, but I'd still love to know what this is sending).

I'd also love to know more. If I can help in any way I'm happy to.

I have a 4G hotspot placed in my car, and I also try to observe the amounts of data going in/out. Since this hotspot is sometimes used by phones/tablets during drives, the statistics are not always accurate, but I try to reset the hotspot data counter when parking for the day, and check the counter the next morning. Before 17.17.4 I used to see a few tens of megabytes quite often, but when I checked this morning the amount of data used was more than 200 MB. I will verify that over several days before I can say this for sure.
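
If anyone wants to automate that kind of counter-watching, here's a rough sketch. The status URL and JSON field names are completely hypothetical (every hotspot exposes its counters differently, if at all); the idea is just to log rx/tx bytes with a timestamp so the overnight deltas can be attributed to the car rather than to phones/tablets:

```python
# Periodically log the hotspot's byte counters so overnight deltas can be
# computed later. The URL and JSON field names are hypothetical placeholders;
# adjust for whatever status interface your hotspot actually exposes.
import csv
import time

import requests

STATUS_URL = "http://192.168.8.1/api/status"  # placeholder

def read_counters():
    data = requests.get(STATUS_URL, timeout=5).json()
    return data["rx_bytes"], data["tx_bytes"]  # hypothetical field names

with open("hotspot_traffic.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        try:
            rx, tx = read_counters()
            writer.writerow([int(time.time()), rx, tx])
            f.flush()
        except Exception as exc:
            print("poll failed:", exc)
        time.sleep(300)  # every 5 minutes
```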

um... and I get to this screen how?

I don't remember the exact names in the menu; settings, privacy, data sharing or something...
 
I have a 4G hotspot placed in my car, and I also try to observe the amounts of data going in/out. Since this hotspot is sometimes used by phones/tablets during drives, the statistics are not always accurate, but I try to reset the hotspot data counter when parking for the day, and check the counter the next morning. Before 17.17.4 I used to see a few tens of megabytes quite often, but when I checked this morning the amount of data used was more than 200 MB.

I got impatient waiting for 17.17.4, so I connected my phone up as a hotspot.

The car burned through 3 GB of my allowance before I killed it. 17.17.4 arrived 4 days later, so I'm assuming this was all upload data (the app on my phone doesn't differentiate).
 
Ok, so based on what I see in the code it collects the zoom and left pillar cameras' output.
I guess I take that back.
I see that all cameras have corresponding dirs created for collection (new to 17.17.4), but the actual collection still might be only for the narrow and main cameras, based on some circumstantial evidence I see. I'll keep an eye on it for a more definitive answer.
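
For keeping an eye on which camera dirs actually receive clips, a dumb polling loop is enough. A sketch, with the base path as a placeholder (I'm not asserting where these dirs actually live):

```python
# Poll the per-camera collection dirs and log any new files, to see which
# cameras actually get clips written. The base path is a placeholder, not the
# real location on the car.
import os
import time

BASE = "/path/to/camera/collection/dirs"  # placeholder
seen = set()

while True:
    for root, _dirs, files in os.walk(BASE):
        for name in files:
            path = os.path.join(root, name)
            if path not in seen:
                seen.add(path)
                camera = os.path.relpath(root, BASE)
                size = os.path.getsize(path)
                print(f"{time.strftime('%H:%M:%S')}  {camera}  {name}  {size} bytes")
    time.sleep(10)
```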

Oh, I also found the data sharing settings window, interesting that it all defaulted to opt-in for me.
 
I got impatient waiting for 17.17.4, so I connected my phone up as a hotspot.

The car burned through 3 GB of my allowance before I killed it. 17.17.4 arrived 4 days later, so I'm assuming this was all upload data (the app on my phone doesn't differentiate).
I had a similar thing happen, but it looks like it downloaded 4.2GB of map data. Ever since then it stopped downloading, and I never got a UI notification for map updates.

I strongly suspect there are some AP-related tile maps that it downloads/caches over wifi, independent of the nav maps that the user downloads. I suspect too that without wifi, it just fetches these on demand as you drive near the area.
 
Oh, I also found the data sharing settings window, interesting that it all defaulted to opt-in for me.

So far, the USA has been opt-in for previous privacy settings, while other countries are opt-out. The MVPA you sign refers to Tesla's privacy policy, which already has some very broad language for fleet learning. You have to opt out of that explicitly, as you opt in by signing it. I presume the EU's privacy laws are more stringent and this doesn't fly there.
 
I'm not a regular member, but I just wanted to quickly share some experiments I did. I worked on a Bayer demosaic implementation in the past, and since I learned about the grayscale/red pattern used in the Autopilot cameras, I have been wondering how much color it is possible to retrieve from such a sensor. Thanks to the raw data posted by @verygreen I was able to do some experiments.

My approach is similar to basic Bayer demosaicing: use neighboring pixels to interpolate the missing value (the grayscale value for red-masked pixels, and the red value for unmasked ones), ending up with grayscale and red values for each pixel. Assuming grayscale is a weighted average of R, G and B, I try to work backwards from grayscale, using R, to retrieve the average of G and B (it is impossible to determine them separately, as Autopilot is technically colorblind). Unfortunately it is not straightforward. The sensor response for red and grayscale is very different, and their balance is off. I think they may not even be linear with respect to each other. I didn't find any solid solution, just did some trial-and-error guesses. The best results I managed to get are below.
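
In case anyone wants to play with the same idea, here's a stripped-down numpy sketch of it. The 3x3 neighbour averaging, the luma weights, and the red-site layout in the synthetic example are all my own guesses, not values taken from the actual sensor:

```python
import numpy as np

def fill_plane(values, valid):
    # Fill missing samples of one plane by averaging the valid pixels in each
    # 3x3 neighbourhood (edges wrap around; good enough for a quick experiment).
    v = np.where(valid, values, 0.0)
    m = valid.astype(float)
    s = np.zeros_like(v)
    c = np.zeros_like(v)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            s += np.roll(np.roll(v, dy, axis=0), dx, axis=1)
            c += np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return np.where(valid, values, s / np.maximum(c, 1e-6))

def pseudo_color(raw, red_mask, luma_weights=(0.30, 0.59, 0.11)):
    red = fill_plane(raw, red_mask)    # red plane, interpolated between red sites
    gray = fill_plane(raw, ~red_mask)  # grayscale plane, interpolated under red sites
    wr, wg, wb = luma_weights
    # gray ~ wr*R + wg*G + wb*B, and assume G == B since they can't be separated.
    gb = np.clip((gray - wr * red) / (wg + wb), 0.0, None)
    rgb = np.dstack([red, gb, gb])
    return np.clip(rgb / max(rgb.max(), 1e-6), 0.0, 1.0)

# Synthetic example: the red-site layout (every other pixel on even rows and
# columns) is a guess, not the actual sensor pattern.
raw = np.random.rand(8, 8)
red_mask = np.zeros_like(raw, dtype=bool)
red_mask[::2, ::2] = True
img = pseudo_color(raw, red_mask)
```

With real frames you would load the raw dump into `raw` and build `red_mask` from whatever the actual sensor pattern turns out to be.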

I've also noticed that the side cameras and the fisheye camera are different from the main and narrow ones. The conversion I used for the main camera gives different results for the side ones, so their calibration/response/balance might be somehow different.

scene4_trafficlightstop_2.jpg scene4_trafficlightstop_1.jpg scene3_parkinglot_2.jpg image_1.jpg scene2_parkinglot_1.jpg lake_1.jpg