In @DamianXVI's post you can actually see one of the windshield defroster threads (see the fisheye pic).
Hard to tell from a 140-char-limited medium, but I believe this tweet by Elon potentially confirms that camera data from more than the rear view can be routed to the CID. I recall earlier pages noting that, from a cabling perspective, there is only one video output to the CID, seemingly connected only to the rear-view camera, but questions were raised about whether the data going out can be controlled via software. It appears so.
Elon Musk on Twitter
An alternate reading, based on the parent post, could be that they can only capture and put it on an attached flash drive, but I think that's less likely to be something Elon would describe as interesting to ship to end users.
Kind of off topic, but given you have access to the GPS screen, have you been able to figure out the accuracy? I've previously speculated that whatever method they are using (maybe with inertial sensors) is getting accuracy within 2-3 feet, not the 10-foot maximum of regular GPS. It would be nice to settle this question.
Ok, to tie up some loose ends from before.
The Tuning parameters screen requested:
View attachment 227921
The GPS screen (wow, 12 sats in my garage)
View attachment 227922
btw, on ape there's an "inertializer" binary that I guess might also be doing stuff related to positioning?
And this is ape on a parked car (in park), happily blasting away analyzing my garage. Still not sure how much it really consumes.
View attachment 227923
But how would I verify it? Are there calibrated lots you go to that have every feet marked and you can compare the markings against the detected data, or how does that work?
you can use google maps and enter the lat/long and compare.
you can also use your phone and enter its lat/long and compare as well.
but why do you think my phone is any more accurate than the car?
As for google maps, does it really have enough resolution to be able to tell?
Satellite map overlays are definitely not accurate enough to determine this! And virtually every navigation app uses some sort of likelihood-of-driving-biased filtering to resolve your location (e.g. it statistically prefers to put your location as driving on a road).
You would need another very high resolution GPS system such as a military laser targeting system to get meaningful reference measurements.
(I used to work on high resolution location systems for the defense industry)
BTW anybody figured out how to simply turn .h265 files into a video?
I finally created a (very color-wrong) video from the h265 frames; now to figure out how to tell ffmpeg to switch the underlying image format.
So far the hevc stream implies yuv, which is certainly wrong.
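For reference, ffmpeg can usually remux a raw Annex-B HEVC elementary stream straight into an mp4 without re-encoding. A sketch of the invocation, wrapped in Python — the filename, frame rate, and output name are assumptions, not what the dumps actually use, and as noted above the colors will still be wrong if the payload isn't really yuv:

```python
import subprocess

def wrap_hevc(raw_path="frames.h265", out_path="out.mp4", fps=30, run=False):
    """Build (and optionally run) an ffmpeg command that remuxes a raw
    HEVC elementary stream into an mp4 container without re-encoding."""
    cmd = [
        "ffmpeg",
        "-framerate", str(fps),   # raw streams carry no timing info
        "-f", "hevc",             # force the raw-HEVC demuxer
        "-i", raw_path,
        "-c:v", "copy",           # remux only, no transcode
        out_path,
    ]
    if run:
        subprocess.run(cmd, check=True)
    return cmd

print(" ".join(wrap_hevc()))
```

Since `-c:v copy` never touches the pixels, this only fixes the container, not the interpretation of the frames.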
M8L has an advertised circular error probability of 1.5 meters.
I used ffmpeg to convert it into "rawvideo" format. I ended up with a stream of raw 16-bit/px images: grayscale and quarter-size red images interleaved. I didn't think about it for long and just modified the tool I was creating for color computation to convert them frame by frame into color bmp files, then compressed those bmps back into an h264 mp4. So I end up with worse quality than the original, keep in mind.
Resulting replay from main camera: main_replay - Streamable
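A rough sketch of the kind of recombination described above — a full-resolution 16-bit grayscale plane plus a quarter-size red plane turned into per-pixel RGB by nearest-neighbor upsampling of the red. The plane layout, bit depth, and naive red/luma mixing here are all assumptions for illustration, not the actual frame format:

```python
def recombine(gray, red_quarter):
    """gray: HxW list of 16-bit luma values; red_quarter: (H/2)x(W/2)
    list of 16-bit red samples. Returns an HxW image of (r, g, b)
    8-bit tuples, nearest-neighbor upsampling the red plane."""
    h, w = len(gray), len(gray[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            luma = gray[y][x] >> 8               # crude 16-bit -> 8-bit
            red = red_quarter[y // 2][x // 2] >> 8
            # Very naive colorization: red channel from the red plane,
            # green/blue carried by luma alone.
            row.append((red, luma, luma))
        out.append(row)
    return out

gray = [[0x8000] * 4 for _ in range(4)]          # flat mid-gray 4x4 frame
red_quarter = [[0xFF00, 0x0000], [0x0000, 0xFF00]]
img = recombine(gray, red_quarter)
print(img[0][0])   # (255, 128, 128): red-tinted top-left quadrant
print(img[0][3])   # (0, 128, 128): no red in the top-right quadrant
```

A real pipeline would demosaic and color-correct properly; this only shows why a straight yuv interpretation of the stream comes out color-wrong.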
So I plugged the coordinates from the raw and corrected fields into google maps.
Raw is in the middle of my garage; corrected is in the middle of the road in front of my house.
So presumably "corrected" is the road biased one (assuming your car is in the garage). I'm surprised you can get a lock in your garage, but even so, I do wonder if it affects accuracy.
I guess you can try the same when you have a chance at an open air parking lot and see where it places your car vs the actual parking spot.
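One way to put a number on the raw-vs-corrected gap, instead of eyeballing it in google maps, is to compute the great-circle distance between the two fixes. The coordinates below are made-up placeholders, not the actual garage:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points
    (haversine formula on a spherical Earth)."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical raw (garage) vs corrected (road) fixes:
raw = (37.000000, -122.000000)
corrected = (37.000150, -122.000000)   # ~0.00015 deg further north
print(round(haversine_m(*raw, *corrected), 1))   # 16.7 (meters)
```

A handful of such measurements against surveyed points (rather than a satellite overlay) would be needed to say anything about 2-3 ft accuracy.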