So I guess some of you have already seen this Electrek article:
A rare look at what Tesla Autopilot can see and interpret
The data it contains was obtained from my old unicorn snapshots, plus another source I got recently that includes much more recent snapshots from an 18.10.4 car.
The videos below only use interpreted radar data (so it's not raw radar output; there's some classification, and I'm sure some culling too. In particular, notice how overhead signs are quickly discarded).
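I don't know the actual classification logic, but the overhead-sign culling can be sketched as a toy heuristic (every name and threshold here is hypothetical, not taken from the snapshots): a return that closes at roughly ego speed (i.e. it's stationary in the world frame) and sits well above the road is probably a sign gantry or bridge, not an obstacle.

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    x: float   # longitudinal distance, m
    y: float   # lateral offset, m
    z: float   # height above the radar, m
    vx: float  # relative longitudinal velocity, m/s (negative = closing)

def looks_like_overhead(ret: RadarReturn, ego_speed: float,
                        min_height: float = 4.0) -> bool:
    """Toy heuristic only: a world-stationary return mounted well
    above the road is likely an overhead sign, so cull it."""
    world_stationary = abs(ret.vx + ego_speed) < 0.5  # closing at ~ego speed
    return world_stationary and ret.z > min_height

# A sign gantry 5 m up, closing at ego speed (25 m/s) -> culled:
print(looks_like_overhead(RadarReturn(80.0, 0.0, 5.0, -25.0), 25.0))  # True
# A stopped car at bumper height -> kept:
print(looks_like_overhead(RadarReturn(40.0, 0.0, 0.5, -25.0), 25.0))  # False
```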
Separately, another set of snapshots from this other source included nice "autopilot detected objects" data, apparently used for debugging object depth.
The objects are reported separately for each camera (currently only main and narrow, though the header includes every camera with a luminance level for each, so we now know they also use the other cameras at least for luminance checks). The object description looks like this:
Code:
{
  "vision_loc_x": "23.495266",
  "vision_loc_y": "-3.82337713",
  "vision_loc_z": "0",
  "depth": "15.9356184",
  "velocity": "4.39441586",
  "log_likelihood": "3.50788665",
  "rad_loc_x": "15.125",
  "rad_loc_y": "-3.5",
  "rad_loc_z": "0.5",
  "rad_vx": "-3.5625",
  "rad_vy": "-0.25",
  "prob_obstacle": "0.96875",
  "prob_existence": "0.96875",
  "moving": "true",
  "stopped": "false",
  "stationary": "false",
  "bbox_top_left_x": "413",
  "bbox_top_left_y": "207",
  "bbox_height": "59",
  "bbox_width": "89",
  "bbox_core_x": "457",
  "bbox_core_y": "236"
}
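Everything in the dump is stringly typed, so a first step when poking at these records is to coerce the fields to real types and sanity-check the radar position against the reported depth. A minimal sketch (the field meanings are my guesses; units assumed to be metres and m/s):

```python
import json
import math

# One "autopilot detected objects" record as it appears in the dump
# (note that every value, including numbers and booleans, is a string).
raw = """{
  "vision_loc_x": "23.495266", "vision_loc_y": "-3.82337713", "vision_loc_z": "0",
  "depth": "15.9356184", "velocity": "4.39441586", "log_likelihood": "3.50788665",
  "rad_loc_x": "15.125", "rad_loc_y": "-3.5", "rad_loc_z": "0.5",
  "rad_vx": "-3.5625", "rad_vy": "-0.25",
  "prob_obstacle": "0.96875", "prob_existence": "0.96875",
  "moving": "true", "stopped": "false", "stationary": "false",
  "bbox_top_left_x": "413", "bbox_top_left_y": "207",
  "bbox_height": "59", "bbox_width": "89",
  "bbox_core_x": "457", "bbox_core_y": "236"
}"""

def coerce(rec: dict) -> dict:
    """Turn the stringly typed dump record into floats and bools."""
    return {k: v == "true" if v in ("true", "false") else float(v)
            for k, v in rec.items()}

obj = coerce(json.loads(raw))

# Ground-plane range of the radar return vs the reported "depth" field:
radar_range = math.hypot(obj["rad_loc_x"], obj["rad_loc_y"])
print(f"radar range ~{radar_range:.2f} m, reported depth {obj['depth']:.2f} m")
print(f"obstacle: {obj['prob_obstacle']:.0%}, moving={obj['moving']}")
# -> radar range ~15.52 m, reported depth 15.94 m
# -> obstacle: 97%, moving=True
```

Interestingly, the radar ground-plane range lands close to the `depth` field, while `vision_loc_x`/`vision_loc_y` would put the object noticeably farther out; which one is the fused estimate is an open question.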
This one was interesting in that there's a stopped truck, and it has only a 25% probability of being an obstacle.
There's still some work going on here by others diving into the data to compare the raw radar stream (also included) against the interpreted stream, so some more data might come out of these snapshots.
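For anyone lining the two streams up, a crude first pass is nearest-neighbour matching of interpreted tracks against raw returns in the ground plane (the positions and matching tolerance below are made-up examples, not values from the snapshots):

```python
import math

def nearest_raw(track_xy, raw_points, max_dist=2.0):
    """Match an interpreted track to the closest raw radar return,
    or None if nothing is within max_dist metres."""
    best = min(raw_points, key=lambda p: math.dist(track_xy, p))
    return best if math.dist(track_xy, best) <= max_dist else None

# Hypothetical raw returns as (x, y) in metres:
raw_returns = [(15.0, -3.4), (42.0, 0.1), (80.0, 3.9)]
print(nearest_raw((15.125, -3.5), raw_returns))  # (15.0, -3.4)
print(nearest_raw((5.0, 0.0), raw_returns))      # None
```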
Hopefully we'll also get some even more recent snapshots.
Once again, I want to thank @DamianXVI for the awesome visualization tools!