Great, you could have just posted that the first time.
The guy you were replying to in the first place DID post feeds from all the cameras the first time. You still insisted maybe they weren't actually running, like the interior camera, which I pointed out was impossible since he had just shown you the cameras running.
Then you changed your story to how can we be sure it's USING all that video (why would it bother wasting power capturing it if it weren't, BTW?), so I pointed out multiple features of the car that are only possible if it's using that data.
You didn't like that, so I showed you actual data from the computers showing it using those camera feeds to ID and track objects (which, again, we already knew from the fact that it has currently-working features that are only possible if it's using that data).
Now you're moving the goalposts yet again, wanting details on what it's doing with the data you didn't even believe it was using, after first not believing it was even capturing it... (and again, SOME of the things it's doing with the data are obvious: blind spot detection, automatic lane changes, displaying nearby cars by general vehicle type, etc.)
So what does the NN actually do with that information? Where is the 3D environment map and overlay showing that the car is predicting vehicles coming from the side will cross its path?
You know, like Waymo demonstrated back in 2016.
That's what makes this Tesla demo even more suspicious. If they had this data, why didn't they show it? Prove that it's not smoke and mirrors.
They already showed a 3D environment map at the recent presentation you apparently didn't bother to watch.
They displayed a 3D environment map built from a 6-second video capture from the vehicle cameras to explain why they think LIDAR is so stupid a technology, since cameras can do detailed depth maps without it.
They also showed various clips of the system making predictions about pathing, objects, etc., including beyond the actual range of the cameras, based on the data it had in the moment.
Maybe go watch all the presentations from Monday before deciding where you want to move the goalposts next time?
Also, this has been out there for a while:
A rare look at what Tesla Autopilot can see and interpret
Radar data overlaid with camera/labeling data, plus a detailed list of objects Autopilot sees and tracks. It reports things like dimensions, relative velocity, confidence that the object actually exists, the probability that the object would be an obstacle, and so on.
And remember, this is just what hackers have been able to see; obviously folks with real developer access can see a lot more.
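To picture the kind of per-object record the hackers extracted, here's a minimal sketch in Python. All field names, units, and thresholds are my own assumptions for illustration, not Tesla's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    # Hypothetical fields mirroring what was reported: dimensions,
    # relative velocity, existence confidence, obstacle probability.
    obj_id: int
    length_m: float          # estimated object length (assumed meters)
    width_m: float           # estimated object width
    rel_velocity_mps: float  # velocity relative to the ego vehicle
    exists_prob: float       # confidence the detected object is real
    obstacle_prob: float     # probability the object is an obstacle

def is_threat(obj: TrackedObject,
              exist_thresh: float = 0.8,
              obstacle_thresh: float = 0.5) -> bool:
    """Toy filter: flag an object only if we're confident it exists
    AND it's likely an obstacle. Thresholds are made up."""
    return obj.exists_prob >= exist_thresh and obj.obstacle_prob >= obstacle_thresh

car = TrackedObject(obj_id=1, length_m=4.5, width_m=1.8,
                    rel_velocity_mps=-3.2, exists_prob=0.97, obstacle_prob=0.9)
print(is_threat(car))  # True
```

The point of tracking both an existence confidence and an obstacle probability separately is that a system can suppress phantom detections (low existence confidence) without ignoring real but ambiguous objects.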