There is only one NN at this moment; we can clearly observe it's the one being run all the time. So, given the absence of any other neural nets on the APE at this time, this one must be the one, right?
The current resolution makes sense if the current network is intended to be pretty much a drop-in replacement for the old Mobileye unit, which uses lower-resolution cameras.
Outside of this network, is the code largely still the same for AP1 and AP2? That would suggest that this may only be a temporary measure.
It's a bit hard to know what it does, but it was never accessing any cameras before. Everything camera-related was in the vision task.
Apparently driver monitor states are somewhat cryptic:

Code:
DetectedState
NotDetectedState
StrikeOutState
VisualWarningState
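The names read like a simple escalation ladder. Purely as a guess at the semantics -- the transitions and the strike-out threshold below are invented, not pulled from the firmware -- something like this toy sketch:

Code:
from enum import Enum, auto

class DriverMonitorState(Enum):
    # State names come from the firmware strings; everything else here is guesswork.
    DETECTED = auto()        # DetectedState: driver attention confirmed
    NOT_DETECTED = auto()    # NotDetectedState: no attention signal yet
    VISUAL_WARNING = auto()  # VisualWarningState: nag shown to the driver
    STRIKE_OUT = auto()      # StrikeOutState: warnings ignored too many times

def step(state, attention_ok, ignored_warnings, max_warnings=3):
    """One hypothetical tick of the monitor; the threshold is made up."""
    if attention_ok:
        return DriverMonitorState.DETECTED
    if ignored_warnings >= max_warnings:
        return DriverMonitorState.STRIKE_OUT  # e.g. lock out Autosteer for the drive
    if state is DriverMonitorState.NOT_DETECTED:
        return DriverMonitorState.VISUAL_WARNING
    return DriverMonitorState.NOT_DETECTED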
Just for fun I downsampled some frames grabbed from some of the HW2 cameras under different conditions (source) to 104x160. Kind of gives you a sense of what level of detail we're talking about. These are full size (try zooming in 500x)... Apparently 104x160 is adequate to meet the objectives of the AP2 40.1 application.
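If anyone wants to reproduce the experiment, a few lines of Pillow will do it. The filename is a placeholder, and I'm assuming 104x160 means 104 rows by 160 columns (the usual height-by-width convention for NN inputs):

Code:
from PIL import Image

# Placeholder filename; use any frame grabbed from the HW2 cameras.
frame = Image.open("hw2_frame.png")

# PIL's resize() takes (width, height), so 104x160 (HxW) becomes (160, 104).
small = frame.resize((160, 104), Image.BILINEAR)
small.save("hw2_frame_104x160.png")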
Just a thought -- this is very possibly why street signs are still not being read and interpreted. There are nowhere near enough pixels to pull meaningfully accurate data off a street sign at that resolution...

I mean, is this what the AP2.5 ECU is actually dealing with? Why on earth do they have such high-res cameras and camera sensors installed? Despite @jimmy_d's excellent posts, my brain doesn't seem to comprehend this. Instead, my brain screams the question: wouldn't the reasonable thing to do be to exploit every single pixel from the new camera sensors? Shouldn't we expect an on-board computer that chews through all of it, even if it throws much of it away after doing its thing? Why downsample before Vision gets a chance to look it over?

And how about this Tesla job ad description:

- You will work on the Camera software pipeline running on the target product platform, to deliver high resolution images at high framerate to a range of consuming devices (CPU, GPU, hardware compressors and image processors)

Just asking. Hoping for intelligent answers...
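To put rough numbers on the street-sign point: assume the main camera covers about a 50-degree horizontal field of view (Tesla doesn't publish the figure, so that's a guess) and consider a 75 cm wide stop sign at 50 m:

Code:
import math

H_FOV_DEG = 50.0   # assumed horizontal field of view of the main camera
WIDTH_PX  = 160    # horizontal resolution after downsampling
SIGN_M    = 0.75   # approximate width of a stop sign
DIST_M    = 50.0   # distance at which you'd want to read it

# Angular width of the sign, then its share of the image in pixels.
ang_deg = math.degrees(2 * math.atan(SIGN_M / (2 * DIST_M)))
px = ang_deg / H_FOV_DEG * WIDTH_PX
print(f"{ang_deg:.2f} deg across -> {px:.1f} px")  # ~0.86 deg -> ~2.7 px

At two or three pixels across there is nothing to read, which lines up with the point above.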
Yeah, but isn't the real concern whether the NN actually gets enough data in the first place to understand that it's a stop sign? Or a bicyclist. Or which way the pedestrian over there is looking/heading. Or whether those are raindrops we need to automatically wipe away...