Adding 6x more pixels to represent the same visual angle doesn't necessarily give you any more useful information than you already have, but it does mean you need (at least) 6x the processing power to process it.
The folks analyzing HW3 have concluded that it will do somewhere in the neighborhood of four times as many operations per second as what Mobileye plans to deliver in EyeQ5 in 2020. Combine a quarter of the compute with frames carrying six times the data, and the Mobileye design can process one full-resolution frame in the time the Tesla design processes 24 of its frames. The Tesla design is said to handle over 200 FPS per camera with HW3. That means the Mobileye design, assuming similarly complex self-driving software, would only be able to process 8–10 FPS at full resolution (roughly 200 ÷ 24), which would be wholly inadequate for real-world self-driving.
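The back-of-envelope math above can be checked in a few lines. All of the inputs are the claimed/estimated figures from this thread, not measured numbers:

```python
# Rough throughput comparison; every number here is a claim from the thread.
tesla_fps = 200          # HW3 said to handle >200 FPS per camera
compute_ratio = 4        # HW3 estimated at ~4x the ops/sec of EyeQ5
pixel_ratio = 7.2 / 1.2  # 7.2 MP camera vs ~1.2 MP, i.e. 6x the data per frame

# EyeQ5 has 1/4 the compute, and each full-resolution frame costs 6x as
# much to process, so its achievable frame rate at full resolution is:
eyeq5_fps = tesla_fps / (compute_ratio * pixel_ratio)
print(round(eyeq5_fps, 1))  # ~8.3 FPS, i.e. the low end of the 8-10 FPS range
```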
Assuming those numbers are correct, then IMO the only way a 7.2 MP camera has a prayer of being usable for self-driving with their hardware is if they sub-sample it down to a much lower resolution for processing, and use the full 7.2 MP data only in dashcam mode. Otherwise, it is simply way too much data to process in real time. (Pedantically, I suppose they could use subsampled images for object detection and higher-resolution data for analyzing certain objects of interest, but even then, that's a lot of data.)
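The subsample-then-crop idea is easy to sketch. This is purely illustrative, not any vendor's actual pipeline; the frame resolution (~7.2 MP) and the region-of-interest coordinates are made up for the example:

```python
import numpy as np

# A ~7.2 MP grayscale frame (3280 x 2200 is an assumed resolution).
frame = np.zeros((2200, 3280), dtype=np.uint8)

# Decimate for the detection pass: keep every other row and column,
# cutting the pixel count by 4x (a real pipeline would likely filter
# before downsampling to avoid aliasing).
small = frame[::2, ::2]

# Full-resolution crop around a hypothetical object of interest, so the
# expensive analysis only touches a small patch of the big frame.
roi = frame[1000:1100, 1600:1760]

print(small.shape)  # (1100, 1640) -> ~1.8 MP instead of ~7.2 MP
print(roi.shape)    # (100, 160)   -> tiny compared with the full frame
```

Even a crude scheme like this keeps the per-frame cost closer to what the compute budget can actually sustain, at the price of extra pipeline complexity.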