Some people have claimed that detection of tractor trailers in the adjacent lane lacks precision due to the high clearance of trailers. If this is in fact an issue, I would think it could be addressed in software with some logic, even on the current hardware. I.e., "if the camera detects a cab in the next lane at the 2 o'clock position, look for a wonky sonar signature directly next to the car - that will be where the trailer is." Or it could keep an eye out for vehicles approaching from behind via the rear-view camera.

But I digress - why use sonar for the side view in the first place, rather than multiple cameras? A cost issue? Programming complexity that would have pushed the delivery date of Autopilot 1.0 to an unacceptable timeframe? From what we know, the current hardware "brain" - the Mobileye EyeQ3 - is capable of processing input from multiple cameras, but Tesla chose not to set it up this way in 1.0. Or is there in fact an operational advantage to sonar?

Cost savings seems a bit odd from my admittedly ignorant POV - digital camera sensors are already inexpensive, and the EyeQ3 can already handle multiple camera inputs. So why, in Oct 2014, did they not just put a 360-degree camera view in the cars, perhaps in addition to the sonar? Even if it would have taken additional time to program the system, they could have turned on the side cameras in a future software update.

It's such an obvious question that they must have discussed it extensively when choosing the initial hardware suite for 1.0. Marketing a la Apple - introduce new features step-wise to create a reason to upgrade to a new car in a few years? Because if you think about it, a modular system (i.e., the ability to swap in a better CPU later) with an over-built data bus and 360-degree cameras with swappable sensors and lenses might have made a 2014 Tesla the last car you ever purchase.
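For what it's worth, the camera-plus-sonar workaround I'm describing could be sketched roughly like this. To be clear, this is pure speculation on my part - every name, threshold, and unit here is invented for illustration, since Tesla's actual Autopilot software is not public:

```python
def trailer_likely_adjacent(camera_sees_cab_ahead: bool,
                            side_sonar_m: list[float],
                            spread_threshold_m: float = 0.5) -> bool:
    """Hypothetical fusion heuristic: infer an adjacent trailer from a
    cab sighting plus an erratic ("wonky") side-sonar signature.

    camera_sees_cab_ahead: forward camera spotted a truck cab in the
        next lane (roughly the 2 o'clock position).
    side_sonar_m: recent side-sonar distance readings in meters. A
        high-clearance trailer body sits above the sonar's field of
        view, so returns tend to be sparse or erratic.
    spread_threshold_m: invented tuning constant for this sketch.
    """
    if not camera_sees_cab_ahead:
        return False
    if not side_sonar_m:
        # Cab seen ahead but no side returns at all: assume the trailer
        # body is passing above the sonar's beam.
        return True
    # "Wonky" signature: large spread between recent readings.
    spread = max(side_sonar_m) - min(side_sonar_m)
    return spread > spread_threshold_m
```

The point is just that the fusion logic is simple conditional code, not exotic hardware - which is why it seems plausible the existing sensor suite could handle it via a software update.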
Mobileye's CEO claimed in early 2015 that the EyeQ3 Tesla has used since 2014 was running at only about 5% of its processing capability and was built to process input from multiple cameras. The battery is already swappable for when chemistry gets denser. So if Tesla had baked in all the sensors and cameras you'd ever need for true self-driving in version 1.0, you might not need another car purchase for 15 years or more.