To those trying to draw an analogy to the problem of automating aircraft, I'd like to offer a few comments from the perspective of a general aviation pilot and engineer in the aerospace industry. First of all, if you exclude takeoff, commercial airliners can already fly you to your destination and land automatically, and this happens on a regular basis.
From a control and procedures standpoint, the problem is extremely easy compared to autonomous cars. All routes and approach paths are planned in advance and guaranteed by your national aviation authority to be free of terrain and obstacles. Air traffic control guarantees that you are separated from other vehicles in the air and on the ground. With this kind of consistent environment, the only technology required is precise navigation (e.g. augmented GPS or high-integrity ground-based navigation beacons) plus aircraft control algorithms, all of which are well-understood, proven technology. (See also:
Instrument landing system - Wikipedia and
PID controller - Wikipedia) No computer vision necessary. The situation is the exact opposite for autonomous cars.
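To give a flavor of just how simple the core control law can be, here is a minimal sketch of a discrete PID controller in Python. The gains and the glideslope-deviation example are hypothetical, purely for illustration; a certified autopilot is of course far more involved, but the underlying math really is this well-trodden.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # Accumulate the integral term and estimate the error derivative.
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: error = commanded glideslope minus measured glideslope,
# updated every control cycle (gains here are made up, not tuned for anything).
pid = PID(kp=0.8, ki=0.1, kd=0.2)
pitch_command = pid.update(error=0.5, dt=0.05)
```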
The real problem for aviation is guaranteeing the integrity of the system. Aviation is extremely risk-averse, to the point where even infinitesimal risks of failure are sometimes not tolerated. To give you an idea of how little risk is tolerated, consider the FAA's WAAS, its GPS augmentation system. It enables instrument approaches (i.e. flown without outside visual reference) down to two hundred feet above the runway before the pilot must maneuver and land visually. The risk tolerance for the system outputting hazardously misleading information (HMI) is 2x10^-7 per approach, i.e. two bad-information approaches out of ten million, and even getting bad information on an approach will not necessarily lead to a crash. (See also:
WAAS Performances - Navipedia) Keep in mind that this risk tolerance applies to planes with a trained human pilot in the loop who is either hand-flying the approach or monitoring the autopilot. What would the tolerance be if no pilot were present? How much redundancy and how many system safety self-checks would be required to maintain integrity? How many flight hours would be needed to demonstrate that the system is actually safe?
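To make that number concrete, here is a quick back-of-the-envelope calculation using the integrity budget above (assuming, for simplicity, that approaches are independent):

```python
p_hmi = 2e-7        # WAAS HMI integrity budget, per approach
n = 10_000_000      # ten million approaches

expected_hmi = p_hmi * n                 # expected HMI events: 2.0
p_at_least_one = 1 - (1 - p_hmi) ** n    # ~0.86 over ten million approaches
print(expected_hmi, p_at_least_one)
```

In other words, the budget allows about two HMI events in ten million approaches, yet even at that tiny per-approach rate, at least one such event becomes near-certain across a fleet's worth of operations. That is why the pilot in the loop matters so much.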
The takeaway is that you can divide autonomous system development into multiple distinct problem areas:
- Route planning / procedures
- Control
- Integrity
For aviation, points #1 and #2 are handled already. #3 is the main obstacle to having fully autonomous aircraft. Don't discount the amount of integrity assurance that is provided by simply having a pilot on board.
For autonomous vehicles, I'm not privy to the details, but my impression is that #2 is mostly if not completely solved in many R&D vehicles (across all companies, not necessarily restricted to or including Tesla). Judging from some of the Mobileye academic talks, a lot of work has been done on #1, and there is a substantial mathematical framework for autonomous tactical planning and driving policy. #3 feels very sticky to me. A lot of tactical planning relies on computer vision based on neural networks. To my non-expert knowledge, these are not well understood, and there are simple tricks, such as adding a carefully crafted noise pattern to an image, that can cause them to misidentify objects (a sketch of the idea follows below). What would it take to prove the integrity of such a system? Given that the operational environment of driving is far more dynamic than that of aviation, and that the integrity problem is hard enough for aviation already, this seems nearly insurmountable.
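To illustrate the kind of trick I mean: the canonical example in the literature is the fast gradient sign method (FGSM), which computes a tiny, nearly invisible perturbation that pushes a classifier toward a wrong answer. Here is a minimal sketch, assuming a differentiable PyTorch image classifier `model` and a correctly labeled input batch; the names and epsilon value are illustrative, not from any real autonomous-driving stack:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # FGSM: nudge each pixel by +/- epsilon in the direction that
    # increases the classification loss, then clamp to valid range.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
```

A perturbation this small is typically imperceptible to a human, yet it can flip the classifier's output. Proving integrity bounds, in the aviation sense, for a system with failure modes like this is exactly the part that feels insurmountable to me.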
Just wanted to share some scattered thoughts. Thanks for reading.