NASA’s Kenneth Goodrich, who is studying the problems of autonomous flight, imagines pilot and airplane sharing responsibilities. He speaks of “inner loop” and “outer loop” skills. The inner loops consist of basic ship-handling: staying right side up, managing power, maneuvering, maintaining speed and altitude, navigating among defined waypoints, even controlling the approach and landing. These are tasks, some more complex than others, that “are dealing with relatively straightforward/deterministic signals and physics.” In other words, either things are where they should be, or some clearly defined action is required to get them there; there are no ambiguities.
Goodrich compares a semi-autonomous airplane — one with just inner-loop capabilities — to a well-trained horse. “The airplane has instinctive or reactive intelligence (which is much simpler than general human intelligence) relative to expected environmental factors and is generally biased toward self-preservation in the absence of decisive pilot direction.” If you do the wrong thing, or do nothing, the airplane finds its way to some safe condition.
Outer loops involve more abstract kinds of perception and decision-making, ones for which we now consider the human mind indispensable. The variety of situations that can arise in flight, and the complexities of dealing with them, seem far beyond the grasp of any imaginable computer program. It is difficult to imagine a machine possessing the combination of situational awareness, initiative, judgment and resourcefulness that a good pilot brings to the cockpit, and so pilots — not to mention everybody else — tend to be skeptical of the idea that full responsibility for the execution of a flight could be entrusted to automata. It is sufficient to mention Sullenberger and the Hudson, and the case is closed.
But even full autonomy may prove more attainable than we suppose. I suspect that in 1970 the people who operated what then passed for digital computers would have said that no non-professional could ever be expected to manage one; yet today we all use them routinely. It’s partly a matter of people learning new skills, and partly one of tasks being redefined to allow computers to handle them.
I can imagine — Moore and Goodrich suggest nothing of this sort — airplanes without pilots operating in a highly regimented environment under some sort of central or distributed external control. They would fly at altitudes and along routes chosen to mesh with other flights. A PAV might join a flock of others moving along a sort of three-dimensional city street, and formate more closely with them than human pilots would dare. Conflicts would be avoided not by improvising a response to each new event, as humans do, but by ensuring that no unexpected event occurs. Philosophically, however, this model is the opposite of the coming NextGen air traffic system, in which the role of central control is diminished rather than increased and decision-making is distributed among the airborne participants.
Whatever mix of autonomous control and piloting skill flying might eventually require, the Zip Aircraft concept does not imply the extinction of aviation as we know it today. One area of current study is how to integrate large numbers of PAVs into present traffic. PAVs are expected to operate at low altitudes, from special airports or special parts of existing airports, and on routes that would avoid conflict with other types of traffic.
Of course, we know that pilotless airplanes are already here. It’s certain that they will increase in number and take on more and more diverse tasks, including the carriage of cargo, and will learn to mingle unobtrusively with piloted airplanes. But will they ever carry people? Before we prepare to hang up our goggles and scarves in the temple of Daedalus, we should take some comfort from Ken Goodrich. “Elevator-like autonomy,” he says, “could be an option in the distant future (20 to 30-plus years), but it’s far beyond the state of the art today.”