The Predicament of Air France 447 | Flying Magazine

The Predicament of Air France 447

What Air France 447 taught us about the interface between humans and complex flight decks.


**This was the situation, but the information was not presented to the Air France 447 crew. To the designers of the flight control software, it must have seemed inconceivable.**

You're flying a twin-engine jet transport. The engines are at full power. The wings are rocking, but the heading is steady. The pitch attitude is 15 degrees nose up, but the VSI says that you are descending at 10,000 fpm. The flight director needles command a nose-up pitch.

What should you do?

It’s hard, isn’t it? The puzzle pieces don’t fit together. And if it’s hard when you’re sitting here reading a magazine, imagine how hard it is when you’re in the dark and in cloud, every warning and alarm in the cockpit is going off at once, you can’t tell which instrument indications are reliable and which may not be, and you have no idea what got you into this predicament in the first place.

That was Air France 447, going down over the Atlantic in 2009. It was one of those milestone accidents, like Grand Canyon and Tenerife and the 14th Street Bridge, that define a category. It was already the subject of countless articles and discussions before the Bureau d’Enquêtes et d’Analyses, or BEA, the French accident-investigation office, published its final report on the accident in July 2012. Since the A330’s data and voice recorders had been retrieved from the ocean floor by what must rank as one of the most remarkable recovery operations ever conducted, the report was extremely detailed. It raised, and left unanswered, many fundamental questions about crew training and the nature of interfaces between human crews and semiautonomous flight control systems.

The final report added few facts to what was already known about the accident. The cockpit voice recorder transcript has been available for a long time. It was well known that for 3½ minutes, during which the airliner, with 228 aboard, descended in a stalled, mushing glide toward the water, the crew floundered in a state of complete confusion and incomprehension. It was also well known that the precipitating event was a loss of reliable airspeed indications caused by ice crystals clogging the supposedly triple-redundant pitot tubes. Loss of airspeed caused the autopilot and autothrottles to disconnect, unceremoniously turning over control of the airplane, then cruising at FL 350, to the pilot flying, who happened to be the least experienced member of the crew. He reacted to this unexpected event — presumably without meaning to — by pulling the airplane up into a zoom climb and a stall.

Hand-flying an airliner, especially one with little or no static stability, at FL 350 calls for a light touch. Pilots know this. For the pilot to stall the airplane was a grievous failure of basic airmanship. Sarcastic old-timers were heard to ask: Had airline pilots, in their preoccupation with managing complex automated systems, forgotten how to fly? This was a rhetorical question; the basic flying skills of airline crews, like those of all pilots, vary widely. Indeed, the BEA enumerated other instances of pitot failure in which crews had reacted almost as badly, even though loss of airspeed was an emergency routinely practiced in the simulator. Strangely enough, in none of the previous cases — there were more than a dozen — had crews followed prescribed “unreliable airspeed” procedures. Here, for instance, is a Brazilian A330 crew dealing with a similar airspeed malfunction in 2003, according to a BEA report:

When the AP disengaged, both pilots made pitch-up inputs (one went to the stop) that resulted in an increase in pitch of 8°. On several occasions, the stall warning was triggered due to the nose-up inputs, and the crew reacted with strong pitch-down inputs. During the 4 minutes that the sequence lasted, the load factor varied between 1.96 g and -0.26 g, the pitch attitude reached 13° nose-up and the angle of attack reached 10°.

On the other hand, once the airplane was fully stalled and mushing downward, how many crews, however well trained, could have figured out what to do about it, since the situation lay outside the boundaries of any training or, for that matter, even any flight test scenario? If during several minutes the crew, including a captain with experience in a wide range of aircraft types, could not figure out what was going on, was attitude and flight path information being presented in the most usable way? A profile view of the airplane, nose up, with a flight path arrow angled steeply downward would have made everything perfectly clear in an instant. But in the normal course of events, what would be the use of such a display?

The English version of the BEA report frequently used the phrase “startle effect” to translate the rather less specific French surprise. The English expression seems intended to confer a sort of scientific prestige upon the common experience of alarm and confusion following a sudden, unexpected (and generally unwelcome) event. But many pilots have learned from experience that the combination of urgency and fear can produce a sort of cognitive paralysis, and the BEA report noted that this element is generally missing from airline pilots’ recurrent simulator training.

The BEA also noted that crews receive minimal simulator training in hand-flying at high altitude, and none at all in high-altitude stall recovery, even though cruising angles of attack in the upper flight levels are quite close to the stall.

But, while some sort of startle-induced reflex or muscular miscue might help to explain the pilot’s initial and disastrous pitch-up command, the subsequent confusion of the crew scarcely requires explanation. Even though the airplane had gotten into its predicament quite easily, it was now in the realm of the unknown. Simulators did not visit angles of attack of 40 degrees, in part because no one knew for sure how transports would behave there. Wind-tunnel investigations of high-altitude “upsets” produced confusing results and were unreliable because of scale effects.

Furthermore, the Airbus design philosophy makes a point of hiding “unnecessary” information from the pilots. Redundant cues are avoided. For example, the Airbus sidesticks do not communicate with one another in such a way that one pilot can tell, by the motion of his stick, what the other pilot is doing. They also lack proportionate resistance or “feel,” which might have alerted the pilot to his presumably unintended pitch command. Similarly, when the autothrottle is operating, the throttle levers do not move, even though power is changing. Finally, Airbus pilots are scarcely aware of pitch trim, which automatically, continually and silently operates to zero out elevator actuation forces. In this case, however, trim was important, because the pilot’s continually holding the stick back had run the autotrim to its nose-up stop. If the crew had managed to understand that they needed to push over into a 35-degree dive to recover, they would probably have had to retrim manually.

The problems were not confined to cockpit ergonomics. The BEA criticized shortcomings in training as well. “The combination of the ergonomic design of the stall warning, the conditions in which airline pilots are trained and exposed to stalls during their professional training, and the process of recurrent training does not generate the expected behaviors with acceptable reliability.”

An A330 pilot once wrote to me that although “the systems design and presentation [are] superb ... safely flying the 320-, 330- and 340-series Airbus requires something of a nonpilot mindset.” The advice he gives new pilots is to treat the flight “as a video game.” Boeing applied a somewhat more classical, pilot-centric philosophy, and a richer array of secondary cues, to the design of its fly-by-wire airplanes (777 and 787), and pilots have for years argued passionately over the merits of the two approaches.

At this point it has become obvious, from this event and plenty of others, that the transition from computer-mediated “protected” flight to manual “direct law” or anything close to it is fraught with difficulties. In fact, this was a well-known problem with ordinary autopilots long before fully digital fly-by-wire control systems came into use. “Out of the loop” of the handling of the airplane for long periods, human crews falter when they are thrust unexpectedly back into it. They don’t know where they are, what is real, what is spurious. Startled, frequently fixating on an incorrect interpretation of the situation, they may do more harm than good.

A very simple backup autopilot, without reliance on airspeed, could have kept the wings level and the pitch attitude at five degrees while the crew got things sorted out. But the titanically complex and carefully reasoned Airbus flight system made no effort to ensure a smooth transition from digital to human control.
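For readers inclined to think in concrete terms, the wings-level, five-degrees-nose-up fallback described above amounts to nothing more than a proportional attitude-hold loop driven by the inertial attitude reference alone, with no airspeed input at all. Here is a minimal sketch of that idea; the function names, gains and limits are purely illustrative assumptions, not taken from any actual flight control system.

```python
# Hypothetical sketch of a "backup autopilot" that holds wings level and
# 5 degrees nose-up pitch using attitude data only (no airspeed).
# All names, gains and limits are invented for illustration.

PITCH_TARGET_DEG = 5.0   # nose-up pitch to hold while the crew sorts things out
ROLL_TARGET_DEG = 0.0    # wings level
K_PITCH = 0.04           # elevator command per degree of pitch error (assumed gain)
K_ROLL = 0.02            # aileron command per degree of roll error (assumed gain)

def backup_commands(pitch_deg: float, roll_deg: float) -> tuple[float, float]:
    """Return (elevator, aileron) commands, each clamped to [-1, 1]."""
    elevator = max(-1.0, min(1.0, K_PITCH * (PITCH_TARGET_DEG - pitch_deg)))
    aileron = max(-1.0, min(1.0, K_ROLL * (ROLL_TARGET_DEG - roll_deg)))
    return elevator, aileron

# At 15 degrees nose-up with the right wing 10 degrees down, the loop
# commands nose-down elevator and left aileron, both negative here:
elev, ail = backup_commands(pitch_deg=15.0, roll_deg=10.0)
```

The point of the sketch is how little such a mode needs: two attitude angles in, two surface commands out, nothing dependent on the pitot system that failed on AF447.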

The abrupt “Your airplane!” approach is particularly strange, because the edifice of digital fly-by-wire stands upon the premise that airplanes need to be protected from mistakes that human crews will make. In their zeal for protecting the airplane, Airbus programmers seem to have forgotten that human crews need a little protecting as well. Did they really think that what happened on Air France 447 was inconceivable? Do they still think so?