On May 2 at Edwards Air Force Base in California, U.S. Air Force Secretary Frank Kendall climbed aboard an F-16 test bed dubbed VISTA (Variable In-Flight Simulator Test Aircraft) that was retrofitted with an artificial intelligence platform. Secretary Kendall and a back-seat test pilot went along for the ride without touching the controls while another human-controlled F-16 engaged VISTA in a mock dogfight.
Reading between the lines of media reports, it would appear that the demonstration was a draw between the two fighters. I applaud Kendall for placing himself in harm’s way, and the grandstanding performance served a purpose: It was very much a public endorsement of AI for airborne warfare.
Airborne warfare is certainly a viable application for AI, especially if it can save lives. But what about civilian use of AI aboard airliners? Are pilots ready to accept the technology as it is integrated into the cockpit? Will it reduce the required crew of two to one pilot plus AI? Will it eventually become the ultimate replacement for all airline pilots? And will passengers embrace the concept of a pilotless aircraft? Some of the advanced air mobility concepts are headed that way, so will airliners follow?
Before beginning such a philosophical discussion, it’s best to gain at least a basic understanding of AI because it’s a complex topic that seems to be discussed mostly in general terms. We’ve all seen examples of the technology being used to remarkably replicate famous personalities via photo, video, and voice. With that concept embedded in our psyches, it’s no wonder we perceive a Westworld scenario of robots turning on their human creators as an end state.
In my nine years flying the Boeing 777, I always marveled at how consistent the automation was at landing the aircraft smoothly, especially the flare and touchdown. Because of the unstated rivalry between my performance and the machine’s, I would rarely allow the autoland system to complete its job all the way to the concrete unless weather conditions dictated otherwise. It was a love/hate relationship.
That said, the autoland function, which utilized three separate autopilots, was a very basic form of AI. The system operated under a specific set of parameters. Pilots had to instruct the system through switchology and the programming of the flight management computer (FMC). It was a routine practiced in recurrent training. In today’s vernacular, an autoland probably wouldn’t qualify as AI.
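For illustration only, here’s a minimal sketch in Python of the kind of deterministic, parameter-gated logic that separates a system like autoland from modern AI. Every name and threshold below is invented for the example, not drawn from the 777’s actual flight control logic.

```python
# Hypothetical sketch (not Boeing's actual logic): a rule-based check
# of the sort that governed autoland engagement. Fixed parameters in,
# fixed answer out -- no learning, no prediction.

def autoland_available(autopilots_engaged: int,
                       ils_captured: bool,
                       radio_altitude_ft: float,
                       flare_armed: bool) -> bool:
    """Return True only when every designer-specified condition is met."""
    return (autopilots_engaged == 3        # triple-channel redundancy
            and ils_captured               # localizer/glideslope captured
            and radio_altitude_ft < 1500   # below an illustrative arming altitude
            and flare_armed)               # flare mode armed

# The point: the system never decides anything its designers didn't
# enumerate in advance, which is why it wouldn't be called AI today.
print(autoland_available(3, True, 1200.0, True))  # True
```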
Now AI is considered “generative.” Rather than just relying on human data input, generative AI utilizes predictive algorithms, a series of formulas or instructions, to create an action or multiple actions. In the case of text, the computer can generate original content, a novel, for example. These actions or creations are achieved through the extraction of numerous, seemingly limitless data sources, such as information scraped from the internet.
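To make “predictive” concrete, here’s a toy Python sketch of the idea, assuming nothing about how any real model works: it learns which word tends to follow which in a tiny sample, then generates new text one predicted word at a time.

```python
import random
from collections import defaultdict

# Toy illustration of predictive generation (my sketch, not any
# production system): learn word-to-word patterns, then sample.
corpus = "the autopilot flies the approach and the autopilot lands".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)   # record what followed each word

word = "the"
output = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break                      # no learned continuation; stop
    word = random.choice(candidates)  # "predict" the next word from data
    output.append(word)

print(" ".join(output))  # original-looking text assembled from learned patterns
```

Scale that idea up by billions of parameters and data points and you have the rough shape of generative text AI.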
It’s not a perfect system, because the generated content can sometimes include what’s called “hallucinations” in AI parlance. A portion of the material could be misinformation, slightly wrong or totally incorrect. Remember, some of the data is extracted from sources such as internet sites that are notoriously inaccurate in and of themselves.
The generative AI flying the VISTA F-16 was developed to maneuver in an aerial dogfight using data obtained both in specially equipped simulators and from the aircraft itself. Beyond that information, I’m certain the Air Force keeps the project under tight wraps. But I’ll just make the safe assumption that hallucinations are kept out of the equation.
Artificial general intelligence (AGI) is the form of the tech that most fear. For those of my vintage, AGI is the reason Yul Brynner’s gunslinger character in the 1973 film Westworld acts murderously outside of his human programming. But this phase of the technology is mostly theory. Computers aren’t quite capable of developing their own intelligence or personalities beyond the data that has been input or extracted.
So, how could generative AI assist and coexist in an airline cockpit? First, I hate to admit it, but airline pilots resist change. It’s in our nature to be skeptical. Introduce a new procedure or cockpit system and we’ll find a problem with it. Introduce AI and eyebrows will rise.
When I transitioned to the Boeing 767 from the Jurassic Jet (B-727), an aircraft that was still controlled with pulleys, cables, and a cantankerous autopilot, the idea of operating a machine from switches in the eyebrow panel was foreign to my being. On one occasion during simulator training, I rebelled and clicked off the autopilot, expressing my frustration to our check airman. I compelled him to allow me the dignity of completing a one-engine landing with my own bare hands, promising to comply with airline automation protocol from that point forward. Eventually, I succumbed to the technology, but it was a struggle.
As a use for AI in an airline cockpit, consider the following scenario: Flight XYZ is 30 minutes from its arrival at John F. Kennedy International Airport (KJFK). The reported runway visual range (RVR) is at minimums. If the approach is flown and a go-around becomes necessary, does the flight make another attempt, proceed to the flight plan alternate, or divert somewhere else?
The scenario described above is not an unusual situation. If it’s managed by a proactive crew, the decision is already made prior to beginning the approach. But if data, including weather, fuel, distance to alternates, gate availability, hotel availability, passenger connections, crew duty legalities, mechanical status, etc., is available to an onboard AI system, it becomes a computer algorithm problem.
When the data is crunched, the crew can review the computer’s output, which might reinforce the decision already made, or it might prompt consideration of a different solution. The use of AI becomes collaborative, potentially reducing cockpit workload.
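As a hedged sketch of what that algorithm problem might look like, the toy Python below scores two options. All the field names, weights, and numbers are invented for illustration; a real dispatch or AI tool would draw on far richer data and modeling.

```python
from dataclasses import dataclass

# Hypothetical decision-scoring sketch, not any airline's actual tool.

@dataclass
class Option:
    name: str
    fuel_remaining_lb: float    # projected fuel after flying the option
    rvr_ft: int                 # reported visibility at that field
    open_gates: int
    stranded_connections: int   # passengers who would misconnect

def score(o: Option) -> float:
    # Higher is better: weight fuel margin and weather up,
    # weight passenger disruption down. Weights are made up.
    return (0.4 * o.fuel_remaining_lb / 1000
            + 0.3 * o.rvr_ft / 100
            + 0.2 * o.open_gates
            - 0.1 * o.stranded_connections)

options = [
    Option("Second attempt at KJFK", 6500, 1200, 3, 0),
    Option("Divert to alternate KPHL", 4200, 4000, 1, 140),
]
best = max(options, key=score)
print(f"Suggested: {best.name}")   # the crew reviews, then decides
```

The output is a suggestion for the crew to weigh, not a command, which is exactly the collaborative role described above.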
In July 1989, United Airlines Captain Al Haynes famously led one of the most significant demonstrations of crew resource management. When his DC-10, operating as Flight 232, suffered a catastrophic, uncontained engine failure after a fan disc separated from the No. 2 (center) engine, severing the lines of all three hydraulic systems, the airplane was controllable only through the use of differential power. Of the 296 passengers and crew on board, 184 survived the “impossible landing” in Sioux City, Iowa.
McDonnell Douglas indicated that the scenario of a complete hydraulic failure was impossible. Would AI have offered the same solution? Would AI have offered a better solution? Would AI have recommended the incredible crew coordination and ingenuity that was demonstrated? I am certainly no AI expert, but my answer would be negative.
Twenty years later, US Airways Flight 1549 landed on the Hudson River after a flock of geese was ingested into both engines at relatively low altitude, causing a dual flameout. In my estimation, AI might have created a distraction, taking away from the succinct decisions and actions of Captain Chesley “Sully” Sullenberger and First Officer Jeffrey Skiles.
Based on the current state of AI technology, it would seem that a pilotless airliner is not even on the distant horizon. Could AI be an asset to the cockpit in its present form? Sure, but not to replace one of the pilots. That’s fodder for a whole ’nother story.
This column first appeared in the July/August Issue 949 of the FLYING print edition.