News Warner
Flying is safe thanks to data and cooperation – here’s what the AI industry could learn from airlines on safety

  • Flying is incredibly safe: the risk of dying as a passenger on a U.S. airline is less than 1 in 98 million, lower than the odds of winning most lotteries.
  • The aviation industry has learned hard lessons from accidents over the years, but has transitioned to a proactive and predictive approach to safety through data analysis and sharing.
  • The Commercial Aviation Safety Team, formed in 1997, has been instrumental in adopting a data-driven approach to safety, analyzing millions of flight data points daily to spot emerging risks and trends.
  • AI companies can learn from the aviation industry’s success by adopting similar proactive and predictive approaches to safety, including sharing data and implementing safety measures proactively rather than reactively.
  • A coordinated effort among AI companies, regulators, academia, and other stakeholders could help prevent calamities caused by AI, with a model like the Commercial Aviation Safety Team serving as a starting point for such an initiative.

Flying is routine and safe. Hard lessons were learned to make it that way. Vernon Yuen/NurPhoto via Getty Images

Approximately 185,000 people have died in civilian aviation accidents since the advent of powered flight over a century ago. However, over the past five years on U.S. airlines, the risk of dying was almost zero. In fact, you have a much better chance of winning most lotteries than you do of dying as a passenger on a U.S. air carrier.

How did flying get so safe? And can we apply the hard-earned safety lessons from aviation to artificial intelligence?

When humanity introduces a new paradigm-shifting technology and that technology is rapidly adopted globally, the future consequences are unknown and often collectively feared. The introduction of powered flight in 1903 by the Wright brothers was no exception. There were many objections to this new technology, including religious, political and technical concerns.

It wasn’t long after powered flight was introduced that the first airplane accident occurred – and by not long I mean the same day. It happened on the Wright brothers’ fourth flight. The first person to die in an aircraft accident was killed five years later in 1908. Since then, there have been over 89,000 airplane accidents globally.

I’m a researcher who studies air travel safety, and I see how today’s AI industry resembles the early – and decidedly less safe – years of the aviation industry.

From studying accidents to predicting them

Although tragic, each accident and each fatality represented a moment for reflection and learning. Accident investigators attempted to recreate every accident and identify accident precursors and root causes. Once investigators identified what led up to each crash, aircraft makers and operators put safety measures into effect in hopes of preventing additional accidents.

For example, if a pilot in the earlier era of flight forgot to lower the landing gear prior to landing, a landing accident was the likely result. So the industry learned to install warning systems that would alert pilots to the unsafe state of the landing gear – a lesson learned only after accidents. This reactive process, while necessary, is a heavy price to pay to learn how to improve safety.

Over the course of the 20th century, the aviation world organized and standardized its operations, procedures and processes. In 1938, President Franklin Roosevelt signed the Civil Aeronautics Act, which established the Civil Aeronautics Authority. This precursor to the Federal Aviation Administration included an Air Safety Board.

The fully reactive safety paradigm shifted over time to proactive and eventually predictive. In 1997, a group of industry, labor and government aviation organizations formed a group called the Commercial Aviation Safety Team. They started to look at the data and attempted to find trends and analyze user reports to identify risks and hazards before they became full-blown accidents.

The group, which includes the FAA and NASA, decided early on that there would be no competition among airlines when it came to safety. The industry would openly share safety data. When was the last time you saw an airline advertising campaign claiming “our airline is safer than theirs”?

It’s down to data

The Commercial Aviation Safety Team helped the industry transition from reactive to predictive by adopting a data-driven, systemic approach to tackling safety issues. It generated this data using reports from people and data from aircraft.

Every day, millions of flights occur worldwide, and on every single one of those flights, thousands of data points are recorded. Aviation safety professionals now use Flight Data Recorders – long used to investigate accidents after the fact – to analyze data from every flight. By closely examining all this data, safety analysts can spot emerging and troublesome events and trends. For example, by analyzing the data, a trained safety scientist can spot if certain aircraft approaches to runways are becoming riskier due to factors like excessive airspeed and poor alignment – before a landing accident occurs.
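The kind of exceedance monitoring described above can be illustrated with a minimal sketch. The thresholds, field names and reference values below are assumptions for illustration only, not real airline stabilized-approach criteria:

```python
# Hypothetical sketch of flight-data exceedance monitoring: flag approaches
# whose recorded airspeed or centerline deviation exceeds illustrative limits,
# then track how often approaches are flagged. All thresholds are assumed.

REF_SPEED_KT = 140      # assumed target approach speed (knots)
SPEED_MARGIN_KT = 15    # assumed allowable excess over target
MAX_OFFSET_DEG = 1.0    # assumed allowable deviation from runway centerline

def is_unstable(approach):
    """Return True if this approach exceeds either illustrative limit."""
    return (approach["airspeed_kt"] > REF_SPEED_KT + SPEED_MARGIN_KT
            or abs(approach["loc_dev_deg"]) > MAX_OFFSET_DEG)

def unstable_rate(approaches):
    """Fraction of recorded approaches flagged as unstable."""
    if not approaches:
        return 0.0
    return sum(is_unstable(a) for a in approaches) / len(approaches)

flights = [
    {"airspeed_kt": 138, "loc_dev_deg": 0.2},
    {"airspeed_kt": 162, "loc_dev_deg": 0.4},   # too fast
    {"airspeed_kt": 141, "loc_dev_deg": -1.6},  # poorly aligned
]
print(unstable_rate(flights))  # 2 of 3 approaches flagged
```

A rising flagged-approach rate over weeks of data is exactly the kind of trend a safety analyst would investigate before any landing accident occurs.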

Flight voice and data recorders are well known from accident investigations, but the data from ordinary flights is invaluable for preventing accidents.
YSSYguy/Wikimedia Commons, CC BY-SA

To further increase proactive and predictive capabilities, anyone who operates within the aviation system can submit anonymous and nonpunitive safety reports. Without guarantees of anonymity, people might hesitate to report issues, and the aviation industry would miss crucial safety-related information.

All of this data is stored, aggregated and analyzed by safety scientists, who look at the overall system and try to find accident precursors before they lead to accidents. The risk of dying as a passenger onboard a U.S. airline is now less than 1 in 98 million. You are more likely to die on your drive to the airport than in an aircraft accident. Now, more than 100 years since the advent of powered flight, the aviation industry – after learning hard lessons – has become extremely safe.

A model for AI

AI is rapidly permeating many facets of life, from self-driving cars to criminal justice, hiring and loan decisions. The technology is far from foolproof, however, and errors attributable to AI have had life-altering – and in some cases even life-and-death – consequences.

Nearly all AI companies are trying to implement some safety measures. But they appear to be making these efforts individually, just like the early players in the aviation field did. And these efforts are largely reactive, waiting for AI to make a mistake and then acting.

What if there were a group like the Commercial Aviation Safety Team where all AI companies, regulators, academia and other interested parties convened to start the proactive and predictive processes of ensuring AI doesn’t lead to calamities?

From a reporting perspective, imagine if every AI interface had a report button that a user could click to not only report potentially hallucinated and unsafe results to each company, but also report the same to an AI organization modeled on the Commercial Aviation Safety Team. In addition, data generated by AI systems, much like we see in aviation, could also be collected, aggregated and analyzed for safety threats.
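A pooled reporting pipeline like the one imagined above could be sketched as follows. The record fields and hazard categories here are hypothetical, invented only to show how anonymized reports from many vendors might be aggregated to surface trends:

```python
# Hypothetical sketch of a shared AI incident-report pipeline: anonymized
# reports from multiple vendors are pooled, then counted by hazard category
# so that rising categories stand out. Field names are assumptions.
from collections import Counter

def aggregate(reports):
    """Count pooled reports per hazard category, most common first."""
    return Counter(r["category"] for r in reports).most_common()

reports = [
    {"vendor": "A", "category": "hallucination"},
    {"vendor": "B", "category": "hallucination"},
    {"vendor": "A", "category": "unsafe-advice"},
]
print(aggregate(reports))  # hallucination tops the list
```

As in aviation, the value comes from pooling: a category that looks like noise inside one company can be an unmistakable trend across the whole industry.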

Although this approach may not be the ultimate solution to preventing harm from AI, if Big Tech adopts lessons learned from other high-consequence industries like aviation, it just might learn to regulate, control and, yes, make AI safer for all to use.

The Conversation

James Higgins receives funding from the FAA to conduct research regarding flight safety topics. He is also the co-founder of two companies: HubEdge, which helps airlines optimize their ground operations, and Thread, which helps utilities operate drones to collect information about their assets.


Q. How many people have died in civilian aviation accidents since powered flight was introduced?
A. Approximately 185,000 people have died in civilian aviation accidents since the advent of powered flight over a century ago.

Q. What is the risk of dying as a passenger on a U.S. air carrier compared to winning most lotteries?
A. The risk of dying as a passenger on a U.S. air carrier is almost zero, and you have a much better chance of winning most lotteries than you do of dying in an aircraft accident.

Q. How did flying get so safe over the years?
A. Flying got safer through a combination of hard-earned lessons learned from accidents, organizational changes, and the adoption of data-driven safety measures.

Q. What is the Commercial Aviation Safety Team, and how does it contribute to aviation safety?
A. The Commercial Aviation Safety Team is a group that includes industry, labor, and government organizations that work together to analyze data and identify risks and hazards before they become accidents, promoting proactive and predictive safety measures.

Q. How do airlines share safety data in the aviation industry?
A. Airlines openly share safety data with each other, as there is no competition among them when it comes to safety, and this approach has helped reduce accidents and improve overall safety.

Q. What role does data play in ensuring AI safety?
A. Data plays a crucial role in ensuring AI safety, as analyzing data from AI systems can help identify emerging and troublesome events and trends before they lead to accidents or harm.

Q. How can the aviation industry’s approach to safety be applied to AI development?
A. The aviation industry’s approach to safety, which emphasizes proactive and predictive measures, can be applied to AI development by adopting a similar data-driven and collaborative approach to ensure AI is developed and used safely.

Q. What is missing in the current AI industry’s approach to safety?
A. The current AI industry’s approach to safety is largely reactive, waiting for AI to make a mistake before acting, whereas a more proactive and predictive approach, like that of the aviation industry, could help prevent accidents and harm.

Q. How can users report potential issues with AI systems?
A. Users can report potential issues with AI systems by clicking on a report button in the interface, which would send the issue to an AI organization modeled after the Commercial Aviation Safety Team for analysis and resolution.