Is Flying Safe?
Friday, April 08, 2011 | By Steven H. VanderLeest
On a recent trip I flew on two aircraft types: an Airbus A320 narrow-body jetliner and an Embraer ERJ135 regional jet. This was right after the news of Southwest Airlines flight 812 making an emergency landing in Yuma, Arizona after a five-foot tear ripped open in the fuselage of the Boeing 737 flying at 30,000 feet. No one was seriously injured, but the event was obviously a cause for serious concern. Southwest grounded most of its 737s while it inspected the lap joints for evidence of cracking (several planes showed some cracking). Following this incident, Boeing issued new guidelines recommending more frequent inspection of 737s for fatigue cracks in the lap joints of these aircraft.
We’ve all heard that flying is safer than driving, but it doesn’t feel safer to most of us. Perhaps it is because we are not in control of the aircraft as we are of the automobile. Perhaps it is because most of us drive much more frequently than we fly, so we are more habituated to the risk of car travel. Perhaps it is because of the spectacular nature of the rare failure: airline crashes almost always make headlines, while car crashes rarely do. Or perhaps the adage is not actually true. In a paper presented at the 2006 Christian Engineering Education Conference, Professor Gayle Ermer notes a “risk level per mile for driving that is in the same range as for flying. In other words, contrary to the popular wisdom used to reassure fearful airplane passengers, it is not safer to fly than to drive on a per mile traveled basis” (Ermer, Gayle, “Understanding Technological Failure: Finitude, Fallen-ness, and Sinfulness in Engineering Disasters,” Proceedings of the Christian Engineering Education Conference, 2006). Her paper looks at some of the causes of technology failures, many of which connect intimately with our human nature.
One possible cause of aircraft failure is a flaw in the design of the avionics hardware or software. The Federal Aviation Administration (FAA) oversees and certifies aircraft for airworthiness. For avionics, the FAA imposes rigorous guidelines: DO-160 prescribes testing for robustness against a variety of environmental conditions, DO-254 governs the development and testing of electronic hardware (digital logic), and DO-178 governs the development and testing of software. Newly engineered technology is subjected to careful peer review of the design and then substantial testing of the implementation. DO-254 and DO-178 apply stricter requirements to technology whose failure would have a more dire impact. Design assurance level A, the strictest, applies to avionics whose failure would be catastrophic (likely causing multiple deaths). Level E is the lowest level, for technology whose failure would not impact safety in any foreseeable event. Flight control systems are an example of level A; passenger entertainment systems are level E. Technology at the highest levels of safety criticality must be designed with redundancy so that no single point of failure in the hardware results in system failure.
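To make the level scheme concrete, here is a minimal sketch in C of the conventional mapping from design assurance level to the severity of the failure condition it guards against. The A-through-E severity names follow the standards; the code itself is purely illustrative and not drawn from any certification artifact:

```c
#include <stdio.h>

/* Design assurance levels per DO-178/DO-254, ordered from
 * most to least critical. */
typedef enum { DAL_A, DAL_B, DAL_C, DAL_D, DAL_E } dal_t;

/* Worst-case consequence if a function at this level fails.
 * The example systems are the ones named in the text above. */
static const char *failure_condition(dal_t level) {
    switch (level) {
    case DAL_A: return "Catastrophic (e.g., flight controls)";
    case DAL_B: return "Hazardous";
    case DAL_C: return "Major";
    case DAL_D: return "Minor";
    case DAL_E: return "No safety effect (e.g., passenger entertainment)";
    }
    return "unknown";
}

int main(void) {
    for (dal_t lvl = DAL_A; lvl <= DAL_E; lvl++)
        printf("Level %c: %s\n", 'A' + (int)lvl, failure_condition(lvl));
    return 0;
}
```

The higher the level, the more development and verification objectives a design team must satisfy before the technology is certified.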
I think back-up systems and redundancy for fault tolerance nicely reflect the virtue of humility, because they recognize that we cannot design perfect systems and must account for potential failures (which hopefully are handled gracefully by redundancy so that no injuries occur). This works relatively well for hardware, but we have not yet found a similarly strong approach for software. At one time, multi-version software was in vogue as a supposedly redundant approach. In this method, the requirements for the software were given to several independent development teams. Each software version was run simultaneously (either on multiple processors or as independent parallel processes in a multitasking system), with a “golden” voter taking the result from each version to determine the actual action taken by the system. The thought was that independent teams would be unlikely to make the same mistake in the same place; as long as the majority of the versions got it right, any mistakes would be outvoted. The flaw in this approach, it turns out, is that even very diverse teams often make similar mistakes for similar inputs and system situations. There are other ways to try to account for design flaws in software (which often show up at unusual boundary conditions that were not anticipated), such as checkpoints and built-in sanity tests that check whether the system is still operating within expected bounds. Down the road, software designers for safety-critical markets anticipate that mathematical proofs (so-called “formal methods”) will be used to verify with certainty that software is correct. For now, these methods tend to be too difficult to apply to anything more than small sections of software code.
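A minimal sketch of the voting scheme in C might look like the following. The three “version” functions are hypothetical stand-ins for independently developed implementations of the same requirement (the computation, the tolerance, and the bounds are all invented for illustration); the voter accepts a two-out-of-three majority that also passes a simple bounds check of the kind just described, and otherwise falls back to a safe default:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical placeholders: each "version" stands in for an
 * independently developed implementation of the same requirement,
 * here an invented pitch-command computation in degrees. */
static double version_a(double input) { return input * 0.5; }
static double version_b(double input) { return input * 0.5; }
static double version_c(double input) { return input * 0.5; }

/* Built-in sanity test: reject commands outside the expected
 * physical envelope (bounds are illustrative). */
static bool in_bounds(double cmd) {
    return cmd >= -25.0 && cmd <= 25.0;
}

/* "Golden" voter: accept a value if at least two of the three
 * versions agree (within a tolerance) and it passes the sanity
 * check; otherwise fall back to a safe default. */
static double vote(double a, double b, double c) {
    const double tol = 1e-6;
    const double safe_default = 0.0;

    if ((fabs(a - b) < tol || fabs(a - c) < tol) && in_bounds(a))
        return a;                /* a agrees with b or with c */
    if (fabs(b - c) < tol && in_bounds(b))
        return b;                /* b agrees with c */
    return safe_default;         /* no majority: fail safe */
}

int main(void) {
    double input = 10.0;         /* illustrative sensor input */
    double cmd = vote(version_a(input), version_b(input), version_c(input));
    printf("voted command: %f degrees\n", cmd);
    return 0;
}
```

The catch, as noted above, is that the voter only helps against independent mistakes: if two of the three versions share the same design flaw, the faulty answer wins the vote.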
Whether we are estimating the risk of flying or of other technology, such as the danger of nuclear power plant failures, the impact of energy technologies on climate change, or the risk of eating genetically modified food, I think we must be careful to avoid the trap of believing we can calculate the risk in a completely unbiased, objective way. In the end, risk assessment is not simply a mathematical formula (though quantitative analysis certainly is part of the process), but a human decision that requires wisdom and insight. In a recent article, David Caudill surveys various viewpoints on scientific knowledge, particularly with regard to weighing risk. He describes a perspective that “views all risk assessments as judgment calls. Even a scientist’s degree of confidence is not a scientific matter, and our assessment of whether a scientific analysis is relatively certain is grounded in pragmatic decisions about what to study, which variables to consider, how accurate our measurements need to be, and how much potential error we’re willing to accept. When we say something is ‘safe’ or ‘injurious’ or we say that the evidence is ‘ample’ or ‘convincing’ or ‘reasonably certain,’ those words sound scientific but are actually non-scientific judgments” (David S. Caudill, “Science in Law: Reliance, Idealization & Some Calvinist Insights,” Pro Rege, March 2010, pp. 1-9). Caudill then argues for a still more perspectival position that sees culture and worldview not only affecting our assessment of risk and uncertainty (where our values are applied to unbiased facts), but also affecting the way we interpret the facts themselves. He notes “multiple interpretive frames, which reflect values but which see facts differently… Our selection of facts and values is not so much conscious and voluntary as it is grounded in our cultural assumptions” (p. 7).
A machine cannot do science or engineering; only a human can perform these tasks, because they involve more than mindlessly following a recipe or rote formula. These are creative activities that require sophisticated thinking, insight, and wisdom. There is truth as well as beauty to be found in these activities, but perhaps, like beauty, truth is also partly in the eye of the beholder. I am not arguing here for relativism or strong postmodernism. I do believe in an absolute truth, but I also believe that no mere mortal has a lock on that truth. We are all affected by sin. Even without sin, we are finite created beings, and our limitations may prevent us from coming to a common understanding. Beyond our finite and fallen nature, we each come with a socio-cultural interpretive framework that gives us slightly different lenses through which we view the world. I also believe in common grace: all truth is God’s truth, and thus we each may hold a piece of the grand puzzle. When it comes to risk assessment, it is important to discuss risk together and not depend solely on the advice of “experts.” Those conversations can help tease out our own particular values and worldviews so that we understand one another better and also understand our technology better.