Saturday, July 17, 2010 By Steven H. VanderLeest
With a last name like “van der Leest”, of course I was cheering for the Netherlands to win the World Cup. What an incredible final game between two incredible teams! It was delightful to watch the fantastic athletic ability, the artistry of the game, and the enthusiasm of the crowd. My son had friends over to watch the event and we could hear oohs and ahs, cheers and groans, as they watched the game progress.
The technology that let us all participate in the audience and get a closer view than those sitting in the stadium itself was amazing too. There were cameras on the sidelines, cameras in the net, cameras scooting above the field, each recording a different angle and providing images from a variety of perspectives. [Jason Dachman, “World Cup 2010: Camera Corps’ Net-Cam Captures Prime Angle of England’s Disallowed Goal,” Sports Video Group, 1 July 2010].
High definition video was beamed around the world to an estimated 700 million people watching at least one of the World Cup games on television (HD or not). We got to see the astounding moves and plays live (at least within milliseconds of “live” by the time it reached us) and then again in replay. With such frame-by-frame precision and review after the fact, it is no wonder many viewers in hindsight denounced some questionable calls by referees.
Human judges of the game make mistakes, but the camera never lies. Or does it? The referee on the field sometimes has an angle that no camera can catch. The ref not only sees, but hears the action (at least if they can hear anything over the drone of thousands of vuvuzelas). But we can add directional microphones to pick up audio too. Whatever the human can do, technology can mimic and go one better. Or can it?
The World Cup is a good case study of the interplay of human and technological prowess. Even though an Italian Ferrari is faster than the Dutch Arjen Robben, and a cannon could fire a soccer ball past the goalie faster than the Spanish David Villa could, we wouldn’t dream of replacing human players with robots or other technology. There would be no point in watching the game then. Why is it that we are willing to accept human shortcomings and celebrate human achievements in the players, but not as willing to accept mistakes by game officials? After some admittedly very bad calls, some turned to technology as a way to rescue the game: “FIFA deplores ‘when you see the evidence of refereeing mistakes’ and it would be ‘nonsense’ not to consider changes, Blatter said Tuesday. He added the only principle up for discussion is goal-line technology.” [Mike Foss, “Bad calls at World Cup prompt FIFA to study high-tech ref help,” USA Today, 2 July 2010]. Certainly a post-play review of a goal shot by multiple camera angles (and perhaps with a little technology sprinkled right into the ball so that we know its precise position millisecond by millisecond) would provide a more detailed picture of the game. But would that make the game better? If we stopped after each minute of play to spend five minutes carefully analyzing each movement for possible violations, the momentum and action would be seriously compromised, making the game much less exciting.
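The core of a goal-line decision is actually simple to state, even if the sensing is not: under the Laws of the Game, a goal counts only when the whole ball crosses the line. Given the millisecond-by-millisecond ball positions imagined above, the decision rule is a one-liner. This is only an illustrative sketch under assumed names and coordinates, not any real FIFA or vendor system:

```python
# Hypothetical goal-line decision from in-ball position samples.
# Assumes a 1-D coordinate along the pitch where the goal line sits at x = 0
# and negative x is inside the goal. All names here are illustrative.

BALL_RADIUS_M = 0.11  # a regulation ball is roughly 22 cm in diameter

def goal_scored(samples, goal_line_x=0.0):
    """A goal counts only when the WHOLE ball is past the line,
    i.e. the ball's center is more than one radius beyond it."""
    return any(x < goal_line_x - BALL_RADIUS_M for (t, x) in samples)

# Samples of (time_s, center_x_m) as the ball approaches the goal:
crossing = [(0.000, 0.30), (0.050, 0.05), (0.100, -0.08), (0.150, -0.15)]
cleared = [(0.000, 0.30), (0.050, 0.05), (0.100, -0.05)]

print(goal_scored(crossing))  # last sample puts the whole ball over: True
print(goal_scored(cleared))   # center never passes one radius beyond: False
```

Note that the sample at x = -0.08 is *not* yet a goal: the ball's center is past the line, but part of the ball still overhangs it. That distinction is exactly what makes these calls so hard for a human at full speed.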
Adding technology to fix one problem can also add unforeseen consequences that cause new problems. For example, “one critic sees replay’s disadvantages. ‘Not all countries will be able to afford the cost,’ says Fernando Fiore, an analyst for Univision. ‘Soccer is too fast. You need more officials overseeing the game.’” [Foss]. Thus technology can become a matter of justice – while it might enhance the justice of the officiating in one game, it might also create injustice between games and between nations by separating the “haves” and the “have-nots”. Technological injustice is not inevitable, however. If we conscientiously examine the design and deployment of technological products, we can identify many of the issues ahead of time and account for them. For example, if extra technology is fielded to provide more precise evaluation of game play, it could be deployed on an even-handed basis, perhaps used only in games from the Round of 16 to the final. Every game before that point, regardless of location or team origin, would be played without the technology, and every game after it with the technology. That provides a more even playing field for all.
Wednesday, July 07, 2010 By Steven H. VanderLeest
Anthropomorphizing technology – attributing human characteristics to non-human technological products – is a common pastime. We attribute motives to our gadgets, particularly malicious intent when our gadgets repeatedly fail at the most inopportune times. We sometimes slip in a personal pronoun for a device. Our technology is often so advanced and complex that the ordinary user has very little clue about its inner workings. Arthur C. Clarke’s third “law” reflects this idea: “Any sufficiently advanced technology is indistinguishable from magic.” Even so, I doubt that anyone actually believes that the technology is truly sentient. Rather, we joke about its seemingly human behavior, knowing full well that the computer, automobile, and space shuttle are all simply objects with no ability to act on their own, though they are sufficiently complex that we cannot always predict their behavior. There may be an added benefit to this humorous approach to our technology. Humor about technology can be a comfort because it recognizes that we share a common experience – who hasn’t lost something important when a computer crashed on them?
I would like to see technology help us share experiences and improve human relationships more often, but not just centered on technological failures. This would honor the design norm of integrity. This is the principle that a technology design ought to harmonize function and form, ought to integrate all the parts into an aesthetic whole, and ought to promote positive human relationships. Where technology brings people together, encourages peaceful interaction, and brings out the best of what makes us human, it observes the principle of integrity. One example would be collaborative document technologies that allow multiple users to edit a document simultaneously (though I think these products are still in their infancy, and I expect many improvements yet).
To “technopomorphize”, if that were actually a word, would refer to the use of technical analogies to understand human concepts and relationships. Our daily language is full of them: “switching gears”, “like clockwork”, “grease the skids”, “turn the crank”, “pull the plug”, “really pushed his buttons”, “need to wind down”. We use these as symbols to explain our own and others’ behavior. When understood as a simplification, as an abstraction in order to communicate succinctly, these analogies can be helpful. They reflect our God-given and God-reflecting character as tool makers and tool users. When mistaken for the behavior itself rather than a symbol or analogy, then I think we do ourselves and our fellow humans an injustice. Trying to understand the human brain by comparing it to a computer can be a helpful approach; implying that the human brain is simply a biological computer, no more, leaves out the nuance and beauty, the mystery and complexity and soul of the human creature. Yes, you can push my buttons, but my output is not deterministically dependent on those inputs.
Guns Don’t Kill People
Wednesday, June 30, 2010 By Steven H. VanderLeest
On Monday the U.S. Supreme Court ruled in McDonald v. Chicago that the Second Amendment of the U.S. Constitution provides an individual (not simply collective) right to bear arms. This reminds me of the old saw from gun-rights advocates: “guns don’t kill people; people kill people”. This is a succinct argument for the neutrality of technology. It makes the point that technology (at least in the form of guns) has no responsibility because it has no ability to make a moral choice (i.e., it has no agency). While I agree with the slogan in its narrow sense (that technology has no ability to act, no agency), I do not agree with the broader implication (that therefore the technology is neutral). Consider plugging other technologies into this phrase, for example, “exploding gas tanks on Ford Pintos do not kill people; people kill people” or “o-rings on the space shuttle Challenger do not kill people; people kill people” or “crack cocaine does not kill people; people kill people.” In each case, while it is true that the technology in isolation does not act to kill (it has no intent, no volition), it still feels strange to say that the technology had nothing to do with the deaths. This is because the technology embodies the will of the designer and amplifies the power of the user. It is not neutral. It biases the user towards certain actions. We tend to use a hammer to pound nails more often than to paint pictures because it is designed to perform pounding functions. We tend to use dynamite to blow things up rather than to pound nails because it is designed to explode. Our actions when holding dynamite or a hammer or crack cocaine will tend to proceed in the ways that the technology is biased.
That’s not to say I couldn’t use a hammer to kill someone instead of pounding nails. But it is to say that we should recognize the bias in technology and use appropriate caution in light of the bias. Because the technology amplifies my abilities, I should take more care when swinging my arm around while holding a hammer than without. Some technologies are particularly powerful, like guns or automobiles or thermonuclear weapons. We must be acutely aware of their power and use it judiciously. We must recognize that rash decisions can lead to deadly results.
I think the court made a good decision because they balanced the decision, making it clear that while sweeping prohibitions against gun ownership were not constitutional, states and municipalities could use reasonable regulations regarding the sale and ownership of guns. This recognizes the power of the technology and the resulting need for careful judgment on the part of the user and the community. All the stakeholders – everyone affected by the technology – must be considered in making decisions about the design, manufacture, distribution, use, and disposal of technology.
TULA = Technology Uses a Lot of Acronyms
Tuesday, June 22, 2010 By Steven H. VanderLeest
I’m at the ASEE (American Society for Engineering Education) conference this week, where LED stands not for Light Emitting Diode, but for Liberal Education Division (and that’s liberal as in “liberal arts” not as in politics). It reminded me of how many acronyms we tend to use in technical disciplines. We often use rather unwieldy shorthand, such as PCMCIA for Personal Computer Memory Card International Association (though some say it should really stand for People Can’t Memorize Computer Industry Acronyms). We even have acronyms that contain acronyms, such as VHDL, which stands for VHSIC Hardware Description Language, where VHSIC stands for Very High Speed Integrated Circuit. Of course that one has its origin with the military, also famous for the liberal use of acronyms (liberal, as in generous). The military also gets the prize for the longest acronym (perhaps mythical), ADCOMSUBORDCOMPHIBSPAC, apparently used by the US Navy to designate Administrative Command: Amphibious Forces, Pacific Fleet, Subordinate Command.
Technology users love acronyms. They proliferate with every new innovation. Why do we see so many acronyms when it comes to technology? Sometimes the limitations of the technology encourage a burgeoning dictionary of short-cut words. The acronym LOL (Laugh Out Loud) was originally seen on Usenet, a short-hand form that cut down on typing when using slow connections. Text messages today use LOL to cut down on the number of characters one needs to thumb out on a tiny keyboard. The World Wide Web is now everywhere called simply “the web”, no more in need of a qualifier than the word “satellite” needs the clarification “artificial”.
Another reason for new acronyms may be that when a new invention appears on the scene, we often need new words to describe it. The acronyms appear because we require new combinations of words to describe features of the new technology. These acronyms then make their way into conversation as the technology becomes more familiar.
We love inventing new language. Teens do it all the time—to hide their meaning from adults and at the same time assert some power over them by knowing something they don’t. I wonder if we sometimes use technical jargon in the same way, to assert power over those who are less technically savvy than we are. Sometimes we recycle old words, turning a noun into a related verb (have you googled anyone lately?) or hijacking a word with an entirely new meaning. Sometimes we create new words by combining several shorter ones, or by shortening a longer one. All this word play seems to come naturally to humans and I suspect it is part of what makes us human. Language is a tool in some senses (an instrument for communication), but I’m not sure I’d call it technology. The definition of technology gets fuzzy in this area. I think I would include computer programming languages, network protocols, and encryption codes. Strangely, we call these artificial languages, while this thing I am typing and you are reading is called natural language. But if artificial means man-made, then isn’t even our natural language artificial by that measure?
Naming something, adding it to our lexicon, has power. The name is the symbol by which we conjure up the concept, definition, and identity of the thing named. It helps us to communicate our thoughts to others, and indeed can help us think our own thoughts. Names are inevitably stereotypes that do not sufficiently represent the complexity of the thing they symbolize. The label is always a summary that abstracts away much of the detail.
I love reading and hearing about new technology. I revel in the new language and relish the novel terminology. It is a particular joy to participate in the naming itself, a privilege enjoyed by many engineers, scientists, and inventors. Perhaps we should use a little more ceremony when christening an innovation or discovery. Perhaps we should celebrate a little more when a new thing has come into being, a birth that must be recognized because it requires a name. Yes, let’s take care that we don’t become proud because of our creativity, but rather give glory to God for granting us the gift to unfold His creation.
Estimating Risk in the Face of Technological Complexity: The BP Oil Spill
Friday, May 28, 2010 By Steven H. VanderLeest
The BP oil spill in the Gulf of Mexico that started with a well blowout on 20 April 2010 now appears to involve a greater volume of oil than the 1989 disaster when the Exxon Valdez ran aground on a reef and dumped over 250,000 barrels of oil into the waters of Prince William Sound off the coast of Alaska. This makes it the largest oil spill in US history (though not the largest spill globally). We do not yet know the full extent of the environmental toll since oil is just starting to wash ashore in the marshlands of Louisiana. The upcoming hurricane season could exacerbate the problem by driving oil further inland.
Disasters like this are often related to the technological complexity of the system. This oil rig was a deep water well, extending over 5,000 feet below the surface. Even though we have extensive knowledge of oil drilling in this country and the company involved was certainly experienced, it appears that once again we were pushing the envelope. As oil has become harder to find, the wells must go deeper, thus going beyond the current bounds of experience. For example, one of the methods BP attempted to cap the blown well, by lowering a large dome over it, had been used successfully before, but never at such a great depth. The greater depth resulted in unforeseen problems with methane crystals forming inside the dome, causing it to become clogged. Complexity also appears in the organizational structures. BP leased the rig from Transocean. Perhaps this was a good business arrangement, or perhaps it also provided some legal separation to lower the risk of liability in the event of an accident. The US government was involved with regulatory oversight before the spill and environmental cleanup after the spill. The news media provided informational coverage, resulting in public pressure to limit the damage from the spill quickly and effectively.
Measuring and limiting the risks inherent in complex systems is a difficult task. It is not merely a science but also an art, as it requires creative imagination so that consequences that would otherwise be unforeseen can be anticipated. We regularly engineer new technologies or adapt old technologies to new situations. Each time, we face the question of what could go wrong (and how we would deal with such a failure). The difficulty of estimating the probability of a given failure makes it tempting to forego the cost of a backup system to deal with the potential failure. The risk of some types of failures might be so low and the cost of preventing or handling such a failure might be so high that we cannot justify the added protection.
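The tradeoff in that last sentence can be stated as simple expected-value arithmetic: a backup is justified when the expected loss it averts exceeds its cost. The sketch below illustrates the reasoning only; every number in it is invented, and real risk analysis is far subtler, not least because failure probabilities for novel deep-water systems are themselves little more than guesses:

```python
# Back-of-the-envelope expected-value comparison for a safety backup.
# All figures are invented for illustration, not actual BP or industry numbers.

def backup_justified(p_failure, failure_cost, backup_cost, effectiveness=1.0):
    """A backup pays for itself when the expected loss it averts
    (probability x cost x fraction of losses it would prevent)
    exceeds what the backup itself costs."""
    averted_loss = p_failure * failure_cost * effectiveness
    return averted_loss > backup_cost

# A one-in-a-thousand chance of a $20B spill vs. a $500M standby system:
print(backup_justified(0.001, 20e9, 500e6))  # averted loss $20M < $500M: False
# But if the failure probability was underestimated by a factor of 100:
print(backup_justified(0.1, 20e9, 500e6))    # averted loss $2B > $500M: True
```

The sketch makes the essay's point concrete: the whole calculation hinges on `p_failure`, the one input we are worst at estimating when pushing beyond the bounds of experience.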
Some argue that no price is too high for safety, but that is not the reality. We implicitly estimate safety’s value in most technological products, both as individuals and as a society. For example, we are aware as individuals and as a society that there is a noticeable risk of injury or death from accidents while driving in an automobile. That risk could be significantly reduced if we required cars to be built like tanks and also severely limited their maximum speed. But we choose to accept a higher risk in trade for lower cost and greater convenience. In the present case, each stakeholder accepted some risk (perhaps too much so), including BP, Transocean, the US federal government, and even BP’s customers: all of us who purchase oil via its derivative products such as gasoline. Thus we all bear some share of responsibility for that risk and the consequences of failures.