The World in 24 Slices

Monday, August 29, 2011

By Steven H. VanderLeest

I love the interplay of engineering and technology with other aspects of society.  Our culture is a tapestry of interwoven threads.  Science, politics, sociology, history, literature, economics ... the thread count is incredibly high in the warp and woof of our communal lives.  Though we all like to define categories—academics are particularly adept at forming silos—life is much more continuous, complex, and downright messy. 

Consider time zones, for example.  As society drew closer together with the invention of high-speed transportation provided by rail lines and steam-driven ships, the differences in locally defined solar time became more pronounced, making it difficult to keep a trans-national or international schedule.  The International Meridian Conference of 1884 forged a political agreement to define a global time standard, forming 24 time zones, with the base of zero set to pass through Greenwich and all other times defined relative to this Prime Meridian.  Thus each time zone was defined relative to Greenwich Mean Time (GMT).  The resulting vertical slices on the globe were entirely scientific and pleasingly geometric:  nice and neat. 

Not so fast.  Science does not exist in a vacuum.  Time is not only a phenomenon to be measured, it is a quality to be experienced.  Thus calendars and watches are defined not only by technical principles but also by human needs and wants.  Today’s time zone maps are roughly vertical slices, but with interesting variations.  Political considerations led China to choose a single time zone.  The United States are not so united, with zone lines meandering along state boundaries rather than following the meridians.  Countries that are nearly perfectly aligned along the same longitude sometimes make opposite choices about which time zone they will follow.  A few even flout the original international agreement to use only integer offsets.  Some choose halves, such as Iran, Afghanistan, and India.  The agitators are not just in central Asia:  part of Australia and the Canadian island of Newfoundland also use a half-hour increment.  Even bolder, Nepal had the nerve to choose a 45-minute offset. 
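
The fractional offsets are easy to see in code.  Here is a minimal sketch using Python’s standard datetime module, assuming the standard-time UTC offsets for each region and ignoring daylight saving rules:

    from datetime import datetime, timedelta, timezone

    # Standard-time UTC offsets for the zones mentioned above (DST ignored).
    offsets = {
        "Iran":         timedelta(hours=3, minutes=30),
        "Afghanistan":  timedelta(hours=4, minutes=30),
        "India":        timedelta(hours=5, minutes=30),
        "Nepal":        timedelta(hours=5, minutes=45),
        "Newfoundland": timedelta(hours=-3, minutes=-30),
    }

    noon_greenwich = datetime(2011, 8, 29, 12, 0, tzinfo=timezone.utc)
    for place, offset in offsets.items():
        local = noon_greenwich.astimezone(timezone(offset))
        print(f"Noon in Greenwich is {local:%H:%M} local time in {place}")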

Are the non-integer insurgents wrong?  Do they scoff at scientific evidence?  Not at all.  Each is merely recognizing that science and technology have a context.  We can design our tools to serve us, and serving us well requires adaptation to local custom, regional geography, and political boundaries.  As much as we’d like our technology to be objective, its human context makes that wish for simplicity impractical.  The real world is messy, and our technology must account for and even celebrate the great diversity and ambiguity of human culture, even in our time zones.

A Tale of Two Technologies

Friday, August 12, 2011

By Steven H. VanderLeest

Here are two technology stories:  before you trust the first, consider where that trust led with the second.  The first story is about autonomous military weapons.  As our military technology grows more sophisticated, we are asked to put more and more trust in the devices to carry out faithfully the commands that we issue to them.  War has evolved from its early techniques (and the rather messy business of dealing with death and violence on a very personal scale), when one soldier eviscerated another using a sword or pike while standing face-to-face on the battlefield.  The introduction of firearms was a sea change.  By using gunpowder charges to propel ammunition at high velocity, firearms made death and violence a bit more remote, separating the combatants by some distance, though still within sight of each other.  Triggered or timed explosives required not only gunpowder or its successors, but also a technology to control the volatile black powder.  Technologies such as safety fuses and blasting caps were intended for peaceful use of explosives, but also enabled effective bombs for military use directed by the state or subversive use directed by rebels with a cause.  Bombs could be left with a timer or in the form of a mine triggered by a footfall.  Now the war fighters no longer even saw their opponents – physical and temporal distance providing emotional distance from the violent act.  Bombs dropped from planes meant the bombardier could see individual buildings or bridges, but rarely individuals.  In a continuing effort to protect the individual soldier, who could still be injured or shot down even in a plane high overhead, the modern military is now moving quickly to Unmanned Aerial Vehicles (UAVs).  US pilots are now flying missions in Afghanistan while they sit in Nevada, thousands of miles away from the action they precipitate. 

Recent refinements of navigation guidance put even more distance between the soldier and his opponent.  Smart bombs in the form of sophisticated cruise missiles started appearing during the 1990-91 Persian Gulf War, fought between a United Nations coalition led by the United States and Iraq following the Iraqi invasion of Kuwait.  Though not the first guided munitions, they were marketed as a major step in protecting the lives of coalition soldiers and minimizing civilian injuries while having devastating effects on the military adversary.  Of course, human adversaries learn to adapt:  terrorist cells locate themselves in the middle of populated areas so that a military strike would cause significant “collateral damage”.

Even more recent military development is moving toward autonomous vehicles and weapons:  computer-directed machines that execute complex algorithms so that they can carry out sophisticated commands.  Rather than a human directing each motion of a UAV, including the command to launch a weapon, these devices are intended to replace the pilot, allowing the commanding officer to issue human-like directives to the machine directly and expect them to be carried out.  A recent IEEE Spectrum story concludes that such complex behavior will be non-deterministic:  “However, if the vehicle is making its own decisions, its behavior can’t be predicted. Nor will it always be clear whether the machine is behaving appropriately and safely.”  (Lora G. Weiss, “Autonomous Robots in the Fog of War”, IEEE Spectrum, August 2011). 

On one level, I disagree with the author’s conclusion about unpredictable behavior.  Computers are entirely predictable because they are entirely deterministic.  A combinational logic system produces outputs that are 100% predictable given the inputs.  For example, an AND gate produces an output of 1 when both inputs are 1, and 0 otherwise, as its truth table shows:

    Input 1   Input 2   Output
    -------   -------   ------
       0         0         0
       0         1         0
       1         0         0
       1         1         1
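
To make the point concrete, here is a minimal Python sketch that enumerates all four input combinations of the AND gate; every run produces exactly the same outputs:

    from itertools import product

    # Combinational logic: the output depends only on the current inputs.
    def and_gate(a: int, b: int) -> int:
        return a & b

    # Walk all 2**2 = 4 input combinations; each maps to exactly one output,
    # so the gate's behavior is 100% predictable.
    for a, b in product([0, 1], repeat=2):
        print(a, b, "->", and_gate(a, b))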

No matter how many inputs, in theory I can always predict the output.  A sequential digital logic system is not quite so easy, but still predictable.  Here there is some memory in the system, so the inputs alone do not predict the output.  The internal memory (called “state” in digital theory) remembers past inputs, so the output is a function of the internal state as well as the current inputs.  But this is still predictable.  If the internal state and current inputs are known, then the outputs can be predicted with 100% certainty.  Alternatively, if we know the structure of the internal logic, the initial condition, and the history of all inputs since that time up to the current moment, then we can predict the output with 100% certainty. 
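
As a sketch of the same idea for sequential logic (the two-state machine below is hypothetical, invented purely for illustration), knowing the initial state and the complete input history lets us replay the outputs with complete certainty:

    # A toy two-state sequential system (hypothetical, for illustration).
    # The transition table maps (state, input) -> (next state, output).
    TRANSITIONS = {
        ("IDLE", 0):  ("IDLE", 0),
        ("IDLE", 1):  ("ARMED", 0),
        ("ARMED", 0): ("IDLE", 0),
        ("ARMED", 1): ("ARMED", 1),  # outputs 1 only after two 1s in a row
    }

    def replay(initial_state, inputs):
        # A known initial condition plus the full input history means the
        # output sequence can be predicted with 100% certainty.
        state, outputs = initial_state, []
        for bit in inputs:
            state, out = TRANSITIONS[(state, bit)]
            outputs.append(out)
        return outputs

    print(replay("IDLE", [1, 1, 0, 1, 1, 1]))  # [0, 1, 0, 0, 1, 1]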

On another level, I agree with the author.  Practically speaking, the number of possible internal states and the number of inputs are astronomical for a modern computer system, which is simply a really complex sequential digital system.  It is easy to see that a logic AND gate is predictable, because we can walk through all 4 combinations that represent all possible outcomes.  The number of combinations, 4, is equal to 2 to the power of 2 (where the first 2 is because we are working with a binary number system consisting of just 0 or 1, and the second 2 is because we have 2 inputs).  But for a modern computer system with 64 bits of input and hundreds or thousands of bits of state, the number of possible combinations is 2 to the power of several hundred.  That is more possible combinations than the number of atoms in the observable universe (roughly 10 to the power of 80).  Yes, really that many.  So in practice, we have difficulty predicting the behavior of a complex computer system.  Like the weather, we can probably say with relatively high confidence what it will do in the next few moments, but it gets increasingly difficult to predict what will happen in the next hours, days, or weeks.
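
The arithmetic is easy to check.  A short sketch, assuming an illustrative 300 bits of state and the common estimate of roughly 10 to the power of 80 atoms in the observable universe:

    # Rough check of the state-space arithmetic (numbers are illustrative).
    state_bits = 300                 # "hundreds of bits of state"
    combinations = 2 ** state_bits
    atoms_in_universe = 10 ** 80     # common estimate, observable universe

    print(f"2^{state_bits} is about {combinations:.2e}")   # ~2.04e+90
    print(combinations > atoms_in_universe)                # True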

This is why safeguards are so important for technology.  Where technology cannot be relied upon because its behavior is not practically predictable, we have a responsibility to limit that behavior within acceptable bounds.  The bounds themselves must be highly reliable, so that even if the software on the computer goes awry, the resulting behavior cannot cause unintended harm. 
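
As a minimal sketch of that principle (the names and limits here are hypothetical), the guard that enforces the bounds can be kept far simpler, and thus far more verifiable, than the complex system it constrains:

    # Hypothetical safety envelope: the complex controller may be hard to
    # predict, but the guard that bounds its commands is small enough to
    # verify exhaustively.
    SAFE_MIN, SAFE_MAX = -10.0, 10.0      # illustrative limits

    def guard(command: float) -> float:
        # Clamp any command to the safe range before it reaches an actuator.
        return max(SAFE_MIN, min(SAFE_MAX, command))

    print(guard(3.2))     # 3.2  -- within bounds, passes through
    print(guard(250.0))   # 10.0 -- out of bounds, clamped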

Now consider a second story of technology.  Facial recognition is increasingly common as a tool to enhance security.  But sometimes that technology goes awry.  As reported in an IEEE blog, in an effort to catch people using fake identification, the state of Massachusetts accidentally revoked the driver’s license of a citizen because he looked too much like a known criminal.  Through no fault of his own (other than the looks he was born with), he then faced a bureaucratic nightmare for weeks trying to get his license back.  These false positives should give us pause, because they represent the unpredicted, unanticipated consequences of our technology.  Stories like this should give us even more pause when we consider delegating more of our decision-making to machines created by humans.  Yielding more of our autonomy to computers made by human hands should only be done with careful thought and intentional bounding of the resulting tool, so that even if its behavior is not practically predictable, the limits we place around the tool are determined with high certainty.

Responsibility for Technology

Wednesday, August 03, 2011

By Steven H. VanderLeest

“The woman made me eat of the tree.”  Since the beginning, humans have had trouble taking responsibility for their actions.  Adam blamed Eve.  Eve blamed the serpent.  Passing the buck comes naturally to us.  As a teacher, I often notice this problem even in the grammatical structure of papers I am grading.  Students frequently use passive rather than active voice, obscuring the actor: the responsible party.  They write that “The measurements were taken with the wrong resistor in place,” without mentioning themselves as the guilty party.  Even when there is no clear guilt or negative connotation, we sometimes avoid accountability.  They write that the “computer model was generated” as if it appeared magically without any human intervention.

This predilection for shifting responsibility is not limited to students.  For example, a story headline from the July 2011 issue of IEEE Spectrum (pg. 11) reads “Supercomputers Predict a Stormy Hurricane Season”.  Really?  All by themselves the computers did that?  Not really.  Technology does not have a mind of its own.  It might seem like computers have a malicious habit of blue-screening or freezing at just the wrong moment, or that they purposely lost that important document.  In reality, technology is just our tool and instrument.  In the case of the Spectrum article, the supercomputer and weather algorithms are simply tools that meteorologists and climatologists use to mechanize the calculations behind their weather predictions.  Isn’t it odd that the reporter didn’t say that human meteorologists predicted a stormy hurricane season?  Somehow it gives an extra aura of credibility to say the computer predicted it, perhaps more so than the old sure-fire phrase for gaining authority:  “scientific studies show…”. 

Yet we all know computers are only as good as the data (and programs) that go into them.  Garbage in, garbage out.  Bugs in the program, flaws in the hardware, or typos in the data can all lead to wildly erratic and incorrect answers.  So why would we trust the computer more than the humans who built and programmed it?  Similarly, we say “my car drove off the road” rather than mentioning that we ourselves were distracted for a moment.  The door slammed on her foot; the printer ran out of paper; the milk spilled.  But these things didn’t really just “happen” – a person was usually involved somehow. 

Technology is merely a human tool, even when the tool is complex.  There is nothing particularly super about a supercomputer:  it is simply a human invention, designed by people and used by people.  Let’s take responsibility for our technology.  We are accountable for the technology we design, for our choices about which technologies to use, and for the way we use them. 

(c) 2013, Steven H. VanderLeest