When Machines Think

Wednesday, December 05, 2012

By Steven H. VanderLeest

It wasn’t really the president; it was a machine.  When I was young, my family took a summer vacation trip to Walt Disney World in Orlando, Florida.  One of the memorable exhibits was the Hall of Presidents, where Animatronic likenesses of the presidents speak to the audience.  This was no static, stale wax museum where a few stiff movements might be jury-rigged into an arm or leg in a few of the displays.  This was all the US presidents, displaying lifelike movement that looked quite real, at least to a young boy seated midway back in the amphitheater.  Of course even young children knew these were not real men but merely robotic impersonators.  Nevertheless it was fascinating to watch the show unfold and enjoy the android replicas. 

About that same time I started reading science fiction, a pastime that would grow into a lifelong appreciation for the genre.  I read every single science fiction book the Grandville, Michigan library had to offer (Dune, by Frank Herbert, was one of my early favorites). I bought more books at garage sales.  I borrowed more from friends.  I signed up for a mail-order book club that offered a special deal on a bonanza of books when you joined, adding dozens more books to my collection, like Isaac Asimov’s Foundation series.  My enjoyment of science fiction was not limited to the written word, but spilled over to television and the cinema, where Star Trek and Star Wars quickly became favorites.

The thing about science fiction is that it doesn’t always stay fiction.  The fantastical babies grown in jars and the abhorrent eugenically produced societal castes of Huxley’s Brave New World were imaginative stories of technology.  However, only a few generations after his 1932 novel, those technologies became reality.  The first test-tube baby was born in 1978, the first genetically modified plant was created in 1983, and Dolly, the first cloned mammal, was born in 1996.  I found another imaginative story around futuristic technology in the story of Steve Austin, the eponymous main character of the 1970s television show “The Six Million Dollar Man.”  Just a couple of decades later, the technology of bionic limbs has become reality in the incredible robotic prosthetics that provide delicate control and feedback to amputees. 

Perhaps the most interesting science fiction technologies are machines that think.  Human-looking robots that also act human are no strangers to the silver screen of science fiction.  The replicants of Blade Runner and the android Lt. Cmdr. Data of Star Trek: The Next Generation are just two examples.  Have those imaginative stories become reality?  Not yet.  There are certainly fast computational devices with large databases of information, such as IBM’s Watson, which beat two human Jeopardy! champions in 2011.  Can Watson really think?  I think not.  Could a machine ever think?  Possibly. 

Machines that can think could also be machines that are dangerous.  Asimov considered that possibility in many of his science fiction stories, and from it formulated his famous Three Laws of Robotics:

  1. A robot may not injure a human nor, through inaction, allow a human to come to harm.
  2. A robot must obey orders from humans, except where such orders conflict with the First Law.
  3. A robot must protect itself, as long as such protection does not conflict with the First or Second Law.
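Asimov’s laws amount to a strict priority ordering: each law yields to the ones above it. As a purely illustrative sketch (the class and function names here are my own invention, not Asimov’s), the precedence might be encoded like this:

```python
# Illustrative only: encoding the Three Laws as a priority-ordered rule check.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    neglects_human: bool = False    # would it, through inaction, let a human come to harm?
    ordered_by_human: bool = False  # was this action ordered by a human?
    endangers_self: bool = False    # would this action damage the robot itself?

def permitted(action: Action) -> bool:
    """Return True if the action is allowed, checking the laws in strict order."""
    # First Law: never injure a human, nor allow harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (any order reaching this point obeys Law 1).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self

# An order that would harm a human is refused; a risky order from a human is obeyed.
print(permitted(Action(harms_human=True, ordered_by_human=True)))   # False
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
```

The point of the sketch is only that the hierarchy is deterministic: a human order overrides self-preservation, and harm to a human overrides everything, which is exactly the tension the next paragraph explores.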

These laws seem to be reasonable protections for humans, but I see an interesting contradiction.  If even sophisticated robots are simply deterministic automatons, then it seems odd to bother with the last law.  Why grant self-preservation to a machine?  I suppose such a law might simply reflect the interests of the robot’s owner in protecting valuable property.  But that third law could also imply that the robot might really be thinking and not simply following a computational recipe.  If we believe that we ourselves are really thinking, and not simply following a deterministic genetic and biological recipe, then we might grant some measure of self-protection to a thinking robot as well.  But if we think the robot thinks, then the second law seems rather like slavery.  I don’t think we can have it both ways:  a convenient mechanistic slave to obey my every command, but also smart enough to interpret the world around it and creatively respond to the nuances and complexities of real-world situations.  If I own a human-looking robot that is smart enough to also act human, may I hurt it?  May I torture it?  What does that say about the status of the robot?  More importantly, what does that say about my own humanity?

Perhaps as a way to avoid any uncomfortable questions, we might simply define humans carefully so that such human-like machines are obviously not in the club, so that we might treat them however we wish.  However, I am hesitant to draw lines around human-like androids, naming them simply machines with no obligations attached and no attendant responsibilities to worry me.  Why does it worry me?  As machines become more human-like, I wouldn’t want to be so stingy in defining what it means to be human that my rubric not only disenfranchises the machine but also boxes out the most vulnerable of humans, allowing us to treat them carelessly too: the unborn child, the accident victim lying in a coma, the student with a learning disability, the poor, or the terminally ill.  God calls his people to protect the weak, as a matter of justice.  God calls his people to be generous to the vulnerable, as a matter of mercy.  God calls his people to guard against pride that causes us to treat others shabbily, as a matter of humility. 

(c) 2013, Steven H. VanderLeest