What Makes Us Human? Part 1

Monday, August 23, 2010

By Steven H. VanderLeest

People used to think that a computer would never beat a human at chess.  Garry Kasparov, the grand master and world champion chess player, claimed that a computer might beat a mediocre human player by sheer brute force of computation, but that a machine could never match the creative play of a grand master.  Perhaps it was hubris, or perhaps an instinctual reaction that there is something unique and special about humans that sets us apart from and above the machines we create.  Perhaps it was more philosophical: a belief that the created cannot surpass the creator.  Regardless, once the gauntlet was thrown down, computer designers and programmers at IBM worked tirelessly to improve their approach to computer chess, and in 1997 the IBM supercomputer Deep Blue beat Kasparov in a regulation chess match.

Once the computer had beaten the grand master at chess, some turned to other, less mathematical concepts as “truly human.”  Some pointed to recognizing faces or understanding human speech.  But computers eventually crossed those milestones as well.  Even lower-end computing technology can now accomplish these feats: my camera recognizes familiar faces and focuses on them first; the navigation system in my car has voice recognition.  It seems that for each behavior we single out as distinctively human, a computer eventually reproduces it.

It is not so much that computers are smart – they are simply very powerful tools for expressing the creativity of their human creators and programmers.  A computer can emulate almost any human behavior – as long as the human software engineer herself understands that behavior well enough to mimic it in a program.  Thus the pursuit of Artificial Intelligence (AI) has led to great strides in our understanding of ourselves.  Cognitive science has blended and strengthened our knowledge of the biology and psychology of the brain.  We now know more about how humans learn and acquire knowledge and understanding.  We have much better insight into how very young children pick up languages (and why it is easier to learn a second language at an early age than later in life).

The aspects of humanity that are most difficult to mimic programmatically are those that we don’t really understand ourselves.  Self-awareness and consciousness are difficult to explain.  Creativity – the inventive spark – is well known as the basis of art, music, literature, and even engineering, yet it is difficult to teach this skill or describe how it works. 

Descartes proposed “Je pense, donc je suis” (“I think, therefore I am”) as a test of existence.  Contemplation of one’s existence is a good candidate for a test of self-awareness.  But if a computer regurgitated this phrase on the screen, or even repeated it with a synthesized voice, would we then claim it to be self-aware?  What does it really mean to be human?  What does it mean to be created in the image of God?  Are there characteristics we have that no mere animal or machine possesses, because they do not have the imago dei?  Our ability to think logically and rationally has been duplicated by the computer.  Does that fact alone mean that logical and rational thought is not part of the imago dei?  Or can some characteristics we inherit from our maker also be shared with other parts of the creation?
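Indeed, producing the phrase takes almost nothing.  A deliberately trivial Python sketch (my illustration, not anything from the original post) makes the point:

    # A one-line program can recite Descartes' cogito on demand,
    # yet reciting the phrase demonstrates nothing about self-awareness.
    print("Je pense, donc je suis.")  # "I think, therefore I am."

The gap between emitting the sentence and meaning it is precisely the gap we cannot yet explain, let alone program.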

Beautiful Challenge

Monday, August 09, 2010

By Steven H. VanderLeest

I like a challenge.  Most engineering problems are challenges: obstacles waiting to be overcome.  There is a great satisfaction that comes after the frustration and striving of solving a problem, jumping a hurdle, scaling the heights to reach the summit.  Filling in a new part of the map was the goal that drove explorers into the frontier and past the edge of the known world.  I think humans need that challenge, the thrill of the unknown.  “In 1900 mathematician David Hilbert proposed 23 math problems he hoped would be solved in the 20th century (16 of them were). A problem ‘should not be too difficult lest it mock at our efforts,’ he said in presenting his challenges. ‘It should be to us a guidepost on the mazy paths to hidden truths….’” (Gregory M. Lamb, “‘Grand challenges’ spur grand results,” The Christian Science Monitor, January 12, 2006; see Wikipedia on Hilbert’s Problems for the complete list.)  Hilbert’s list has tantalized and fascinated us for over a century.  In the mythical tradition of a king setting grand challenges or puzzles before a suitor for his daughter’s hand in marriage to test his mettle, we humans seem to relish the chase.  We enjoy participating in the contest or cheering our favorite team from the sidelines.  In the last 20 years, we have seen the gauntlet thrown down in three areas related to technology: the Computational Grand Challenges, the Engineering Grand Challenges, and the 14 Health Challenges.

In a little more than a century, we have seen the rise of the automobile, refrigeration, the airplane, space travel, nuclear power, electronic computers, television, digital media, the Internet, and more.  With such a dizzying array of new technologies, it is no wonder some feel that technological development has been accelerating, improving the lot of the human race at an exponential rate.  On the other hand, it has now been nearly 40 years since a human stepped onto the surface of the moon.  Moore’s law, which describes the doubling of the number of transistors on a computer chip every two years or so, has continued, but the attendant doubling of clock speed has fallen off as we hit the power barrier.  Instead, we now see multi-core processors rather than ever-faster uniprocessors.  There has been talk of fusion power for my entire life with little to show for it.  Have we reached the apogee?  Have we seen the rise and fall of the great technological wave?  Is there any really big thing yet to come in technology?
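To make the exponential arithmetic concrete, here is a small Python sketch.  It is only an illustration of the trend, not measured data; I use the Intel 4004’s roughly 2,300 transistors (1971) as the starting point and the two-year doubling period cited above:

    # Moore's law as arithmetic: transistor counts doubling every two
    # years, projected from the Intel 4004 (~2,300 transistors in 1971).
    def moores_law(year, base_year=1971, base_count=2300, doubling_years=2.0):
        """Projected transistor count for a given year."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1981, 1991, 2001, 2011):
        print(f"{year}: ~{moores_law(year):,.0f} transistors")

The projection lands near 2.4 million transistors for 1991 and 2.4 billion for 2011, within range of real chips of those eras, which is why the trend that visibly broke is clock speed, not transistor count.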

I think the Grand Challenge lists give us a hint that there is more to be done, more to be explored.  Much of the list reminds me of the thrill of new ideas in a good science fiction story.  Health challenge number 3 is to “develop needle-free delivery systems.”  That sounds like the cool little gadget Dr. “Bones” McCoy used in the original Star Trek (ST:TOS for Trekkies) to give a crew member a painless injection (with just a little sound-effect “hiss”).  The slow march towards virtualization, from black & white television to color to HDTV to 3-D, reminds me of the holodeck of Star Trek: The Next Generation (ST:TNG for Trekkies).  Babel Fish translators and Google Voice transcription services are moving ever closer to the universal translator.

Some technological dreams are probably within reach during our lifetime.  We might see human exploration of an asteroid or even of Mars.  Artificial intelligence will probably take a few more small steps towards realization.  We may see some progress on curing genetic diseases, building on the Human Genome Project.  The entire library of human knowledge may become accessible over the web if Google, Bing, Yahoo, and others continue their voracious digitization of content.  Other technological ideas seem less likely to occur soon, if ever.  At the risk of proving the second clause of Clarke’s first law, let me name a few candidates that may well be impossible.  The Star Trek transporter (near-instantaneous travel between two points) and inexpensive space flight are improbable, certainly in the near term.  Safe disposal of spent nuclear fuel would be a great boon, but again, viable solutions seem far off.  Technology has been put in service as a fountain of youth, providing plastic surgery and correcting some of the effects of aging.  However, actual immortality is unlikely – despite some hints of the possibility in recent advances, neither medical means (cloning, or turning off aging in cells) nor machine means (such as transferring our mental processes into a computer likeness of ourselves) appear to be close at hand.  Nevertheless, this hasn’t stopped Ray Kurzweil from planning for it: see Wolf’s story in Wired, “Futurist Ray Kurzweil Pulls Out All the Stops (and Pills) to Live to Witness the Singularity”.

Perhaps we should heed E. F. Schumacher, who almost 40 years ago pointed out the practical and philosophical problems with large, complex technological undertakings.  In his seminal book Small is Beautiful, he noted that both big government and big corporations are all too prone to spectacular failures and entrenched problems.  Schumacher argued instead for small technology, innovations with three characteristics: “cheap enough so that they are accessible to virtually everyone; suitable for small-scale application; and compatible with man’s need for creativity.” (E. F. Schumacher, Small is Beautiful: Economics as if People Mattered, New York: Harper & Row, 1973, p. 34.)  Schumacher was a rebel for his time, and his work still resonates today.  He questioned the fundamental basis of our economic system: “…[T]he idea of unlimited economic growth, more and more until everybody is saturated with wealth, needs to be seriously questioned on at least two counts: the availability of basic resources and … the capacity of the environment to cope with the degree of interference implied…. The Gross National Product may rise rapidly: as measured by statisticians but not as experienced by actual people, who find themselves oppressed by increasing frustration, alienation, insecurity, and so forth.” (pp. 30-31)

I agree with Schumacher on his high view of human work – that work is part of what makes us human, an activity (at its best) that nourishes and enlivens us.  On the other hand, work that is “meaningless, boring, stultifying, or nerve-racking for the worker would be little short of criminal.” (p. 55)  However, he goes on to differentiate between technology that is a tool (which he defines as an aid to humans in their work) and technology that is a machine (which he defines as a replacement of humans, automating work).  I disagree with the implication of this distinction.  Rather, I believe all technological products are tools when designed and used rightly.  They are good technology only when they act as an aid to human work (whether paid labor, creative hobby, family care-giving, or any other type of work).  If a technological product automates some aspect of human labor, thus replacing the laborer, I would not categorically condemn the device.  For example, a robotic minesweeper that replaces a human soldier in finding Improvised Explosive Devices (i.e., roadside bombs) is an aid and tool that legitimately and rightly performs the labor in place of a human.  Rather than putting a soldier in danger, a human operator directs the movements of the robot, performing the delicate task of detecting and disarming bombs via remote manipulation.  You may ask about the worker on the factory line who gets replaced by a robot, and even here I would venture that work which is repetitive and mind-numbingly boring is not real work at all and ought to be done by automation if it is done at all.  At the same time, if a human is put out of work, replaced by a machine, then the employer has an obligation to move that person to a new, better, more creative, more respectful and meaningful position within the company.  Failing that, the obligation is to provide re-training so that the person can find gainful employment elsewhere.

Thinking about Schumacher and the grand challenges, I believe he would agree with the challenge to find alternative energy sources (particularly smaller, more distributed sources).  But many of the other challenges he would likely see as hubris and unbridled greed that we misleadingly label “need.”  Perhaps we should focus our innovative energy on the small technologies Schumacher envisioned: those inexpensive and small enough to put in every hand, those that aid the human creative spirit.  The next technological marvel need not be a huge, complex monolith.  It could be elegant in its simple solution to some difficult problem.  That’s the beautiful challenge.

(c) 2013, Steven H. VanderLeest