Experts and Novices

As has been mentioned already, a computer follows rules. These rules, or procedures, are embedded in sets of instructions in both hardware and software. The Dreyfus brothers, one a philosopher and one a computer scientist, suggest in their book Mind over Machine that this kind of rule-following thinking operates at a novice level. We might ask a novice, say, to perform an experiment following very clear, written rules. Deviations from the procedure are not expected. This level of work is context- and contingency-free in that no "why" need be given to the novice: just do the experiment and send the results to those who requested it. In fact, the novice could be a computer! Others, the experts (in the language of the brothers Dreyfus), will interpret the work. These are the responsible human decision-makers. They make judgments about the experimental results based on historically gained wisdom, using their ability to see the whole picture, including context and contingencies. The novice (by definition) would drive through a green light regardless of whether someone else was running the red. The expert would account for the context and, one hopes, not go through the green even though the rules permit it. It is difficult to capture these kinds of highly particularized contingencies in the computer, although some try using artificial intelligence (AI) algorithms. The Dreyfus brothers take a carefully argued philosophical stance in their writing against strong AI, which suggests that people think like computers and hence that computers can be trained to think like people. This is important for us, and we will return later to the issue of how people and computers think.

Rather than seeing ourselves as responsible experts (in contrast to the novice), the tendency I have seen in both industry and academia is to shift our responsibility for good decision making to the computer. In some sense, the computer takes on a life of its own. The novice becomes the expert: it is so fast and can do so much, it must be expert. Listen to Tenner again:

Both the beauty and the risk of computerized analysis is the concreteness it can give our plans—even when our underlying data are doubtful and our models untested or even wrong.

Tenner, p. 263

Winograd and Flores, in their book Understanding Computers and Cognition, suggest that computers tend to take on a life of their own in that their output is always assumed to be relevant, which can create an unintended transfer of power to the computer (pp. 152–157). This turns out to be a significant problem in engineering analysis and design, where the output of computer simulations is often taken as the real thing. The pretty colored picture of varying forces on a computer-modeled machine part assumes a life of its own as truth. However, since the computer only manipulates models that are at best several times removed from real situations, these pictures must be used responsibly as guidelines for design, not as truth per se.

As humans created in the image of God (imago Dei), we are charged with responsibility. Part of this responsibility is to define culture and our work in it, not to have them defined for us by the computer. Even though the computer is in part how we have defined culture, it should never take on a life of its own. Computer results should not become the message, the end in itself. Computations, communications, and stored data should all be handled responsibly, with an acute sense of what really is. To diminish reality to that which can be captured and manipulated by the computer (in zeros and ones) seems contrary to what God intended for responsible, decision-making human beings. He expects experts (wise interpreters), not novices (blind followers).