- The Washington Times - Sunday, May 4, 2003


   Can machines think? The question is tricky. Most of us probably remember the defeat of Garry Kasparov, the world chess champion, in 1997 by IBM’s computer, Deep Blue. The match was part of the company’s Deep Computing project, which designs monster computers for business and scientific research.
   When a calculator takes a square root, we don’t think of it as being intelligent. But chess is the premier intellectual game. Surely it requires intelligence?
   But a curious truth about artificial intelligence is that if you know how it works, it ceases to seem intelligent.
   Consider a computer programmed to find a route through a maze. It is impressive to watch the computer unerringly make its way through a complex tangle that a human would puzzle over for hours. When you find out that it is simply trying every possible path, only very quickly, it no longer seems so smart.
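   A minimal sketch of that brute-force approach, in Python (the maze grid here, with "#" for walls, "S" for the start, and "E" for the exit, is invented for illustration):

```python
# Brute-force maze solving: mechanically try direction after direction,
# backtracking at dead ends, until some sequence of steps reaches the exit.
MAZE = [
    "#########",
    "#S..#...#",
    "##.#.##.#",
    "#..#..#.#",
    "#.##.##.#",
    "#......E#",
    "#########",
]

def solve(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")

    def explore(r, c, visited):
        # The edge of the grid, a wall, or a square already tried: dead end.
        if not (0 <= r < rows and 0 <= c < cols):
            return None
        if maze[r][c] == "#" or (r, c) in visited:
            return None
        if maze[r][c] == "E":
            return [(r, c)]  # found the exit
        visited.add((r, c))
        # Try all four directions, one after another. No insight, just patience.
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            path = explore(r + dr, c + dc, visited)
            if path is not None:
                return [(r, c)] + path
        return None  # every continuation from here failed

    return explore(*start, set())

print(solve(MAZE))  # a list of (row, col) steps from S to E
```

   Nothing in it understands mazes; it just never gets tired and never loses its place.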
   So with a chess program. It typically has a “move generator” that makes a list of all potential moves from a given board position. It is a simple, mechanical, and apparently brainless process.
   For example, a pawn that has been moved before can move one space ahead if nothing is in the way, or one space diagonally to take an opposing piece if one is there. If it hasn’t been moved, it has the same choices, plus the possibility of moving two spaces ahead if nothing is in the way. A poodle might learn to do it.
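   In code, the rule is just as mechanical. Here is a sketch in Python, assuming a made-up board representation: an 8-by-8 grid of characters where "." is an empty square, white pawns start on row 6 and move toward row 0, and the opponent’s pieces are lowercase letters (en passant and promotion are left out):

```python
# List the moves available to a single white pawn, exactly as described:
# one square ahead if empty, a diagonal capture if an enemy piece sits there,
# and a two-square advance from the starting rank.
def pawn_moves(board, row, col):
    moves = []
    if row == 0:
        return moves  # a pawn on the last rank would already have promoted
    # One space ahead if nothing is in the way.
    if board[row - 1][col] == ".":
        moves.append((row - 1, col))
        # Two spaces ahead, but only if the pawn has not yet moved.
        if row == 6 and board[row - 2][col] == ".":
            moves.append((row - 2, col))
    # One space diagonally, only to take an opposing (lowercase) piece.
    for dc in (-1, 1):
        if 0 <= col + dc < 8 and board[row - 1][col + dc].islower():
            moves.append((row - 1, col + dc))
    return moves
```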
   Then an “evaluation routine” looks at each position the moves produce and scores it according to mechanical rules. The program just does a lot of if-then checks.
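   A toy version of such a routine, again in Python, might do nothing more than count material. The piece values below are the conventional textbook ones, not anything Deep Blue actually used:

```python
# Score a position by brute if-then bookkeeping: add up the piece values,
# counting white's material as positive and black's as negative.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    score = 0
    for rank in board:
        for square in rank:
            if square == ".":
                continue
            value = PIECE_VALUES.get(square.upper(), 0)  # kings score 0 here
            if square.isupper():
                score += value  # white piece
            else:
                score -= value  # black piece
    return score
```

   A serious program piles on many more such checks (pawn structure, king safety, mobility), but each one is just another if-then.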
   Doing this several moves into the future requires enormous computing power, because the number of possible lines of play grows explosively. But the individual steps do not seem intelligent.
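   The look-ahead itself is a short recursion, sketched below using the evaluate function above plus two hypothetical helpers (generate_moves and apply_move) standing in for a full move generator. Every extra move of depth multiplies the work by the number of legal moves, typically 30 to 40 in chess, which is why the paths pile up so fast:

```python
# Minimax look-ahead: try every move, then every reply to every move, and so
# on to a fixed depth; score the leaf positions and back the scores up.
# generate_moves(board, white_to_move) and apply_move(board, move) are
# hypothetical helpers, not shown here.
def minimax(board, depth, white_to_move):
    if depth == 0:
        return evaluate(board)  # leaf: score the position mechanically
    best = float("-inf") if white_to_move else float("inf")
    for move in generate_moves(board, white_to_move):
        score = minimax(apply_move(board, move), depth - 1, not white_to_move)
        best = max(best, score) if white_to_move else min(best, score)
    return best
```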
   Interesting point: If you had the time (perhaps millions of years) and were really, really careful, you could go through the program with a pencil, step by tiny brainless step, and beat Mr. Kasparov. The intelligence, if such it is, isn’t in the computer. It’s in the program.
   Whether machines can be intelligent depends of course on what you mean by intelligence. Most of us recognize it without being able to define it. A student who makes A’s at MIT and picks up good French on summer vacation is, most would agree, bright. People are obviously smarter than cocker spaniels in most instances. But that doesn’t tell us what intelligence is.
   Mathematician Alan Turing proposed what is now called the Turing test: if a human being could not tell from a computer’s responses that it was not a human being, then the computer would count as intelligent. By this test, Deep Blue is intelligent. Mr. Kasparov knew it was a computer, but he spoke of its play as if it were human. If he had played against it by mail, he would have thought it a very good human player.
   For practical purposes, and certainly in the business world, the answer seems to be that if it seems to be intelligent, it doesn’t matter whether it really is. If you tell a household robot, “Call Bob Smith for me,” and it does it, that’s enough.
   This is quietly approaching commercial reality. In Japan, companies are working on “intelligent” robots to help care for an aging population. A lot of technologies (speech recognition and generation, robot vision, and so on) are coming together to make useful, apparently intelligent machines.
   Where is it leading? AIBO, the robotic dog from Sony, is a toy that acts like a real dog. It knows its name, obeys commands, and so on.
   Arguably it begins to approach the intelligence of a real dog. But Sony makes it sound as if it is intended as a pet, i.e., intended to evoke an emotional response. Before long, someone may ask the question: Can machines feel?
   Maybe it won’t matter, if they seem to.
   
