- The Washington Times - Thursday, April 13, 2006

A curious technology that was thought to have great promise but didn’t go anywhere — well, sort of didn’t — is artificial intelligence, or AI. What happened? Why don’t we have computers that talk to us about the meaning of art? Is AI nonsense?

No, if you are reasonable about it. What happened is that AI was hyped to death, and then proceeded to be a far harder problem than many expected. You just can’t build Isaac Newton in a laboratory, not with anything resembling the technology we have. This doesn’t mean you can’t build useful lesser degrees of intelligence.

“But,” you ask, “what is intelligence? How do you recognize it?” The answer is simple: We don’t know. Or at any rate, we don’t agree about it.

You can tell a robot that when it bumps into something, it should put its transmission into reverse for 10 seconds — i.e., back up — turn 20 degrees to the right, and go forward again. The results often will look like intelligent avoidance of the obstacle.
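The bump-and-back behavior described above amounts to a few lines of control logic. The sketch below is illustrative only: the `ToyRobot` class and its `bumped`, `step`, and state attributes are invented for this example, standing in for whatever sensor and motor interface a real robot would expose.

```python
import random

class ToyRobot:
    """A minimal sketch of the bump-and-back rule: on a bump,
    reverse for 10 seconds, turn 20 degrees right, then resume."""

    def __init__(self):
        self.heading = 0.0    # degrees, 0 = initial direction
        self.position = 0.0   # net distance traveled (1 unit per second)

    def bumped(self):
        # Stand-in for a real bump sensor; here it fires at random.
        return random.random() < 0.2

    def step(self):
        if self.bumped():
            self.position -= 10                       # back up for 10 seconds
            self.heading = (self.heading + 20) % 360  # turn 20 degrees right
        else:
            self.position += 1                        # otherwise, go forward

robot = ToyRobot()
for _ in range(50):
    robot.step()
```

There is no model of the obstacle anywhere in this loop, yet to an observer the robot appears to notice the obstacle and steer around it.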

A blind person might do the same. For that matter, an automobile-navigation system can listen to you say, “Fourteenth and U,” show you the way on a moving map on its screen, and tell you in a pleasant voice, “Turn left at the next light,” and so on. If your spouse did this, you would regard it as intelligent. When a machine does it, is it? Why not?

A problem that proponents of AI regularly face is this: When we know how a machine does something “intelligent,” it ceases to be regarded as intelligent. If I beat the world’s chess champion, I’d be regarded as highly bright.

When a computer does it by sorting through all possible moves, people say, “That’s not thinking. It’s just being stupid real fast.” Well, but it won. Further, since any program for a digital computer consists of a list of exceedingly simple-minded instructions, it’s hard to see where the intelligence is.
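"Sorting through all possible moves" is game-tree search, and the idea fits in a few lines. Below is a minimal sketch on a toy take-away game (players alternately remove one to three stones; whoever takes the last stone wins) rather than chess; the game and function name are chosen for illustration, but the exhaustive look-ahead is the same idea chess programs use, scaled up enormously and with pruning.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones):
    """True if the player to move can force a win, determined by
    exhaustively trying every legal move (take 1, 2, or 3 stones).
    A move wins if it leaves the opponent in a losing position."""
    return any(not winning(stones - take)
               for take in (1, 2, 3)
               if take <= stones)
```

Each individual instruction here is simple-minded, exactly as the column says; the "intelligence," such as it is, emerges from checking every possibility very fast.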

A fellow named Alan Turing, one of the whizzes at Bletchley Park in England who cracked the German Enigma codes, came up with this test for intelligence: Put a person in a room with two teletypes. One is connected to a supposedly intelligent computer. The other connects to another room containing a human being. You don’t know which is which. You carry on a conversation with each by teletype. If you can’t tell which is the machine, then that machine is intelligent.

This is called the Turing Test, and there is a contest (the Loebner contest, www.rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/History/MachineIntelligence1.html) with a $100,000 prize for the first AI machine that passes the Turing Test. Don’t hold your breath.

A problem with the Turing Test is that it defines intelligence as human intelligence. Yet chimpanzees are usually regarded as somewhat intelligent, German shepherds as more intelligent than beagles, mice that can solve a maze in a minute as more intelligent than those that can’t.

Making a robot that behaved indistinguishably from a mouse would be a lot easier. Would it qualify as intelligent at the mouse level?
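Mouse-level maze solving is in fact routine to mechanize. The sketch below uses breadth-first search on a small grid maze; the maze layout and the function name are made up for the example, but the algorithm is the standard shortest-path search.

```python
from collections import deque

def solve_maze(maze, start, goal):
    """Breadth-first search over a grid maze: '#' is a wall, any other
    character is open floor. Returns the length of a shortest path
    from start to goal, or None if the goal is unreachable."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

maze = ["S..",
        ".#.",
        "..G"]
```

Unlike a real mouse, this solver never takes a wrong turn twice; whether that makes it more intelligent than the mouse or less is exactly the question the column raises.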

We already have machines that do a lot of intelligent-seeming things, and we’ll have a lot more. We just won’t call them intelligent if (a) they have been around for more than a week or (b) we know how they do it.

This means that AI people can’t win. A machine translates languages the way Google does? Doesn’t count as intelligent: It isn’t perfect, and anyway I’m used to it.
