Whatever happened to artificial intelligence? There was a time, a couple of decades ago, when computers were soon expected to behave intelligently — to talk to people in English, answer questions, and make complex decisions.
What people really had in mind was an artificial human. HAL, the computer in the movie “2001: A Space Odyssey,” comes to mind.
It didn’t happen. Today, although computers have advanced phenomenally in power, we see them doing very little that reasonably could be called intelligent. We still can’t talk to computers about the meaning of art or why Rome fell. Why?
A lot of research is done in what engineers call artificial intelligence. Most of the research isn’t about the design of an artificial brain. Why hasn’t such a brain come about? First, it’s harder than many thought it would be. When I was taking computer courses around 1970, we thought that if computers were just fast enough, they would beat the problem into submission.
It didn’t work that way. Computers were just the wrong tool for some jobs. Imagine trying to paint a picture with a claw hammer.
Computers are good at doing monstrous amounts of calculation in a hurry. Every time Intel Corp. and AMD Inc. come out with a faster processor, computers get better — at doing huge amounts of calculation. They don’t get much better at doing what they are not much good for in the first place.
Much of what people think of as “intelligence” involves language. We think in languages. Well, computers don’t. Computers do arithmetic in a hurry. It is a bear of a problem to turn arithmetic into an understanding of English.
You can fake it, and I’ve seen software that does a pretty good job of seeming intelligent about specific subjects. This works best when there is little ambiguity. For example, “tonsillectomy” means only one thing. “Run” means lots of things. Politicians run, paint runs, stockings run, contracts run, athletes run. People have the context and effortlessly make the associations necessary to understand language. Computers are miserable at it. However people think, it isn’t how computers do it.
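The problem can be made concrete with a toy sketch. The sense inventory below is hypothetical, but the point is general: a literal dictionary lookup hands the machine every meaning of a word and nothing in the word itself says which one the speaker intended.

```python
# Toy illustration (hypothetical sense inventory) of lexical ambiguity:
# a bare lookup returns every sense of a word, with no way to choose.
SENSES = {
    "run": [
        "move quickly on foot",          # athletes run
        "seek elected office",           # politicians run
        "flow or spread",                # paint runs
        "develop a ladder or tear",      # stockings run
        "remain in force",               # contracts run
    ],
    "tonsillectomy": ["surgical removal of the tonsils"],
}

def senses(word):
    """Return every dictionary sense of a word -- context-free lookup."""
    return SENSES.get(word.lower(), [])

print(len(senses("run")))             # ambiguous: many senses
print(len(senses("tonsillectomy")))   # unambiguous: one sense
```

A person resolves the choice from context without noticing; the lookup table has no context to use.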
Another reason for the apparent lack of machine intelligence is that, if you know how a computer does something, it no longer seems intelligent. We all remember when IBM’s Deep Blue beat world chess champion Garry Kasparov in 1997. If Deep Blue could outthink Mr. Kasparov, then it must be pretty smart.
Then you read that Deep Blue just tried all possible moves. This is a grotesque oversimplification of sophisticated algorithms constructed by very smart people, but it is essentially correct. “Oh,” you say, reasonably enough, “Then it doesn’t think. It’s just sort of mechanical.” Yes. But it won.
An example of what might be regarded as intelligent behavior is automated translation of language. This is done by Google, for example. Run a search on whatever you like in Spanish and then click on “Translate this page.” The resulting translation probably will contain mistakes, some ridiculous, and the mistakes will increase with the ambiguity of the material. But it’s unmistakably a translation.
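Why the mistakes grow with ambiguity is easy to see in a deliberately crude sketch: a word-for-word "translator" with a hypothetical mini-dictionary, far simpler than the statistical systems real services use. Each word is mapped in isolation, so an ambiguous word gets whichever sense the table happens to hold.

```python
# Toy word-for-word Spanish-to-English "translator" (hypothetical
# mini-dictionary). "banco" means both "bank" and "bench"; a lookup
# table keeps only one, so ambiguous sentences come out wrong.
ES_TO_EN = {
    "el": "the", "banco": "bank",
    "está": "is", "en": "in", "parque": "park",
}

def translate(sentence):
    """Map each word independently; unknown words pass through."""
    return " ".join(ES_TO_EN.get(w, w) for w in sentence.lower().split())

print(translate("El banco está en el parque"))
# -> "the bank is in the park" (a park bench, not a bank)
```

The output is unmistakably a translation, and unmistakably wrong wherever a word had more than one meaning.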
Finally, the use in connection with computers of words such as “memory,” “language” and “logic” raised expectations of potential human likeness that weren’t supported by reality. Computers are still claw hammers.