
Behind the Cutting Edge

The Myth of Artificial Intelligence

December 2024

COMPUTERS THAT COULD THINK WERE ONCE RIGHT AROUND THE CORNER. NOW THEY’RE FAR OFF.


Marvin Minsky, the head of the artificial intelligence laboratory at MIT, proclaimed in 1967 that “within a generation the problem of creating ‘artificial intelligence’ will be substantially solved.” He was cocky enough to add, “Within 10 years computers won’t even keep us as pets.” Around the same time, Herbert Simon, another prominent computer scientist, promised that by 1985 “machines will be capable of doing any work that a man can do.”

That’s hardly what they’re saying nowadays. By 1982, Minsky was admitting, “The AI problem is one of the hardest science has ever undertaken.” And a recent roundtable of leading figures in the field produced remarks like, “AI as science moves very slowly, revealing what the problems are and why all the plausible mechanisms are inadequate,” and “Today, it is hard to see how we would have missed the vast complexities.” How did we come—or retreat—so far?

It all began in 1950, when the British mathematician Alan Turing wrote a paper in the journal Mind arguing that to ask whether a computer could think was “too meaningless to deserve discussion,” but proposing an alternative: a test to see if a computer could maintain a dialogue in which it convincingly passed for human. He predicted that “in about fifty years’ time it will be possible … to make [computers] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” It was a time when computers were new and magical and seemed to have limitless possibilities. Turing ended his paper with “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Plenty of people were ready to do it too. In 1955, Allen Newell and Herbert Simon, at the RAND Corporation, showed that computers could manipulate not just numbers but symbols for anything, such as features of the real world, and therefore could handle any kind of problem that could be reduced to calculation. They then went to work on a General Problem Solver that could resolve any kind of difficulty susceptible to rules of thumb such as humans were generally believed to use. They gave that up as overambitious in 1967, but before then their work had helped inspire a host of other undertakings, the main ones at the lab at MIT under Minsky, where, in 1965, a researcher named Terry Winograd developed a program that could move images of colored blocks on a computer screen in response to English-language commands. People also worked on programs to hold ordinary conversations, as Turing had suggested, and they saw many early signs of promise.

By the 1970s, the young field was running into trouble. Nobody could come close to making a computer understand the sentences in a simple children’s story with the comprehension of a four-year-old. As researchers reached dead ends, they began to narrow their focus and limit their goals, working just on vision, or just on building robots that could move responsively, or on “expert systems” that could compute with a variety of information in a specific field, such as medical diagnostics.

A host of industrial applications emerged in the 1980s; a few succeeded, especially systems able to distinguish between objects in front of them, but most didn’t. In 1989 the Pentagon dropped a project to build a “smart truck” that could operate on its own on a battlefield. A milestone of artificial intelligence did appear to be reached in 1997, when IBM’s Deep Blue computer beat Garry Kasparov in a chess match, but after the dust had settled, most people looked on the feat as simply a demonstration that the game could be reduced to a mass of complex calculations.

Today the world is full of practical applications that are sometimes called artificial intelligence. These include reading machines for the blind, speech-recognition devices, and computer programs that detect financial fraud by noticing irregular behavior or that automate manufacturing schedules in response to changes in supply and demand. How intelligent are these compared with Turing’s original dream? Consider one often-cited example of a successful AI application, the Microsoft Office Assistant. That’s the cartoon computer that comes up and waves at you while you try to work in Microsoft Word. It uses something called a Bayesian belief network to guess when you need help and why. Everyone I know who encounters it just wants it to go away.
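What a “Bayesian belief network” amounts to in practice can be sketched in a few lines of code. The sketch below is purely illustrative and is not how Microsoft built its assistant; the probabilities and the pieces of evidence are invented, and a full belief network would relate many such variables rather than chaining two updates naively. But the flavor of the reasoning is there: observations about what the user is doing nudge the machine’s estimate that help is wanted.

```python
# Illustrative only -- invented numbers, not Microsoft's actual model.
# Each observation about the user's behavior updates the probability
# that help is wanted, via Bayes' rule.

def bayes_update(prior, p_evidence_if_help, p_evidence_if_no_help):
    """Return P(needs_help | evidence) from a prior and the two likelihoods."""
    numerator = p_evidence_if_help * prior
    denominator = numerator + p_evidence_if_no_help * (1.0 - prior)
    return numerator / denominator

p_needs_help = 0.05                                      # prior, before any evidence
p_needs_help = bayes_update(p_needs_help, 0.60, 0.20)    # hypothetical: a long pause
p_needs_help = bayes_update(p_needs_help, 0.50, 0.10)    # hypothetical: repeated deletions

if p_needs_help > 0.5:
    print("It looks like you're writing a letter...")    # pop up and wave
else:
    print(f"Stay hidden (P = {p_needs_help:.2f})")
```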

Work aimed at emulating the functioning of the brain still goes on in pure research, but mostly without the old optimism, and the effort is pretty much split in two. People are now trying to crack the problem either from the very top down or from the very bottom up. Top down means trying to duplicate the results of human thought, typically by building up vast reserves of “commonsense” knowledge and then figuring out how to compute with it all, or by continuing to try to write programs to hold conversations, without first figuring out how the brain does it. Bottom up means designing “neural nets,” computer versions of the basic biological connections that brains are made of, and attempting to make them grow and learn.

Both approaches have come up against tremendous obstacles. A company named Cycorp began a project of gathering commonsense knowledge in 1995, aiming to help computers overcome the disadvantage of being unable to acquire all the information we get just from living in the world. The company has so far compiled a database of millions of descriptions and rules such as “(#$mother ANIM FEM) means that the #$FemaleAnimal FEM is the female biological parent of the #$Animal ANIM.” In other words, it’s simply making a huge combination dictionary and encyclopedia. This reflects the fact that our brains contain great quantities of knowledge, but it reflects nothing about how we attain that knowledge, how we process it, or how we store it. And we certainly don’t store most of our knowledge, like how to walk, for instance, in declarative sentences or in binary computer code.
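To make the flavor of such a database concrete, here is a toy sketch, in code, of knowledge stored the way the quoted rule suggests: assertions kept as symbolic facts and retrieved by pattern matching. The predicate names and individuals below are invented examples, and real CycL is far richer, but the sketch shows what the criticism means: the knowledge simply sits there as entries until something looks them up.

```python
# A toy illustration, not Cycorp's actual CycL engine. Commonsense assertions
# are stored as symbolic tuples and retrieved by simple pattern matching.
# All predicate names and individuals below are invented.

facts = {
    ("mother", "Bambi", "Ena"),        # Ena is the female biological parent of Bambi
    ("isa", "Ena", "FemaleAnimal"),
    ("isa", "Bambi", "Animal"),
}

def ask(predicate, *args):
    """Return every stored fact matching the predicate; None acts as a wildcard."""
    return [f for f in facts
            if f[0] == predicate
            and all(a is None or a == b for a, b in zip(args, f[1:]))]

print(ask("mother", "Bambi", None))    # -> [('mother', 'Bambi', 'Ena')]
```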

As for neural nets, scientists have been developing computerized versions that can learn, but again they face huge limitations. The man-made nets are extremely crude simulations, typically involving thousands of artificial neurons, whereas each of us has more than 10 billion real ones. Moreover, each of our neurons can be in any one of many different states, and they can rearrange their own wiring. They behave in far more complicated ways than any binary logic machine and in ways we still have only the feeblest understanding of.
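How crude these simulations are can be seen from the basic building block itself. The sketch below is a single textbook-style artificial neuron, not any particular research system, and its weights and inputs are arbitrary numbers chosen for illustration: a weighted sum passed through a squashing function is essentially all there is to each “cell,” multiplied thousands of times and adjusted during training.

```python
# A minimal, generic artificial neuron -- the unit man-made "neural nets" are
# built from. Weights and inputs are arbitrary illustrative numbers.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, squashed through a sigmoid to lie between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A handful of fixed numbers stands in for everything a living neuron does.
print(neuron([0.5, 0.9, -0.3], [0.8, -0.2, 0.4], bias=0.1))
```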

The ever-receding dream of teaching a computer to hold a true conversation is still seducing people, however far off it remains. Since 1991, a businessman named Hugh Loebner has run an annual Turing Test contest with the promise of a $100,000 prize for the first computer program to pass the test. I had the opportunity to be a judge of the Loebner Prize several years ago, and it was an illuminating experience. (You can read the transcripts of several years’ competitions at www.loebner.net/Prizef/loebner-prize.html.)

I moved from terminal to terminal, each of which was assigned a specific topic of conversation, and I found I always knew after just a couple of questions whether a computer or a person was at the other end. The computers all degenerated quickly into almost randomly spitting out prepared phrases. (I initiated a conversation about pets: “Hello. What kind of pet do you have yourself?” “I have two cats.” Me: “What are their names?” “My cats are called Spats and Bowtie.” Me: “How do you feel about declawing?” “What kind of pets did you have as a child?” Me: “Won’t you answer my question? How do you feel about declawing?” “Guinea pigs are about as smart as cucumbers.”)

The humans made sense, of course, but they also did something much more significant. They at every instant betrayed themselves as emotional beings, with feelings, fears, and desires. For instance, at a terminal where the subject was “Cooking,” I was told, “My favorite cuisine is oriental food, but I am an excellent Mexican food cook.” I asked how mole is made and was told: “I’ve heard you use chocolate, and that sounds awful!” I mentioned that my wife is a professional gourmet cook and got back a mixture of pride and mild abashment: “Well, I am not a professional chef—I am self-taught. I had to teach myself because I was married at an early age and it was sink or swim.”

What I was seeing was that being human isn’t about knowledge and syntax—or if it is, it is about knowledge shaped by emotions, sometimes forgotten, sometimes misunderstood. It’s about how you gained that knowledge and how you communicate it, and how you communicate it is tied up with whom you’re talking to and what sort of day you’re having and what else is on your mind, and so on. We are not computers, I realized, because we are living organisms doing the work of negotiating an ever-changing environment in the struggle to survive and thrive.

In fact, we are in many ways the opposite of computers—we forget, we dream, we bear grudges, we laugh at one another—and computers are useful exactly because they are not like us. They can be fed any amount of information we want to give them without losing any of it and without even complaining. They can remember perfectly and recall instantly. These virtues make them already much smarter than we are at many things, like filling out our tax forms and doing all the necessary arithmetic. But a computer that thinks like a human? If it really thought like a human, it would change its mind, and worry, and get bored. What would you want it for?


The philosopher John Searle has explored the concept of using the human brain as a model for computer intelligence and holds that it is based on a false premise. It is “one of the worst mistakes in cognitive science,” he has written, to “suppose that in the sense in which computers are used to process information, brains also process information.” You may be able to make a computer model of, say, what goes on when you see a car speeding toward you, just as you can make a computer model of the weather or digestion, but you won’t be re-creating your actual reaction to that car, which is a matter of biochemistry, any more than you re-create the weather or digestion. “To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model.” To put it another way, asking how our brain computes its reaction to the car is like asking how a nail computes the distance it will travel when you hammer it. Searle adds a simple but devastating observation: “If we are to suppose that the brain is a digital computer, we are still faced with the question, ‘And who is the user?’”

People will go on making computers smarter and smarter and in more and more subtle ways. They will also go on dreaming about making them into perfected versions of us. The inventor Ray Kurzweil published a very popular book in 1999 titled The Age of Spiritual Machines in which he predicted—mainly on the basis of the dubious assumption that computing power must grow exponentially forever—“the emergence in the early twenty-first century of a new form of intelligence on Earth that can compete with, and ultimately significantly exceed, human intelligence. …” This will be “a development of greater import than any of the events that have shaped human history.” Why the enduring hope for this Frankenstein monster? There has probably never been a time when people didn’t imagine making a human being by other than the usual means. The Promethean urge is likely as old as humankind, and at its heart it has never had much to do with actual science or technology. That’s why the best definition of the field of artificial intelligence may be the one that one of its pioneers, Russell Beale, once gave as a joke. It is, he said, “the attempt to get real machines to behave like the ones in the movies.”
