At the dawn of Artificial Intelligence (AI), a half century ago, scientists predicted that computers would soon be speaking and answering questions fluently. A pioneer in the field, Herbert Simon, predicted in 1965 that “machines w[ould] be capable, within twenty years, of doing any work a man can do.” These were the glory days of AI, a period of boundless vision and bounteous funding. Machines, it seemed, would soon master language, recognize faces, and maneuver, as robots, in factories, hospitals, and homes. In short, computer scientists would create a superendowed class of electronic servants. This led, of course, to failed promises, to the point that Artificial Intelligence became a term of derision. Bold projects to build bionic experts and conversational computers lost their sponsors. A long AI winter ensued, lasting through much of the ’80s and ’90s.
What went wrong? In retrospect, it seems almost inconceivable that leading scientists, including Nobel laureates like Simon, believed it would be so easy. They certainly appreciated the complexity of the human brain. But they also realized that a lot of that complexity was tied up in dreams, memories, guilt, regrets, faith, desires, along with the controls to maintain the physical body. Machines wouldn’t have to bother with those details. All they needed was to understand the elements of the world and how they were related to one another. Machines trained in the particulars of sick people, ambulances, and hospitals, for example, could conceivably devote their analytical skills to optimizing emergency services. Yet teaching the machines proved extraordinarily difficult. One of the biggest challenges was to anticipate the responses of humans. The machines weren’t up to it. And they had serious trouble with even the most basic forms of perception, such as seeing. For example, researchers struggled to teach machines to perceive the edges of things in the physical world. As it turned out, it required experience and knowledge and advanced powers of pattern recognition just to look through a window and understand that the oak tree in the yard was a separate entity. It was not connected to the shed on the other side of it or a pattern on the glass or the wallpaper surrounding the window.
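To make the edge problem concrete, here is a minimal sketch, in Python, of the kind of low-level computation involved: a Sobel-style gradient detector. (The toy image, threshold, and function name are illustrative assumptions, not anything from the research described here.) All it finds is abrupt changes in brightness; deciding that a run of such pixels is an oak tree rather than a shed or wallpaper is exactly the part the sketch does not attempt, because that was the unsolved problem.

```python
import numpy as np

def sobel_edges(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Mark pixels where brightness changes sharply (a crude 'edge' map).

    Finding these pixels is the easy part; grouping them into objects
    (tree vs. shed vs. wallpaper) is not shown, because early AI never
    solved it this way.
    """
    # Classic Sobel kernels: horizontal and vertical brightness gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Convolve by hand to keep the sketch dependency-free.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()  # normalize to [0, 1]
    return magnitude > threshold      # True wherever an "edge" sits

# Toy image: a bright square (the "tree") on a dark background (the "yard").
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(sobel_edges(img).astype(int))
```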
The biggest obstacle, though, was language. In the early days, it looked beguilingly easy. It was just a matter of programming the machine with vocabulary and linking it all together with a few thousand rules—the kind you’d find in a grammar book. If the machine still underperformed? Well, just give it more vocabulary, more rules.
Once the electronic brain mastered language, it was simply a question of teaching it about the world. Asia’s over there. This is the United States. We have a democracy. That’s the Pacific Ocean between the two. It’s big, and wet. If researchers kept adding facts, millions of them, and defining their relationships, by the end of the grant cycle they might have a talking, thinking machine that “knew” what humans did.
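For a sense of what that fact-by-fact program looked like in spirit, here is a minimal sketch, in Python, of a hand-built knowledge base with hand-named relations, loosely in the manner of the symbolic AI projects of that era; the predicates, facts, and query helper are all invented for illustration.

```python
# A hand-entered fact base: every assertion and every relation is
# typed in by a person, one grant cycle at a time.
# (Illustrative sketch; these predicates and facts are invented here.)
facts = {
    ("Asia", "is_a", "continent"),
    ("United States", "is_a", "country"),
    ("United States", "has_government", "democracy"),
    ("Pacific Ocean", "is_a", "ocean"),
    ("Pacific Ocean", "lies_between", "Asia"),
    ("Pacific Ocean", "lies_between", "United States"),
    ("Pacific Ocean", "has_property", "big"),
    ("Pacific Ocean", "has_property", "wet"),
}

def query(subject: str, relation: str) -> list[str]:
    """Return every object linked to `subject` by `relation`."""
    return sorted(o for (s, r, o) in facts if s == subject and r == relation)

print(query("Pacific Ocean", "lies_between"))
# ['Asia', 'United States'] -- yet the machine still doesn't "know"
# what an ocean is, only that these strings are linked.
```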
Language, of course, turns out to be far more complicated. Jaime Carbonell, a top researcher at Carnegie Mellon University, has been teaching language to machines for decades. The way he describes it, our minds are swimming with cultural and historical allusions, accumulated over millennia, along with a complex scheme of who’s who. Words, when spoken or read, vary wildly according to context. (Just imagine if the cops in New York raced off to Citi Field, sirens wailing, every time someone was heard saying, “The Mets are getting killed!”)
Carbonell, sitting in his Pittsburgh office, gave another example. He issued a statement: “I