There was one small problem. For months, he couldn’t get any takers. Jeopardy, with its puns and strangely phrased clues, seemed too hard for a computer. IBM was already building machines to answer questions, and their performance, in speed and precision, came nowhere close to that of even a moderately informed person. How could the next machine grow so much smarter?
And while researchers regarded the challenge as daunting, many people, Horn knew, saw it precisely the other way. Answering questions? Didn’t Google already do that?
Horn eventually enticed David Ferrucci and his team to pursue his vision. Ferrucci, then in his midforties, wore a dark brown beard wrapped around his mouth and wire-rimmed glasses. An expert in Artificial Intelligence (AI), he had a native New Yorker’s gift of the gab and an openness, even about his own life, that was at times jolting. (“I have a growing list of potentially mortal diseases,” he said years later. “People order an MRI a week for me.”) But he also had a wide-ranging intellect. Early in his tenure at IBM he and a friend tried, in their spare time, to teach a machine to write fiction by itself. They trained it for various literary themes, from love to betrayal, and they named it Brutus, for Julius Caesar’s traitorous comrade. Ferrucci was comfortable talking about everything from the details of computational linguistics to the evolution of life on earth and the nature of human thought. This made him an ideal ambassador for a Jeopardy machine. After all, his project would raise a broad range of issues, and fears, about the role of brainy machines in society. Would they compete for jobs? Could they establish their own agendas, like the infamous computer HAL in 2001: A Space Odyssey, and take control? What was the future of knowledge and intelligence, and how would brains and machines divvy up the cognitive work? Ferrucci was always ready with an opinion. At the same time, he could address the strategic questions—how these machines would fit into hundreds of businesses, and why the project he was working on, as he saw it, went far beyond Google.
The Google question was his starting point; until people understood that his machine was not just a souped-up search engine, the project made little sense. For certain types of questions, Ferrucci said, a search engine could come up with answers. These were simple sentences with concrete results, what he and his team called factoids. For example: “What is the tallest mountain in Africa?” A search engine would pick out the three key words from that sentence and in a fraction of a second suggest Tanzania’s 19,340-foot-high Kilimanjaro. This worked, Ferrucci said, for about 30 percent of Jeopardy questions. But performance at that low level would condemn Watson to defeat at the hands of human amateurs.
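The keyword-picking step the passage describes can be sketched in a few lines. This is a toy illustration only, not IBM's or Google's actual pipeline; the `STOPWORDS` set and the `key_terms` function are invented here to show how a factoid question reduces to its content words.

```python
# Toy sketch of "factoid" keyword extraction: a search engine reduces
# "What is the tallest mountain in Africa?" to its three content words.
# The stopword list below is a deliberately tiny, hypothetical example.
STOPWORDS = {"what", "is", "the", "in", "of", "a", "an"}

def key_terms(question: str) -> list[str]:
    """Strip punctuation and stopwords, keeping content words in order."""
    words = question.lower().replace("?", "").split()
    return [w for w in words if w not in STOPWORDS]

print(key_terms("What is the tallest mountain in Africa?"))
# -> ['tallest', 'mountain', 'africa']
```

Those surviving terms are what a search engine would match against its index; real systems weight terms statistically rather than filtering a fixed list.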
A Jeopardy machine would have to master far thornier questions. Just as important, it would have to judge its level of confidence in an answer. Google’s algorithms delivered users to the statistically most likely outposts of the Web and left it to the readers to find the answers. “A search engine doesn’t know that it understood the question and that the content is right,” Ferrucci said. But a Jeopardy machine would have to find answers and then decide for itself if they were worth betting on. Without this judgment, the machine would never know when to buzz. It would require complex analysis
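The judgment Ferrucci describes, finding an answer and then deciding whether it is worth betting on, amounts to a confidence threshold on candidate answers. The sketch below is hypothetical (the `should_buzz` function, the threshold value, and the candidate scores are all invented for illustration), but it captures the decision the text says a Jeopardy machine must make before buzzing.

```python
# Hypothetical sketch: buzz only when confidence in the best candidate
# answer clears a threshold. Scores here are made up for illustration.
def should_buzz(candidates: dict[str, float],
                threshold: float = 0.5) -> tuple[str, bool]:
    """Return the top-scoring answer and whether to bet on it."""
    best = max(candidates, key=candidates.get)
    return best, candidates[best] >= threshold

answer, buzz = should_buzz({"Kilimanjaro": 0.92, "Mount Kenya": 0.31})
print(answer, buzz)  # -> Kilimanjaro True
```

The key point is the second return value: unlike a search engine, which simply ranks results, the machine must commit to a yes-or-no decision about its own answer.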