Meanwhile, other scientists in the field pursue a different type of question-answering system—a machine that actually knows things. For two generations, an entire community in AI has tried to teach computers about the world, describing the links between oxygen and hydrogen, Indiana and Ohio, tables and chairs. The goal is to build knowledge engines, machines very much like Watson but capable of much deeper reasoning. They have to know things and understand certain relationships to come up with insights. Could the emergence of a data-crunching wonder like Watson short-circuit their research? Or could their work help Watson grow from a dilettante into a scholar?
In the first years of the twenty-first century, Paul Allen, the cofounder of Microsoft, was pondering Aristotle. For several decades in the fourth century BC, that single Greek philosopher was believed to hold most of the world’s scientific knowledge in his head. Aristotle was like the Internet and Google combined. He stored the knowledge and located it. In a sense, he outperformed the Internet because he combined his factual knowledge with a mastery of language and context. He could answer questions fluently, and he was reputedly a wonderful teacher.
This isn’t to say that as an information system, Aristotle had no shortcomings. First, the universe of scientific knowledge in his day was tiny. (Otherwise it wouldn’t have fit into one head, no matter how brilliant.) What’s more, the bandwidth in and out of his prodigious mind was severely limited. Only a small group of philosophers and students (including the future Alexander the Great) enjoyed access to it, and then only during certain hours of the day, when the philosopher turned his attention to them. He did have to study, after all. Maintaining omniscience—or even a semblance of it—required hard work.
For perhaps the first time since the philosopher’s death, as Allen saw it, a single information system—the Internet—could host the universe of scientific knowledge, or at least a big chunk of it. But how could people gain access to this treasure, learn from it, winnow the truth from fiction and innuendo? How could computers teach us? The solution, it seemed to him, was to create a question-answering system for science, a digital Aristotle.
For years, Allen had been plowing millions into research on computing and the human brain. In 2003, he directed his technology incubator, Vulcan Inc., of Seattle, to sponsor long-range research to develop a digital Aristotle. The Vulcan team called it Project Halo. This scientific expert, they hoped, would fill a number of roles, from education to research. It would answer questions for students, maybe even develop a new type of interactive textbook. And it would serve as an extravagantly well-read research assistant in laboratories.
For Halo to succeed in these roles, it needed to do more than simply find things. It had to weave concepts together. This meant understanding, for example, that when water reaches 100 degrees centigrade it turns into steam and behaves very differently. Plenty of computers could impart that information. But how many could incorporate such knowledge into their analysis and reason from it? The idea of Halo was to build a system that, at least by a liberal definition of the word, could think.
The pilot project was to build a computer that could pass the college Advanced Placement tests in chemistry. Chemistry, said Noah Friedland, who ran the project for Vulcan, seemed like the ideal subject for a computer. It was a hard science “without a lot of psychological interpretations.” Facts were facts, or at least closer to them in chemistry than in squishier domains, like economics. And unlike biology, in which tissue scans and genomic research were unveiling new discoveries every month or