Final Jeopardy (Alexandra Cooper Mysteries) - Linda Fairstein [67]

two, chemistry was fairly settled. Halo would also sidestep the complications that came with natural language. Vulcan let the three competing companies, two American and one German, translate the questions from English into a logic language that their systems could understand. At some point in the future, they hoped, this digital Aristotle would banter back and forth in human languages. But in the four-month pilot, it just had to master the knowledge and logic of high school chemistry.

The three systems passed the test, albeit with middling scores. But if you looked at the process, you’d hardly know that machines were involved. Teaching chemistry to these systems required a massive use of human brainpower. Teams of humans—knowledge engineers—had to break down the fundamentals of chemistry into components that the computers could handle. Since the computer couldn’t develop concepts on its own, it had to learn them as exhaustive lists and laws. “We looked at the cost, and we said, ‘Gee, it costs $10,000 per textbook page to formulate this knowledge,’” Friedland said.
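The "exhaustive lists and laws" approach described above can be made concrete with a toy sketch. This is my own invented simplification, not Project Halo's actual knowledge format: a handful of chemistry facts hand-encoded as data, with a single crude "law" applied on top. Every fact and rule here would have to be typed in by a human knowledge engineer, which is exactly why the cost per textbook page ran so high.

```python
# Toy illustration of knowledge engineering: facts as hand-typed data,
# laws as code. Invented simplification, not Project Halo's format.
VALENCE = {"H": 1, "O": 2, "C": 4, "N": 3}  # hand-encoded facts


def is_plausible_binary_compound(elem_a, n_a, elem_b, n_b):
    """Apply a crude valence-balance 'law': the total bonding
    capacity on each side must match (e.g., H2O: 2*1 == 1*2)."""
    return VALENCE[elem_a] * n_a == VALENCE[elem_b] * n_b
```

Here `is_plausible_binary_compound("H", 2, "O", 1)` accepts water while rejecting a formula like H3O. The machine contributes nothing beyond mechanical application of the rule; all of the chemistry lives in the human-authored table and formula.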

It seemed ludicrous. Instead of enlisting machines to help sort through the cascades of new scientific information, the machines were enlisting humans to encode the tiniest fraction of it—and at a frightful cost. The Vulcan team went on to explore ways in which thousands, or even millions, of humans could teach these machines more efficiently. In their vision, entire communities of experts would educate these digital Aristotles, much the way online communities were contributing their knowledge to create Wikipedia. Work has continued through the decade, but the two principles behind the Halo thinking haven’t changed: First, smart machines require smart teachers, and only humans are up to the job. Second, to provide valuable answers, these computers have to be fed factual knowledge, laws, formulas, and equations.

Not everyone agrees. Another faction, closely associated with search engines, is approaching machine intelligence from a different angle. They often remove human experts from the training process altogether and let computers, guided by algorithms, study largely on their own. These are the statisticians. They’re closer to Watson’s camp. For decades, they’ve been at odds with their rule-writing colleagues. But their approach registered a dramatic breakthrough in 2005, when the U.S. National Institute of Standards and Technology (NIST) held one of its periodic competitions on machine translation. The government was ravenous for this translation technology. If machines could automatically monitor and translate Internet traffic, analysts might get a jump on trends in trade and technology and, even more important, terrorism. The competition that year focused on machine translation from Chinese and Arabic into English. And it drew the usual players, including a joint team from IBM and Carnegie Mellon and a handful of competitors from Europe. Many of these teams, with their blend of experts in linguistics, cognitive psychology, and computer science, had decades of experience working on translations.

One new player showed up: Google. The search giant had been hiring experts in machine translation, but its team differed from the others in one aspect: No one was expert in Arabic or Chinese. Forget the nuances of language. They would do it with math. Instead of translating based on semantic and grammatical structure, the interplay of the verbs and objects and prepositional phrases, their computers were focusing purely on statistical relationships. The Google team had fed millions of translated documents, many of them from the United Nations, into their computers and supplemented them with a multitude of natural-language text culled from the Web. This training set dwarfed their competitors’. Without knowing what the words meant, their computers had learned to associate certain strings of words in Arabic and Chinese with their English equivalents. Since they had so very many examples to learn from, these statistical models caught nuances that had long confounded machines.
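The core idea, learning word correspondences from nothing but co-occurrence statistics in aligned sentence pairs, can be sketched in a few lines. This is my own toy illustration with an invented four-sentence corpus, not Google's actual system; production systems of that era used far more sophisticated alignment models trained on millions of sentence pairs.

```python
from collections import defaultdict

# Toy sketch: learn word correspondences purely from co-occurrence
# counts in a tiny invented English-French parallel corpus, with no
# grammar or dictionary knowledge. Not Google's actual system.
parallel_corpus = [
    ("the house is red", "la maison est rouge"),
    ("the house is big", "la maison est grande"),
    ("the car is red", "la voiture est rouge"),
    ("the car is big", "la voiture est grande"),
]

cooc = defaultdict(lambda: defaultdict(int))  # cooc[src][tgt] = count
tgt_total = defaultdict(int)                  # overall frequency of each target word

for src_sentence, tgt_sentence in parallel_corpus:
    for s in src_sentence.split():
        for t in tgt_sentence.split():
            cooc[s][t] += 1
            tgt_total[t] += 1


def best_translation(src_word):
    # Normalize by the target word's overall frequency so that
    # ubiquitous words like "la" and "est" don't win every pairing.
    candidates = cooc[src_word]
    return max(candidates, key=lambda t: candidates[t] / tgt_total[t])
```

Even on four sentences, `best_translation("red")` picks out "rouge" and `best_translation("house")` picks out "maison", with no one having told the program what any of the words mean. Scale the same principle up to millions of UN documents and the statistics start to capture nuances that hand-written rules miss.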
