Hassabis had followed an unusual path toward AI research. At thirteen, he was the highest-ranked chess player of his age on earth. But computers were already making inroads in chess. So why dedicate his brain, which he had every reason to believe was exceptional, to a field that machines would soon conquer? (From the perspective of futurists, chess was an early sighting of the Singularity.) Even as he played chess, Hassabis said later, he was interested in what was going on in his head—and how to transmit those signals to machines. “I knew then what I wanted to do, and I had a plan for getting there.”
The first step was to drop chess and dive into an area that was attracting (and arguably, shaping) the brains of many in his generation: video games. By seventeen, he was the lead developer on the game “Theme Park.” It sold millions of copies and won industry awards. He went on to Cambridge for a degree in computer science and then founded a video game company, Elixir Studios, when he was twenty-two. While running the company, Hassabis participated every year in the British “Mind Sports Olympiad,” where brain-game aficionados gathered to compete in all kinds of contests, including chess, poker, bridge, Go, and backgammon. In six years, he won the championship five times.
The way Hassabis described it, this was all leading to his current research. The video game experience gave him a grounding in software and hardware, and an understanding of how humans and computers interact (known in the industry as the man-machine interface). The computer science delivered the tools for AI. And in 2005 he went for the last missing piece, a doctorate in neuroscience.
In his current research at the Gatsby Computational Neuroscience Unit at University College London, Hassabis focuses on the hippocampus. This is the part of the brain that consolidates memories, sifting through the torrents of thoughts, dialogues, sounds, and images pouring into our minds and dispatching selected ones into long-term memory. Something singular occurs during that process, he believes. He thinks that it leads to the creation of concepts, a hallmark of human cognition.
“Knowledge in the brain can be separated into three levels,” he said: perceptual, conceptual, and symbolic. Computers can master two of the three: the perceptual and the symbolic. A computer with vision and audio software can easily count the number of dogs in a kennel or measure the decibel level of their barks. That’s perception. And as Watson and others demonstrate, machines can associate symbols with definitions. That’s their forte. Formal ontologies no doubt place “dog” in the canine group and link to dozens of subgroups, from Chihuahuas to schnauzers.
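To make the symbolic level concrete, here is a toy sketch of how such an ontology looks to a machine. The data and the `is_dog` helper are hypothetical illustrations, not drawn from any real ontology such as WordNet; the point is that every link is a hand-entered label, so the lookup succeeds or fails with no understanding behind it.

```python
# A toy symbolic knowledge base: "dog" is linked to a parent group
# and to subgroups, but the links are only labels.
ontology = {
    "canine": {"subgroups": ["dog", "wolf", "fox"]},
    "dog": {
        "is_a": "canine",
        "definition": "a domesticated carnivorous mammal",
        "subgroups": ["chihuahua", "schnauzer", "great dane", "toy poodle"],
    },
}

def is_dog(animal: str) -> bool:
    """Symbolic lookup: true only if someone entered this exact label."""
    return animal in ontology["dog"]["subgroups"]

print(is_dog("schnauzer"))   # True: the symbol is in the table
print(is_dog("groundhog"))   # False: but only because nobody listed it;
                             # the system has no concept to generalize from
```

A two-year-old who has never seen a schnauzer can still recognize one as a dog; the table above cannot, which is the gap the next paragraph describes.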
But between perceiving the dog and identifying its three-letter symbol, there’s a cognitive gap. Deep down, computers don’t know what dogs are. They cannot create “dog” concepts. A two-year-old girl, in that sense, is far smarter. She can walk into that same kennel, see a Great Dane standing next to a toy poodle, and say, “Doggies!” Between seeing them and naming them, she has already developed a concept of them. It’s remarkably subtle, and she might be hard-pressed, even as she grows older, to explain exactly how she figured out that other animals, like groundhogs or goats, didn’t fit in the dog family. She just knew. Philosophers as far back as Plato have understood that concepts are essential to human thought and human society. And concepts stretch far beyond the kennel. Time, friendship, fairness, work, play, love, cruelty, peace—these are more than words. Until computers can grasp them, they will remain stunted.
The concept generator in the brain, Hassabis believes, is the hippocampal-neocortical consolidation system. It has long been known that the hippocampus sifts through traces of