Final Jeopardy (Alexandra Cooper Mysteries) - Linda Fairstein
So when it came to teaching knowledge machines at the end of the first decade of the twenty-first century, it was a question of picking your poison. Computers that relied on human teachers were slow to learn and frightfully expensive to teach. Those that learned automatically unearthed possible answers with breathtaking speed. But their knowledge was superficial, and they were unable to reason from it. The goal of AI—to marry the speed and range of machines with the depth and subtlety of the human brain—was still awaiting a breakthrough. Some believed it was at hand.
In 1859, the British writer Samuel Butler sailed from England, the most industrial country on earth, to the wilds of New Zealand. There, for a few years, he raised sheep. He was as far away as could be, on the antipodes, but he had the latest books shipped to him. One package included the new work by Charles Darwin, On the Origin of Species. Reading it led Butler to contemplate humanity in an evolutionary context. Presumably, humans had developed over millions of years, and their rhythms, from the perspective of his New Zealand farm, appeared almost timeless. Like sheep, people were born, grew up, worked, procreated, died, and didn’t change much. If the species evolved from one century to the next, it was imperceptible. But across the seas, in London, the face of the earth was changing. High-pressure steam engines, which didn’t exist when his parents were born, were powering trains across the countryside. Information was speeding across Europe through telegraph wires. And this was just the beginning. “In these last few ages,” he wrote, referring to machines, “an entirely new kingdom has sprung up, of which we as yet have only seen what will one day be considered the antediluvian prototypes of the race.” The next step of human evolution, he wrote in an 1863 letter to the editor of a local newspaper, would be led by the progeny of steam engines, electric turbines, and telegraphs. Human beings would eventually cede planetary leadership to machines. (Not to fear, he predicted: The machines would care for us, much the way humans tended to lesser beings.)
What sort of creature [is] man’s next successor in the supremacy of the earth likely to be? We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.
Butler’s vision, and others like it, nourished science fiction for more than a century. But in the waning years of the twentieth century, as the Internet grew to resemble a global intelligence and computers continued to gain in power, legions of technogeeks and philosophers started predicting that the age of machines was almost upon us. They called it the Singularity, a hypothetical time in which progress in technology would feed upon itself feverishly, leading to transformational change.
In August 2010, hundreds of computer scientists, cognitive psychologists, futurists, and curious technophiles descended on San Francisco’s Hyatt hotel, on the Embarcadero, for the two-day Singularity Summit. For most of these people, programming machines to catalogue knowledge and answer questions, whether manually or by machine, was a bit pedestrian. They weren’t looking for advances in technology that already existed. Instead, they were focused on a bolder challenge, the development of deep and broad machine intelligence known as Artificial General Intelligence. This, they believed, would lead