Story of Psychology - Morton Hunt [365]
In one of the best known, conducted in 1973, subjects put on headphones after being told by the experimenters, James Lackner and Merrill Garrett, to pay attention only to what they heard with the left ear and to ignore what they heard with the right one. With the left ear they heard ambiguous sentences, such as “The officer put out the lantern to signal the attack”; simultaneously, with the right ear some heard a sentence that would clarify the ambiguous one if they were paying attention to it (“He extinguished the lantern”), while others heard an irrelevant sentence (“The Red Sox are playing a doubleheader tonight”).
Neither group could say, afterward, what they had heard with the right ear. But when asked the meaning of the ambiguous sentence, those who had heard the irrelevant sentence with the right ear were divided as to whether the ambiguous one had meant that the officer snuffed out or set out the lantern, but nearly all of those who had heard the clarifying sentence chose the snuffed out interpretation. Apparently, the clarifying sentence had been processed simultaneously and unconsciously along with the ambiguous one.103
This was one of several reasons why, during the 1970s, a number of psychologists began to hypothesize that thinking does not proceed serially. Another reason was that serial processing could not account for most human cognitive processes; the neuron is too slow. It operates in milliseconds, so a cognitive process completed in a second or less could comprise no more than about a hundred serial steps. Very few processes are that simple, and many, including perception, recall, speech production, sentence comprehension, and “matching” (pattern or face recognition), require vastly greater numbers.
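The timing argument above is back-of-envelope arithmetic, and can be made explicit. The figures below are illustrative assumptions (roughly ten milliseconds per neuronal operation, a task lasting about a second), not numbers given in the text:

```python
# Rough version of the "hundred serial steps" argument.
# Assumed figures: ~10 ms for a neuron to fire and reset,
# and a cognitive act completed in about one second.
neuron_step_ms = 10        # assumed time per neuronal operation
task_duration_ms = 1_000   # a process that takes a second or less

max_serial_steps = task_duration_ms // neuron_step_ms
print(max_serial_steps)  # → 100
```

If each step in a chain must wait for the previous one, a one-second task leaves room for only on the order of a hundred such steps, far too few for perception or sentence comprehension if they ran serially.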
By 1980 or so, a number of psychologists, information theorists, physicists, and others began developing detailed theories of how a parallel-processing system might work. The theories are extremely technical and involve high-level mathematics, symbolic logic, computer science, schema theory, and other arcana. But David Rumelhart, one of the leaders of the movement, summed up in simple language the thinking that inspired him and fifteen colleagues to develop their version, “Parallel Distributed Processing” (PDP):
Although the brain has slow components, it has very many of them. The human brain contains billions of such processing elements. Rather than organize computation with many, many serial steps, as we do with systems whose steps are very fast, the brain must deploy many, many processing elements cooperatively and in parallel to carry out its activities. These design characteristics, among others, lead, I believe, to a general organization of computing that is fundamentally different from what we are used to.104
PDP also departed radically from the computer metaphor used until then in its explanation of how information is stored. In a computer, information is retained by the states of its transistors. Each is either switched on or off (representing a 0 or a 1), and strings of 0’s and 1’s stand for numbers symbolizing information of all sorts. When the computer is running, electric current maintains these states and the information; when you turn it off, everything is lost. (Permanent storage on a disk is another matter altogether; the disk is outside the computer’s working memory, much as a written memo is outside the mind.) This cannot be the mind’s way of storing information. For one thing, a neuron is not either on or off; it adds up inputs from thousands of other neurons and, on reaching a certain level of excitation, transmits an impulse to still other neurons. But it does not remain in an active state for more than a fraction of a second, so only very short-term memory is stored in the mind by neuronal states. And since memories are not lost when the brain is turned off in sleep or in the unconsciousness caused by anesthesia, it must be that longer-term storage in the brain is achieved in some other fashion.
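The kind of processing element described here, one that sums graded inputs and fires only past a threshold rather than sitting in a fixed on/off state, can be sketched in a few lines. The weights and threshold below are illustrative inventions, not values from any particular PDP model:

```python
# A minimal sketch of a threshold unit: it sums weighted inputs
# from other units and emits an impulse only when the total
# excitation reaches a threshold. Negative weights play the role
# of inhibitory connections.
def unit_fires(inputs, weights, threshold):
    """Return True if the summed, weighted input reaches the threshold."""
    excitation = sum(i * w for i, w in zip(inputs, weights))
    return excitation >= threshold

# Three incoming signals: two excitatory connections, one inhibitory.
print(unit_fires([1, 1, 1], [0.6, 0.5, -0.4], threshold=0.7))  # → True
print(unit_fires([1, 0, 1], [0.6, 0.5, -0.4], threshold=0.7))  # → False
```

The contrast with a transistor is the point: the unit’s output depends on a graded tally over many connections, and nothing in its momentary state stores information durably, which is why PDP theorists located long-term storage elsewhere, in the pattern of connections rather than in unit states.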
The new view,