Shop Class as Soulcraft: An Inquiry Into the Value of Work - Matthew B. Crawford
5 Daniel Bell, The Coming of Post-Industrial Society: A Venture in Social Forecasting (New York: Basic Books, 1973), pp. 29-30.
6 Ibid., p. 32. In this and many other passages in the book, one isn’t sure if Bell himself adheres to the argument on offer. The passage I have quoted is in fact Bell’s paraphrase of an argument by one Jay Forrester. Bell seems to distance himself from it on the next page (he calls the project of trying to rationally order society through the deployment of intellectual technology a utopian dream that has faltered), yet the whole thread of the book depends on its validity, and indeed Bell affirms it in statements published later. Kevin Robins and Frank Webster detail Bell’s contradictions and suggest they are “functional”—they do important rhetorical work. See their “Information as Capital: A Critique of Daniel Bell,” in Jennifer Daryl Slack and Fred Fejes, eds., The Ideology of the Information Age (New York: Ablex Publishing Corporation, 1987), pp. 95-117.
7 As quoted by Bruce Bower, “Seeing through Expert Eyes: Ace Decision Makers May Perceive Distinctive Worlds,” Science News 154, no. 3 (July 18, 1998), p. 44. Klein says further that “When difficulties arise, experts find opportunities for improvising solutions.”
8 See especially Michael Polanyi, The Tacit Dimension (Chicago: University of Chicago Press, 1966).
9 Hubert L. Dreyfus and Stuart E. Dreyfus, “From Socrates to Expert Systems: The Limits and Dangers of Calculative Rationality,” available at http://socrates.berkeley.edu/~hdreyfus/html/paper_socrates.html.
10 A. D. De Groot, Thought and Choice in Chess (The Hague: Mouton, 1965).
11 This elegant variation on De Groot’s original study, using the random condition, was conducted by W. G. Chase and H. A. Simon, “Perception in Chess,” Cognitive Psychology 4 (1973), pp. 55-81.
12 See, above all, Michael Wheeler, Reconstructing the Cognitive World: The Next Step (Cambridge, Mass.: MIT Press, 2005).
13 For an excellent account, see Jean-Pierre Dupuy, The Mechanization of the Mind: On the Origins of Cognitive Science (Princeton: Princeton University Press, 2000).
14 It may be interesting to note that the origins of computer science coincide with an insight that seems to grant the human its due. The discipline grew out of early-twentieth-century developments in logic. Gödel’s theorem proves logically that some true statements, the truth of which is easily seen by human beings, cannot be proved to be true by the application of any formal system of rules. A computer that tries to do so will chase its tail indefinitely, never halting at an answer (the so-called halting problem). Alan Turing recognized that this meant that human minds are able to perform “uncomputable” operations. According to Andrew Hodges, Turing’s 1938 Ph.D. thesis posed the question: “What is the consequence of supplementing a formal system with uncomputable deductive steps? In pursuit of this question, Turing introduced the definition of an ‘oracle’ which can supply on demand an answer to the halting problem for every Turing machine,” that is, for every digital computer. Turing “effectively identified the uncomputable ‘oracle’ with intuition” of the sort used by mathematicians in proving theorems, and in particular “the human act of seeing the truth of a formally unprovable Gödel statement” (Andrew Hodges, “Uncomputability in the Work of Alan Turing and Roger Penrose,” a lecture available at www.turing.org.uk/philosophy/lecture1.html). The essential feature of the oracle is that it performs steps which cannot be realized by any mechanical process.
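The undecidability argument sketched in this note can be made concrete. What follows is a minimal illustration (not from Crawford's text) of Turing's diagonal move: given any candidate halting decider, one can construct a program that does the opposite of whatever the decider predicts about it, so no such decider can be correct on every input. The function names here (`make_paradox`, `halts`) are illustrative inventions, not any standard API.

```python
def make_paradox(halts):
    """Given a candidate halting decider halts(f, x) -- purely hypothetical,
    since no correct one can exist -- build a program that defeats it."""
    def paradox():
        if halts(paradox, None):
            while True:       # decider said "halts," so loop forever
                pass
        return "halted"       # decider said "loops," so halt at once
    return paradox

# Any concrete verdict the decider gives about paradox is refuted:
always_yes = lambda f, x: True    # claims every program halts
p = make_paradox(always_yes)      # p() would loop forever, so don't call it

always_no = lambda f, x: False    # claims every program loops
q = make_paradox(always_no)
print(q())                        # halts immediately, refuting the "loops" verdict
```

Whatever rule `halts` implements, feeding its own diagonal construction back to it forces a wrong answer on at least one program; an "oracle" in Turing's sense is, by definition, something that answers these questions anyway, by non-mechanical means.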
During World War II, Turing participated in the Enigma code-breaking program, which used highly routinized methods. Through this experience he came to be more interested in what machines can do than in what they cannot. “Turing concluded that the scope of computability was not limited to processes where the mind follows an explicitly given rule. Machines