Complexity: A Guided Tour - Melanie Mitchell
Finally, there is the ever-thorny issue of meaning. In chapter 12 I said that for traditional computers, information is not meaningful to the computer itself but to its human creators and “end users.” However, I would like to think that Copycat, which represents a rather nontraditional mode of computation, perceives a very primitive kind of meaning in the concepts it has and in the analogies it makes. For example, the concept successor group is embedded in a network in which it is linked to conceptually similar concepts, and Copycat can recognize and use this concept in an appropriate way in a wide variety of situations. This is, in my mind, the beginning of meaning. But as I said in chapter 12, meaning is intimately tied up with survival and natural selection, neither of which is relevant to Copycat, except for the very weak “survival” instinct of lowering its temperature. Copycat (and an even more impressive array of successor programs created in Hofstadter’s research group) is still quite far from biological systems in this way.
The ultimate goal of AI is to take humans out of the meaning loop and have the computer itself perceive meaning. This is AI’s hardest problem. The mathematician Gian-Carlo Rota called this problem “the barrier of meaning” and asked whether or when AI would ever “crash” it. I personally don’t think it will be anytime soon, but if and when this barrier is broken through, I suspect that analogy will be the key.
CHAPTER 14
Prospects of Computer Modeling
BECAUSE COMPLEX SYSTEMS ARE TYPICALLY, as their name implies, hard to understand, the more mathematically oriented sciences such as physics, chemistry, and mathematical biology have traditionally concentrated on studying simple, idealized systems that are more tractable via mathematics. However, more recently, the existence of fast, inexpensive computers has made it possible to construct and experiment with models of systems that are too complex to be understood with mathematics alone. The pioneers of computer science—Alan Turing, John von Neumann, Norbert Wiener, and others—were all motivated by the desire to use computers to simulate systems that develop, think, learn, and evolve. In this fashion a new way of doing science was born. The traditional division of science into theory and experiment has been complemented by an additional category: computer simulation (figure 14.1). In this chapter I discuss what we can learn from computer models of complex systems and what the possible pitfalls are of using such models to do science.
What Is a Model?
A model, in the context of science, is a simplified representation of some “real” phenomenon. Scientists supposedly study nature, but in reality much of what they do is construct and study models of nature.
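To make the idea of a mathematical model concrete before turning to Newton, here is a minimal sketch in code. The function name and example numbers are my own illustrations, not from the text; the values for G, Earth's mass, and Earth's radius are standard textbook figures.

```python
# A mathematical model in miniature: Newton's law of gravity,
# expressed as a function rather than a mechanism.
G = 6.674e-11  # gravitational constant, N * m^2 / kg^2 (standard value)

def gravitational_force(m1, m2, r):
    """Force (N) between masses m1, m2 (kg) separated by distance r (m):
    proportional to the product of the masses divided by the square
    of the distance between them."""
    return G * m1 * m2 / r**2

# Example: Earth (~5.97e24 kg) and a 1 kg object at Earth's surface
# (~6.371e6 m from Earth's center) gives roughly 9.8 N.
force = gravitational_force(5.97e24, 1.0, 6.371e6)
```

Note that, like Newton's law itself, this sketch describes the *effects* of gravity without saying anything about how gravity works; that distinction is exactly the one drawn in the paragraph that follows.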
Think of Newton’s law of gravity: the force of gravity between two objects is proportional to the product of their masses divided by the square of the distance between them. This is a mathematical statement of the effects of a particular phenomenon—a mathematical model. Another kind of model describes how the phenomenon actually works in terms of simpler concepts—that is, what we call mechanisms. In Newton’s own time, his law of gravity was attacked because he did not give a mechanism for gravitational force. Literally, he did not show how it could be explained in terms of “size, shape, and motion” of parts of physical objects—the primitive elements that were, according to Descartes, necessary and sufficient components of all models in physics. Newton himself speculated on possible mechanisms of gravity; for example, he “pictured the Earth like a sponge, drinking up the constant stream of fine aethereal matter falling from the heavens, this stream by its impact on bodies above the Earth causing them to descend.” Such a conceptualization might be called a mechanistic model. Two hundred years later, Einstein