Caves of Steel - Isaac Asimov
The roboticist looked very gratified. “That is exactly what I mean, Mr.…”
Baley waited a moment, then carefully introduced R. Daneel: “This is Daneel Olivaw, Dr. Gerrigel.”
“Good day, Mr. Olivaw.” Dr. Gerrigel extended his hand and shook Daneel’s. He went on, “It is my estimation that it would take fifty years to develop the basic theory of a non-Asenion positronic brain—that is, one in which the basic assumptions of the Three Laws are disallowed—and bring it to the point where robots similar to modern models could be constructed.”
“And this has never been done?” asked Baley. “I mean, Doctor, that we’ve been building robots for several thousand years. In all that time, hasn’t anybody or any group had fifty years to spare?”
“Certainly,” said the roboticist, “but it is not the sort of work that anyone would care to do.”
“I find that hard to believe. Human curiosity will undertake anything.”
“It hasn’t undertaken the non-Asenion robot. The human race, Mr. Baley, has a strong Frankenstein complex.”
“A what?”
“That’s a popular name derived from a Medieval novel describing a robot that turned on its creator. I never read the novel myself. But that’s beside the point. What I wish to say is that robots without the First Law are simply not built.”
“And no theory for it even exists?”
“Not to my knowledge, and my knowledge,” he smiled self-consciously, “is rather extensive.”
“And a robot with the First Law built in could not kill a man?”
“Never. Unless such killing were completely accidental or unless it were necessary to save the lives of two or more men. In either case, the positronic potential built up would ruin the brain past recovery.”
“All right,” said Baley. “All this represents the situation on Earth. Right?”
“Yes. Certainly.”
“What about the Outer Worlds?”
Some of Dr. Gerrigel’s self-assurance seemed to ooze away. “Oh dear, Mr. Baley, I couldn’t say of my own knowledge, but I’m sure that if non-Asenion positronic brains were ever designed or if the mathematical theory were worked out, we’d hear of it.”
“Would we? Well, let me follow up another thought in my mind, Dr. Gerrigel. I hope you don’t mind.”
“No. Not at all.” He looked helplessly first at Baley, then at R. Daneel. “After all, if it is as important as you say, I’m glad to do all I can.”
“Thank you, Doctor. My question is, why humanoid robots? I mean that I’ve been taking them for granted all my life, but now it occurs to me that I don’t know the reason for their existence. Why should a robot have a head and four limbs? Why should he look more or less like a man?”
“You mean, why shouldn’t he be built functionally, like any other machine?”
“Right,” said Baley. “Why not?”
Dr. Gerrigel smiled a little. “Really, Mr. Baley, you were born too late. The early literature of robotics is riddled with a discussion of that very matter and the polemics involved were something frightful. If you would like a very good reference to the disputations among the functionalists and anti-functionalists, I can recommend Hanford’s History of Robotics. Mathematics is kept to a minimum. I think you’d find it very interesting.”
“I’ll look it up,” said Baley, patiently. “Meanwhile, could you give me an idea?”
“The decision was made on the basis of economics. Look here, Mr. Baley, if you were supervising a farm, would you care to build a tractor with a positronic brain, a reaper, a harrow, a milker, an automobile, and so on, each with a positronic brain; or would you rather have ordinary unbrained machinery with a single positronic robot to run them all? I warn you that the second alternative represents only a fiftieth or a hundredth the expense.”
“But why the human form?”
“Because the human form is the most successful generalized form in all nature. We are not a specialized