I, Robot - Isaac Asimov
Quinn raised polite eyebrows, “Why not, doctor?”
“Because, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That’s Rule Three to a robot. Also every ‘good’ human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow man; to obey laws, to follow rules, to conform to custom—even when they interfere with his comfort or his safety. That’s Rule Two to a robot. Also, every ‘good’ human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That’s Rule One to a robot. To put it simply—if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.”
“But,” said Quinn, “you’re telling me that you can never prove him a robot.”
“I may be able to prove him not a robot.”
“That’s not the proof I want.”
“You’ll have such proof as exists. You are the only one responsible for your own wants.”
Here Lanning’s mind leaped suddenly to the sting of an idea, “Has it occurred to anyone,” he ground out, “that district attorney is a rather strange occupation for a robot? The prosecution of human beings—sentencing them to death—bringing about their infinite harm—”
Quinn grew suddenly keen, “No, you can’t get out of it that way. Being district attorney doesn’t make him human. Don’t you know his record? Don’t you know that he boasts that he has never prosecuted an innocent man; that there are scores of people left untried because the evidence against them didn’t satisfy him, even though he could probably have argued a jury into atomizing them? That happens to be so.”
Lanning’s thin cheeks quivered, “No, Quinn, no. There is nothing in the Rules of Robotics that makes any allowance for human guilt. A robot may not judge whether a human being deserves death. It is not for him to decide. He may not harm a human—variety skunk, or variety angel.”
Susan Calvin sounded tired. “Alfred,” she said, “don’t talk foolishly. What if a robot came upon a madman about to set fire to a house with people in it? He would stop the madman, wouldn’t he?”
“Of course.”
“And if the only way he could stop him was to kill him—”
There was a faint sound in Lanning’s throat. Nothing more.
“The answer to that, Alfred, is that he would do his best not to kill him. If the madman died, the robot would require psychotherapy because he might easily go mad at the conflict presented him—of having broken Rule One to adhere to Rule One in a higher sense. But a man would be dead and a robot would have killed him.”
“Well, is Byerley mad?” demanded Lanning, with all the sarcasm he could muster.
“No, but he has killed no man himself. He has exposed facts which might represent a particular human being to be dangerous to the large mass of other human beings we call society. He protects the greater number and thus adheres to Rule One at maximum potential. That is as far as he goes. It is the judge who then condemns the criminal to death or imprisonment, after the jury decides on his guilt or innocence. It is the jailer who imprisons him, the executioner who kills him. And Mr. Byerley has done nothing but determine truth and aid society.
“As a matter of fact, Mr. Quinn, I have looked into Mr. Byerley’s career since you first brought this matter to our attention. I find that he has never demanded the death sentence in his closing speeches to the jury. I also find that he has spoken on behalf of the abolition of capital punishment and contributed generously to research institutions engaged in criminal neurophysiology. He apparently believes in the cure, rather than the punishment, of crime. I find that significant.”
“You do?” Quinn smiled.