Rule 34 - Charles Stross
“For starters, in modern societies, the law is incredibly complex: There are at least eight thousand offenses on the books in England and about the same in this country, enough that you people have to use decision-support software to figure out what to charge people with, and perhaps an order of magnitude more regulations for which violations can be prosecuted—ignorance may not be a defense in law, but it’s a fact on the ground. To make matters worse, while some offenses are strict-liability—possession of child porn or firearms being an absolute offense, regardless of context—others hinge on the state of mind of the accused. Is it murder or manslaughter? Well, it depends on whether you meant to kill the victim, doesn’t it?”
He pauses. “Are you following this?”
“Just a sec.” You flick your fingers at the virtual controls, roll your specs back in time a minute to follow MacDonald, who is on a professorial roll. “Yes, I’m logging you loud and clear. If you’ll pardon me for asking, though, I asked about automated social engineering? Not for a lecture on the impossibility of policing.” Perhaps you let a little too much irritation into your voice, as he shuffles defensively.
“I was getting there. There’s a lot of background . . .” MacDonald shakes his head. “I’m not having a go at you, honestly, I’m just trying to explain the background to our research group’s activity.”
Kemal leans forward. “In your own time, Doctor.” He doesn’t look at you, doesn’t make eye contact, but he’s clearly decided to nominate you for the bad-cop role. Which is irritating, because you’d pegged him for that niche.
“Alright. Well, moving swiftly sideways into cognitive neuroscience . . . in the past twenty years we’ve made huge strides, using imaging tools, direct brain interfaces, and software simulations. We’ve pretty much disproved the existence of free will, at least as philosophers thought they understood it. A lot of our decision-making mechanics are subconscious; we only become aware of our choices once we’ve begun to act on them. And a whole lot of other things that were once thought to correlate with free will turn out also to be mechanical. If we use transcranial magnetic stimulation to disrupt the right temporoparietal junction, we can suppress subjects’ ability to make moral judgements; we can induce mystical religious experiences; we can suppress voluntary movements, and the patients will report that they didn’t move because they didn’t want to move. The rTPJ finding is deeply significant in the philosophy of law, by the way: It strongly supports the theory that we are not actually free moral agents who make decisions—such as whether or not to break the law—of our own free will.
“In a nutshell, then, what I’m getting at is that the project of law, ever since the Code of Hammurabi—the entire idea that we can maintain social order by obtaining voluntary adherence to a code of permissible behaviour, under threat of retribution—is fundamentally misguided.” His eyes are alight; you can see him in the Cartesian lecture-theatre of your mind, pacing door-to-door as he addresses his audience. “If people don’t have free will or criminal intent in any meaningful sense, then how can they be held responsible for their actions? And if the requirements of managing a complex society mean the number of laws has exploded until nobody can keep track of them without an expert system, how can people be expected to comply with them?
“Which is where we come to the ATHENA research group—actually, it’s a joint European initiative funded by the European Research Council—currently piloting studies in social-network-augmented choice architecture for Prosthetic Morality Enforcement.”
You look at Kemal, silently: Kemal looks at you. And for a split second you can read his mind. Kemal is thinking exactly the same thought as you, or any other cop in your situation. Which