Rule 34 - Charles Stross
MacDonald blinks rapidly. “Didn’t you know?”
You take a deep breath. “We’re just cops: Nobody tells us anything. Humour us. Um. What sort of, uh, general cognitive engines are we talking about? Project ATHENA, is that one?”
“Loosely, yes.” He rubs at his face, an expression of profound bafflement wrinkling his brows. “ATHENA is one of a family of research-oriented identity-amplification engines that have been developed over the past few years. It’s not all academic; for example TR/Mithras. Junkbot.D and Worm/NerveBurn.10143 are out there now. They’re malware AI engines; the Junkbot family are distributed identity simulators used for harvesting trust, while NerveBurn . . . we’re not entirely sure, but it seems to be a sand-boxed virtual brain simulator running on a botnet, possibly a botched attempt at premature mind uploading . . .” He rubs his face again. “ATHENA is a bit different. We’re an authorized botnet—that is, we’re legal; students at participating institutions are required to sign an EULA that permits us to run a VM instance on their pad or laptop, strictly for research in distributed computing. There’s also a distributed screen-saver project for volunteers. ATHENA’s our research platform in moral metacognition.”
“Metacognition?”
“Loosely, it means we’re in consciousness studies—more prosaically, we’re in the business of telling spam from ham.” He shrugs apologetically. “Big contracts from telcos who want to cut down on the junk traffic: It pays our grants. The spambots have been getting disturbingly convincing—last month there was a report of a spearphishing worm that was hiring call girls to role-play the pick-ups the worm had primed its targets to expect. Some of them are getting very sophisticated—using multiple contact probes to simulate an entire social network—big ones, hundreds or thousands of members, with convincing interactions—e-commerce, fake phone conversations, the whole lot—in front of the victim. Bluntly, we’re only human; we can’t tell the difference between a spambot and a real human being anymore without face-to-face contact. So we need identity amplification to keep up.
“The ATHENA research group is working on the spam-filtering problem by running a huge distributed metacognition app that’s intended to pick holes in the spammers’ fake social networks.”
MacDonald magicks up a big diagram in place of the graphs; it looks like a tattered spider-web. “Here’s a typical social network. Each node is a person. They’ve got a lot of local connections, and a handful of long-range ones.” Thin strands snake across the web, linking distant intersections. “Zoom in on one of the nodes, and we have a bunch of different networks: their email, chat, phone calls, online purchases . . .” A slew of different spider-webs, cerise and cyan and magenta, all appear centred on a single point. They’re all subtly different in shape. “Spambots usually get their networks wrong, too regular, not noisy enough. And we can deduce other information by looking at the networks, of course. You know the old one about checking the phone bills for signs that your partner’s having an affair, right? There are other, more subtle signs of—well, call it potential criminality. Odds are, before your partner snuck off for some illicit nookie, there was a warm-up period, lots of chatter with characteristic weighted phrases—we’re human: We talk in clichés the whole time, framing the narrative of our lives. Or take some of the commoner personality disorders: pre-ATHENA, we had diagnostic tools that could diagnose schizophrenia from a sample of email messages with eerie accuracy. Network analysis lets us learn a lot about people. Network injection lets us steer people—subject to ethics oversight, I hasten to add—frankly, the possibilities are endless, and a bit frightening.”
“Can you give me an example of what you mean by steering people?” Kemal nudges.
“Hmm.” MacDonald’s chair squeals as he leans back. “Okay, let’s talk hypotheticals: