The prediction team had also started work on what they called a “real-time seismic warning system.” Japanese scientists were hoping to use super-fast technology to reduce the extent and severity of damage once a fault had begun to slip. They loaded a supercomputer with 100,000 preprogrammed scenarios based on the magnitude and exact location of the coming temblor. As soon as the ground began to shake, instruments would feed data to the computer, and the computer would spit out the most likely scenario just one minute later.
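To make the idea concrete, here is a minimal sketch of how that kind of scenario lookup might work; the entries, field names, and scoring below are hypothetical illustrations, not details of the Japanese system:

```python
import math

# Hypothetical scenario table: magnitude, epicenter coordinates, and the
# damage forecast the system would broadcast if this scenario matched.
# These entries are illustrative inventions, not data from the real system.
SCENARIOS = [
    {"mag": 7.2, "lat": 39.0, "lon": 140.9, "forecast": "severe inland shaking, northern Honshu"},
    {"mag": 6.5, "lat": 35.7, "lon": 139.8, "forecast": "moderate shaking, Tokyo region"},
    # ...the real system stored 100,000 such entries
]

def closest_scenario(est_mag, est_lat, est_lon):
    """Pick the preprogrammed scenario nearest to the first instrument
    estimates of magnitude and epicenter (crude unweighted distance)."""
    def score(s):
        return abs(s["mag"] - est_mag) + math.hypot(s["lat"] - est_lat,
                                                    s["lon"] - est_lon)
    return min(SCENARIOS, key=score)

# The first seconds of shaking yield rough estimates; look up the best match.
print(closest_scenario(7.0, 39.1, 141.0)["forecast"])
```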
But on June 14, 2008, a magnitude 7.2 shock hit northern Japan, killing at least nine people and destroying homes and factories throughout the region. The real-time system did signal that a powerful jolt was happening, roughly three and a half seconds after it started, but the source of the quake was too close for the alert to be of any use to places like Oshu, only eighteen miles (30 km) from the epicenter. People there received 0.3 seconds of warning. The unfortunate reality is that those closest to the epicenter, where shaking is strongest, will always receive the shortest notice. Even if the system works exactly as it should, a real-time warning will benefit primarily those farther away. On the other hand, it could help stop or slow the spread of fires and speed the arrival of emergency crews. So in Japan, at least for the foreseeable future, the supercomputer and the six wise men still have a job to do.
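The arithmetic behind Oshu’s 0.3 seconds is straightforward. Here is a sketch assuming a typical crustal S-wave speed of about 3.5 km/s; the detection time comes from the text, but the total latency is back-calculated so the numbers work out, not reported in the book:

```python
S_WAVE_SPEED_KM_S = 3.5   # typical crustal shear-wave speed (assumed)
DISTANCE_KM = 30.0        # Oshu's distance from the epicenter (from the text)
ALERT_LATENCY_S = 8.3     # detection (~3.5 s, from the text) plus processing
                          # and dissemination; total back-calculated so the
                          # result reproduces the reported 0.3 s

s_wave_arrival_s = DISTANCE_KM / S_WAVE_SPEED_KM_S   # about 8.6 s after rupture
warning_s = s_wave_arrival_s - ALERT_LATENCY_S
print(f"Warning time at Oshu: {warning_s:.1f} s")    # about 0.3 s
```

The strong shaking arrives only about 8.6 seconds after the rupture begins, so even a few seconds of detection and broadcast delay consume nearly all of the lead time close to the source.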
Not only was the Parkfield earthquake a dozen years late, but the densely woven grid of seismographs, strainmeters, lasers, and other equipment that made the area one of the most closely watched rupture patches in the world had apparently failed to spot any obvious symptoms or definite precursors. In 1934 and 1966 the Parkfield main shocks had been preceded by apparently identical magnitude 5 foreshocks, each about seventeen minutes before the magnitude 6 main event. But not this time.
In 1966 as well, the fault had seemed to creep a bit more than normal in the weeks before the failure. There were reports of new cracks in the ground, and a water pipe crossing the zone broke the night before the rupture. Nothing like that happened in 2004: no obvious foreshocks, no precursory slip before the main event. Seven “creepmeters” deployed along the rupture zone had nothing to show for the effort. But all was not lost, according to Allan Lindh, who in early 2005 wrote an opinion piece for Seismological Research Letters defending the work at Parkfield. His paper sounded a new rallying cry for prediction science.
Looking closely at where the break occurred, how strong it was, and the aftershock pattern that followed, he argued that a key part of the original prediction had come true. What happened in 2004 was physically “a near-perfect repeat” of the 1966 event, according to Lindh. The same earthquake had happened again, rupturing the same fifteen-mile-long (25 km) segment of the San Andreas between the same two little bends or “discontinuities” in the rock, and with the same overall magnitude: what some had called Parkfield’s signature or “characteristic” earthquake. One might have expected a magnitude greater than 6, because the jolt came twelve to fifteen years later than expected, allowing more time for strain to accumulate in the rocks. But the magnitude was 6, just like its predecessors. Hence, Lindh argued, it was a repeat of the same event.
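A rough worked example, using the standard moment-magnitude relation (the recurrence interval and extra-slip figures below are assumed round numbers, not from the book), shows why even a late quake on the same patch would not look much bigger:

$$ M_w = \frac{2}{3}\log_{10} M_0 - 6.07, \qquad M_0 = \mu A \bar{D} $$

where $M_0$ is the seismic moment in newton-meters, $\mu$ the rigidity of the rock, $A$ the rupture area, and $\bar{D}$ the average slip. If the same fifteen-mile patch slipped, say, 70 percent more than usual (roughly fifteen extra years of loading on a 22-year average interval), the magnitude rises by only

$$ \Delta M_w = \frac{2}{3}\log_{10} 1.7 \approx 0.15, $$

a difference small enough that the event would still register as “about magnitude 6.”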
One curious twist was that the 1966 event had ripped the fault from north to south while this time it unzipped from south to north. And according to Lindh, there may have been a “small premonitory signal” at three or four Parkfield strainmeters. Holes had been drilled hundreds of feet down into the fracture