Cascadia's Fault - Jerry Thompson
Dan Cox and his team then programmed a sophisticated set of computer-controlled mechanical paddles at the far end of the basin. The system was capable of generating a scaled-down version of Cascadia’s wave: one-fiftieth the size of the real wave that oceanographers and marine geologists expect to see crossing Seaside beach someday in the unpredictable future.
A special-effects camera team filmed the experiment (for the ShockWave documentary) so others could observe the results. To visually slow down a lump of water moving fifty times faster in the tank than the real wave would sweep across the beach at Seaside, we used a special high-speed camera that could shoot up to 1,500 frames per second and still deliver a high-definition color picture. We used a snorkel attachment to create a pedestrian's-eye view of the tsunami as it moved up Broadway. We were able to play back the wave experiments on a large-screen, flat-panel TV display. On a work table beside the giant monitor, a computer terminal had been set up by Patrick Lynett, a scientist from Texas A&M University who had been working for months on a parallel experiment to refine a numerical model designed to match the bathymetry and layout of Dan Cox’s model of Seaside. They would run their waves simultaneously and compare results.
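The slow-motion factor at work here is simple frame-rate arithmetic. A minimal sketch, assuming playback at a conventional 30 frames per second (the playback rate is not stated in the text):

```python
# Footage captured at a high frame rate and played back at a normal rate
# is stretched in time by the ratio of the two rates.
def slow_motion_factor(capture_fps: float, playback_fps: float) -> float:
    """Return how many times slower the footage plays than real time."""
    return capture_fps / playback_fps

# The camera could shoot up to 1,500 frames per second; assuming a
# standard 30 fps playback rate (an assumption, not stated in the text):
factor = slow_motion_factor(1500, 30)
print(factor)  # 50.0
```

At that setting, a fiftyfold slowdown on playback roughly offsets a tank wave moving fifty times faster than its real-world counterpart.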
For Lynett and the many others involved in the computer modeling of tsunamis, the running of a wet physical replica of Cascadia’s wave in a test basin like this at OSU would provide a crucial benchmark—a reality check for the mathematics. If the two models showed pretty much the same results, then an extra measure of confidence would be gained for the computer simulations. A large physical replication in concrete and plywood for each of the dozens of communities threatened by Cascadia would never be affordable, either in dollars and cents or in the amount of time it would take. But if a computer model could reliably tell you the same thing, physical models wouldn’t be necessary.
In principle, if Lynett’s model worked well for Seaside, then it could be reprogrammed and modified with new bathymetric and street grid details for the next town on the coast, and a more realistic appraisal of the inundation zone and specific levels of risk could be had much sooner and at lower cost. At least that was the theory and the reason that people like Dan Cox and Patrick Lynett were eager to see what happened next.
Chris Goldfinger, back from his research cruise to Sumatra, offered a sobering caution. The numerical simulation of anything as sloppy as moving water is extremely difficult to do. It was hard enough to work on a broad, oceanwide scale as Vasily Titov had done, but even more challenging when you tried to zoom in to detailed street grids and individual buildings in a single town. The tighter the grid, the more exacting the model, the greater the chances for error.
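Goldfinger's point about tight grids can be made concrete with textbook cost scaling for an explicit two-dimensional solver (the kind commonly used for inundation modeling) under a CFL stability limit. This is a back-of-the-envelope sketch, not a description of Lynett's actual code: halving the grid spacing quadruples the cell count and, because the stable time step shrinks with the cell size, doubles the number of steps.

```python
# Rough cost multiplier for refining a 2-D explicit solver's grid,
# assuming an explicit scheme bound by a CFL stability condition.
def relative_cost(refinement: float) -> float:
    """Cost multiplier for making the grid `refinement` times finer."""
    cells = refinement ** 2   # 2-D: cell count grows with the square of refinement
    steps = refinement        # CFL: the stable time step shrinks linearly with dx
    return cells * steps

for r in (2, 4, 10):
    print(f"{r}x finer grid -> ~{relative_cost(r):.0f}x compute")
```

So zooming from an ocean-basin grid down to street-level resolution does not just add detail linearly; the computational bill grows roughly with the cube of the refinement, which is why whole-town simulations were so demanding.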
“The best computer models now are working hard at quantifying the flow [of water] around one or two objects,” Goldfinger explained, “a cylinder, a bridge piling, something like that—a relatively simple case—just because the computational time is enormous.” When the myriad three-dimensional obstacles in a real harbor and town are assigned numerical values—the friction coefficient for water moving over the sandy ocean bottom, a different level of friction and drag once the swell crests, crashes over the seawall, and begins moving over dry ground cluttered with buildings, cars, trucks, trees, and lumpy terrain—it gets a lot more difficult.
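The "cylinder, a bridge piling" case Goldfinger cites is conventionally characterized by the standard hydrodynamic drag equation, F = ½ρC<sub>d</sub>Av². A minimal sketch with illustrative values (the density, drag coefficient, piling size, and flow speed below are assumptions for illustration, not figures from the text):

```python
def drag_force(rho: float, cd: float, area: float, velocity: float) -> float:
    """Standard hydrodynamic drag: F = 0.5 * rho * Cd * A * v**2, in newtons."""
    return 0.5 * rho * cd * area * velocity ** 2

# Illustrative values (assumed, not from the text): seawater density, a
# typical cylinder drag coefficient, a 1 m diameter piling submerged in
# 3 m of water, and a 5 m/s overland tsunami flow.
rho_seawater = 1025.0   # kg/m^3
cd_cylinder = 1.0       # typical for a cylinder at high Reynolds number
area = 1.0 * 3.0        # frontal area: diameter x submerged depth, m^2
v = 5.0                 # flow speed, m/s

print(f"~{drag_force(rho_seawater, cd_cylinder, area, v):.0f} N on the piling")
```

Even this single-obstacle case hides the hard part: the force grows with the square of the flow speed, and the coefficients change once the flow becomes turbulent and debris-laden, which is why scaling up to a whole cluttered town multiplies the difficulty.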
After a quick check by portable radio with the crew standing by in the control room to confirm that the computer and the paddles were ready, Cox turned to his visitors. “So what you’re going to see next,” he explained, “is the rough equivalent of the five-hundred-year event.” Meaning the full-margin rupture