Powering the Dream: The History and Promise of Green Technology - Alexis Madrigal
Our machines can see much more clearly how the air behaves. Cheap sensors and better algorithms are creating a new type of “environmental awareness” that allows wind engineers to understand the complex natural system they’re tapping. Wind turbines, outfitted with laser remote sensing systems, can adjust themselves to the wall of wind approaching their blades. The laser sensors and the control systems into which they feed are a sterling example of what late-twentieth-century technology can do. It’s a testament to how far knowledge of the wind has come over the last several hundred years, accelerating in just the last fifty thanks to more measurements and better data crunching.
Humans are excellent but imprecise wind sensors: We can’t see it, but we can feel it. Wind has a force, and in keeping with the methods of science at the time, early modern scientists wanted to measure that force. The term “anemometer” (anemo means wind in Greek) came into the world in the early eighteenth century, when Peter Daniel Huet, bishop of Avranches in France, designed a strange device to measure the force of the wind, which, sadly, as he related in his autobiography, was never constructed. Huet wrote,
There had settled at Paris an Englishman named Hubin, a man of ingenuity, and a skillful and industrious workman in mechanics. I went to him and as soon as I mentioned my idea of weighing and measuring the wind, he thought it a matter of jest, and supposed I was ridiculing him. I then produced the figure of a machine by which the force of the wind might easily be weighed as in a balance, and which might be termed an anemometer.32
The dominant design for an instrument to measure the wind was sketched out by the Irish astronomer Thomas Robinson in 1846: It’s called the cup anemometer, and it consists of a few little hemispheres that spin around in the wind. If we attach them to some gears and a counting apparatus, we can get a decently accurate determination of wind speed.33 Various improvements were made to this data-gathering apparatus over the next hundred years. The cup anemometer is so good now that Leif Kristensen of Denmark’s flagship energy laboratory calls it “perhaps even the best” instrument for measuring the main thrust of the wind over time.34
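The arithmetic behind that counting apparatus is simple enough to sketch. Cup anemometers are conventionally characterized by a linear transfer function, v = a·f + b, where f is the rotation rate; the constants a and b come from wind-tunnel calibration. The specific constants below are invented for illustration:

```python
# Hypothetical calibration constants (a real instrument's values come
# from a wind-tunnel calibration certificate).
A = 0.75   # slope, meters per rotation
B = 0.3    # offset, m/s (overcomes bearing friction and startup threshold)

def wind_speed(rotations, seconds):
    """Estimate wind speed (m/s) from a rotation count over an interval,
    using the standard linear transfer function v = A * f + B."""
    f = rotations / seconds   # rotation rate in Hz
    return A * f + B

# 120 rotations counted over 10 seconds:
print(wind_speed(120, 10))   # 9.3 m/s
```

Averaging the count over a longer interval is what gives the “main thrust of the wind over time” that Kristensen praises the instrument for.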
Then, in the late 1970s, the Department of Energy embarked on an ambitious plan to map out the nation’s wind resources. Before that, the best wind spots were more the stuff of local lore than scientific fact. So the Pacific Northwest National Laboratory and the Solar Energy Research Institute began publishing the first regional wind maps of the country in the early ’80s and a national wind atlas in 1986. Ever since, wind analysts have been trying to increase the resolution of those maps in both time and space.35
As cheaper computers became available, the company AWS Truewind began marrying large-scale weather models with small-scale topographic maps. They created a parallel process for crunching wind data and ran it on small, fast PCs to get supercomputer-level power at low cost. Then they refined their estimates with data from 1,600 wind measurement towers. Now, high-resolution maps are available that can almost be used to site individual wind turbines.36
But that average wind speed data doesn’t reflect the second-to-second variations in the wind. Between 1975 and 1985, scientists began to understand that the wind didn’t blow the way they thought it did.37 The wind wasn’t like a nice, smooth plane hitting a wind turbine; instead, it had its own structure, and it hit like a fist, with all those knuckles and bones. Two sites with the same mean wind speed might have very different power production potential because of the individual characteristics of the structure of the wind at each site.
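Why the mean alone misleads can be made concrete with a small sketch. The site data here is invented; the only physics assumed is the standard result that wind power density scales with the cube of wind speed, so gusts contribute disproportionately:

```python
from statistics import mean

# Two hypothetical sites with the same mean wind speed (8 m/s).
site_a = [8.0] * 6          # steady wind
site_b = [4.0, 12.0] * 3    # gusty wind: alternating lulls and gusts

assert mean(site_a) == mean(site_b) == 8.0

RHO = 1.225  # air density at sea level, kg/m^3

def power_density(speeds):
    """Mean wind power density (W/m^2 of swept area): P = 0.5 * rho * v^3."""
    return mean(0.5 * RHO * v**3 for v in speeds)

print(power_density(site_a))  # 313.6 W/m^2
print(power_density(site_b))  # 548.8 W/m^2 -- 75% more, from the same mean speed
```

Because 4³ + 12³ far exceeds 2 × 8³, the gusty site carries much more energy than its average speed suggests, which is why the “structure” of the wind at a site matters so much to power production.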
Most wind turbines now come equipped with cup anemometers