The Omnivore's Dilemma - Michael Pollan
“First we separate the corn into its botanical parts—embryo, endosperm, fiber—and then into its chemical parts,” Johnson explained as we began our tour of the plant. When a shipment of corn arrives at the mill, it is steeped for thirty-six hours in a bath of water containing a small amount of sulfur dioxide. The acid bath swells the kernels and frees the starch from the proteins that surround it.
After the soak, the swollen kernels are ground in a mill. “By now the germ is rubbery and it pops right off,” Johnson explained. “We take the slurry to a hydroclone”—basically a centrifuge for liquids—“where the germ floats off. After it’s dried, we squeeze it for corn oil.” Corn oil can be used as a cooking or salad oil, or hydrogenated for use in margarine and other processed foods: Atoms of hydrogen are forced into the fat molecules to make them solid at room temperature. (Though hydrogenated oils were originally designed as a healthy substitute for animal fats, medical researchers now think these trans fats are actually worse for our arteries than butter.)
Once the germ has been removed and the kernels crushed, what’s left is a white mush of protein and starch called “mill starch.” To draw off as much of the protein as possible, the mill starch undergoes a progressively finer series of grindings and filterings and centrifuges. The extracted protein, called gluten, is used in animal feed. At each step more fresh water is added—it takes about five gallons to process a bushel of corn, and prodigious amounts of energy. Wet milling is an energy-intensive way to make food; for every calorie of processed food it produces, another ten calories of fossil fuel energy are burned.
At this point the process has yielded a white slurry that’s poured out onto a stainless steel table and dried to a fine, superwhite powder—cornstarch. Cornstarch was wet milling’s sole product when the industry got its start in the 1840s. At first the laundry business was its biggest customer, but cooks and early food processors soon began adding cornstarch to as many recipes as they could: It offered the glamour of modernity, purity, and absolute whiteness. By 1866, corn refiners had learned how to use acids to break down cornstarch into glucose, and sweeteners quickly became—as they remain today—the industry’s most important product. Corn syrup (which is mostly glucose or dextrose—the terms are interchangeable) became the first cheap domestic substitute for cane sugar.
I remember an elementary school science experiment in which we were instructed to chew—and chew and chew—a cracker until the slurry of starch turned suddenly sweet on our tongues. The teacher explained that the enzymes in our saliva had broken the long starch molecules into shorter molecules of glucose. Much the same process—it’s called “enzyme hydrolysis”—revolutionized corn refining in the 1940s. As enzymes replaced acids, refiners were able to produce progressively sweeter sweeteners from corn. Yet none were quite as sweet as sugar (or, to be more precise, sucrose). That threshold wasn’t crossed until the late 1960s, when Japanese chemists “broke the sweetness barrier,” in the words of the Corn Refiners Association’s official history of high-fructose corn sweetener. They discovered that an enzyme called glucose isomerase could transform glucose into the much sweeter sugar molecule called fructose. By the 1970s the process of refining corn into fructose had been perfected, and high-fructose corn syrup—which is a blend of 55 percent fructose and 45 percent glucose that tastes exactly as sweet as sucrose—came onto the market. Today it is the most valuable food product refined from corn, accounting for 530 million bushels every year. (A bushel of corn yields