The rating agencies’ new business model came with an obvious conflict: now that they were being paid by bond issuers, the rating agencies were potentially beholden to the same people whose bonds they were rating. For a long time, the potential conflict had kept Moody’s and S&P from taking that step. In 1957, for instance, a Moody’s executive told the Christian Science Monitor, “We obviously cannot ask payment for rating a bond. To do so would attach a price to the process, and we could not escape the charge, which would undoubtedly come, that our ratings are for sale. . . .” Now Moody’s was insisting it could manage this conflict.
The second change came in 1975, when the Securities and Exchange Commission began to use ratings to determine how much capital broker-dealers had to hold. The higher a bond’s rating, the less capital the broker-dealer had to hold against it. This made ratings even more important, but it also raised the question of whose ratings would count toward reducing capital. To prevent a proliferation of fly-by-night bond raters, the SEC decreed that Moody’s, S&P, and Fitch were nationally recognized statistical rating organizations, or NRSROs.
By the time mortgage-backed securities arrived on the scene, ratings were ingrained in the very fiber of the capital markets. Lenders put ratings triggers in bond agreements—stipulations that a ratings downgrade could cause a debt payment to accelerate or collateral to come due. The government had literally hundreds of rules based on ratings. One said that 95 percent of the bonds held by low-risk money market funds had to have an investment-grade rating. Another said that schools participating in government financial aid programs needed to maintain a certain rating. State regulators used ratings to determine the capital that insurers had to hold. “The resulting web of regulation is so thick that a thorough review would occupy hundreds, perhaps thousands of pages,” wrote Frank Partnoy, a professor at the University of San Diego School of Law and a longtime critic of the rating agencies.
As well intentioned as many of these rules were, they overlooked two problems. The first was that the bond market was essentially outsourcing its risk management to the rating agencies. The universal acceptance of the ratings resulted in almost no independent research by the fund managers who actually bought the bonds. They simply assumed that if the rating agency had given a bond a double-A or a triple-A, it must be safe. Nor was this some dark secret. As the Office of the Comptroller of the Currency put it in 1997, “Ratings are important because investors generally accept ratings . . . in lieu of conducting a due diligence investigation of the underlying assets. . . .”
Second, the rules imbued the rating agencies with an “almost Biblical authority,” to borrow a phrase first used in 1968 by New York City’s finance administrator, Roy Goodman. But that authority wasn’t remotely deserved. The agencies had charts and studies showing that their ratings were accurate a very high percentage of the time. But anyone who dug more deeply could find many instances when they got it wrong, usually when something unexpected happened. The rating agencies had missed the near default of New York City, the bankruptcy of Orange County, and the Asian and Russian meltdowns. They failed to catch Penn Central in the 1970s and Long-Term Capital Management in the 1990s. They often downgraded companies just days before bankruptcy—too late to help investors. Nor was this anything new: one study showed that 78 percent of the municipal bonds