TRANSPARENCY
Look for transparency whenever a claim is made. Is the publisher of a poll telling you the statistical margin of error and exactly how the poll takers asked the question? If not, don’t give much weight to the result. Political candidates who are challenging entrenched incumbents like to release polls showing that they are “closing the gap” or even have a lead, in order to convince potential donors they can win. But such polls can be tailored to produce a positive result by including loaded questions. The challenger might ask, “Did you know the incumbent is a wife-beater?” These so-called push questions nudge the respondent toward the desired answer, and a poll containing them is called a push poll. Questions can also be worded in ways that bias the result. One survey conducted by the Annenberg Public Policy Center found a dramatic difference in support for school vouchers depending on whether such phrases as “taxpayers’ money” or “private schools” were included in the question. And polls asking about support for public financing of political campaigns come out one way if the poll taker asks about “banning special-interest contributions from elections” and quite another if they ask instead about “giving tax money to politicians.”
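The book does not walk through the arithmetic behind a poll’s “statistical margin of error,” but a short illustrative sketch may help. The Python snippet below is ours, not part of the original text: the function name margin_of_error and the sample sizes are made up for the example, and the formula is the textbook approximation for a simple random sample at 95 percent confidence.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate 95 percent margin of error for a simple random sample.

    proportion=0.5 is the worst case (widest margin); z=1.96 corresponds
    to 95 percent confidence.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Hypothetical poll sizes, chosen only for illustration.
for n in (250, 1000, 4000):
    print(f"n = {n:>5}: about +/- {margin_of_error(n) * 100:.1f} points")

# n =   250: about +/- 6.2 points
# n =  1000: about +/- 3.1 points
# n =  4000: about +/- 1.5 points
```

Notice that quadrupling the sample size only cuts the margin of error in half, which is why the national polls we usually see settle for the plus or minus 2 or 3 points mentioned later in this section.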
When reading a news story or article, ask whether the reporter or author is telling you where the information came from. We supply footnotes at FactCheck.org, with links to the sources we are using if they are available free on the Internet, so that readers may find more information or check that we’re getting it right. When you see somebody claim that “a study” backs up their position, ask how it was conducted, how many people participated and under what conditions, and whether it really supports what’s being said.
PRECISION
Sometimes evidence isn’t nearly as precise as portrayed. A good example is a pair of studies that produced shocking headlines about deaths in Iraq and have since been widely questioned and disparaged. Both studies were published in the British medical journal The Lancet, and both were produced by a team from Johns Hopkins University in Baltimore. The first was released five days before the 2004 presidential election and estimated that 98,000 Iraqis had died as a result of the invasion ordered by President George W. Bush in March 2003. The second was released less than a month before the 2006 midterm House and Senate elections and estimated that the Iraqi death toll had reached 654,965 from the invasion and its violent aftermath. Both estimates were several times higher than other generally accepted figures.
However, neither estimate was an exact count, just the midpoint of an exceptionally broad range of possibilities. For the first estimate, the authors calculated that their “confidence interval” was somewhere between 8,000 deaths and 194,000 deaths. In the language of statistics, that means a 95 percent probability that the actual figure fell somewhere within that huge range. Put another way, there was 1 chance in 40 that the actual number was less than 8,000 and an equal chance that it was greater than 194,000. As the critic Fred Kaplan put it in an article for the online magazine Slate, “This isn’t an estimate. It’s a dart board.” For the second estimate the dart board was larger, between 393,000 and 943,000 deaths. Such wide ranges of uncertainty are much larger than the plus or minus 2 or 3 percent we are used to seeing in U.S. public opinion polls, and should tell us to beware.
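To make the “1 chance in 40” arithmetic concrete, here is another small illustrative sketch, again ours rather than the study authors’; it simply restates the 95 percent confidence level and the 8,000-to-194,000 range already quoted above.

```python
# A 95 percent confidence interval leaves 5 percent of the probability
# outside the reported range, split evenly between the two tails.
outside = 1.0 - 0.95     # 0.05
per_tail = outside / 2   # 0.025, i.e. 1 chance in 40
print(f"chance the true figure is below the low end:  1 in {1 / per_tail:.0f}")
print(f"chance the true figure is above the high end: 1 in {1 / per_tail:.0f}")

# The first Lancet study's range, as cited above:
low, high = 8_000, 194_000
print(f"width of the range: {high - low:,} deaths")  # 186,000 -- Kaplan's "dart board"
```

The point estimate grabs the headline, but the width of the range is what tells you how much the data can actually support.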
The exceptionally imprecise estimates of the Lancet studies stem from the relatively small sample used to produce them. The estimates came from interviews in 33 clusters for the first study, 47 for the second. Using such randomly chosen “clusters” is a statistical method commonly used when it isn’t practical to draw a random