
More on Peer Review and Climategate

By Kennedy Maize

Some additional damaging brush strokes on “Climategate,” these related to statistical analysis and peer review.

When the story of the climate emails surfaced, and the apologists insisted that there was nothing to the alleged doctoring of evidence, I first thought about the NAS review of the “hockey stick” reconstruction by climate researcher Michael Mann. It was far from favorable, although the climate evangelists spun it as supporting Mann’s analysis.

So I resurrected the NAS report and reported here that it wasn’t a clean bill of scientific health for Mann et al. Far from it.

At the same time, I recalled from the recesses of my increasingly unreliable memory that there was a profound critique of the statistical analysis and alleged peer review of the Mann analysis about the same time as the NAS report. It pains me to attack Mann, who now works at Penn State, as I am a proud Penn State graduate, as is my wife, and as were my mother, father, uncles and aunts. My father was on the faculty, teaching mining engineering. But I still recalled a devastating critique of Mann’s work.

So I did some searching – of my mind and the internet – and came up with a 2006 House Energy and Commerce Committee hearing, where a panel of statistical experts, working pro bono, examined the Mann analysis and found it lacking in statistical heft and subject to a form of peer review that might be better characterized as “a circle of friends.”

The major statistical problem, found the panel chaired by George Mason University statistician Edward Wegman and including David Scott of Rice University and Yasmin Said of Johns Hopkins University, was the “centering” of the proxy data, a topic that was central to the email traffic exposed in the Climategate revelations. “Centering” refers to the reference period against which Mann and his colleagues chose to balance, or “calibrate,” their series of climate proxy data.

Wegman testified in the 2006 hearings, “The reasons for setting 1902-1995 as the calibration period presented in the narrative of [Mann’s work] sounds plausible on the surface and the error may be easily overlooked by someone not trained in statistical methodology.  We note that there is no evidence that Dr. Mann or any of the other authors in the paleoclimate studies have significant interactions with mainstream statisticians.”
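
To make the statistical point concrete, here is a minimal sketch of the centering issue. It is my own illustration in Python, not the Mann team’s or the Wegman panel’s actual code, and every series in it is made up: trendless “red noise” is fed into a simple principal component analysis twice, once centered on the full record and once centered only on a short, late “calibration” window, and a crude “hockey stick index” measures how large an end-of-record excursion each version produces from pure noise.

```python
# Illustrative sketch only: not the Mann team's or the Wegman panel's actual code.
# It shows, with made-up data, how the choice of centering period can bias a
# principal component analysis toward "hockey stick" shapes.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_proxies, n_trials = 300, 30, 200
calib = slice(250, 300)  # a stand-in late "calibration" window

def leading_pc(data, center_window):
    """Center each series on the mean of `center_window`, return the first PC."""
    centered = data - data[center_window].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

def hockey_stick_index(pc):
    """How far the calibration-window mean sits from the overall mean, in std units."""
    return abs(pc[calib].mean() - pc.mean()) / pc.std()

full_hsi, short_hsi = [], []
for _ in range(n_trials):
    # Trendless AR(1) "red noise" proxies: no climate signal whatsoever.
    proxies = np.zeros((n_years, n_proxies))
    shocks = rng.standard_normal((n_years, n_proxies))
    for t in range(1, n_years):
        proxies[t] = 0.9 * proxies[t - 1] + shocks[t]

    full_hsi.append(hockey_stick_index(leading_pc(proxies, slice(None))))
    short_hsi.append(hockey_stick_index(leading_pc(proxies, calib)))

# Centering on the short late window tends to manufacture a larger end-of-record
# excursion than full-record centering does, even though the inputs are pure noise.
print("mean index, full-record centering :", round(float(np.mean(full_hsi)), 2))
print("mean index, short-window centering:", round(float(np.mean(short_hsi)), 2))
```

In runs like this, the short-centered version should come out with the larger index; that, in essence, is the bias in the calibration choice that the Wegman panel was pointing to.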

In short, Mann and his colleagues are statistical neophytes, and appear to have been doctoring their analysis to suit their preconceived notions. That’s the case that the emails, revealed four years later, make clear. The earlier congressional testimony bolsters the case.

As for peer review, the Wegman panel explored “the social network of authorships in the temperature reconstruction area.  We found that at least 43 authors have direct ties to Dr. Mann by virtue of coauthored papers with him.  Our findings from this analysis suggest that authors in this area of the relatively narrow field of paleoclimate studies are closely connected.” Translating, Wegman is charging that the circle of paleoclimate scientists routinely review each other’s work in so-called peer-reviewed journals, providing a gloss of review that isn’t independent.
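
To give a rough, toy sense of what that kind of network analysis involves, here is a short sketch, again my own illustration with invented author names rather than the Wegman panel’s actual data or code. It builds a coauthorship graph from a list of papers and counts how many authors share a byline directly with one chosen researcher; the Wegman panel did this sort of counting, at far greater scale, on the real paleoclimate publication record.

```python
# Illustrative sketch only: invented author names, not the Wegman panel's data or code.
# Builds a simple coauthorship graph and counts direct ties to one chosen author.
from collections import defaultdict
from itertools import combinations

# Hypothetical papers, each listed as the set of its authors.
papers = [
    {"Author A", "Author B", "Author C"},
    {"Author A", "Author D"},
    {"Author B", "Author D", "Author E"},
    {"Author C", "Author F"},
]

# Undirected graph: each author maps to the set of people they have published with.
coauthors = defaultdict(set)
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        coauthors[a].add(b)
        coauthors[b].add(a)

target = "Author A"
direct_ties = coauthors[target]
print(f"{target} has direct coauthorship ties to {len(direct_ties)} authors:")
print(sorted(direct_ties))
```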

Wegman’s group suggested remedies for what it viewed as shoddy scientific method revealed in the case of the Mann calibrations. The first recommendation: “Especially when massive amounts of public monies and human lives are at stake, academic work should have a more intense level of scrutiny and review.  It is especially the case that authors of policy-related documents like the [UN’s Intergovernmental Panel on Climate Change] report should not be the same people as those that constructed the academic papers.”

As the Climategate emails demonstrated, the researchers went to great lengths to prevent public disclosure of their data, frustrating those who wanted to try to duplicate the analysis. The Wegman report, issued four years ago, concluded, “We believe that federally funded research agencies should develop a more comprehensive and concise policy on disclosure. All of us writing this report have been federally funded.  Our experience with federal funding agencies has been that they do not generally articulate clear guidelines to the investigators as to what must be disclosed.  Federally funded work, including code, should be made available to other researchers upon reasonable request, especially if the intellectual property has no commercial value.”