Last month, in an interesting coincidence, two wire-service stories addressing the course of fracking research came out on the same day, but with diametrically opposed themes. The first, from Bloomberg, declared, “Frackers Fund University Research That Proves Their Case.” The second, from the Associated Press, reported, “Experts: Some Fracking Critics Use Bad Science.” Both got extensive replay through the wires and Twitter.
The Bloomberg story reviewed several examples of industry-funded research on gas production via hydraulic fracturing. Among them:
- A 2009 Penn State College of Earth and Mineral Sciences study that purported to show how a proposed severance tax on gas production in Pennsylvania would cause significant economic damage. The study did not disclose that it was funded by the Marcellus Shale Coalition.
- The lead author of a widely cited study from the Energy Institute at the University of Texas, Charles Groat, did not disclose that he was a board member of PXP, an independent gas producer with operations in the Haynesville shale, and that he held more than a million dollars worth of PXP stock.
The AP story, by contrast, discussed several widely disseminated claims about fracking hazards that have proven to have little basis in serious research.
What is one to make of all this?
I’ve written previously for the POWER Blog about the damage that gets done to science in the pursuit of non-scientific agendas. This is a big topic (way too big for a single editorial), so for this issue I’m going to focus on the how: How does scientific research get done at all? And what does it mean?
Assume Good Faith
One—hopefully obvious—thing that needs to be recognized at the outset is that scientists are human beings. Like anyone else, they have their own interests, opinions, and families to support. Too often one sees pundits and interest groups portraying “friendly” researchers as noble, altruistic crusaders bravely eschewing self-interest in the pursuit of truth, while scientists supporting the other side are described as venal, amoral shills willing to sell their opinions to the highest bidder.
Despite having worked with scientists for most of my career (and having been married to one for more than 20 years), I can confidently say that I have yet to meet a single researcher who fits either description. The folks I do know are certainly capable of mistakes and misjudgments, but they are universally passionate about their work and genuinely believe in what they’re doing.
Earning a PhD and getting a research post at a university takes an awful lot of hard work. It’s the sort of work no one does unless they deeply enjoy, and care about, their field. Certainly, no one starts down that path plotting to one day serve as an opinion for hire.
Does this mean scientists have no opinions or biases? Hardly. Like anyone else, researchers develop a worldview about their field, one that is shaped by the experiences that drew them to it, the instructors they had and the coursework they chose, the opportunities they were offered (and were not), and the things they’ve learned as part of their research. Two prospective scientists starting from the same point as undergraduates may reach very different worldviews later on.
The Method Is the Message
This is not to suggest that one of these scientists is “right” in their worldview and the other is “wrong.” Many laymen have the well-meaning but erroneous idea that science is about eradicating all opinion from the process and “focusing on the facts,” a reasoned “objective” view of which will lead to the truth. Although many scientists would love it if science were this simple, as a practical matter it almost never is.
To keep us on the subject of fracking, let’s consider one example: the competing studies on the environmental impact of natural gas put out over the past year or so by two Cornell research groups led by Robert Howarth and Lawrence Cathles. Cathles’ work has tended to show that a shift toward natural gas would have major benefits in reducing climate change over the next century. Howarth’s work has shown the opposite. Howarth, naturally, has been widely cited by those who oppose fracking, while Cathles is cited by those who support it.
They can’t both be right, can they? Is Cathles an industry shill, or is Howarth an environmental extremist bent on destroying the U.S. economy? In fact, there is little reason to think anything other than that both are principled scientists for whom the facts are telling different stories.
But aren’t the facts, facts? Not necessarily. How one goes about one’s research can have a very large effect on one’s results. Although there are established, accepted methodologies for many sorts of research, this is most definitely not the case in all fields. Especially in newer fields, quite a lot of research and debate goes into deciding the most reliable and accurate methodologies.
In the case of Cathles and Howarth, the two have been unable to agree on the best means of measuring methane leakage, or on the most relevant time frame—whether the next 20 years matter most, or whether a view of the next century provides the most accurate picture of natural gas’s impact. One may ultimately prove to be correct, and the other misguided, but for now, much of the dispute comes down to differences of opinion.
Show Me the Money
We come now to the next piece of the puzzle—how all this gets paid for. “Publish or perish” is still the rule in academia. The problem is that scientific research is expensive, often obscenely so.
The money for it comes from a variety of sources. Public support of science, while huge, does not come close to funding all the worthwhile research that might be performed, and the competition for it is fierce. University endowments pay for much of it, but that money ultimately comes from outside donors. There is no shortage of private companies and foundations willing to step into this gap, but the competition for this money is also quite keen.
The sums at stake here are not small. A study released in June by the American Association of University Professors (AAUP) estimated that industry funding represented around 6% of all research financing, which still comes to around $3 billion. This figure, however, does not include non-research funding such as outright gifts, faculty endowments, consulting fees, licensing, and so forth. These latter figures can be considerable, and some estimates are that as much as 25% of all research has some industry connection.
Six percent might not sound like a lot, but this money is not evenly distributed. The AAUP study noted that some schools have far higher levels of industry funding, in some cases up to 50% of the school’s R&D budget. Likewise, certain fields, such as medicine and engineering, have much higher levels of industry support. In fact, most funding for biomedical research in the U.S. now comes from private industry. However you want to add it up, science in the U.S. would look very different than it currently does without corporate support.
Allies But Not Lackeys
Though one can make too much of it, there is some reason for concern. The AAUP study was motivated in part by disclosures (via recent litigation) that the tobacco industry had spent decades deliberately funding and manipulating academic medical research in an attempt to downplay the health effects of tobacco. And there are certainly numerous other examples one could cite.
That said, millions of industry dollars flow through the average research university every year without destroying academic integrity. It would be a grave mistake to think the academic community is unaware of the potential for conflicts, and safeguards are in place to prevent them. When an outside company or group proposes to fund a study, it’s not just a matter of tossing a bag of cash into the researcher’s office. The proposal typically must go through the dean of the researcher’s school and the university’s grants department. A contract for the study is written up, spelling out what will be done and how. Once the check gets written, the money is disbursed by the department’s business office, and expenditures are audited by a contract officer.
And the oversight doesn’t end when the study does. Most journals in fields where industry funding is common (especially medicine) require submitted research to state, among other things, what funding was received, whether the sponsor was shown the results before publication, and whether any changes were made as a result.
Most researchers will tell you the problem is not taking the money but failing to properly disclose it. When contacted by Bloomberg, the director of the UT Energy Institute agreed that Groat’s connection to PXP should have been disclosed, though he stood by the integrity of the study, which was funded by the university, not PXP. And though UT Provost Steven Leslie announced a few days later that the university would conduct an independent review of the matter, he likewise told the Austin American-Statesman that he did not feel Groat’s board position created a conflict of interest. “The issue is one of disclosure,” he said.
Of course, this analysis skips over a critical question: Why do some researchers get industry funding and others don’t? The implication is that the money is used to influence researchers’ views, but the reality is that nearly all scientists have formed a clear view of their field well before they’re in a position to compete for meaningful industry funding. And what that means is that when an industry group goes looking to fund a study they hope will advance their agenda, odds are they’re going to fund a researcher who has already published a substantial body of work that supports their aims. After all, trying to buy someone’s reputation and integrity is a lot harder and messier than just funding someone who already agrees with you.
Groat, the author of the UT fracking study, has spent more than 40 years working as a geologist for the U.S. Geological Survey, the American Geological Institute, and the UT Bureau of Economic Geology, among other organizations. Which is the more reasonable interpretation? That Groat was offered a seat on PXP’s board because of his extensive experience and connections as a geologist, or that he decided to risk a four-decade career and reputation by publishing a bogus pro-fracking study in hopes of supporting the value of his PXP stock? This doesn’t mean Groat’s study is necessarily correct, just that his PXP board membership likely had nothing at all to do with its content or conclusions. His interest in, and support of, the natural gas industry certainly predated PXP’s very existence.
Money Isn’t Everything
The gist of all this is that to call a study “industry funded” is to ask a question (several of them, actually), not answer one. How was the money obtained? Was it properly disclosed? Did the donor have any involvement in the research? The answers to those questions, among others, will tell you much of what you need to know about the study’s reliability. The mere presence of industry money generally does not.
And there’s a flip side to that: The absence of industry money doesn’t imply much of anything either. Public funding, which remains by far the largest source of research dollars, is hardly free from political and economic agendas. Any researcher who has spent more than a few years competing for state and federal grant money will tell you that political inclinations in Washington and the state capitals have a major effect on what research gets funded.
Good science is good science no matter who’s paying for it: Sound methodology and reasonable conclusions drawn from the data are what determine a study’s reliability, not who’s signing the checks.
—Thomas W. Overton, JD is POWER’s gas technology editor. Follow Tom on Twitter.