EPA Administrator Scott Pruitt promised in late April to raise standards for the science behind environmental regulations — restricting usable studies to those that can be replicated or independently verified. That’s the rationale for a rule he proposed called “Strengthening Transparency in Regulatory Science.”
It sounds harmless, but there’s always a trade-off: The more certainty required of the science, the greater the risk that people will be exposed to dangerous pollutants or toxic products.
Pruitt has framed the discussion to focus on the possibility of false positives — studies that exaggerate risks. But any serious effort to improve the environmental science used to make regulations would need to start by determining whether there really is an excess of false positive results, or whether the bigger problem is false negatives — results that understate or miss real risks. If history is any guide, false negatives could prove to be more costly.
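The trade-off between the two error types is a standard result in statistics: demanding a stricter threshold of certainty before accepting a finding lowers the false-positive rate but raises the false-negative rate for any real effect of fixed size. A minimal sketch of that arithmetic, using only Python’s standard library (the effect size of 2.0 is an arbitrary illustration, not drawn from any EPA study):

```python
from statistics import NormalDist

def error_rates(alpha, effect_z):
    """One-sided z-test. The false-positive rate equals alpha by
    construction; the false-negative rate is the chance that a real
    effect with expected z-score `effect_z` fails to clear the
    critical value implied by alpha."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)          # stricter alpha -> higher bar
    false_negative = nd.cdf(z_crit - effect_z)
    return z_crit, false_negative

for alpha in (0.05, 0.005):
    z_crit, fn = error_rates(alpha, effect_z=2.0)
    print(f"alpha={alpha}: critical z={z_crit:.2f}, "
          f"false-negative rate={fn:.0%}")
```

Under these illustrative numbers, tightening the threshold from 0.05 to 0.005 roughly doubles the chance of missing the real effect — the statistical shape of the concern that a stricter evidence standard trades one kind of error for another.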
The focus on false positives is premised on the assumption that problems recently uncovered in psychology and some areas of medical research also extend to environmental science. In a 27-page document describing the rule, Pruitt made reference to the so-called “replication crisis” — concerns arising from systematic reviews that found more than half of published studies could not be replicated, though those reviews covered only a few fields. There’s no evidence so far that the same kind of crisis affects physics, astrophysics, chemistry, climatology and other fields.
Scientists quickly expressed opposition to Pruitt’s transparency rule, arguing that he would use it as an excuse to roll back regulation critical for protecting public health. It wasn’t clear from Pruitt’s proposal whether the transparency was supposed to apply only to future regulations, or whether it would allow him to change existing ones.
UCLA statistics and epidemiology professor Sander Greenland helped me consider the importance of false negatives. He said that the tendency to produce false positives or negatives varies from one field to another, and may depend on the incentives faced by researchers.
This echoed a discussion on incentives I had last year with Dana-Farber Cancer Institute biologist Bill Kaelin. He told me that there’s major pressure to get positive results in his field. The publish-or-perish culture has intensified, while rigor has declined.
Before about 1980, he said, scientists strove to publish papers that proved one thing in multiple ways. Now, he said, top journals want multiple things proven one way. The claims have to be bigger — loftier. But the evidence can be weaker.
In psychology, too, researchers face a strong incentive to get positive results — indications of some real effect. A study by a Harvard psychologist, for example, appeared to prove that assuming certain “power poses” gave people a hormonal boost. That soon led to a record-breaking TED talk and a bestselling book — though subsequent analysis cast doubt on her conclusions.
But in other fields, there are incentives to find negative results. In pharmaceutical research, for example, it’s much better for a drug’s inventors if no serious side effects are found.
It’s hard to imagine a more egregious case of bad U.S. environmental science than the studies that purported to show leaded gasoline was likely to be harmless. Now, there are multiple lines of evidence showing that during the 1960s and 1970s, millions of children suffered irreversible brain damage, their blood lead levels higher than those measured in victims of the more recent lead contamination disaster in Flint, Michigan.
The harm from this false-negative science isn’t just a matter of weighing a small number of lives against the economy — the economic impact of all that brain damage may have outweighed any economic benefits accrued by continuing to use leaded gasoline and lead paint.
Replication has become a buzzword, but it’s not well-defined. Some scientists quoted in the news worried that Pruitt’s new rule would disqualify data on the Deepwater Horizon oil spill, for example, because it was taken from an unintended experiment nobody wants to repeat.
Columbia University statistics professor Andrew Gelman brought up this ambiguity in a blog post on Pruitt’s rule, suggesting that even with data on oil spills or global warming, independent parties can still verify claims by re-analyzing the same data. UCLA’s Greenland said that, indeed, such a re-analysis was all that was necessary to debunk some of the more infamous irreproducible psychology studies. People with an understanding of statistical reasoning could simply look at the same data and find the errors that led to the original, false conclusion.
But what if the data aren’t there? That happens, said Greenland, and it doesn’t necessarily mean someone is trying to hide something. Some important studies are so old that the data have been lost or discarded. Another factor is that laws such as the Health Insurance Portability and Accountability Act can prevent researchers from getting access to health-related data. Greenland said he’s run into this barrier while trying to get data from the Nurses’ Health Study, which has tracked nutrition, supplement use, exercise and other personal habits among more than 200,000 nurses.
The scientists who’ve objected to Pruitt’s proposed rule are right to worry. The rule is vaguely worded and premised on assumptions and innuendo. It’s all aimed at an alleged problem with false positive results — but Pruitt hasn’t supplied any direct evidence that there’s a proliferation of false positives in environmental science, or that such false results are causing any harm.
Pruitt’s rule could go into effect after a 30-day comment period, though The Washington Post reports that it could face opposition in court. Pruitt is promoting “Transparency in Regulatory Science” as a way to avoid regulations based on weak or flawed evidence. But the rule itself is based on an alleged problem for which there’s no solid evidence at all.
Faye Flam is a Bloomberg View columnist.