Truth Frequency Radio


Jun 06, 2015

Updated June 3, 2015, 12:00 p.m. ET


Michael LaCour is a very, very bad scientist. He’s the UCLA PhD at the center of one of the biggest academic scandals in years: he faked a political science study purporting to show that gay canvassers could change voters’ minds about same-sex marriage through brief conversations. After it turned out that he fabricated data and never even worked with the survey company he claimed to have used, Science retracted the paper.

“How could this happen?” asked the New York Times’s editorial board this week. Their answer is that fraud is mostly the result of deceptive or overly ambitious actors who misbehave, and researchers who don’t scrutinize the raw data on which a study is built. The title of the op-ed is “Scientists who cheat.”

But focusing solely on scientists’ cheating ways misses a bigger issue here. It’s not just bad apples themselves who are to blame. The scientific process itself has serious structural flaws — flaws that make it hard to catch fraud and, in some cases, even discourage researchers from exposing it.

Most studies aren’t replicated — and researchers are discouraged from doing so

Consider the problem of replication. One of the principles of the scientific method is that researchers should attempt to falsify previous findings by replicating their experiments. This was how LaCour’s fraud was uncovered: another researcher, David Broockman, tried to repeat his study and found he couldn’t.

The problem, though, is that this kind of work is extremely rare. “The vast majority of scientific articles never get built upon at all,” explained Harvard’s Sheila Jasanoff, a scholar of science and technology studies. Researchers are often discouraged from replicating the work of others — because replication isn’t considered as important or worthy as discovering new things.

It’s telling that other academics tried to dissuade Broockman from peering into LaCour’s work. He was encouraged to build his career by doing new research, not tearing others’ down. As Jesse Singal’s excellent blow-by-blow in New York Magazine noted: “Throughout the entire process, until the very last moment when multiple ‘smoking guns’ finally appeared, Broockman was consistently told by friends and advisers to keep quiet about his concerns lest he earn a reputation as a troublemaker, or — perhaps worse — someone who merely replicates and investigates others’ research rather than plant a flag of his own.”

This is a problem. Not only does it make it more difficult for scientists to uncover fraud, but it also makes it harder to root out bad work. When scientists have taken replication seriously, they’ve discovered that a great deal of prominent research can’t actually be reproduced. Famously, one review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. Another team of researchers recently published their attempt to reproduce 100 of psychology’s biggest experiments; only 39 passed the test. The problem is that these sorts of checks are done too rarely.

This problem isn’t new; scientists have been talking about it for decades. For this reason, it’s now common to hear statements like this one from The Lancet editor Richard Horton: “Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”

Today, meta-researchers — who conduct research on research — are helping to uncover some of the gaps and flaws in the scientific process and organize replication projects. Still, it’s a truth acknowledged by nearly everyone who works in science that we should be putting a heavier emphasis on replication — not only to root out fraud but also to help tamp down on honest but poorly conceived research.

Fear and power can cause fraud to go uncovered

There are also problematic power structures at play that can distort science, and these were prominently displayed in the LaCour case.

LaCour reached out to a leading political scientist — Donald Green of Columbia University — to be a co-author on the gay-marriage paper. Green never actually saw the raw data on which the findings were based, nor was he involved in carrying out the experiment, but he agreed to sign off on the paper. Simply having Green’s name in the byline ensured that the paper was published in a top-tier journal.

“If LaCour was the only author, Science would not have published the paper,” says Jasanoff. But with Green’s name on the paper, the study made it into Science — one of the most competitive journals in the world.

Green’s reputation also encouraged people to accept the extreme findings in the study, even though they contradicted the vast majority of similar studies, which show it’s nearly impossible to change people’s minds. And the fact that Green was a co-author was a big reason that academics discouraged Broockman from investigating the study.

Yet few questioned this “reputational inflation effect” or asked why such collaborations are allowed in the first place.

Adam Marcus and Ivan Oransky, the founders of the popular Retraction Watch blog, emphasized another aspect of the power problem in science. They noted that there appears to be a correlation between the number of retractions and the quality of the journal: higher-impact journals tend to have more. “It could be that these prominent periodicals have more, and more careful, readers, who notice mistakes,” they write. “But there’s another explanation: Scientists view high-profile journals as the pinnacle of success — and they’ll cut corners, or worse, for a shot at glory.”

Marcus and Oransky smartly argue that we need to rethink incentive structures, so that it’s not just flashy research published in top-tier journals that’s rewarded. “Until those incentives change, we’ll all get fooled again,” they write.

Broockman also called for another systemic change: “I think my discipline needs to answer this question: How can concerns about dishonesty in published research be brought to light in a way that protects innocent researchers and the truth — especially when it’s less egregious?” he told Singal in New York Magazine. “I don’t think there’s an easy answer. But until we have one, all of us who have had such concerns remain liars by omission.”

They’re right. Science is a human enterprise. It will inevitably be flawed. People will sometimes lie and cheat, or simply push sloppy and incorrect findings through the publishing machine. We know replication could help address some of these flaws. We know addressing power imbalances could, too. Instead of having the same conversations about rotten scientists, we need to build up science’s systems to mitigate the errors and frauds we know will continue to come between us and the truth.
