21 May 2015

What to do with people who commit scientific fraud?

Another story of apparent scientific fraud has hit the headlines.  I'm sure that most people who are reading this post will have seen that story and formed their own opinions on it.  It certainly doesn't look good.  And the airbrushing of history has already begun, as you can see by comparing the current state of this page on the website of the Midwest Political Science Association with how it looked back in March 2015 (search for "Fett" and look at the next couple of paragraphs).  Meanwhile, Michael LaCour hastily replaced his CV (which was dated 2015-02-09) with an older version (dated 2014-09-01) that omitted his impressive-looking list of funding sources (see here for the main difference between the two versions); at this writing (2015-05-22 10:37 UTC), his CV seems to be missing entirely from his site.

This rapidly- (aka "hastily-") written post is in response to some tweets calling for fraudsters to be banned from academia for life.  I have a few problems with that.

First, I'm not quite sure what banning someone would mean.  Are they to have "Do Not Hire In Any Academic Context" tattooed on their forehead?  In six languages?  Or should we have a central "Do Not Hire" repository, with DNA samples to prevent false identities (and fingerprints to prevent people impersonating their identical twin)?

Second, most fraudsters don't confess, nor are they subjected to any formal legal process (Diederik Stapel is a notable exception, having both confessed in a book [PDF] and been given a community service penalty, as well as what amounts to a 6-figure fine, by a court in the Netherlands).  As far as I can tell, these people tend to deny any involvement, get fired, disappear for a while, and then maybe turn up a few years later teaching mathematics at a private high school or something, once the publicity has died down and they've massaged their CVs sufficiently.  Should that be forbidden too?  How far do we let our dislike of people who have let us down extend to depriving them of any chance of earning a living in future?

After all, we rehabilitate people who kill other people; indeed, in some cases, we rehabilitate them as academics.  And as the case of Frank Abagnale shows, sometimes a fraudster can be very good at detecting fraud in others.  Perhaps we should give the few fraudsters who confess a shot at redemption.  Sure, we should treat their subsequent discoveries with skepticism, and we probably won't allow them to collect data unsupervised, but by simply casting them out, we miss an opportunity to learn, both about what drove (and enabled) them to do what they did, and how to prevent or mitigate future cases.  We study all kinds of unpleasant things, so why impose this blind spot on ourselves?

Let's face it, nobody likes being the victim of wrongdoing.  When I came downstairs a couple of years ago to find that my bicycle had been stolen from my yard overnight, the one time that I didn't lock it because it was raining so hard when I arrived home that I didn't want to stay out in the rain a second longer to do it, I was all in favour of the death penalty, or at the very least lifelong imprisonment with no possibility of parole, for bicycle thieves.  The inner reactionary in me had come out; I had become the conservative that apparently emerges whenever a liberal gets mugged.  Yet, we know from research (that we have to presume wasn't faked --- ha ha, just kidding!) that more severe punishments don't deter crime, and that what really makes a difference [PDF] is the perceived chance of being caught (and/or sentenced).  And here, academia does a really, really terrible job.

First, our publishing system is, to a first approximation, completely broken.  It rewards style over substance in a systematic way (and Open Access publishing, in and of itself, will not fix this).  As outside observers of any given article, we are fundamentally unable to distinguish between reviewers who insist on more rigour because our work needs more rigour, and those who have missed the point completely; anyone who has had an article rejected from a journal that has also recently published some piece of "obvious" garbage will know this feeling (especially if our article was critical of that same garbage, and seems to be being held to a totally different set of standards [PDF]).

Second, we --- society, the media, the general public, but also scientists among ourselves (I include myself in the set of "scientists" here mostly for syntactic convenience) --- lionize "brilliant" scientists when they discover something, even though that something --- if it's a true scientific discovery --- was surely just sitting there waiting to be discovered. (Maybe this confusion between scientists and inventors will get sorted out one day; I think it's a very fundamental problem. Perhaps we would be better off if Einstein hadn't been so photogenic.) And that's assuming that what the scientist has discovered is even, as the saying goes, "a thing", a truth; let's face it, in the social sciences, there are very few truths, only some trends, and very little from which one can make valid predictions about people with any worthwhile degree of reliability. (An otherwise totally irrelevant aside to illustrate this gap: one of the most insanely cool things I know of from "hard" science is that GPS uses both special and general relativity to make corrections to its timing, and those corrections go in opposite directions.) We elevate the people who make these "amazing discoveries" to superstar status. They get to fly business class to conferences and charge substantial fees to deliver a keynote speech in which they present their probably unreplicable findings.  They go on national TV and tell us how their massive effect sizes mean that we can change the world for $29.99.

Thus, we have a system that is almost perfectly set up to reward people who tell the world what it wants to hear.  Given those circumstances, perhaps the surprising thing is that we don't find out about more fraud.  We can't tell with any objectivity how much cheating goes on, but judging by what people are prepared to report about their own and (especially) their colleagues' behaviour, what gets discovered is probably only the tip of a very large and dense iceberg. It turns out that there are an awful lot of very hungry dogs eating a lot of homework.

I'm not going to claim that I have a solution, because I haven't done any research on this (another amusing point about reactions to the LaCour case is how little they have been based on data and how much they have depended on visceral reactions; much of this post also falls into that category, of course).  But I have two ideas.  First, we should work towards 100% publication of datasets, along with the article, first time, every time.  No excuses, and no need to ask the original authors for permission, either to look at the data or to do anything else with them; as the originators of the data, you'll get an acknowledgement in my subsequent article, and that's all.  Second, reviewers and editors should exercise extreme caution when presented with large effect sizes for social or personal phenomena that have not already been predicted by Shakespeare or Plato.  As far as most social science research is concerned, those guys already have the important things pretty well covered.


(Updated 2015-05-22 to incorporate the details of LaCour's CV updates.)

6 comments:

Dr. R said...

Dear Nick Brown,

there are many types of fraud and the law distinguishes between them. I think we need a clear definition of scientific fraud and a clear legal framework to deal with scientific fraud.

I think it can be compared to counterfeiting. There is a clear incentive to the fraudster (e.g., a job at Princeton) and a clear damage to the community.

Counterfeiting

Anyone who produces counterfeit currency, documents, or goods also commits a type of fraud. Counterfeiting currency is a federal crime, but state laws can also apply if, for example, you forge false birth certificates, driver's licenses, or other documents. Manufacturing goods and selling them while claiming they are a name brand item, such as selling counterfeit shoes, is also considered an act of fraud.

Sentencing and rehabilitation are important issues. It is also important to consider the broader cultural environment. If Black teenagers in the USA can be sentenced to years in prison, then I think mostly White academics deserve equal treatment.

Sincerely, Dr. R

Dale Barr said...

OK, so is "banishment from academia forever" a punishment, or a reward disguised as a punishment? ;)

Once it becomes known that a scientist has committed fraud, they will no longer be able to get research funding. Ever. So that is a kind of punishment already built into the system.

However, there should be much stronger penalties than that. Anyone who has knowingly committed fraud on a federal grant should be prosecuted for misuse of public funds. This did happen to Stapel, who had misused something like 2M Euros in public funds, but IIRC, unfortunately all he got was community service. That's outrageously light given the scale of his deception. Imagine if a private company had received 2M Euros for a job and was discovered to have faked all the work! It shouldn't be any different for us in academia: if you're going to take the money, you need to do the work, and if you're not going to do the work, you should give it back or face the consequences. Unfortunately too many academics see grants as "gifts" rather than as loans to be paid back with knowledge.

So how bad would the fraud have to be to rise to the level of criminal/civil prosecution? What if it's just p-hacking, or maybe changing a few cells in a spreadsheet? These are hard questions, but I do think that there is a clear difference between smaller-scale data fudging and the larger-scale, willful deception by people like Stapel (and possibly LaCour?). But another part of the solution is for funders to stop giving money to labs that fail to adopt safeguards against fraud and questionable research practices.

Dr. R said...

Dear Dale Barr,

obviously I agree with you. I think it is shameful that governing bodies in psychology (APA, APS, SPSP, etc.) have failed to come up with clear guidelines about which research methods are legitimate and which ones are deceptive, dishonest, and discouraged.

Optional stopping is ok when it is disclosed.

Removing outliers is ok, if it is not done systematically in one direction.

Reporting p = .054 as "p = .05, significant" is not ok. Saying p = .054 and interpreting it as "assuming this effect can be replicated, it would mean..." is ok.

Conducting multiple studies is ok; only reporting the significant ones is not.
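To see why disclosure matters for optional stopping: even when there is no real effect at all, checking the p-value after every new observation and stopping as soon as p < .05 inflates the false-positive rate well beyond the nominal 5%. Here is a minimal simulation sketch (a toy example of my own; the sample sizes and number of simulations are arbitrary choices):

```python
import math
import random

def p_two_sided(xs):
    """Two-sided z-test p-value for mean 0 (data drawn with known sigma = 1)."""
    z = (sum(xs) / len(xs)) * math.sqrt(len(xs))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

def experiment(rng, n_min=10, n_max=50, peek=True):
    """Run one null experiment; return True if it ends up 'significant'."""
    xs = []
    for _ in range(n_max):
        xs.append(rng.gauss(0, 1))  # the null is true: there is no effect
        if peek and len(xs) >= n_min and p_two_sided(xs) < .05:
            return True  # optional stopping: quit as soon as p dips below .05
    return p_two_sided(xs) < .05  # otherwise, a fixed-n test at n_max

sims = 2000
fixed = sum(experiment(random.Random(i), peek=False) for i in range(sims)) / sims
peeked = sum(experiment(random.Random(i), peek=True) for i in range(sims)) / sims
print(f"false-positive rate, fixed n: {fixed:.3f}; with peeking: {peeked:.3f}")
```

The fixed-n rate comes out near the nominal .05, while the peeking rate is several times higher. Disclosed, that inflation can be corrected for (e.g., with sequential-analysis thresholds); undisclosed, the reader mistakes the inflated rate for the nominal one.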

It is not hard to distinguish between good and bad research practices, and it is time to abandon the term questionable research practices that blurs the line between good and bad.

Of course, deceptive rounding is different from making up a whole study. So there can be different consequences depending on the severity of misconduct, but first we need a clear definition of misconduct.

Sincerely, Dr. R

Brent Donnellan said...

This is a cool post! I like these suggestions for changing the incentive structure and culture of science. Moreover, I agree that the Pete Rose rule was a reaction to the events of the day more than a completely well thought out suggestion about how to deal with fakers. My initial thought was that people who have been shown by a preponderance of the evidence to have passed off faked datasets as legitimate should be banned from receiving grants and publishing papers for life. I would like to think more deeply about my suggestion but I posted an initial defense on my blog so I don’t hijack yours! Here is a link: http://tinyurl.com/lk8c476

Dr. R said...

Dear Brent,

I am relatively new to the world of blogs and twitter. Is commenting on somebody's blog considered bad (hijacking)?

Personally, I find nothing more disappointing than a blog post that gets no comments. So, I keep responding to these issues here.

I still like the comparison of dishonest research practices with doping in sports. There are clear rules which substances are banned and considered doping and there are clear rules about punishment of using these substances. Sometimes athletes are banned for a limited time and sometimes they are banned for life.

Lance Armstrong is banned for life and cannot even participate in an official amateur race. Some people may think this is what he deserves and some people may think that this is too harsh.

I think the more important issue right now is that in science we do not have anything close to the rules of the professional sports entertainment industry. I wonder why?

Sincerely, Dr. R


Anonymous said...

Dear Dr. R,

You wrote: "It is not hard to distinguish between good and bad research practices, and it is time to abandon the term questionable research practices that blurs the line between good and bad. "

What do you think of the following article, which seems to state that QRPs should be called "questionable reporting practices"?

http://link.springer.com/article/10.1007/s11336-015-9445-1/fulltext.html

Kind regards