It is not difficult to find instances of fraud in science:
- Ranjit Chandra faked medical research results. He pocketed the money meant for running the experiments.
- Woo-suk Hwang faked human cloning, among other terrible things.
- Jan Hendrik Schön faked a transistor at the molecular level.
How did these people fare after being caught?
- Ranjit Chandra still holds the Order of Canada, as far as I can tell. According to Scopus, his 272 research papers have been cited over 3,000 times. As for his university? Let me quote Wikipedia: "University officials claimed that the university was unable to make a case for research fraud because the raw data on which a proper evaluation could be made had gone missing." Because the accusation was that the data did not exist, this was a puzzling rationale.
- According to Scopus, Woo-suk Hwang has been cited over 2,000 times. Despite having faked research results and having committed major ethics violations, he has kept his job and… he is still publishing.
- Despite all the retracted papers, Jan Hendrik Schön still has 1,200 citations according to Scopus. He lost his research job, but found an engineering position in Germany.
Conclusion: Scientific fraud is a low-risk, high-reward activity.
What is more troubling is that we still equate peer review with correctness. The argument usually goes as follows: if it is important work, work that people rely upon, and it has been peer reviewed, then it must be correct. In sum, we think that conventional peer review + citations means validation. I think we are wrong:
- Conventional peer review is shallow. Chandra, Hwang and Schön published faked results for many years in the most prestigious venues. The truth is that reviewers do not reproduce results. They usually do not have access to the raw data and software. And even if they did, they are unlikely to be motivated to redo all of the work to verify it.
- Citations are not validations. Chandra, Hwang and Schön were generously cited. It is hardly surprising: impressive results are more likely to be cited. And doctored results are usually more impressive. Yet, scientists rarely reproduce earlier work. Even if you do try to reproduce someone's result, and fail, you probably won't publish it. Indeed, publishing negative results is hard: journals are not interested. Moreover, there is a risk that it may backfire: the authors could go on the offensive and question your own competence.
- There are many small frauds. Even without making up data, you can cheat by misleading the reader, by omission. You can present the data in creative ways, e.g. turn meaningless averages into hard facts by omitting the variance (see the fallacy of absolute numbers). These small frauds increase the likelihood that your paper will be accepted and then generously cited.
How do we solve the problem? (1) By trusting unimpressive results more than impressive ones. (2) By being suspicious of popular trends. (3) By running our own experiments.
Source: Seth Roberts.