Science follows a conservative process. It takes a long time for a fact or a law to be accepted. Several scientists must verify and reproduce the same results before acceptance is granted.
So goes the theory.
In practice, science is not such a clean process. Routinely, facts and theories become widely accepted quickly and without criticism, mostly because they are convenient. Other proposals get shot down immediately, perhaps for good reasons, perhaps not. Negative results, including any challenge to the convenient but poorly reviewed facts, are frowned upon.
In some fields, there is a bias against simplicity. If you show that a simple technique works well, even if it works better than more complicated or expensive techniques, people will dismiss your work as too easy. I believe we should have the opposite bias: we should steer away from complicated solutions. Complicated techniques should carry the burden of proof: do we really need something so difficult? But complexity is often convenient: it raises the barrier to entry into a field. If anyone can do your work using simple techniques, then why are you getting paid?
I believe that to minimize the effects of such biases, we should encourage diversity in science. Here are a few ideas for getting more diversity:
- pick numerous and different reviewers: the composition of program committees should be different year after year;
- encourage the multiplication of conferences, journals and workshops;
- provide funding to more researchers (spread the money more evenly);
- mix researchers from different organizations (universities, government, industry);
- do not reward researchers for always publishing in the same small set of conferences or journals (the same venues where they often act as reviewers);
- mix researchers having different backgrounds.
Finally, I believe that we need to stress reproducibility a lot more. Researchers need to open up their data and their code. This will ensure that more people can check the facts. It should lead to better science and more diversity.
In what fields is there really such a bias against simplicity? Mitz’s anecdote notwithstanding, I’ve seen researchers do exactly the opposite, questioning what seem to be arbitrary choices made by an algorithm or heuristic.

Are you perhaps offering a complicated solution to a non-problem, where the simpler approach is to accept that peer review isn’t perfect?

Not that I object to your recommendations, especially since I’m not on the hook for funding them.

> the simpler approach is to accept that peer review isn’t perfect?

Perfection is one thing. Biases are another. I can live with a random variable that is not exactly zero: if I average enough such variables, the average converges to zero. But what if they are not independent?

> Not that I object to your recommendations, especially since I’m not on the hook for funding them.

How would changing the composition of program committees every year cost anything? In some fields, take the three most prestigious conferences and look at their program committees: you will find that a third of the members sit on all three, or at least on two, of the committees. Worse: you will find essentially the same composition every year (give or take 10%). In some fields (I won’t name them), pick the three most prestigious journals and look at their editorial boards: you will find that 60% of the board members sit on all three journals…

Points taken. And I don’t doubt that cognitive bias infects the peer review process much as it infects all the other decisions we make. But I am still hesitant to overgeneralize from Mitz’s anecdote. Has anyone done research on the problems you’re describing anecdotally? Or are you implicitly arguing that such research would never be published?

Re cost: I thought you were advocating the need for more reviewers per submission. Diversification is certainly a good idea in principle, but tell that to the journals that would have to forgo the top-shelf names on their boards.

I suppose the “we” in your post is the academic / research community. It seems to me that a more productive line of attack would start a bit smaller. Is there anything a single journal or department could do differently that would advance along the lines you’re suggesting without making a significant sacrifice? Or is this one giant prisoner’s dilemma?

> Has anyone done research on the problems you’re describing anecdotally?

We can observe entire fields becoming focused on a single direction even when that direction has not proven itself in the least. See, for example, Lee Smolin, The Trouble With Physics. As for the bias toward complexity, that is a personal observation, and I do not care to invest much time in proving it. I usually just take my papers elsewhere when I am unhappy with the peer review. Trying to prove objectively that a review was “unfair to me” appears to be a lot of work for little potential gain, at least at the individual level.

> Re cost: I thought you were advocating the need for more reviewers per submission.

Ok. Maybe that was not such a great proposal.

> Is there anything a single journal or department could do differently?

Certainly. Numerous conferences have begun to worry about “refreshing” their program committees, or at least extending them. Other conferences (and, no doubt, journals) have begun stressing reproducibility a lot more. And funding agencies worry more about open research than ever before. (More open usually means a lower barrier to entry, and thus more diversity.)
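The averaging remark in the exchange above can be made concrete with a small simulation (a sketch of my own; the sample size and noise scales are arbitrary, chosen only for illustration). Independent zero-mean errors wash out when averaged, but errors that share a common component do not: the average converges to the shared component rather than to zero.

```python
import random

random.seed(1)
n = 100_000

# Case 1: independent zero-mean errors. Their average converges toward 0.
indep_avg = sum(random.gauss(0, 1) for _ in range(n)) / n

# Case 2: correlated errors. Each error is a shared component (drawn once)
# plus its own noise. Marginally, each error still has mean 0, but the
# errors are not independent: they all contain the same shared draw.
shared = random.gauss(0, 1)  # one draw, common to every error
corr_avg = sum(shared + random.gauss(0, 1) for _ in range(n)) / n

# Averaging kills the idiosyncratic noise in both cases, but in case 2
# the shared component survives: the average tracks `shared`, not 0.
print(f"independent average: {indep_avg:+.4f}")
print(f"shared component:    {shared:+.4f}")
print(f"correlated average:  {corr_avg:+.4f}")
```

This is the prisoner's-dilemma flavor of reviewer bias in miniature: adding more reviewers only helps if their errors are independent; if they all share the same lean, no amount of averaging removes it.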