We need a more negative culture

There is a strong bias in science, at least in Computer Science, toward positive results. For example, showing that algorithm A is better than algorithm B will get you published. Reporting the opposite result is likely to get your paper rejected.

One justification for the value of positive results is that they give you more information. Indeed, there is an infinite number of possibilities. Listing all the cases that are of no interest would take too long. We had better focus on what works!

This argument is fallacious since it ignores one of the pillars of science: reproducibility. By taking away the possibility of publishing negative results, we basically throw away the most important reason why we require reproducibility: to verify what others have done.

Time and time again, I come across falsehoods in science. Typically, they occur when experimental results are either badly interpreted or badly implemented. Here is a typical scenario:

  • Researcher A publishes a paper in which he makes a false statement.
  • The statement is compelling. It matches people’s intuition.
  • The work becomes well known and is repeatedly cited.
  • Other researchers build upon the falsehood. They either do not verify the statement (where is the profit in that?) or if they do, they avoid denouncing the falsehood.

Eventually, the statement becomes an accepted fact. Anyone who wants to challenge it has the burden of proof, and it is easy to cast doubt on any experimental procedure. I claim that this happens often. As someone who crafts my own experiments, I see it all the time. I am repeatedly unable to reproduce “accepted facts”. Yet, I never (or almost never) report these problems, because trying to do so would ensure that whatever paper I produce is frowned upon. Moreover, I believe few people ever attempt to verify published results. What makes matters worse is that trying to reproduce experiments is never considered serious work in Computer Science. Often, it is quite a difficult task too: either the data or the code is missing or barely available.

What bothers me is not so much the falsehoods themselves, but the fact that they tend to feed into the biases of entire communities. People expect certain things, so they filter out any “negative” result and protect “positive” results even when such results are not solid. Entire fields are therefore being built on shaky foundations.

We have made some progress recently in Computer Science regarding reproducibility. There are more conferences and journals asking researchers to make their data and code available. However, I believe that culturally, we still have a long way to go.

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

7 thoughts on “We need a more negative culture”

  1. This topic came up at an ICML workshop on evaluation methods in machine learning I recently attended.

    I’ve written up a summary of what I thought were the highlights. Janez Demsar made the argument that there is too much emphasis on positive results and, consequently, as a reviewer it is difficult to prefer a paper that does not show big improvements over existing techniques but puts forward an otherwise interesting idea.

  2. Publishing bad results is also good as an alert sign for future researchers. If bad results are unpublishable, then we are faced with a situation where future generations are condemned to repeat intuitions that have been proved wrong previously.

    Refuting a thesis should be as valid as proving it. And it is.

  3. This is also a problem in psychology, where the same mindset prevails: once an experiment has “proven” something, it is rarely redone (although it does happen).

    In biomedicine, they have a Journal of Negative Results (http://www.jnrbm.com/). We need more of those types of journals.

  4. There is a once well-known paper by Drew McDermott in the ACM SIGART Bulletin (Issue 57, April 1976, pages 4–9) entitled “Artificial Intelligence Meets Natural Stupidity” that concludes with:

    “…AI as a field is starving for a few carefully documented failures. Anyone can think of several theses that could be improved stylistically and substantively by being rephrased as reports on failures. I can learn more by just being told why a technique won’t work than by being made to read between the lines.”
