Can Science be wrong? You bet!

A common answer to my post on the reliability of science was that fraud is marginal and that, ultimately, science is self-correcting. That is true on one condition: that the science in question is bona fide science. Otherwise, I disagree that institutional science is self-correcting. It is self-correcting about as much as human beings are rational. That is, not often. A lot of what passes for science is actually cargo cult science. What looks like rigorous science may not be, no matter what the experts tell you. Don’t fool yourself: science is not the process of getting published in prestigious journals or a tool to get a tenured job. Richard Feynman defined science as the belief in the ignorance of experts.

Institutional science can be wrong or not even wrong for decades without any remorse:

  • Economists failed to predict or explain the last financial crisis. Yet they can’t call their models into question. Philip Mirowski explains why: “The range in which dissent happens is so narrow. (…) The field got rid of methodological self-criticism.”
  • A large fraction of AI researchers have convinced themselves that intelligence must emerge from Prolog-like reasoning engines. This gave us twenty years of predictions that the future was in expert systems, and the last ten years spent predicting the rise of the Semantic Web. This ever-growing community of AI researchers is oblivious to its own failure to produce any useful result.
  • Like Fred Brooks, I’m amazed that in 2010, the waterfall method is taught in software engineering schools as the reference model. There is no evidence that it is beneficial and, in fact, much evidence that it is hurtful. That is, students would be better off learning nothing rather than learning to use the waterfall method. Yet entire Ph.D. theses are still built on the assumption that the waterfall method is sound. Accordingly, criticizing the waterfall method on campus is a risky business.
  • The dominant paradigm of modern theoretical physics is string theory, which is not even a scientific theory.

We should not trust that self-correction will happen: biases are often self-reinforcing. Instead, we must ask how self-correction can happen. I think that all science must be verified by independently designed and reproduced experiments. For example, it is insufficient to verify the speed of light with one reproducible experiment. It must be possible for different researchers to come up independently with different experiments, which are all reproduced independently several times. And if everyone is working from the same data, the limitations of the data may never be revealed. And if there is no experiment, you are doing Mathematics or art, not science.

Peer review does not lead to self-correction. Peer review increases quality, but it can also reinforce biases. In Information Retrieval, we often talk about the trade-off between precision and recall. Peer review improves precision, but degrades recall. If your primary goal is to please your peers, you won’t be tempted to point out the flaws in their research!
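The precision/recall trade-off mentioned above can be made concrete with a toy retrieval example (the numbers are hypothetical, purely for illustration):

```python
# Toy illustration of precision and recall in information retrieval.
# Hypothetical scenario: a search system retrieves 8 documents, 6 of which
# are relevant, while the collection contains 10 relevant documents in total.
retrieved = {1, 2, 3, 4, 5, 6, 7, 8}
relevant = {1, 2, 3, 4, 5, 6, 9, 10, 11, 12}

true_positives = retrieved & relevant  # relevant documents that were retrieved

precision = len(true_positives) / len(retrieved)  # how much of what we got is good
recall = len(true_positives) / len(relevant)      # how much of the good stuff we got

print(precision)  # 0.75
print(recall)     # 0.6
```

In the analogy of the paragraph above, peer review pushes precision up (fewer bad papers get through) while pushing recall down (dissenting or paradigm-challenging work gets filtered out too).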

However, I am optimistic for the future. The rise of Open Scholarship will allow outsiders to participate in the research process and keep it more honest.

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

16 thoughts on “Can Science be wrong? You bet!”

  1. I can’t help but take the bait, Daniel.

    “A large fraction of AI researchers have convinced themselves that intelligence must emerge from Prolog-like reasoning engines.”

    Well, maybe that was true for ~10 years around the heyday of Lisp Machines (1980-1990). But today, Good Old Fashioned AI is all but dead. Everything is probabilistic models, stochastic algorithms etc. No one (lamentably) seems to work in the area of “logics for AI” – which, it seems to me, still deserves study (viz. non-monotonic logics, default reasoning etc.), though not because that’s necessarily the best way to build intelligent machines, but because logic, complexity theory, automated theorem provers etc. are all valid (scientific) disciplines of research for which criteria of advance, progress, explanatory power, predictive power etc. are quite as applicable as in any other field.

  2. Quoted for truth.

    If you are willing to look beside mainstream, you may find some interesting alternative lines of thought. Here are some examples:

    * Economists: Take a look at physics, chaos theory and fractals. Fractal theory won’t allow you to predict when some big change is going to happen. It gives you a framework that helps to explain what has happened, however. See “The Black Swan” by Taleb for reference.

    * AI: I guess the interesting research paths lie within self-organizing and agent based systems. It’s going to be interesting to see if biotech and AI are going to converge somehow.

    * Software engineering: Yeah. I agree that just “waterfall” is a bit troublesome. I tend to see various processes as a spectrum ranging from waterfall (controlled, prescriptive) to agile (compromise?) and lean (demand driven).

    * Physics: I guess it would be nice to have a “theory of everything”. I suppose that’s kind of unattainable considering resources are limited (it’s impossible to build big enough experiments).

  3. You seem to mix two types of problems in the discussion of fraud.

    What is commonly perceived as fraud is the “false positive”: a scientific claim that gets published, which contains false data, sloppy analysis, and, in the end, incorrect conclusions.

    We also have the “false negatives”, the failure to identify the false scientific claims. The whole discussion about the unwillingness to challenge the dominant scientific paradigm falls in this category. We see some results and do not perform research to prove them wrong.

    I believe that incorrect results (including the fraudulent ones) will always be present. However, if such results become prominent and widely used, they will be uncovered. Hwang’s cloning fraud was discovered for exactly this reason. Same thing for Schön and molecular transistors. People could not verify the highly visible results that made them famous. So, depending on fraud to reach high status is a highly risky proposition. This is just due to the rivalry of other scientists to be the “correct” ones. The higher you climb in the research ladder, the more people will check and double-check your results. Exposing an error in the work of a famous scientist will bring fame to the person who uncovered the problem. Whether the problem was due to fraud, incorrect assumptions, or sloppiness does not matter. So the self-correction does happen for the important developments.

    The fraudulent results that will not get uncovered are the ones that do not have any significant impact, and (often) do not challenge the established paradigm. So, yes, in a sense it is possible to have a perfectly fine career in science producing results that do not challenge the status quo, and in the end are not worth reading or replicating. The fact that whole research communities do not want to challenge their unimportant results is not so important, imho. The fact that nobody uses whatever they generate is the most severe punishment. For the financial models, you see the self-correction, as everyone now works to identify the problems with the prior models.

    It may take some time to reach the point of self-correction. It took several hundred years for Newtonian physics to be proven wrong. But it happens when the scientific results are just not “useful enough” anymore.

  4. Does anyone really think that economics, AI, or software engineering are sciences? There may be elements of those areas that are actually science, but those are very small parts.

  5. @Stiber

    Does anyone really think that economics, AI, or software engineering are sciences?

    Isn’t research in these fields funded by the National Science Foundation in the USA?

  6. Ah, like the definition of science fiction being what science fiction editors buy.

    But, for the most part, those fields don’t utilize the scientific method. And NSF funds all sorts of engineering, which most would agree isn’t science.

    There’s an economics saying I’ve always been fond of: “The market can stay irrational longer than you can stay solvent.” I believe that good science ultimately wins out over the bad stuff, but not always in a couple of lifetimes. Maybe String Theory will find its place next to Relativity, or maybe it’ll join Lamarckism and Phlogiston Theory, but sooner or later we’ll find evidence or a better theory. There are plenty of bright minds thinking hard on the fundamental questions of physics.

    From a more practical standpoint, if we do want to speed the process up, perhaps a professional track needs to be set up specifically around confirmation of other results. We currently value original research highest: maybe a cadre of researchers evaluated solely on the work they do confirming other results, evaluating structural biases, etc. would balance the work done. I agree with earlier commenters that important results do get reproduced, but we could certainly extend that treatment down to the marginal results as well.

  8. @paul: At the moment there is not even a glimmer of hope that string ‘theory’ can find any place…because it is not testable at all…

  9. @Paul

    I’ll agree that String Theory has gotten more attention than is warranted, but it strikes me as premature to rule it out completely.

    I’m told that it is difficult to get a job in theoretical physics, in 2010, if your work is not aligned with string theory.

    The real problem is that people proposing alternatives are not being hired. They go work for financial firms on Wall Street instead.

    On what is known with great certainty, there should be little divergence between scientists. On what is pure speculation, there should be as much diversity as possible.

  10. @Paul

    As you move towards the highly engineered you lose outside voices, but have a much closer link to objective truths.

    I don’t think this needs to be true. Anyone with the right training can do research in Computer Science. And you can mostly test your work. You think algorithm A is faster than algorithm B? Just implement both of them and compare. (This is naive, but I just want to illustrate my point.)

    There are many fields that were initially closed to outsiders which are becoming very accessible. We don’t need to oppose verifiable truths and accessibility.
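The “implement both and compare” idea from the comment above can be sketched in a few lines of Python. The two functions here are hypothetical stand-ins for “algorithm A” and “algorithm B” (the built-in sort versus a naive insertion sort):

```python
import random
import timeit

# Hypothetical "algorithm A": Python's built-in sort (Timsort, O(n log n)).
def algo_a(values):
    return sorted(values)

# Hypothetical "algorithm B": a naive insertion sort (O(n^2) overall).
def algo_b(values):
    result = []
    for v in values:
        i = len(result)
        while i > 0 and result[i - 1] > v:
            i -= 1
        result.insert(i, v)
    return result

data = [random.random() for _ in range(1000)]
assert algo_a(data) == algo_b(data)  # both must agree before we bother timing

time_a = timeit.timeit(lambda: algo_a(data), number=100)
time_b = timeit.timeit(lambda: algo_b(data), number=100)
print(f"A: {time_a:.4f}s  B: {time_b:.4f}s")
```

As the commenter says, this is naive (it ignores input distribution, cache effects, warm-up and so on), but the key point stands: in much of Computer Science, a claim can be checked directly by anyone with a laptop.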

    If matter is vibrations on strings, there should be other vibrational modes corresponding to higher energies: heavier copies of the known particles should appear in extremely high energy situations. I understand these would be many orders of magnitude higher energy than we can currently produce with particle accelerators and thus are, from a practical standpoint, untestable. But that’s an issue with our technology, not the theory itself: general relativity was true before we had the ability to make fine observations in distant gravitational fields.

    Additionally, even if it failed to predict something new, if we could derive all the laws of chemistry and physics from a particular string theory variant, that would be a very strong argument for it even if it wasn’t classically tested.

    I’ll agree that String Theory has gotten more attention than is warranted, but it strikes me as premature to rule it out completely. If the universe works in a way that isn’t particularly amenable to testing that’s its prerogative.

  12. “more attention than is warranted” was perhaps an understatement: Pushing out valid dissenting views is an egregious attack on scientific progress.

    Actually, this leads me to believe science has a nice tendency to avoid the worst cases of field hijacking. Material Science requires big, expensive machinery. Thus only professional researchers can contribute. But it’s involved in producing real, physical, measurable quantities. Thus it’s hard to pull the field too far down the wrong path: either you’re physically producing the desired materials or you’re not.

    Contrast with theoretical physics: As string theory demonstrates, it’s hard to evaluate these things. What’s dark energy? Are there particles that make up quarks? Ideas are overturned over very long time spans. Thus it’s more susceptible to these sorts of group-think and hijacking. But to balance that, anybody can think about these problems. Einstein discovered special relativity as a patent clerk. As you move towards the esoteric, you get more contributions from outsiders. As you move towards the highly engineered you lose outside voices, but have a much closer link to objective truths.

  13. Computer Science is a weird field, I’m not surprised it doesn’t fall into that continuum. It isn’t cleanly Math, Engineering or Science, but some combination thereof.

    And I agree that fields tend towards openness: a microscope or a telescope are very affordable but were once exceedingly rare.

    I wasn’t promoting the idea that we should strive to push fields to one extreme or the other, only that if you line the two up on a pair of axes, you’d see very few in the inaccessible, difficult-to-verify section. Maybe some of the social sciences, where experiments involve mass interviews or other experiments on large groups?
