A criticism of computer science: models or modèles?

I was recently on a review committee for a PhD proposal. The student was brilliant. His proposal sounded deep and engaging. The methodology looked scientific: build a model, program the software, gather data, compute the metrics. Yet the student’s hypothesis could never realistically be proven false. The project could not tell us anything about the world. At best, we could learn something about an abstract construction. This is, I fear, all too common in computer science. To make it worse, I believe that most computer scientists are unaware of this methodological failing.

In medicine, medical doctors read scientific papers or, at least, executive summaries of said papers. The papers contribute useful knowledge. However, even the best software practitioners can go years without reading any research, assuming that they ever read any. I believe that it is because they rightly feel that the papers will not teach them much about the real world.

You would probably be upset if you learned that your medical doctor is unaware of the latest clinical research concerning your condition. However, how concerned would a manager at Facebook be if he learned that a software engineer is not up-to-date with computer science research?

The problem comes from the fact that computer scientists rarely work with models or, rather, that they are confused about what a scientific model is.

A model in science is an algorithm that enables you to make meaningful predictions about the real world. In computer science, it might go as follows:

Software X will be faster than software Y on data Z.

A model can often be falsified. In my computer-science example, you should be able to run X and Y and check which is faster. Because they make actual predictions about the world, models are tremendously useful. Not all scientific models are cleanly falsifiable. For example, natural evolution is a scientific model. The belief that unit testing makes software more reliable is part of another. However, scientific models are always sustained by real-world observations as opposed to our own mental constructions.
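
To make the falsification concrete, here is a minimal sketch of what such a test can look like. The names sort_x and sort_y are hypothetical stand-ins for software X and Y, and the random integers stand in for data Z:

    // Minimal sketch: testing the prediction "X is faster than Y on Z".
    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include <random>
    #include <vector>

    // Time one run of a sorting routine over a private copy of the data.
    template <typename Sorter>
    double time_ms(Sorter sorter, std::vector<int> data) {
        auto start = std::chrono::steady_clock::now();
        sorter(data);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        // "Data Z": one million pseudo-random integers.
        std::mt19937 gen(42);
        std::vector<int> z(1'000'000);
        for (auto &v : z) v = static_cast<int>(gen());

        // Hypothetical software X and Y.
        auto sort_x = [](std::vector<int> &d) { std::sort(d.begin(), d.end()); };
        auto sort_y = [](std::vector<int> &d) { std::stable_sort(d.begin(), d.end()); };

        double tx = time_ms(sort_x, z);
        double ty = time_ms(sort_y, z);
        std::cout << "X: " << tx << " ms, Y: " << ty << " ms\n";
        std::cout << (tx < ty ? "prediction holds" : "prediction falsified") << "\n";
    }

Anyone can rerun such a test on their own machine and data; if Y comes out faster, the prediction was simply wrong.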

But that is typically not what computer science offers you. Instead, computer scientists tend to make statements that avoid scientific falsifiability:

According to some cost model that may or may not be indicative of actual running speed, algorithm X is better than algorithm Y on data Z.

Oldberg proposed to reserve the word model for the genuine scientific models, and to use the French word modèle for the other kind.

You can compare a modèle with reality (what Oldberg calls evaluating), but you can never prove it wrong. A modèle is true as long as it is logically consistent, irrespective of reality.

Computer scientists love their modèles!

The problem is made worse by the fact that researchers working on modèles more easily get the upper hand. They are never wrong. They can endlessly refine their modèles and re-evaluate them. As long as there is no actual problem to be solved, the modèles will tend to displace the models. Cargo cult science wins.

Of course, the reverse phenomenon may exist within industry. People working with modèles are at a disadvantage. They can’t make useful predictions. They can only explain, in retrospect, what is observed. All their sophistication fails to help them when real-world results are what matters.

Continue reading with my post Should computer scientists run experiments? or skip ahead to my post on Big-O notation and real-world performance.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

20 thoughts on “A criticism of computer science: models or modèles?”

  1. CS follows the mathematical conception of a model and there is nothing wrong with that. I have more trouble understanding the (ab)use of models by the health and social sciences than by computer scientists.

  2. Using “modèle” to mean anything other than “model” would create chaos among French speakers, so please refrain. Besides, as a non-native speaker, I would have no clue how to pronounce these two words differently.

    Why not something more like “weak models” versus “strong models”, or even better, plain English: “falsifiable models” versus “unfalsifiable models”?

  3. I have been making a similar observation about computer science literature for a long time. Much of it is technically correct but studiously avoids measuring and characterizing the aspects that would make it relevant to real systems. I realize this makes the computer science easier, since it allows greatly simplified assumptions, but it also makes the results less useful.

    This leads to apparently counterintuitive software implementations. There are many cases where the algorithms that perform best in real implementations are objectively worse, by Big-O and similar metrics, than the best algorithms in the literature, or where software behaves worse in the common case than its Big-O analysis suggests. Technically the literature is correct, but the real implication is that it is not characterizing important aspects of the algorithms (see the sketch at the end of this comment).

    This also leads to a lot of algorithm research that designs to its model rather than to real systems, which renders many advances of little value in practice while ignoring potentially useful parts of the design space.
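
    As a minimal illustration of that gap (a hypothetical example, not the commenter’s own): summing n integers is O(n) for both a contiguous array and a linked list, yet the timings typically differ dramatically because Big-O says nothing about memory locality.

        // Hypothetical sketch: identical Big-O, very different real-world cost.
        #include <chrono>
        #include <iostream>
        #include <list>
        #include <numeric>
        #include <vector>

        // Time how long it takes to sum all elements of a container.
        template <typename Container>
        double sum_ms(const Container &c) {
            auto start = std::chrono::steady_clock::now();
            long long sum = std::accumulate(c.begin(), c.end(), 0LL);
            auto stop = std::chrono::steady_clock::now();
            volatile long long sink = sum; // keep the compiler from discarding the work
            (void)sink;
            return std::chrono::duration<double, std::milli>(stop - start).count();
        }

        int main() {
            const int n = 10'000'000;
            std::vector<int> vec(n, 1);                 // contiguous, cache-friendly
            std::list<int> lst(vec.begin(), vec.end()); // pointer-chasing, cache-hostile
            // Both sums are O(n); the constants hidden by Big-O dominate in practice.
            std::cout << "vector: " << sum_ms(vec) << " ms\n";
            std::cout << "list:   " << sum_ms(lst) << " ms\n";
        }

    On typical hardware the vector sum wins by a wide margin, even though the asymptotic analysis treats the two containers as equivalent.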

  4. You may be giving doctors too much credit. I remember a drug salesman I knew. He took his job seriously because he expected doctors to prescribe drugs based on what he told them before they got a chance to read the supporting literature.

    Programmers don’t like to be wrong. I’m not as bad as I used to be, but I can still have a hard time biting my tongue during code reviews.

    The idea behind saying “approach X is better than approach Y under the following assumptions” is that if the assumptions change (say computers ship with more CPUs, or computers ship with multiple kinds of CPUs, or accessing memory becomes more expensive compared to calculating things) you can, in theory, decide which approach is going to work better. And the original researcher gets to claim he was right based on what was true when he published the original paper.

    The problem, of course, is that very few programmers know what assumptions apply to the hardware their code runs on. They might know some general information (accessing cache is faster than accessing RAM), but they don’t know real numbers, and they don’t know enough about the implementation of the libraries they use to see how their general knowledge applies. For instance, C++ programmers often pride themselves on programming close to the metal, but cannot say whether it’s faster to use std::set or std::binary_search over a sorted std::vector. Or they don’t know if string operations are slower than virtual function calls, or how expensive memory allocations really are, etc.
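
    That std::set question is easy to put to the test. Here is a minimal sketch of such a measurement (a hypothetical benchmark, not a definitive answer; the outcome depends on the compiler, the standard library, and the data):

        // Hypothetical benchmark: batch lookups in a std::set versus
        // std::binary_search over a sorted std::vector.
        #include <algorithm>
        #include <chrono>
        #include <iostream>
        #include <random>
        #include <set>
        #include <vector>

        int main() {
            const int n = 1'000'000;
            std::mt19937 gen(7);
            std::vector<int> sorted(n);
            for (auto &v : sorted) v = static_cast<int>(gen() % (2 * n));
            std::sort(sorted.begin(), sorted.end());
            std::set<int> tree(sorted.begin(), sorted.end());

            // A fixed batch of keys to look up in both structures.
            std::vector<int> keys(100'000);
            for (auto &k : keys) k = static_cast<int>(gen() % (2 * n));

            auto t0 = std::chrono::steady_clock::now();
            long long hits_vec = 0;
            for (int k : keys)
                hits_vec += std::binary_search(sorted.begin(), sorted.end(), k);
            auto t1 = std::chrono::steady_clock::now();
            long long hits_set = 0;
            for (int k : keys) hits_set += tree.count(k);
            auto t2 = std::chrono::steady_clock::now();

            using ms = std::chrono::duration<double, std::milli>;
            std::cout << "vector+binary_search: " << ms(t1 - t0).count()
                      << " ms (" << hits_vec << " hits)\n";
            std::cout << "std::set:             " << ms(t2 - t1).count()
                      << " ms (" << hits_set << " hits)\n";
        }

    The sorted vector often wins because its elements are contiguous in memory, but the honest answer is: measure on your own workload.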

  5. I fully agree, and it is a major problem of much of computer science. But the comparison with medical science might not be optimal. The falsifiability criterion initially devised by Popper was created with the natural sciences in mind, and that is also its strongest claim to fame. For engineering science (which most of computer science ultimately amounts to), another closely related criterion comes into play: usefulness.
    A model is useful if it helps us build better systems. Unfortunately, the notion of “better” is often defined in the abstract, and this leads to results that are great in principle and might even be theoretically useful, but that simply break down when taken to the real world, because they make unrealistic assumptions about it.

    It is like engineering great machines that would work well if our planet had zero gravity, which it (luckily) does not.

  6. @Klaus

    I agree with you that I may have overplayed falsifiability.

    Though I am acutely aware that a lot of medical research is abstract nonsense, it remains true that medical practitioners have a moral obligation to stay up-to-date with at least the clinical component of it.

    Why don’t we put the same obligation on, say, software developers? I think it is quite telling.

  7. Interesting post, Daniel, but I think you are misrepresenting modeling in the natural and social sciences. Predictive models are few and far between in any field that isn’t close to physics. The ones that do exist are usually verbal and would feel completely foreign to a mathematician or a CS theorist.

    Of course, the abstract modèles that you describe are even fewer. This, unfortunately, seems to be not because people find them pointless, but because of a general uneasiness with math. But there is one field where modèles proliferate, and probably for ill: neoclassical economics.

    The last type of models, and by far the most common among mathematical or computational models in the natural and social sciences, are heuristic models. These models are not connected to experiment and can never be falsified. They all start off wrong, and everybody is fine with that. As the famous saying goes: “all models are wrong, but some are useful”. These models, especially their computer variants (simulations, usually through agent-based modeling), can actually be detrimental to a field, at the expense of the potential understanding offered by modèles. I call this the curse of computing.

    Finally, on your opening example of doctors vs. programmers: this is a completely unreasonable comparison. One of these groups has significantly more training than the other. A typical programmer doesn’t even need an undergraduate degree for the sort of work they usually do. Not to mention that for a typical programmer (compared to a typical doctor) what is at stake is completely different (slightly slower website load times vs. a person’s health). Thus, it is not reasonable to compare how the two groups approach the scientific literature.

    If you want to stay in the medical disciplines then a slightly more fair comparison would be nurses vs. programmers, but even then I would argue that a nurse requires more qualifications than a typical programmer. Maybe nurses vs. software engineers is fair.

  8. There is a whole branch of mathematics called “Model Theory” – the Dover book by that name is a good starting point. Computer science uses that kind of model.

  9. I agree with the idea that a lot of computer science research is disconnected from reality, thus difficult to test.

    But I oppose using the French vs. the English word. How then am I supposed to translate the notion?

    I support @rodrigob 🙂

  10. This has nothing to do with computer science per se. All other sciences are no better (and oftentimes worse) than computer science in this respect. In CS, at least, you compare more or less realistic things. In math, e.g., you prove what you can prove, not what is useful.

  11. This is not true of all computer scientists. In particular, researchers in computational linguistics and machine learning are always testing their algorithms with real-world data.

  12. @Daniel Lemire

    “Though I am acutely aware that a lot of medical research is abstract nonsense”

    I think this is a very strong statement. Consider that medical research these days relies almost exclusively on computational statistics and applied computer science, especially if you think of medical imaging and data mining on medical data.

  13. I suggest that “… almost exclusively relies on computational statistics and applied computer science” means nothing on its own. Using statistics does not save research from being abstract nonsense, if it is abstract nonsense in the first place.

    Note that I am not trying to estimate which percentage of medical research is relevant and which is not.

  14. @Itman

    I am not following. What are you suggesting? Should we discredit all of medical science based on what you say? Statistics can reveal a lot of information from plain data if it is used by competent professionals. Statistics is used a lot in computer science as well, especially in machine learning, as you know very well. The whole discovery of the Higgs boson is based on statistics applied to the data produced by the LHC. I would not discredit statistics entirely, as you seem to.

  15. @Itman

    “Using statistics does not save research from being abstract nonsense, if it is abstract nonsense in the first place.”

    I gave that example to establish parallels with the methods used in computer science. There are a lot of research areas that overlap between CS and the medical sciences, especially biomedical research. They say “Biology easily has 500 years of exciting problems to work on”, a quote attributed to Donald Knuth.
    I don’t know what you are defending, but every field of science has its own merits. If you don’t accept that, that’s your prejudice.

  16. Couldn’t agree more with you, Daniel. In fact, much of CS research is worse, with an approach like:

    Propose an algorithm A that solves a type of problem P. Implement A and show that it solves some subset of examples p of P. Rinse and repeat.
