Research should not stop with the research paper

The practice of academic research is based on the production of formal documents that undergo formal review by peers. We routinely evaluate academics for jobs and promotions based on their publication output. When asked about their contribution to science, many academics are happy to point at their papers. In some cases, they will also point at the grants that they have secured.

Back when I worked at NRC, a friend of mine, Martin Brooks, gave a controversial talk entitled “user-based research”. It has been nearly 20 years, and I still remember this talk. His ideas were so upsetting that people left while he was talking. I stayed and listened. It took me years to own the ideas he expressed that day.

Martin complained that researchers mostly “threw research over the wall” expecting that other people (maybe industry) would pick up the research from the other side of the wall. While it certainly can happen that others will read your paper and apply it, you should not count on it. Nobody has to read and understand your work.

When I was a bit younger, a senior professor pointed out that some idea we were discussing had been described and tested by his team in some prestigious venue decades ago, but that nobody ever did anything with it. Without thinking, I replied: it was your job to do something with it, don’t blame others. The room fell silent and there was a long pause.

I am not saying that if you find a way to cure cancer in mice, it is your job to get the therapy cleared for use with human beings and to open a hospital where the therapy is delivered. A single individual can only do so much.

What I am saying, however, is that publishing a research paper is not the goal of research. It is not the final output. It is only one element in a chain. And it is not even a requirement. You need to communicate your idea, but the peer-reviewed journal article is just one such means.

Your actual goal is “transfer”. That is, someone, somewhere, must put your ideas into practice beyond the publication of your paper. It does not mean “industrial applications”, though it can be that. If your idea is worthwhile and you let it end with a research paper, you have failed.

And it does not merely mean collecting “citations” or other bibliometrics. People routinely cite papers without reading them. Few citations are influential.

But the academic incentives almost conspire to prevent impactful research. There is one specific criterion that academics like to apply that is destructive: novelty. For some piece of academic work to be judged worthwhile, it must be novel. I will say it again and again: originality is overrated.

Of course, people entering a new field tend to “rediscover known facts”. You have to push them back and tell them to go study some more. But there is a difference between naïveté and lack of originality. You have to be aware of the history of your field; that is what scholarship is all about. But you also have to stick with ideas for the long run, until the fruits appear.

Instead of rewarding novelty, we should reward scholarship: we should ask people to show that they have studied and documented the past. We should never penalize someone who works on pushing a known idea by refining it, communicating it, validating it.

This idea that some venerable professor had 20 years ago that never went anywhere? Well, it might be entirely worthwhile to revisit it and make it go somewhere, even if it is not novel at all.

More: This blog post is the subject of a live interview with me on YouTube.

Further reading: Boote, D. N., & Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in research preparation. Educational Researcher, 34(6), 3–15.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

16 thoughts on “Research should not stop with the research paper”

  1. Yes, the overwhelming majority of publications are plain junk.
    If you remove the glosses, the sales pitches, and the “surveys”, barely a few paragraphs remain worthy of any interest.

    And they are obviously not meant to be of any use: the structure of the “novel points” is swiftly mentioned in passing, while lengthy “proofs”, which usually do not bring much enlightenment, are rehashed over and over.

    The end results are correct but contrived and useless.

    (I have read over 10,000 CS papers.)

  2. I agree completely. I suspect publications have become the primary metric because they are easy to measure. I’m not sure what should replace them, but I think science, and therefore possibly the world, would be better if the incentives encouraged a broader definition of success.

    This was my biggest complaint about the academic world during my PhD. I always say the most important thing I learned is that I don’t want to be a professor. I don’t want to be a professor because your output is papers, which felt like a waste. Don’t get me wrong: the best papers are valuable! But I would argue that a large percentage (95%?) are not worth reading, which means writing and reviewing them was also a waste.

    I certainly do not recommend anyone read any of the papers I’ve published! But I was trying to play the game, so I had to try to publish something to “prove” that I could.

  3. I agree wholeheartedly. Well said.

    Question: how much of the problem is human agency and how much is structural, due to the physical medium of “paper” (paper in the material science sense)?

    We have lived through a digital revolution in terms of web/mobile/cloud, yet the publishing processes surrounding academic research haven’t changed. At the very least, all data and content should be stored/revised in Git (or an equivalent), the content should be in Markdown (or another equivalent representation of HTML), and the licensing should be sane (unlike the tradition of handing over the copyright to the publishing journal).

    1. I think that the problem is closely related to peer review. By putting a stamp of approval on papers (physical or digital), we create units that can be viewed as accomplishments. Whether it is actual paper or not does not matter much.

      In contrast, the blog post you just read is not approved or reviewed. Thus it is not, for me, an accomplishment in that sense. Hence, my primary motivation is to communicate an idea… not to earn points in some background game.

      How we evolve beyond where we are is not clear and maybe not easy to predict.

      One thing is for certain, however. Professional scientists are in charge of the system. They are the ones granting jobs and promotions.

      1. I agree with your assessment of peer reviews and the flawed incentive system built around them. I should have thought through my question better. I guess I’m wondering out loud whether the future of peer reviewed research is not only correcting past mistakes but embracing new tools to make the research papers “living documents” that are easy to integrate into new research. A research document should have a version number, not just a publishing date.

        1. And almost on cue the real world confirms that “necessity is the mother of invention”: ‘A completely new culture of doing research.’ Coronavirus outbreak changes how scientists communicate by Kai Kupferschmidt in Science.

          “This is a very different experience from any outbreak that I’ve been a part of,” says epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health. The intense communication has catalyzed an unusual level of collaboration among scientists that, combined with scientific advances, has enabled research to move faster than during any previous outbreak. “An unprecedented amount of knowledge has been generated in 6 weeks,” says Jeremy Farrar, head of the Wellcome Trust.

  4. Most papers don’t have a limit on the number of citations. This creates mutual back-scratching incentives. If conferences limited the number of citations per paper to 5 or 10, then the output of a researcher’s career might look more honest.

    Universities should aim to create a balance between publishing and transfer. For every 10 papers published, the goal should be to have at least one transfer. Transfer does not have to mean a startup. Even a small software library that industry finds useful can be counted as a transfer. It is probably more valuable than the multiple papers, because the real test of good research is someone using it in an implementation.

  5. I have a hard time understanding how there is so much enthusiastic support when I see a bunch of real concerns.

    If I’m an academic researcher in archeology, and I publish a paper on 18th century farmstead construction methods in Lower Saxony, how do my results “transfer”?

    How do others judge whether my paper transfers enough to meet, for example, Krishnan’s proposal that “[f]or every 10 papers published, the goal should be to have at least one transfer”? In my archeology example, the next link in the chain may come 20 years later for that sort of scholarship!

    If I collaborate on a paper with someone in industry, does that count as an automatic transfer? If not, and if my industry partner decides to not pursue the work, what should I do if I don’t know that domain nor have other contacts?

    Similarly, if one of my PhD students does some excellent work in a subfield related to mine, and we publish, then the student graduates and decides to work in another field, then is it my “job” to change my research focus and continue my previous student’s work? Even if it doesn’t really interest me? Do I need to get all of my other PhD students to join in that new focus?

    Because in that scenario I can see a senior professor pointing out the previous work that his team did, and comment that “nobody ever did anything with it” … with himself as part of the “nobody.” If you’ve had 15 PhD students, can you really follow all of the paths each of them went down?

    What about publishing negative results? If I’m an academic medical researcher, I may need to register a clinical trial, and report negative results. This helps minimize selective reporting and publication skew towards positive results. How are these supposed to transfer? Or even determined that there was a transfer?

    Lastly, suppose I publish a method which is 5x faster than the existing algorithms. There’s initial industry interest because there haven’t been any big improvements for over a decade. Then three weeks later another group publishes a fundamentally different algorithm which is another 7x faster than mine. The field switches to that new algorithm and my work becomes a footnote. Does that count as a transfer, or does it count against me because there was no transfer? How long should I continue to develop my algorithm if my job is to do something with it?

    1. Andrew:

      I am not arguing that you should keep up the work forever. If you do some work and a student of yours picks it up and makes something out of it… then you can, and maybe you should, move on. Same story if you invent something and later someone invents something even better: then you should move on. In fact, I would urge folks to move out of the way as soon as possible.

      There is a finite number of jobs, grants and promotions. So when you ask “what if an archeologist wrote this narrow paper?”… currently, this does get assessed. How? Probably on the prestige of the venue where it was published. And then, maybe, by counting citations. If the researcher worked on something that was not too fashionable, then it is likely that it will get published in a little-known journal, it will not get much of a readership, and the researcher in question may struggle to translate this work into a job or a promotion. It had better not be a negative result, because that’s hard to publish. And it has to be about a “new” idea. That is how it works right now.

      My post is less about how we should assess researchers… and more about what should motivate researchers. I am saying that they should not write the paper and consider their job done. They should keep working until the work bears fruit.

      What if the work is not fruitful… what if the work will never amount to anything for anyone…

      Then I will say: do something else.

      What is “transfer”? It is up to the researcher to know.

      Everyone’s life ought to be impactful. But I don’t think that there can be one universal measure of “impact”.

      Currently, we are running along under the assumption that a research paper that is never read is “impact” because, well, because it has received a stamp of approval. That is putting the bar very low.

      It is a recent social construction. The whole thing would have made no sense to scholars from the first half of the XXth century.

  6. My 2 cents. The PhD became an “industry” a long time ago, and one cannot really judge a “transfer” objectively, as Andrew Dalke’s comment points out very well. Learning, knowledge – it shouldn’t be an industry, though. The solution, for me, is to remove those ample incentives that make this a kind of freebie club. A PhD researcher should have accommodation and food coupons for free, and access to the Internet, books, other papers, libraries, but no salary: a learner should be into learning for its own sake. It’s a radical proposal (not so radical if one considers that this is how monks and Brahmins used to live and do research in the old days), but the current system doesn’t work. As for professors, they should be judged purely on the basis of teaching outcomes, rather than their paper output.

    1. Whether PhD students have a salary or not, they are assuredly looking for a job after the PhD.

      I am not sure how assessing professors strictly on their teaching would help research. It might help teaching though.

  7. Love the directness of the post, but I am not a fan of the black-and-white dichotomy. The academic world isn’t so simple. Universities are first and foremost institutions of learning, intended to train students. So, naturally, some academics approach research primarily as a way to teach MSc and PhD students. Who cares whether or not the research was eventually used by anyone outside of academia? What matters is that the students (and the PI) on the project learned something along the way. Universities are also intended to contribute to the sum of human knowledge. If you adopt the view that “If your idea is worthwhile and you let it end with a research paper, you have failed”, then researchers who are working on ideas that are 10–20 years out from being commercialized/used have failed.

    Reductionism isn’t appropriate to describe complex systems like Universities. It’s like saying that government is a necessary evil and should be minimized at all costs. But, I do appreciate your point, especially in the context of engineering-focused research that intentionally aims to solve real problems.

    1. I agree that it is worthwhile to do a project even if the sole outcome is that a student acquired new skills or knowledge. I personally do this all the time. But if that’s the outcome, then you do not need the research paper. And I would argue that it is not research, but rather training/teaching. That is fine, but let us not get distracted.

      I did not write that people had to achieve commercialization of their ideas to be successful. Industrial transfer is certainly one way to go, but we should not expect such an outcome in general.

      One of my colleagues works with folks who live in locations with industrial contamination. Much of her work does not go into the research paper. She has to meet with the people, learn about them, figure out the problems… and then, when she thinks she has figured it out, she goes back to them to make sure, and so forth. It would be tragic to assess her work based on the publication record alone.

  8. I agree that the publishing game is often damaging, and sometimes pointless. However, it really isn’t such a bad idea to make people write up what they have done. In many ways, I feel like there is a bit of an unhealthy focus on publishing in the first place. It shouldn’t be that hard (and often is not) to write up results in a way that the community will be able to refer to. Journal metrics, on the other hand, are truly ridiculous. Far from understanding that most people will simply use a result aggregator like Google Scholar or the Web of Science, some researchers I have known will doggedly try to get their work published in a ‘reputed’ journal instead of just publishing it in a specialized journal and moving on.
