Peer-reviewed papers are getting increasingly boring

The number of researchers and peer-reviewed publications is growing exponentially. It has been estimated that the number of researchers in the world doubles every 16 years, and the number of research outputs is increasing even faster.
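To get a feel for what a 16-year doubling time means, here is a quick back-of-the-envelope sketch (the 16-year figure is the only number taken from the text; the rest is plain arithmetic):

```python
# A quantity that doubles every T years grows by a factor of 2**(1/T) per year.

def annual_growth_rate(doubling_time_years: float) -> float:
    """Annual growth rate implied by a given doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

def doublings(years: float, doubling_time_years: float) -> float:
    """How many times the quantity doubles over a span of years."""
    return years / doubling_time_years

print(f"annual growth: {annual_growth_rate(16):.1%}")      # about 4.4% per year
print(f"doublings per century: {doublings(100, 16):.2f}")  # 6.25, i.e. roughly 76x growth
```

In other words, a 16-year doubling time compounds to more than a seventy-fold increase in the researcher population over a century.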

If you accept that published research papers are an accurate measure of our scientific output, then we should be quite happy. However, Cowen and Southwood take an opposing point of view and present this growth as a growing cost without associated gains.

(…) scientific inputs are being produced at a high and increasing rate, (…) It is a mistake, however, to infer that increases in these inputs are necessarily good news for progress in science (…) higher inputs are not valuable per se, but instead they are a measure of cost, namely how much is being invested in scientific activity. The higher the inputs, or the steeper the advance in investment, presumably we might expect to see progress in science be all the more impressive. If not, then perhaps we should be worried all the more.

So are these research papers that we are producing in greater numbers… the kind of research papers that represent real progress? Bhattacharya and Packalen conclude that though we produce more papers, science itself is stagnating because of worsening incentives that focus research on low-risk/no-reward ventures as opposed to genuine progress:

This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science and away from exploratory projects that are more likely to fail, but which are the fuel for future breakthroughs. As attention given to new ideas decreased, science stagnated.

Thurner et al. concur in the sense that they find that “out-of-the-box” papers are getting harder to find:

over the past decades the fraction of mainstream papers increases, the fraction of out-of-the-box decreases

Surely, the scientists themselves have incentives to course correct and encourage themselves to produce more important and exciting research papers?

Collison and Nielsen challenge scientists and institutions to tackle this perceived diminishing scientific productivity:

Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.

If we believe that research papers are becoming worse, that fewer of them convey important information, then the rational approach is to downplay them. Whenever you encounter a scientist and they tell you how many papers they have published, where they were published, or how many citations they got… you should not mock the scientist in question, but you ought to bring the conversation to another level. What is the scientist working on and why is it important work? Dig below the surface.

Importantly, this does not mean that we should discourage people from publishing many papers, any more than we generally discourage programmers from writing many lines of code. Everything else being equal, people who love what they are doing, and who are good at it, will do more of it. But nobody would mistake someone who writes a lot for a good writer if they aren’t.

We need to challenge the conventional peer-reviewed research paper, by which I refer to a publication that was reviewed by 2 to 5 peers before getting published. It is a relatively recent innovation that may not always be for the best. People like Einstein did not go through this process, at least not in their early years. Research used to be more like “blogging”: you would write up your ideas and share them, and people could read and criticize them. This communication process can take different forms: some researchers broadcast their research meetings online.

Peer-reviewed research papers allow you to “measure” productivity: how many papers in top-tier venues did researcher X produce? That is why the format grew so strong.

There is nothing wrong with people seeking recognition. Incentives are good. But we should reward people for the content of their research, not for the shallow metadata we can derive from their resume. If you have not read and used someone’s work, you have no business telling us whether they are good or bad.

The other related problem is the incestuous relationship between researchers and assessment. Is the work on theory X important? “Let us ask people who work on theory X.” No. You have to have customers, users, people who have incentives to provide honest assessments. A customer is someone who uses your research in an objective way. If you design a mathematical theory or a machine-learning algorithm and an investment banker relies on it, they are your customer (whether they are paying you or not). If it fails, they will stop using it.

It seems like the peer-reviewed research paper could establish this kind of customer-vendor relationship, where you get a frank assessment. Unfortunately, it fails as you scale it up. The true customers of a research paper are its independent readers, but reviewers are readers with their own motivations.

You cannot easily “fake” customers. We do so sometimes, with movie critics, say. But movie critics have an incentive to give you recommendations you can trust.

We could try to emulate the movie critic model in science. I could start reviewing papers on my blog. I would have every incentive to be a good critic because, otherwise, my reputation might suffer. But it is an expensive process. Being a movie critic is a full-time job. Being a research paper critic would also be a full-time job.

What about citations? Well, citations are often granted by your nearest peers. If they are doing work that resembles yours, they have no incentive to take it down.

In conclusion, I do find it credible that science might be facing a sort of systemic stagnation brought forth by a set of poorly aligned incentives. The peer-reviewed paper accepted at a good venue as the ultimate metric seems to be at the core of the problem. Further, the whole web of assessment in modern science often seems broken. It seems that, on an individual basis, researchers ought to adopt the following principles:

  1. Seek objective feedback regarding the quality of your own work using “customers”: people who would tell you frankly if your work was not good. Do not mistake citations or “peer review” for such an assessment.
  2. When assessing another researcher, try your best to behave as a customer who has some distance from the research. Do not count inputs and outputs as a quality metric. Nobody would describe Stephen King as a great writer merely because he has published many books. If you are telling me that Mr Smith is a great researcher, then you should be able to tell me about the research and why it is important.


Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

44 thoughts on “Peer-reviewed papers are getting increasingly boring”

  1. We have a similar problem in the field of machine learning: since the field is really hot these days, many researchers want to gain visibility, and many papers are just one more applied paper with a tweak on an existing algorithm that claims, e.g., some minor improvement in accuracy.

    1. … and, to be clear, there is nothing wrong with writing yet one more paper about a given tweak. However, if the system is such that it is more rewarding to write ten such papers than one more fundamental paper, then the incentives are wrong.

      Thankfully, the incentives can be changed.

  2. I can tell you that at least in compiler research, conferences keep raising the bar to get published. I like to point to the papers on Smalltalk and Self that introduced inline caching. These are seminal papers with hundreds of citations; every modern JIT compiler depends on this technology. However, these papers would never be accepted as-is in today’s conferences. They compare their compiler against itself, and they use a small set of benchmarks they wrote themselves. The papers have limitations in their methodology, but that does not take away from the validity of their findings. To publish the same paper today would require 5 to 10x more work… So people just aim less high. Have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), spend the bulk of the work on the paper. What’s the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished.

    IMO, we need to move away from a reject/accept model and towards a model where every paper gets a score, and everything gets published online (including what the reviewers wrote, which gets de-anonymized). Maybe not everyone gets to give a talk at the conference, but all work is made available to the public, with its score and reviews.

    1. We need to be concerned about this problem:

      So people just aim less high. Have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), spend the bulk of the work on the paper. What’s the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished.

    2. That is what arXiv is for. Interesting idea about the score, though, since that is something they do not have.

  3. I totally understand and agree with the general thesis of this post, but as a non-academic living in an academic world (computer vision and machine learning), the network effect of this explosion of papers, while maybe not so great for academia, has thoroughly changed the landscape in terms of source code and implementation availability.

    So while all of this may be bad for academia, we’re hitting an information golden age on GitHub. The one-upmanship and steady progress means that, usually, if I can’t get a specific implementation or understanding, I can find some research that has built upon it which I can easily make useful.

    1. the network effect of this explosion of papers (…) has thoroughly changed the landscape in terms of source code and implementation availability.

      Easier data and code sharing should certainly “accelerate” scientific progress. It is, after all, why the Web was built in the first place! (Not for YouTube!)

      we’re hitting an information golden age on Github

      That might well be true, but does it contradict the thesis of the blog post?

  4. Seems to me the biggest problem is something you addressed tangentially: organizations employing researchers need a way to measure their output, and “papers published in ‘top’ fora” and “citations” allow for quantitative measurement.

    Yes, those impact measures are far from ideal, but alternative models for producing and disseminating research need to support impact measurement at least as well as paper and citation counts — from the point of view of those organizations.

    1. Robert, if the gatekeeper lets you through the gate, it just means that the gatekeeper let you through the gate.

  5. The solution for these problems already exist, in my opinion: registered reports. See https://www.cos.io/initiatives/registered-reports

    The concept is simple: introduce peer review for the experiment or paper design before the experiment is ever carried out, and accept publication independent of what the results of the experiment are.

    Why is this better? It removes the incentive to report only positive findings; it lets peers suggest directions that will make the paper worthwhile; and it eliminates the risk associated with doing risky science by ensuring publication isn’t rejected based on results.

    The Center for Open Science has other initiatives to try and fix science’s problems. Strongly recommend looking them over – I was blown away by Brian Nosek’s seminar on it at Brown in March of last year.

  6. This problem has already been solved, imo, by the concept of registered reports: https://www.cos.io/initiatives/registered-reports

    The concept is simple: (a) introduce peer review before the experiment or the work at the focus of the paper is ever carried out, and (b) guarantee that the results of the experiment are published independently of whether they align with the research hypothesis.

    This has many benefits: it disincentivizes suppressing negative results; it removes the risk of doing risky work that challenges the status quo; it lets researchers improve the approach to the solution rather than agree or disagree with the outcome of the solution. You get all the benefits of peer review with none of the drawbacks.

    So far it’s gained momentum in psychology and some other parts of the social sciences. I saw Brian Nosek’s seminar on it at Brown and am convinced it has direct applicability to systems and learning research.

  7. “Peer-reviewed papers are getting increasingly boring” is an ambiguous statement.

    First: Peer-reviewed is used as an adjective to modify the noun papers. Are papers which are not peer-reviewed less boring, i.e., is the review process itself introducing boredom into scientific papers?

    In my experience, when a multi-author paper is peer-reviewed by a group in which group members inject their own additional details, modifiers, provisos, or caveats during revision, the paper often loses clarity and focus. But confusion leads to reader frustration, rarely boredom.

    Second: Increasingly boring implies that peer-reviewed papers were boring to start with. Isn’t that a hasty generalization, or a statement that applies to peer reviewers only? Scientists do not read peer-reviewed papers with the intent of getting bored.

    Boredom has many sources, but let’s not blame the readers. Boredom is not found during a title search. It is found whenever the writer betrays the expectations that readers inferred from the paper’s title.
    More informative titles and abstracts that set the right expectations would prevent much boredom.
    More visual abstracts, with their high communication bandwidth and deep reader involvement, would also prevent much boredom.
    More reader-centered writers bent on structuring their papers to rapidly answer reader questions would prevent much boredom.

    Readers are mining for Science bitcoins. They expect difficulties, reading challenges, but also rewards. Let’s not bury our coins too deep under kryptonite-like jargon or convoluted sentence structures thus weakening the reader’s resolve. And let’s not atomize and disperse our coins as molecular dust across the scientific landscape thus dissipating the reader’s time and attention.
    After all, isn’t boredom time incarnate?

  8. There is another possible explanation: Fields that produced breakthroughs in the past may have run out of low-hanging fruit. They are now producing incremental improvements, because the remaining fundamental questions may not have conveniently human-sized answers. Or the answers may require breakthroughs in other fields.

    I like working as a computer scientist in bioinformatics. I get to work on a variety of topics from theoretical CS to developing software for life science researchers. There is a constant feeling of progress, as very few people have tried attacking the problems with a similar toolkit. Yet this kind of research does not fit into academia outside well-funded research institutes. CS departments, biology departments, and medical schools all have their own standards for evaluating people, and I’m somewhere in the middle.

    Silo mentality is a common problem in large organizations, and academia is clearly suffering from it.

    1. Sure, there are fields like that — every field has a life cycle, and part of every researcher’s skill set should be to know about that and recognize it for what it is. Nevertheless, I think incremental improvement is tremendously undervalued and breakthroughs are overvalued. Funding criteria are heavily skewed towards breakthroughs. So you end up funding only the very few people you consider to be at the top, and everyone else runs dry. That leaves people with original, non-mainstream ideas out in the cold. Replication studies are not high on the list of priorities.

      And it undervalues the significance of what you may consider marginal gains: marginal gains are what miniaturized the transistor from centimeters to a few nanometers.

    2. There is another possible explanation: Fields that produced breakthroughs in the past may have run out of low-hanging fruit. They are now producing incremental improvements, because the remaining fundamental questions may not have conveniently human-sized answers.

      It is a common explanation for scientific stagnation. The problem is… it does not have a good track record. Whenever someone states that all the low-hanging fruit is gone, someone soon finds a new patch with plenty more of it.

      1. I think the problem is specific to established fields such as computer science and physics. Whenever a new patch is discovered, many bright and motivated researchers descend upon it like a swarm of locusts. Any low-hanging fruit will be quickly picked, because there are so many researchers in the established field.

        Much of what is interesting in computer science research comes from interdisciplinary fields such as data science, algorithmic economics, bioinformatics, and quantum computing. As relatively new and unexplored fields, they still have much left to discover. Yet if such interdisciplinary fields are successful, they also become established and overcrowded.

        I’m seeing that from my niche between computer science and bioinformatics. There are many people doing CS research inspired by bioinformatics, and there are many bioinformaticians doing CS-like methods development. Yet they often fail to cross the gap between the fields, because they are used to their own way of thinking.

        People from one field often know the methods from the other field, but they don’t understand their way of thinking. When they read papers from the other side, they often interpret the results far too literally. They don’t know the shared context in the other field, and they don’t see the obvious implications of the published results.

  9. The incentive structure is all screwed up. In some countries (AFAIK China) physicists need to have published in certain journals (sufficiently often) to get a position. This creates huge pressure on researchers and journals alike, and puts everyone on edge. Math is better to some degree (I’m a mathematical physicist, so I participate in both the math and the physics communities): referees take more care reading the papers they have been assigned and writing reports. The reports seem less acrimonious. (Wow, I have seen pretty bad reports on physics papers and rather insulting replies from both referees and authors.)

    What is interesting is that this is in stark contrast to what gets funded for the most part: everything needs to be novel and unprecedented, and grants tend to be more and more tightly controlled. And general university funding seems to be on the decline, at least in the countries I have been employed in (Germany, USA, Canada, Japan). Try suggesting a careful follow-up study on your next grant application. Many fields already have a replication crisis.

    1. And general university funding seems to be on the decline, at least in the countries I have been employed in (Germany, USA, Canada, Japan).

      People often make such general statements, but when I look at the actual funding… e.g., how much the NSF gets every year in constant dollars, I do not see a decline. Even the broader claim that universities are less well funded seems suspicious. If you look at per-capita higher-education funding, you usually see an increasing slope.

      I am not saying that it is not happening, but I submit to you that one should look at the hard numbers.

      What might give the impression that funding is scarcer is that we have greatly expanded the training of new PhDs. So there are more and more new ‘scientists’ (defined as people having a PhD) compared to ‘scientist jobs’, but that’s largely by design. We collectively decided to ramp up the production of new PhDs. Unavoidably, this creates a hard bottleneck since we did not, at the same time, create many new research-oriented jobs.

        That’s not what I meant. I meant that in the past professors were able to cover more of their research expenses via general university funding, as opposed to securing their own grant money. My PhD advisor could pay for my trips and all that from the yearly budget provided by his university (as opposed to from grants). My position, too, was paid for by the university from the yearly budget it received from the state (save for about a year when he asked me to switch to using up grant money).

        In Canada I got $1,000 from my postdoc advisor for two years (and it was a rather prestigious postdoc position).
        As an associate professor, my yearly budget at my current institute is essentially 0. (My group has a budget to cover paper and all that, but I can’t even buy a computer with university funding.) Luckily, I have managed to obtain grants for the whole duration of my employment, but not everyone was so lucky.

        1. How money gets distributed within universities may certainly vary over time, and the new distribution can have negative effects. What I submit to you is that universities generally receive more funding than they used to get… on a per capita basis. That is, they have more money. And specifically, there is more money for research.

          If you go back 50 years, few people (few professors) had any money at all for research. Most professors today have some funding for research activities.

          1. […] and the new distribution can have negative effects.

            Sure, any change can have negative ramifications, although I don’t think this is a persuasive argument against change.

            What I submit to you is that universities generally receive more funding than they used to get… on a per capita basis. That is, they have more money. And specifically, there is more money for research.

            Two things: first of all, I think it is important how funding is used, i.e., we need to distinguish between the mean and the median (budget of researchers). For example, if you only reward excellence, you basically keep reinforcing existing differences and make it very hard for other universities to compete on a level playing field. (In the UK, people from “other” universities complain about the Ox-Bridge Mafia (their words).) The US and Japan have a much more hierarchical university system, with a clear delineation between top-tier universities (e.g., the University of Tokyo and Kyoto University in Japan) and the rest. In Germany that has been less pronounced, although I fear that the new funding schemes and criteria push universities in that direction.

            What is worse, a lack of funding could prevent some people from doing any significant research if there isn’t a general fund. I’m a theoretician/mathematician; I just need a computer, some paper, and pens. But experimentalists need a significant budget just to keep their machines running. Without it, they don’t do any significant research, which further hampers their ability to obtain external funding.

            Even universities that obtain lots of external funding don’t, in my experience, distribute the overhead attached to grants (usually about 30%) to other researchers. At least in my experience, this money is used for administration and the like.

            Secondly, if you just say that universities receive more funding, I’m not sure this is all that useful if you don’t account for inflation, the general population increase, or the increase in the number of students. Or you could consider research funding as a share of GDP. Is that money invested in hiring researchers or in expanding the administration? (For example, it seems that in the US specifically, the administration at universities has expanded significantly, whereas the number of tenured positions did not see a similar increase.) I’m not saying you are wrong, just that looking at absolute numbers can easily be misleading.

            1. if you just say that universities receive more funding, I’m not sure this is all that useful if you don’t account for inflation

              If I offered numbers, I would surely include them either as ratios (e.g., to GDP) or in real dollars. There are many numbers to look at, and many ways to look at them… but I basically invite you to look at them for yourself. You will find that, indeed, in some countries (e.g., Bulgaria) higher-education funding is decreasing. But I submit to you that throughout the Western world, generally speaking, it has increased significantly, no matter how you normalize it (even as a percentage of GDP), since the 1980s, to say nothing of the 1970s. And compared with the golden era of physics (the first half of the twentieth century), there is no possible comparison: funding is massively higher today.

              Sure, any change can have negative ramifications, although I don’t think this is a persuasive argument against change.

              I was not making an argument against change. I was arguing that even if funding is up or constant, it does not follow that misallocation of funding could not explain part of the perceived scientific stagnation.

              Two things: first of all, I think it is important how funding is used, i.e., we need to distinguish between the mean and the median (budget of researchers). For example, if you only reward excellence, you basically keep reinforcing existing differences and make it very hard for other universities to compete on a level playing field. (In the UK, people from “other” universities complain about the Ox-Bridge Mafia (their words).) The US and Japan have a much more hierarchical university system, with a clear delineation between top-tier universities (e.g., the University of Tokyo and Kyoto University in Japan) and the rest. In Germany that has been less pronounced, although I fear that the new funding schemes and criteria push universities in that direction.

              There is strong evidence that throwing more money at the same researchers, beyond a certain point, hits a point of diminishing return. But is that a marginal effect or is it a primary concern?

              We have various countries that use rather different research funding allocation strategies, with some countries going for more even distributions, and others going for a few big winners… and, yet, we do not see some countries escaping this perceived stagnation. That is, if you have a worldwide scientific stagnation, then you cannot explain it in terms of national-level factors.

              To put it differently, it is very tempting to say “science is stagnating because we do not give enough money to enough people”, but how sure are you that we would see a scientific surge in Germany if it were to offer sizeable unrequested long-lived grants to tens of thousands of professors?

              1. There is strong evidence that throwing more money at the same researchers, beyond a certain point, hits a point of diminishing return. But is that a marginal effect or is it a primary concern?

                To me it is a primary concern. Also, when you write diminishing return, how do you quantify that? It is easy to quantify the amount of money you put into the system, but how do you judge its output? Is that even the right way to think about it?

                I’m working in mathematical physics, and I can credibly claim that my research is motivated by problems from physics much more easily than someone in topology or logic can. Or ancient history for that matter. Even though I come from the hard sciences, I look at the decline of the humanities with worry.

                We have various countries that use rather different research funding allocation strategies, with some countries going for more even distributions, and others going for a few big winners… and, yet, we do not see some countries escaping this perceived stagnation. That is, if you have a worldwide scientific stagnation, then you cannot explain it in terms of national-level factors.

                This is a global phenomenon that can only be remedied when sufficiently many countries change the way they incentivize scientists. As with, say, climate change, you can say nothing will change unless there is a global movement. In a sense, you are right. But on the other hand, global = sum of local here, i.e., something will only change globally if there is a critical mass of individual countries taking measures.

                Your blog post only addressed one of the negative consequences the current incentive structure has. Big grants with temporary funding for postdocs create a large number of temporary positions for highly qualified people who should have earned a permanent position already. I count myself amongst them (even though I am an associate professor, my position is not permanent). When there are job openings, the universities easily receive >100 applications, most of them serious. At good universities, it is literally hundreds (at the University of Toronto, it was over 550 when I was there, and this was a position at the math department for one of their satellite campuses; at a solid Dutch university, it was over 200 last year). Those are all signs that the current system is not sustainable. To come back to the topic of your blog post, this is one of the drivers of the publish-or-perish philosophy. It is why some people are aggressively trying to publish results piecemeal and get unprofessional in their responses to referee reports.

                1. Your blog post only addressed one of the negative consequences the current incentive structure has. Big grants with temporary funding for postdocs create a large number of temporary positions for highly qualified people who should have earned a permanent position already. I count myself amongst them (even though I am an associate professor, my position is not permanent). When there are job openings, the universities easily receive >100 applications, most of them serious. At good universities, it is literally hundreds (at the University of Toronto, it was over 550 when I was there, and this was a position at the math department for one of their satellite campuses; at a solid Dutch university, it was over 200 last year). Those are all signs that the current system is not sustainable. To come back to the topic of your blog post, this is one of the drivers of the publish-or-perish philosophy. It is why some people are aggressively trying to publish results piecemeal and get unprofessional in their responses to referee reports.

                  You will find this concern echoed on this blog for almost 20 years now. We somehow decided to train an ever greater number of scientists, tweaking the supply, without worrying at all about the jobs (the demand angle).

                  The result is rather predictable: job insecurity, lower salaries, and so forth. People often do not notice the “low salary” part, but if you project out the salary of a professor from the 1970s by extrapolation, it would be quite high today… on par with or better than the salary of a medical doctor or a high-level manager.

                  To me, it seems quite likely that it can explain a scientific decline, if there is a scientific decline. The problem is… how would you demonstrate it?

                  You can easily show, with hard numbers, that there is a PhD glut. But how would you even try to demonstrate its ill effects on science itself?

                  To me it is a primary concern. Also, when you write “diminishing returns”, how do you quantify that? It is easy to quantify the amount of money you put into the system, but how do you judge its output? Is that even the right way to think about it?

                  Regarding research outputs, the way it has been done is to look at the number of papers and the citation counts. The research shows that if you increase someone’s funding, their publication count goes up to a point, and then it starts levelling off… so you won’t get 10× as many papers when you increase the funding from $10M to $100M. Funding does not seem to improve quality (as measured by the number of citations per paper), but it can increase volume (the number of papers). At the margin, there was interesting work comparing people who almost got funded with people who did get funded and, ironically, the people who almost got funded often did better than those who did (presumably because they worked hard to improve so that they could be funded).

                  From my reading of the research, I would say that if you want to maximize output, you want to spread the money around.

                  Yet we have the well-known problem that productivity is not evenly spread: it follows a power law. Most good papers are written by the top 1% of scientists. So funding agencies and scientists often want to concentrate the money on a few people. Whether this has been proven effective, I do not know.
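                  The power-law claim above can be made concrete with a toy sketch (my own illustration, not from the comment; the inverse-square exponent is the classic Lotka’s-law assumption, not a measured value). It estimates what share of all papers the most productive ~1% of authors account for when the number of authors with k papers is proportional to 1/k².

```python
# Toy Lotka's-law sketch (assumption: authors with k papers ∝ 1/k^2).
# Question: what fraction of all papers do the top ~1% most prolific
# authors produce?

def top_share(top_fraction=0.01, max_k=1000):
    ks = range(1, max_k + 1)
    authors = [1.0 / k**2 for k in ks]   # relative number of authors with k papers
    papers = [k / k**2 for k in ks]      # papers contributed at each productivity level
    total_authors, total_papers = sum(authors), sum(papers)

    cum_authors = cum_papers = 0.0
    # walk from the most prolific authors (largest k) downward
    for k in reversed(ks):
        cum_authors += authors[k - 1]
        cum_papers += papers[k - 1]
        if cum_authors / total_authors >= top_fraction:
            return cum_papers / total_papers
    return 1.0

share = top_share()
print(f"top ~1% of authors produce ~{share:.0%} of papers")
```

                  Under this purely illustrative assumption, the top ~1% of authors account for a share of output far larger than 1%, which is the sense in which productivity is concentrated; the exact figure depends on the chosen exponent and cutoff.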

                  In science and engineering, where I live (Canada), we have NSERC Discovery grants, which have a success rate of about 50% for new applicants. Considering that you can apply every year, the cumulative success rate can be far higher than 50%. In a department such as mine (and I am not part of the elite), every professor who is no older than me has federal funds. My colleagues are good, but mine is not an elite school. If you combine this funding with all other sources, including the rather generous start-up funds that universities hand out systematically… the vast majority of professors in science and engineering who want and need funding can get some.

                  Are there good researchers among professors who end up without the funding they need? Yes. But I believe that, at least in Canada, it is at the margin. Note, furthermore, that if you can’t get funding of your own, you can often collaborate with people who have funding and do OK.

                  What is absolutely certain is that the number of professors with funding in Canada is vastly higher today than it was in the 1970s.

                  I’m working in mathematical physics, and I can credibly claim that my research is motivated by problems from physics much more easily than someone in topology or logic can. Or ancient history for that matter. Even though I come from the hard sciences, I look at the decline of the humanities with worry.

                  I tend to focus on science and engineering because this is what I know.

                  1. My own intuition is that the PhD glut, and the resulting job insecurity that it creates for young scientists, is probably causing a scientific decline on its own. Sadly, I do not know how to even begin making this argument.

  10. I wonder whether a Stack Overflow-style forum for contributions (i.e., for what used to be papers) and commenting could work, one way or another. Or whether it is doomed to mostly fail, as fruitful discussion breaks down more easily and argumentation becomes less clear-cut on “general” topics (say, social science) than in Stack Overflow’s computer-code discussions.

    One problem with peer review that lacks a ready commenting option for the general public is that only 2–3 people can check for an (often difficult to spot) flaw; if they do not detect it, the flaw may remain ignored by (most of) the community for a long time, or forever. And with re-submission to multiple journals, eager authors can, in theory at least, have multiple goes until they find a pair of reviewers who do not spot the weakness.

    1. There are already research-level questions on “stackoverflow-style forums”. Go to MathOverflow: some of the questions there are at the level of research papers (at least short ones).

  11. My experience fwiw: I got my PhD in 2010. Research has been a fairly small part of my career since then. I don’t know if I had whatever it takes for a more research focused career, but by the end of grad school I didn’t want it. I found most of the papers published in top conferences in my areas to be some combination of deadly dull and incremental or wildly impractical (obviously so to anyone who spent time out of academia).

    I like the idea of trying harder to make the (potential) usefulness of research a bigger part of researcher evaluation. I have latterly been exposed to bits of the math research world where my impression is that staggering quantities of high caliber intellect are spent on obscure puzzles that have a snowball’s chance in hell of ever having substantial impact. I would hate to see more of science follow math’s path.

    1. (…) my impression is that staggering quantities of high caliber intellect are spent on obscure puzzles that have a snowball’s chance in hell of ever having substantial impact.

      I think it is a concern. We do not want that to happen.

  12. And that’s why acceptance of papers with strong scientific grounding is becoming more and more difficult: there is the problem of finding suitable reviewers and, most frequently, problems related to conflicts of interest.
    In high-impact journals, the author’s affiliation has become decisive for acceptance.
    The main problem with current research policies is that:
    1. Only students working with famous supervisors (who are, in general, members of the editorial boards of the top journals and conferences in their fields) will have the chance to succeed in their early careers and to occupy future research jobs. This looks like an inheritance to be protected.
    As a consequence, this will exclude any individual attempt to make a difference, and it will end up excluding promising early-career scientists who do not have strong support.

  13. Publication counts and weights are used as a measure of value or productivity because non-scientists hold the purse strings for scientists, academics in particular. This is especially true for resource-intensive fields.

    To participate in investment decision-making, you either bring credibility in the form of predictive or production capacity, or you bring resources. For scientists to have a say in their destiny (and that of their findings), they might want to ask themselves how their understanding of those who use or deploy the knowledge they produce can empower them to participate in those decisions.

    I would hate to see scientists perfect the means of their disempowerment.

    1. Publication counts and weights are used as a measure of value or productivity because non-scientists hold the purse strings for scientists, academics in particular.

      Funding decisions are overwhelmingly set by scientists themselves, at least in North America and at least when it comes to government spending. Scientists may not control how much money gets allocated, but they largely control where it goes. They control hiring decisions, promotions, grants.

      1. The funding decisions may be made by scientists, but the framework for grant applications and for dispersing funding is, in most places, not. Of course, now we get into the weeds and we need to specify what country and what scheme we are talking about; I haven’t applied for grants in the US, so I can’t speak to that. But it is my experience that, e.g., review processes have often been designed by administrators and not scientists. Many grants are graded with points or stars, which means the whole thing becomes essentially binary (e.g., 1–4 points becomes 0 and a full 5 points becomes 1). Funds tend to come with more and more strings attached. In Japan, I had to give up one generous grant because it was too difficult to use the funds. (Even for things like flight tickets, the administrator would have to get two offers; bridging two conferences/workshops with a weekend stay was a major headache, etc.)

        In my opinion, governments should put more funding into the general budgets of universities rather than into grants for individual scientists or groups, so that everyone can fund their basic scientific activities. Grants should be extra credit. This would hopefully also reduce the number of postdocs, who are too often used as highly specialized, fancy temp workers while the university system takes no responsibility for offering them a path towards a professorship.

    1. Great link.

      I think that highly selective venues are not a great thing. You might think that they would force people to improve their standards and the quality of their work, but I don’t think it has this effect. It makes people risk-averse. It rewards “getting past the reviewers” instead of rewarding actual advances (hint: program committees can’t really judge fairly what constitutes progress; these things take time).

      Moshe says in his essay that conferences get filled with junior researchers. We know why they show up: they are job hunting and hungry for connections. Why do you think that the senior fellows do not bother to show up?

  14. Thanks for the post, especially as you are from the “inside world”. It is worth mentioning that I have mostly followed research in systems, software and electronics for the last 25 years, so the situation might be different in other areas, but in these areas the inflation of “research papers” has been considerable. As a “research customer” from industry, I find fewer and fewer results in papers that I can use or, even worse, that inspire me to new ways of thinking. I hope the latter is not because my brain has started to shrink or has become too biased.

    I feel more and more that the gap between what is going on in industry and in academia is widening: many of the problems raised in papers are hypothetical or far removed from real problems and challenges, so when reading research papers I very often wonder where the authors got those problems from. A couple of years ago I had the opportunity to take part in peer review for an IEEE conference as an industry guy, and what struck me was how differently I valued the papers compared to the reviewers from academia. That is, to me, a sign that academia lacks deep knowledge of real problems/challenges, which are most often multi-dimensional, and that makes it even harder for a young researcher to grasp the full picture. As I see it, this is related to the “why is this important” question, and it makes it even harder to become a customer of the results in papers. It is of course a problem that academia and research have become a profession in themselves, and there are unfortunately not that many individuals who move back and forth between academia and industry.

    there are unfortunately not that many individuals who move back and forth between academia and industry.

      It seems that your hypothesis is that people produce the kind of papers they produce because they are unaware of what happens in industry. Your hypothesis has a testable component: we would expect industry partnerships to significantly change the publication outputs.

      Though your hypothesis might perhaps explain a perceived stagnation in computer science… what about the more fundamental sciences, like physics?

      1. Your hypothesis has a testable component: we would expect industry partnerships to significantly change the publication outputs.
        Though your hypothesis might perhaps explain a perceived stagnation in computer science… what about the more fundamental sciences, like physics?

        A better example would likely be engineering or medicine; science works very differently there. When a friend of mine defended her Master’s thesis, I wanted to attend to support her. To my surprise, she told me that I was not allowed in: only a company rep, her supervisor and she herself were allowed, as everything was under NDA. Engineering profs have a much easier time switching to industry or founding start-ups or joint ventures. In her case she could not even publish her results, which in my book is against how publicly funded science should work.

        That’s very different for people from other fields, including physics. There are exceptions sometimes, but they remain exceptions.

        1. @Max The hypothesis here is that research is not what it used to be because university folks are unaware of what goes on in industry. I am saying that this has a testable component: we can look at what happens when we put measures in place to make sure that students gain knowledge of industry practices. Does the research become more useful/relevant?

          You bring a distinct point which seems like it might be the reverse. You seem to imply that there might be too much industry oversight in some fields like medicine and engineering. Again, this has a testable component: we can look at places that are more isolated from industry and see whether they fare better.

          1. The hypothesis here is that research is not what it used to be because university folks are unaware of what goes on in industry. I am saying that this has a testable component: we can look at what happens when we put measures in place to make sure that students gain knowledge of industry practices. Does the research become more useful/relevant?

            Perhaps this is a difference of philosophy about what the role of a university should be. Personally, I distinguish between colleges and universities: colleges should be more strongly influenced by the needs of companies, whereas universities, in my mind, should not. Of course, I am not suggesting we completely put blinders on, but universities fulfill an important role. Not only is that an unnecessary constraint, I think it misses what we teach at universities. Every discipline teaches a way of thinking. As chance would have it, I have a lot of friends who are computer scientists and sociologists, and the ways of thinking that were honed at university are totally different. On an even more abstract level, each one of us learnt how to work on things out of intrinsic motivation and curiosity.

            Even if you approach this from the applications side, I still find that thinking about this brings in constraints that stifle creativity. I saw demos of augmented reality in 2001: teams had to patch together three Nokia Communicators (?) and everything fit in a backpack. My best friend did a thesis where he used machine learning in automated translation, in the late 2000s.

            You bring a distinct point which seems like it might be the reverse. You seem to imply that there might be too much industry oversight in some fields like medicine and engineering.

            I’m not sure whether it is a separate point: I’m saying that in certain fields the two align closely in my experience, because universities are essentially used as part of some companies’ R&D departments. Especially in engineering some master and PhD students are literally working inside companies. Because of that cooperation in research, tight alignment is assured — which was your question as I understand it. Although the alignment is too tight and the secrecy stifles research in my mind.

            1. Perhaps this is a difference of philosophy about what the role of a university should be. Personally, I distinguish between colleges and universities: colleges should be more strongly influenced by the needs of companies, whereas universities, in my mind, should not. (…) Every discipline teaches a way of thinking. As chance would have it, I have a lot of friends who are computer scientists and sociologists, and the ways of thinking that were honed at university are totally different. On an even more abstract level, each one of us learnt how to work on things out of intrinsic motivation and curiosity.

              To be clear, you are responding to a hypothesis that was posted here by a commenter. All I am saying is that it makes falsifiable predictions, so it is a valid hypothesis.

              At no point did I advocate that we accept this hypothesis as true.

              Even if you approach this from the applications side, I still find that thinking about this brings in constraints that stifle creativity. I saw demos of augmented reality in 2001: teams had to patch together three Nokia Communicators (?) and everything fit in a backpack. My best friend did a thesis where he used machine learning in automated translation, in the late 2000s.

              Are these examples where collaborating with industry stifled creativity?

              I’m not sure whether it is a separate point: I’m saying that in certain fields the two align closely in my experience, because universities are essentially used as part of some companies’ R&D departments. Especially in engineering some master and PhD students are literally working inside companies. Because of that cooperation in research, tight alignment is assured — which was your question as I understand it. Although the alignment is too tight and the secrecy stifles research in my mind.

              Right, so there are many programs that tightly integrate industry with research. It seems that your experience is much the same as mine: it does not bear the fruit you would expect. I would say that, at best, it makes it easier for students to join industry later, as they have acquired a network and industrial references.

              So my prior regarding the hypothesis stated at the beginning of this comment is that it is false. If you make sure that researchers are more aware of industrial needs and practices, you do not get more useful research outputs. I could be wrong, but that is my prior.

              If there is a scientific stagnation, I do not think it is because people are disconnected from industry. The reason I think so is that many of the “pure scientists” we look up to from the early part of the 20th century did not have obvious, strong and ongoing connections to industry.

              It may very well be that many scientists today are more disconnected than ever from industry, but many are tightly connected with industry, and they do not appear to have escaped the perceived stagnation.

              Whatever explains this perceived stagnation has to have happened all over the world, it has to impact almost all researchers… entire generations…
