Science and Technology links (October 10th 2021)

  1. Evans and Chu suggest, using data and a theoretical model, that as the number of scientists grows, progress may stagnate. Simply put, in a large field with many researchers, a few papers and a few people acquire a decisive advantage over newcomers: large fields allow more inequality. One can review their model critically. Would you not expect large fields to fragment into smaller fields? And haven’t we seen much progress in fields that have exploded in size?
  2. Keeping your iron stores low might be important to slow your aging. Sadly, much of what you eat has been supplemented with iron because a small fraction of the population needs iron supplementation. It is widely believed that you cannot have too much iron, but, to my knowledge, the long-term effects of iron supplementation have not been carefully assessed.
  3. If you lose weight while having high levels of insulin, you are more likely to lose lean tissue (e.g., muscle) than fat.
  4. A drug similar to Viagra helps mice fight obesity.
  5. Age-related hair loss might be due to stem cells escaping the hair follicle bulge. This new work contradicts the commonly held belief that the stem cells die over time. This work may not relate to male-pattern baldness.
  6. People tend to stress the ability of “formal peer review” to set apart the good work from the less significant work. People greatly overestimate the value of formal peer review. They forget that many of the greatest works of science occurred before formal peer review had even been considered. Cortes and Lawrence have assessed the quality of peer review at a (or “the”) major conference these days (NeurIPS). They found several years ago that when two teams of referees independently assessed the same submissions, they agreed on only about 50% of their assessments. They have extended their work with a new finding:

    Further, with seven years passing since the experiment we find that for accepted papers, there is no correlation between quality scores and impact of the paper as measured as a function of citation count.

    The lesson is pretty clear. Formal peer review can identify and reject the obviously bad work. But that is not a difficult task: in fact, I am quite certain that a fully automated system could quickly flag work that nobody should ever read. However, if the work is visibly well executed, follows the state of the art, cites the required prior work, uses the right data for the problem, and so forth… then it becomes really difficult to tell whether it is great work or merely work that meets minimal standards. This is obvious once you consider how formal peer review works. Between two and five researchers, with varying levels of familiarity with the paper’s domain, read it over. They do not redo the experiments. They do not redo the proofs. Sometimes you get lucky and land a referee with deep knowledge of the domain (often because they have worked on a similar problem) who can be genuinely critical. Even so, within a conference peer-review process, there is only so much a highly knowledgeable referee can do, since they cannot interact freely with the authors.
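The roughly 50% agreement figure is easy to reproduce with a toy model. The sketch below is my own illustration, not the NeurIPS experiment itself: each paper gets a latent quality, two independent committees score it with noise, and each committee accepts its top 25%. The noise level and acceptance rate are assumptions chosen for illustration.

```python
import random

random.seed(42)

N = 10_000          # number of submitted papers
ACCEPT_RATE = 0.25  # each committee accepts its top 25%
NOISE = 1.0         # reviewer noise, same scale as the quality spread

# Latent "true" quality of each paper.
quality = [random.gauss(0, 1) for _ in range(N)]

def committee_scores(quality, noise):
    """Each committee observes quality plus independent noise."""
    return [q + random.gauss(0, noise) for q in quality]

def accepted(scores, rate):
    """Return the set of paper indices in the top `rate` fraction."""
    cutoff = sorted(scores, reverse=True)[int(len(scores) * rate) - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

a = accepted(committee_scores(quality, NOISE), ACCEPT_RATE)
b = accepted(committee_scores(quality, NOISE), ACCEPT_RATE)

# Of the papers accepted by the first committee,
# what fraction did the second committee also accept?
overlap = len(a & b) / len(a)
print(f"overlap among accepted papers: {overlap:.2f}")
```

With reviewer noise comparable to the genuine quality differences between competent submissions, the two committees agree on only about half of the accepted papers, which is consistent with the experiment’s finding.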

Daniel Lemire, "Science and Technology links (October 10th 2021)," in Daniel Lemire's blog, October 10, 2021.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ).

2 thoughts on “Science and Technology links (October 10th 2021)”

  1. Regarding “formal peer review”: the same goes for writing source code or novels. It’s easy to automate linting and typesetting. It takes medium effort to verify that some formal requirements of “good code”, a “promising novel” or “substantial science” are met. It’s harder, but still possible, to run the reviewed code, try a novel on a small audience, or reproduce scientific research. It’s next to impossible to distinguish a valuable artifact from white noise.

  2. “People tend to stress the ability of “formal peer review” to set apart the good work from the less significant work. People greatly overestimate the value of formal peer review.”

    The reject/revise/accept classification process is clearly biased towards conservatism. Both the best and the worst papers are often rejected. But formal peer review is more than this reject/revise/accept classification. I have benefitted greatly from the detailed comments of reviewers. This detailed feedback is far more important than the simple reject/revise/accept classification. Some publishers, such as PLOS, minimize the reject/revise/accept classification and focus much more on the detailed comments of reviewers. It would be a huge service to science if all publishers were to follow the example of PLOS. In particular, reviewers are often good at spotting errors and lack of clarity, but they are often very bad at estimating the importance of a paper, especially if the paper has some truly new ideas.
