Science and Technology links (December 22nd 2018)

  1. For equity reasons, many people advocate for double-blind peer review, meaning that the author does not know who the reviewer is, nor does the reviewer know who the author is. It is widely believed (despite little hard positive evidence and some contrary evidence) that this is beneficial to female authors and minorities. Cox and Montgomerie find that:

    These analyses suggest that double-blind review does not currently increase the incidence of female authorship in the journals studied here. We conclude, at least for these journals, that double-blind review does not benefit female authors and may, in the long run, be detrimental.

    Why do they say it may be detrimental?

    Firstly, making everything anonymous is hard work. And resources are finite: we are already struggling to find time and people to review manuscripts… adding to the burden has a cost. Thus we must ensure that there are comparable benefits. You shouldn’t think for a minute that making science harder and more expensive will necessarily benefit women and minorities. Raising the costs usually works against inclusion.

    Secondly, they observe empirically that publications by women are less likely to appear in double-blind-review journals than in conventional journals. Why is that? If double-blind review were obviously beneficial to women, they would flock to the double-blind journals… but they do not. Either women are misguided or, more likely, double-blind review does not favor women.

    And, finally, there may be substantial benefits to the authors and the community in the reviewers knowing who they are. It is simply a fact that the identity of the authors is an important factor when assessing a piece of scientific work. Ultimately, we tend to reward sustained high quality work with more credibility. You want authors to have skin in the game: if their work is bad, then they should pay a price (more scrutiny of their work in the future) and when their work is consistently good, they should be rewarded (given more implicit trust).

  2. The price of lithium-ion batteries fell by 73% between 2010 and 2016. (Source: Bloomberg) However, this does not mean that the batteries are improving at a similar rate: it seems that even though the price is falling, their physical quality remains similar.
  3. Scientists have created viable human hair follicles from cultured human cells. (Source: Nature)
  4. There is interest in NAD supplements for anti-aging purposes. Xie et al. (2018) show cognitive benefits due to NAD supplements in old mice.
  5. Bernstein et al. (2018) point out that continuous and sustained social interactions reduce individual exploration. In other words, to be highly original, you do need to close your door and stop answering emails and phone calls for a time.

Daniel Lemire, "Science and Technology links (December 22nd 2018)," in Daniel Lemire's blog, December 22, 2018.

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

4 thoughts on “Science and Technology links (December 22nd 2018)”

  1. There’s a different version of the double-blind reviewing issue here, but I think the interpretation of it in the article is incorrect. The mob interpretation is “boo, evil astronomers”, but my immediate assumption was that it mainly reflects institutional prestige and author reputation.

    Commenter pjcamp provides some data suggesting the same thing.

  2. While it may not be the case that double-blind review helps for equity’s sake, I do think it’s very important to science in general.

    I fundamentally disagree with your statement:

    You want authors to have skin in the game: if their work is bad, then
    they should pay a price (more scrutiny of their work in the future)
    and when their work is consistently good, they should be rewarded
    (given more implicit trust).

    Authors do have skin in the game: if their work is bad or unoriginal and still passes review for some reason, it will not get cited. If it is good, it will get cited and they will build their reputation. And as an author’s reputation grows, people will read their work less closely and rely more readily on a paper’s conclusions as written.

    Double-blind review is, and has to be, the bulwark against publishing things just because an author is well regarded. Famous authors publish bad papers, and reviewers are easily swayed by status, as Tomkins et al. have shown.

    Reputation is a fine metric in general, but it should never be one in science. The science needs to be good and stand on its own, without reliance on a proxy of trust because the author is famous.

    1. Reputation is a fine metric in general, but it should never be one in science.

      We often judge work by the reputation of the journal venue… Nature versus a journal I just started. Do I understand you correctly that we should just anonymize the venue and not differentiate work that comes from Nature from work that comes from any other journal?

      We advertise the name and affiliation of the authors in all journal publications. Why should we be blind only during peer review? Why not anonymize the entire publication process, all the way to the final product? Indeed, if famous authors are likely to sway referees by their status, aren’t they equally likely to sway readers in a similar fashion?

      And, of course, if reputation should not be a concern, then people should never be allowed to indicate their affiliation. You should get, say, a PhD, but not have to tell people where you got it, whether from MIT or from the local college; that, too, is about reputation, is it not? While we are at it, you should remain anonymous throughout your career as a scientist.

      Here is another take on the business of science. It is super easy to cheat in science. Referees do not redo your experiments. You can make up numbers any time you like. Even with theory, you can present weak arguments as if they were strong, and referees won’t catch you because they have hours, not months, to study your work. So science is, at its core, an honor-based system. What the referees do is not really assess the science per se (they cannot); they police you so that the work will at least appear correct at a glance. That may seem like a small role, but the bulk of the papers received by journals do not even have the proper form: they do not follow the basic standards of the field. So referees filter that… but they don’t redo the experiments, they don’t redo the statistics.

      So what keeps scientists honest? Not much, really, but one thing that matters a great deal is reputation. If you are producing really good, trustworthy work, time after time, some people will come to vouch for you. It goes like this: “I used work from this guy before and it was reliable, so he is probably one of the good ones.” If, however, you oversell your work and make things up, some people will find out after they try to replicate your work or build on it. They won’t come out and accuse you openly, but your reputation won’t build up.

      1. I apologize, I was not sufficiently clear in my point and over-generalized.

        What I meant to say was: Reputation should never play a role in accepting or rejecting a paper or finding in science.

        You are right, reputation does play a role and needs to. It makes perfect sense to give research grants based on reputation, as a signal for future work. But reputation should not play a role in evaluating completed or past work.

        The reason we perceive a publication in Nature vs. some other journal as of high value is because we trust that Nature and its reviewers have made the best effort to keep out bias in the decision. We do so because we believe that their review process in accepting or rejecting results explicitly does not include author reputation but purely the merit of the work.

        Is this process imperfect? Yes! But the only reason we trust Nature is because we assume their standards are higher than any random journal. Their reviewers also do not redo experiments, but if I knew that a huge portion of their decision was simply the name of the author, I would trust them way less.

        You rightfully point out that science is an honor-based system. But if we accept and reject papers on reputation, it gets easier to cheat the higher up the food chain you are. Your false results then reach more people, exacerbating the severity of false findings.

        Yes, reputation is important, in science as well, but only as an indicator of the possible quality of future work, not of past work (published or not). Therefore we need double-blind reviews, so that reviewers get the best chance to judge the merits of the work without being biased by authority or fame.
