Productivity measures are counterproductive?

Michael has a long post on why it seems foolish to measure scientists according to one unidimensional metric (such as the H-index). His argument is mostly that you can game these metrics rather easily if you have a large enough social network. Given how hard people work at gaming the PageRank metric, and the often quoted fact that over 50% of all married people cheat on their spouse, we would be naive to think that researchers do not game the metrics. For that matter, it is known that several journals cheat to increase their impact factor (another unidimensional metric).
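
For readers who have never computed it: an author's H-index is the largest number h such that h of the author's papers have at least h citations each. Here is a minimal sketch in Python (the citation counts below are made up for illustration):

    # Compute the H-index from a list of per-paper citation counts:
    # the largest h such that h papers have at least h citations each.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an H-index of 4,
    # because four of them have at least 4 citations each.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4

A handful of extra citations, say from friendly colleagues, can push one more paper past the threshold and raise the index, which is one reason a large enough social network makes the metric easy to game.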

The real question is this: does it hurt us that people play these games? After all, if we accept that the rule of the game is to get a high H-index, then why should I care how people go about it?

Michael is actually reacting to an article, The Mismeasurement of Science, which identifies several ill effects of these unidimensional measures, including the following:

  • many authors ignore or hide results that do not fit the story being told in the paper, because doing so makes the paper less complicated and thus more appealing;
  • science is becoming a more ruthlessly self-selecting field where those who are less aggressive and less self-aggrandizing are also less likely to receive recognition.

In turn, I conjecture that we have the following measurable effects:

  • Science is becoming less attractive as a career. If pursuing a high H-index becomes your goal, then how is this more interesting, as a game, than making a lot of money? Should we be surprised that Science Faculties are bleeding students while Business Schools are turning down students? When accounting becomes sexier than Physics, we have a problem. Women, who are less attracted to careers where you compare the size of your appendage, are harder to find than ever in Computer Science. Should we get a clue?
  • Research papers, while becoming easier to read and cite, fail to provide us with enough data to properly assess the results and their applications. In particular, they are increasingly dismissed by practitioners, who need not only a nice story but the full story, including the dirty secrets.

Whatever rules we set have consequences. I am particularly worried that we are making science uninteresting by redefining it from “scientific discovery” to “achieving a high H-index”.

Maybe we have to go back and ask fundamental questions. Why do we do science? What do we really expect from scientists? What should we really reward?

See also my posts Are we destroying research by evaluating it?, On the upcoming collapse of peer review, and Assessing a researcher… in 2007.

Published by Daniel Lemire, a computer science professor at the Université du Québec (TELUQ).

4 thoughts on “Productivity measures are counterproductive?”

  1. “… the often quoted fact that over 50% of all married people cheat on their spouse…”

    Is that like the oft-quoted statistic that 73% of all statistics are made up on the spot? :^)

    Seriously, I wonder about the relationship between judging science via the H-index and technical investing. In the latter case, investors ignore things like a company’s financial fundamentals, what product it makes, what industry sector it is in, who runs it, and so on, and just focus on stock price and volume. They assume that all available information, plus market psychology and even insider information, is already captured in price and volume movements.

    It seems to me that the key difference is that a measure like the H-index is monotonic: it can only grow over time. We could make it non-monotonic by only counting citations made in the last n years (to all papers written by the author). Would that make the system too difficult to game?

  2. I think you would like Lee Smolin’s latest book, The Trouble With Physics. Firstly, it is a great survey of the state of modern physics for non-physicists. But more relevant to your post is its recurring theme that the institutional structure supporting physics research is unintentionally suppressing originality and stifling progress.

  3. Do you have examples of journals that game the impact factor (and how they do it)?

    And what happens to these citation-based metrics when the notion of “publications” slowly disappears with the widespread availability of preprints, notes, blogs, etc. on the web?

  4. Cyril: journals can game the system by favoring articles that cite a large number of their own articles. A journal I often review for asks me every time whether the article cites a large number of articles from the journal itself. The message is clear: if you want your paper to be accepted, cite many papers from the journal itself.

    As for what happens when the concept of publication morphs due to the Web… A related question that I find very interesting is why we publish in the first place. If our goal is to be read by others, and people can more easily read us if they can freely download our paper, then why do we lock our papers behind a publisher’s firewall?

    I am not worried: there will always be metrics, and that is a good thing. Competing for a better score is a good thing too. But we can go too far and make the game boring for the next generation.
