Much ink has been spilled on the evils of publish or perish, that is, the way professors and would-be professors are gauged mostly by what they write and, especially, by how much they write.
Most recently, David Lorge Parnas, one of the most prolific authors in Computer Science (I found 242 papers under his name), published an article on this topic: Stop the Numbers Game (Communications of the ACM, Volume 50, Number 11 (2007), pages 19-21). It starts like this:
As a senior researcher, I am saddened to see funding agencies, department heads, deans, and promotion committees encouraging younger researchers to do shallow research.
Here are the evils of the numbers game according to Parnas:
- It encourages superficial research.
- It encourages overly large groups.
- It encourages repetition.
- It encourages small, insignificant studies.
- It rewards publication of half-baked ideas.
He concludes as follows:
Sadly, the present evaluation system is self-perpetuating. Those who are highly rated by the system are frequently asked to rate each other and others; they are unlikely to want to change a system that gave them their status.
What Parnas misses is that the publication process itself is changing. While Parnas is a prolific author, he does not publish in open archives, and he does not appear to have a blog. In 2005, he predicted that Wikipedia would fail as an encyclopedia because it lacks a classical peer review process. I think he is dead wrong: our current classical peer review process is not the only one that can work, and it is not the optimal system.
But whether or not you agree with him on the evils of the counting game, I do not think you can easily reject this last recommendation:
When serving on recruiting, promotion, or grant-award committees, read the candidate’s papers and evaluate the contents carefully. Insist that others do the same.
I actually just reviewed a couple of grant applications, and in both instances I drilled down to the papers the researcher wrote and reviewed them. Sometimes you get pleasant surprises (the results are as strong as, or stronger than, the researcher claimed), but you also get bad surprises (an article touted as the cornerstone of someone's research turns out to be a thin 1-pager).
Recruiting is a tougher issue. If the candidate has single-author papers, you can probably do a decent job of evaluation if you know the field, but most candidates will only have multiple-author papers. In that case, reading the papers may not be a good predictor of the candidate's individual ability.
(Thanks to Sébastien Paquet for a thoughtful discussion.)