Wednesday, October 02, 2013

Moronicity of Relying on Citation Counts


It is well known that papers in mathematics cite far less than, say, those in biomedical research, and so, as a field, accumulate far fewer citations in turn (obligatory link to a favorite post by Cosma Shalizi). Moreover, papers in mathematics are outnumbered by a huge margin by those in biomedical research. Put together, these two facts take us to a blindingly obvious conclusion: math, as a research activity, might as well be dead for a university interested in raising its position in the ranking pecking order based primarily on a single citation-based metric.

Even within the same field, competent papers in a new, emerging sub-field (e.g., fullerenes in the 1980s, carbon nanotubes in the 1990s, and graphene in the 2000s) acquire lots more citations (and do so a lot more easily and quickly) than equally competent papers do in an old, established field (fullerenes in the 1990s, carbon nanotubes in the 2000s, and, presumably, graphene in the 2000-teens).

All that's just a preamble to a link to yet another cry for sanity, this time from economics (now, that's a surprise!): Citations: Caution, Context, and Common Sense by David Laband at Vox. In a section headlined "Citation counts provide limited information," the author gets to what a blind, moronic insistence on citation metrics might mean within the field of economics:

[...] During the course of my 32-year career as an academic economist, the field of economic history has been slowly, but surely, dying off. Papers written by historians of economic thought rarely, if ever, are published in top economics journals and draw relatively few citations as compared to papers written on currently fashionable subjects such as the economics of happiness or network economics. Does the fact that a historian of economic thought has a much lower citation count since 2000 than a network economist imply that the latter is a ‘better’ economist than the former?

The answer to this question depends entirely on how one defines ‘better’ and, in turn, on why the one is being compared against the other. But the fact is that such comparisons are being made constantly now, in a wide variety of academic and institutional settings, all over the world.

1 Comment:

  1. Sivaramakrishnan said...

    Another thing that people often forget when they do citation counting is the number of authors on a paper. In large collaborative groups (usually big experiments), a lot of people get their names on the paper. Unless the citation numbers are somehow normalized by the number of collaborators (a minimal sketch of one such normalization appears after this comment), this is quite unfair when comparing them with people working in smaller teams, doing smaller units of work.

    This might be part of the reason for the first-author culture in some fields; in other fields (particularly math), the convention is to list names in alphabetical order (and not even to list affiliations up front, in order to prevent prejudice).

    So a blind count of someone's publications or citations, or of someone's first-author papers, or pretty much any such combination you can think of, can easily be gamed within a decade and is quite pointless as a proxy for measuring someone's contribution and/or potential.
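
To make the commenter's normalization point concrete, here is a minimal Python sketch of fractional citation counting, in which each paper's citations are divided by its author count before being credited to an individual. The data, the dictionary fields, and the function name are purely illustrative, not drawn from any real bibliometric database.

```python
# Minimal sketch of fractional citation counting (illustrative data only):
# each paper's citations are split evenly among its authors before being
# credited to any one researcher.

def fractional_citations(papers):
    """Return the sum of citations / n_authors over a researcher's papers."""
    return sum(p["citations"] / p["n_authors"] for p in papers)

# Two hypothetical researchers with the same raw citation total (300):
small_team_author = [
    {"citations": 100, "n_authors": 2},   # credited 50
    {"citations": 200, "n_authors": 1},   # credited 200
]
large_collaboration_member = [
    {"citations": 300, "n_authors": 50},  # credited 6
]

print(fractional_citations(small_team_author))           # 250.0
print(fractional_citations(large_collaboration_member))  # 6.0
```

A raw count would treat the two researchers as identical; the normalized figure separates them by more than an order of magnitude, which is precisely the commenter's point about large collaborations.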