Alternate Title: How Thomson Reuters's Essential Science Indicators was fooled into naming someone "Rising Star in Computer Science 2008"!
* * *
Here's a version of Goodhart's Law:
When a measure becomes a target, it ceases to be a good measure.
I found this law in an absolutely wonderful article that establishes beyond any doubt how a journal editor, together with several colleagues on the editorial board, gamed the system to get his journal -- IJNSNS, the International Journal of Nonlinear Sciences and Numerical Simulation -- the highest Journal Impact Factor (JIF) in applied mathematics.
[The article is by Douglas N. Arnold (professor of math) and Kristine K. Fowler (math librarian) of the University of Minnesota. Thanks to Charles Day of Physics Today for the pointer].
In the excerpt below, IJNSNS is compared with two truly top journals in applied mathematics -- SIREV (SIAM Review) and CPAM (Communications on Pure and Applied Mathematics). The contrast in the way the three journals got cited could not be more stunning:
A first step to understanding IJNSNS's high impact factor is to look at how many authors contributed substantially to the counted citations, and who they were. The top-citing author to IJNSNS in 2008 was the journal's Editor-in-Chief, Ji-Huan He, who cited the journal (within the two-year window) 243 times. The second top-citer, D.D. Ganji, with 114 cites, is also a member of the editorial board, as is the third, regional editor Mohamed El Naschie, with 58 cites. Together these three account for 29% of the citations counted towards the impact factor. For comparison, the top three citers to SIREV contributed only 7, 4, and 4 citations, respectively, accounting for less than 12% of the counted citations, and none of these authors is involved in editing the journal. For CPAM the top three citers (9, 8, and 8) contributed about 7% of the citations, and, again, were not on the editorial board.
Another significant phenomenon is the extent to which citations to IJNSNS are concentrated within the 2-year window used in the impact factor calculation. Our analysis of 2008 citations to articles published since 2000 shows that 16% of the citations to CPAM fell within that 2-year window, and only 8% of those to SIREV did; in contrast, 71.5% of the 2008 citations to IJNSNS fell within the 2-year window.
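Just to get a feel for what those numbers imply, here's a quick back-of-the-envelope calculation in Python using only the figures quoted above. Since the quoted shares are rounded ("less than 12%" is treated here as 12%, "about 7%" as 7%), the implied citation totals are ballpark estimates, nothing more:

    # Back-of-the-envelope check using the figures quoted from Arnold and Fowler.

    # Citations contributed by each journal's top three citing authors in 2008.
    top3_citations = {
        "IJNSNS": [243, 114, 58],  # all three sit on the journal's editorial board
        "SIREV":  [7, 4, 4],
        "CPAM":   [9, 8, 8],
    }

    # Share of the counted citations that those three authors account for.
    top3_share = {"IJNSNS": 0.29, "SIREV": 0.12, "CPAM": 0.07}

    for journal, counts in top3_citations.items():
        implied_total = sum(counts) / top3_share[journal]
        print(f"{journal}: top three citers supplied {sum(counts)} "
              f"of roughly {implied_total:.0f} counted citations")

Three insiders supplying over 400 of IJNSNS's roughly 1400 counted citations is a very different situation from three outsiders supplying 15 of SIREV's roughly 125.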
Right at the beginning of their paper, the authors summarize the arguments against the very concept of the JIF, especially for a field like mathematics (I spell the definition out as a formula after the excerpt):
The impact factor for a journal in a given year is calculated by ISI (Thomson Reuters) as the average number of citations in that year to the articles the journal published in the preceding two years. It has been widely criticized on a variety of grounds:
A journal’s distribution of citations does not determine its quality.
The impact factor is a crude statistic, reporting only one particular item of information from the citation distribution.
It is a flawed statistic. For one thing, the distribution of citations among papers is highly skewed, so the mean for the journal tends to be misleading. For another, the impact factor only refers to citations within the first two years after publication (a particularly serious deficiency for mathematics, in which around 90% of citations occur after two years).
The underlying database is flawed, containing errors and including a biased selection of journals.
Many confounding factors are ignored, for example, article type (editorials, reviews, and letters versus original research articles), multiple authorship, self-citation, language of publication, etc.
Despite these difficulties, the allure of the impact factor as a single, readily available number—not requiring complex judgments or expert input, but purporting to represent journal quality—has proven irresistible to many.
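Since the formula itself is where the gaming opportunities live, it's worth spelling out the definition quoted at the top of the excerpt (the notation here is mine, not ISI's). The impact factor of a journal in year y is

    \[
      \mathrm{JIF}(y) \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},
    \]

where C_y(x) is the number of citations in year y to articles the journal published in year x, and N_x is the number of citable items the journal published in year x. Two things follow immediately: a citation from anyone -- the Editor-in-Chief included -- adds one to the numerator, and a citation to anything older than two years adds nothing at all. The patterns documented in the first excerpt exploit exactly these two facts. The skewness complaint is also easy to see from the formula: if one article draws 100 citations and ninety-nine draw none, the mean is 1, even though the typical article was never cited at all.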
3 Comments:
Hello Abi, Mohamed El Naschie and Ji-Huan He are extensively covered at El Naschie Watch. That second link points to another, earlier Douglas N. Arnold paper.
Best regards, Jason
Here is P. Balaram's editorial on the subject.
Thanks for the links and the explanation!