Citations are a measure of the influence and impact of research. They are an imperfect metric, but they have the advantage of being objective: if someone cites my article (even unfavorably), then they found it worth discussing.

In academia, researchers are nearly obsessed with their citations. Google Scholar keeps track of them automatically, and I check my Google Scholar profile regularly to see whether the count has increased. I know I am not alone in this. I have seen many celebratory tweets from academics announcing that they have hit a citation milestone (e.g., 1,000 citations). I do it, too.

As I rack up articles, I have realized that the importance of an article is not necessarily reflected in its number of citations. My most highly cited article is a “how-to” article explaining multivariate analysis of variance (MANOVA), a statistical procedure that some of my colleagues think is obsolete. The article makes no new contributions, and a typesetting error left one of its tables incomplete. Yet it gets cited over 30 times per year.

Conversely, an article I wrote on the relationship between grade skipping and adult income has racked up only two citations since it was published two years ago. That article is hugely important: it shows that grade skipping has no long-term negative employment or economic drawbacks. It is the largest study ever conducted on the effects of grade skipping (usable n = 69,937), and it was published in an elite journal.

Time to gather some data!

To see whether those two articles were flukes, I rated my articles on four dimensions that I think are relevant to importance:

  • Importance of the research topic
  • Importance of the article in contributing to the research topic
  • Quality of methods/data
  • Writing quality

This is a subjective list, so your mileage may vary. The correlation between each of these variables and the number of citations was quite low (r = .041 to .183). Some of this seems to be an artifact of the skewed distribution of citation counts: a few highly cited articles dominate the Pearson correlations. The rank correlations are stronger (ρ = .167 to .425). Still, topic importance is only weakly correlated with citations (r = .096 and ρ = .167), and contribution importance doesn’t fare much better (r = .048 and ρ = .298).
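If you want to see this Pearson-versus-Spearman divergence for yourself, here is a minimal sketch in Python. The data are entirely made up (hypothetical 1–5 quality ratings and simulated right-skewed citation counts), so the exact numbers will vary with the random seed; the point is only that skew pulls the two coefficients apart.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)

# Hypothetical 1-5 quality ratings for 30 articles
ratings = rng.integers(1, 6, size=30)

# Simulated citation counts that rise with the rating but are heavily
# right-skewed, so a handful of highly cited articles dominate Pearson r
citations = rng.poisson(2.0 * ratings) + (ratings >= 4) * rng.geometric(0.05, size=30)

r, _ = pearsonr(ratings, citations)
rho, _ = spearmanr(ratings, citations)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```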

The evidence regarding methods quality is mixed (r = .041 and ρ = .422). I find the ρ value more believable because rank correlations are not heavily influenced by outliers. The strongest correlations, though, are between citations and writing quality (r = .183 and ρ = .425), although the ρ values of .422 and .425 are not statistically significantly different from one another.
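One way to check a claim like “.422 and .425 are not significantly different” when both correlations come from the same set of articles is to bootstrap the difference. This is a hedged sketch, not the method I actually used; the inputs (`methods`, `writing`, `citations`) are hypothetical NumPy arrays standing in for my ratings and citation counts.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_rho_diff(methods, writing, citations, n_boot=10_000, seed=0):
    """Percentile CI for rho(writing, citations) - rho(methods, citations)."""
    rng = np.random.default_rng(seed)
    n = len(citations)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample articles with replacement
        rho_m, _ = spearmanr(methods[idx], citations[idx])
        rho_w, _ = spearmanr(writing[idx], citations[idx])
        diffs[b] = rho_w - rho_m
    # A 95% interval that covers 0 is consistent with "no real difference"
    return np.percentile(diffs, [2.5, 97.5])
```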

If my corpus is typical, then “important” articles are not necessarily “highly cited” articles, though they may generally be “higher quality.” While I wish my most important articles were my most cited, this means I should probably stop worrying about whether a particular article is getting cited “enough.” I should also be willing to investigate lesser-known articles from other researchers’ oeuvres. There may be some hidden gems out there!

Postscript: Altmetrics

One more note: in recent years the Altmetric score, a measure of a scholarly article’s influence on social media and in the press, has gained in popularity. I like Altmetrics because they capture influence that citations do not, especially among the public.

[Image: The Altmetric “donut” for one of my articles. The number in the center is the Altmetric score; higher numbers indicate more online engagement with the article. Altmetric scores range from 0 to infinity, and at the time of this writing a value of 26 is higher than 95% of all Altmetric scores. The colors indicate different types of engagement: blue = Twitter, red = news media, fuchsia = Google+, and gray = Wikipedia.]

Altmetric scores seem to be moderately correlated with yearly citations (r = .485, ρ = .381). So influence online correlates with influence in the scholarly realm, but not overwhelmingly so.
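For anyone who wants to run a similar comparison on their own articles, Altmetric exposes a public lookup API. This is a small sketch under the assumption that the free v1 endpoint (`https://api.altmetric.com/v1/doi/{doi}`) still returns JSON with a `score` field; the DOI below is a placeholder, not a real article.

```python
import requests

def altmetric_score(doi):
    """Return the Altmetric score for a DOI, or None if Altmetric doesn't track it."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:  # no Altmetric record for this DOI
        return None
    resp.raise_for_status()
    return resp.json().get("score")

print(altmetric_score("10.1000/xyz123"))  # placeholder DOI
```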
