Winning awards or winning citations: A retrospective look at the consistency between evaluative metrics

Iaroslava Gloria Dutchak, Shih Lun Allen Tseng, Varun Grover

Research output: Contribution to journal › Article › peer-review


Abstract

Appropriate evaluation of information systems research papers ensures that our institutions and review processes remain viable. In the short run, we typically assess research value through research awards, while, in the longer term, we assess it based on how the research community views and draws on particular published papers. In this study, we examine the consistency between two metrics for assessing research value: research awards and citations. To do so, we focus on a premier journal, MIS Quarterly. We found that the “papers of the year” are rarely the ones cited the most. We offer possible explanations for this discrepancy based on an assessment of the papers’ originality and utility and their citation patterns.

Original language: English
Article number: 20
Pages (from-to): 526-547
Number of pages: 22
Journal: Communications of the Association for Information Systems
Volume: 42
Issue number: 1
DOIs
State: Published - 2018

Keywords

  • Award-winning papers
  • Citation
  • Citation patterns
  • IS research papers
  • Most cited papers
