Can you crowdsource expertise? Comparing expert and crowd-based scoring keys for three situational judgment tests

Matt I. Brown, Michael A. Grossenbacher, Michelle P. Martin-Raugh, Jonathan Kochert, Matthew S. Prewett

Research output: Contribution to journal › Article › peer-review


Abstract

It is common practice to rely on a convenience sample of subject matter experts (SMEs) when developing scoring keys for situational judgment tests (SJTs). However, the defining characteristics of what constitutes an SME are often ambiguous and inconsistent, and sampling SMEs can impose considerable costs. Other research fields have adopted crowdsourcing methods to replace or reproduce judgments thought to require subject matter expertise. We therefore conducted the current study to compare crowdsourced scoring keys with SME-based scoring keys for three SJTs designed for three different job domains: Medicine, Communication, and Military. Our results indicate that scoring keys derived from crowdsourced samples are likely to converge with keys based on SME judgment, regardless of test content (r = .88 to .94 between keys). We observed the weakest agreement between individual MTurk and SME ratings for the Medical SJT (classification consistency = 61%) relative to the Military and Communication SJTs (80% and 85%, respectively). Although general mental ability and conscientiousness were each related to greater expert similarity among MTurk raters, the average crowd rating outperformed nearly all individual MTurk raters. Using randomly drawn bootstrapped samples of MTurk ratings in each of the three samples, we found that as few as 30–40 raters may provide adequate estimates of SME judgments for most SJT items. These findings suggest the potential usefulness of crowdsourcing as an alternative or supplement to SME-generated scoring keys.
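The bootstrap analysis described in the abstract can be illustrated with a minimal sketch: repeatedly draw random subsamples of crowd raters of a given size, average their ratings into a candidate scoring key, and correlate that key with the SME-based key. The data, array shapes, and function names below are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal sketch of a rater-bootstrapping analysis, assuming a crowd
# ratings matrix of shape (n_raters, n_items) and an SME key vector of
# per-item mean effectiveness ratings. All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 MTurk raters scoring 40 SJT response options
# on a 1-7 scale; the SME key is the crowd mean plus noise, purely for
# demonstration.
crowd = rng.integers(1, 8, size=(200, 40)).astype(float)
sme_key = crowd.mean(axis=0) + rng.normal(0, 0.3, size=40)

def bootstrap_convergence(crowd, sme_key, n_raters, n_boot=1000):
    """Mean correlation (over n_boot resamples) between the average
    rating of n_raters randomly drawn crowd raters and the SME key."""
    n_total = crowd.shape[0]
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(n_total, size=n_raters, replace=True)
        crowd_key = crowd[idx].mean(axis=0)
        rs[b] = np.corrcoef(crowd_key, sme_key)[0, 1]
    return rs.mean()

# How does crowd-SME convergence grow with the number of raters?
for n in (10, 20, 30, 40, 80):
    print(f"{n} raters: mean r = {bootstrap_convergence(crowd, sme_key, n):.3f}")
```

Plotting the mean correlation against subsample size would show where the curve plateaus, which is the logic behind the reported 30–40 rater estimate.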

Original language: English
Pages (from-to): 467-482
Number of pages: 16
Journal: International Journal of Selection and Assessment
Volume: 29
Issue number: 3-4
DOIs
State: Published - Dec 2021

Keywords

  • consensus-based measurement
  • crowdsourcing
  • implicit trait policies
  • situational judgment tests
  • subject matter expertise

