The effect of varying test administration and scoring procedures on three tests of (central) auditory processing disorder

Maria E. Pomponio, Stephanie Nagle, Jennifer L. Smart, Shannon Palmer

Research output: Contribution to journal › Article › peer-review

1 Scopus citation


Background: There is currently no widely accepted objective method for identifying (central) auditory processing disorder ([C]APD). Audiologists often rely on behavioral test methods to diagnose (C)APD, which can be highly subjective. This is problematic in light of literature reporting a lack of adequate graduate-level preparation related to (C)APD, and it is further exacerbated by the use of test procedures that deviate from those used to standardize tests of (C)APD, resulting in higher test variability. The consequences of modifying test administration and scoring methods for tests of (C)APD are not currently documented in the literature.

Purpose: This study aims to examine the effect of varying test administration and scoring procedures from those used to standardize tests of (C)APD on test outcome.

Research Design: This study used a repeated-measures design in which all participants were evaluated in all test conditions. The effects of varying the number of test items administered and the use of repetitions of missed test items on the test outcome score were assessed for the frequency patterns test (FPT), competing sentences test (CST), and the low-pass filtered speech test (LPFST). For the CST only, two scoring methods were used (a strict and a lax criterion) to determine whether scoring method affected test outcome.

Study Sample: Thirty-three native English-speaking adults served as participants. All participants had normal hearing (defined as thresholds of 25 dB HL or better) at all octave band frequencies from 500 to 4000 Hz, with thresholds of 55 dB HL or better at 8000 Hz. All participants had normal cognitive function as assessed by the Mini-Mental State Examination.

Data Collection and Analysis: Paired-samples t-tests were used to evaluate differences in test outcome when varying the CST scoring method. A 3 × 2 × 2 repeated-measures factorial analysis of variance (ANOVA) was used to determine the effects of test, length, and repetitions on outcome score for all three tests of auditory processing ability. Individual 2 × 2 repeated-measures two-way ANOVAs were subsequently conducted for each test to further evaluate interactions.

Results: There was no effect of scoring method on the CST outcome. There was a significant main effect of repetition use for the FPT and LPFST, in that test scores were higher when corrected for repetitions. An interaction between test length and repetitions was found for the LPFST only, such that repetition use had a greater effect when a shorter test was administered than when a longer test was administered.

Conclusions: Test outcome may be affected when test administration procedures deviate from those used to standardize the test, raising the broader possibility that the overall diagnosis of (C)APD may be affected as well.
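The paired-samples comparison described above matches each listener's score under one scoring criterion with the same listener's score under the other. A minimal sketch of that analysis in Python with SciPy follows; the data are simulated for illustration (the sample size of 33 comes from the abstract, but the means, spreads, and any resulting statistics are invented, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical CST percent-correct scores for 33 listeners, scored once
# with the strict criterion and once with the lax criterion (simulated).
n = 33
strict = rng.normal(92, 4, n).clip(0, 100)
lax = (strict + rng.normal(0.5, 1.5, n)).clip(0, 100)  # lax scoring may raise scores slightly

# Paired-samples t-test: same listeners, two scoring methods.
t_stat, p_value = stats.ttest_rel(strict, lax)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```

The same pairing logic extends to the factorial design: each listener contributes a score to every cell of the 3 (test) × 2 (length) × 2 (repetitions) layout, which is why a repeated-measures rather than a between-subjects ANOVA is appropriate.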

Original language: English
Pages (from-to): 694-702
Number of pages: 9
Journal: Journal of the American Academy of Audiology
Issue number: 8
State: Published - 2019


  • (central) auditory processing
  • (central) auditory processing disorder
  • Reliability and validity
  • Scoring methods


