Background: Many clinical scores that measure the severity of asthma are used without adequate evaluation of inter-rater reliability. When reliability is tested, the Cohen κ statistic is most often used, which limits comparison to only two raters at a time.

Objective: To evaluate inter-rater agreement of a clinical asthma score using a multi-rater κ statistic.

Methods: Four raters administered a clinical asthma score to 17 children with clinical asthma. Five items were evaluated: O2 requirement, inspiratory breath sounds, accessory muscle use, expiratory wheeze, and cerebral function. For each item, a score of zero indicated a normal state; one, moderate impairment; and two, severe impairment. A multi-rater κ statistic was used as a measure of agreement among all four raters simultaneously. The statistic was first computed by hand and then crosschecked using standard statistical syntax, a component of the Statistical Package for the Social Sciences (SPSS 9.0).

Results: Application of the multi-rater κ statistic revealed strong agreement among raters on oxygenation (κ = 0.759), moderate agreement for expiratory wheeze and cerebral function (κ = 0.698), and poor agreement for accessory muscle use (κ = 0.528) and inspiratory breath sounds (κ = 0.316).

Conclusions: The level of agreement varied by item, with the least subjective item, O2 requirement, demonstrating the highest inter-rater agreement. A multi-rater κ statistic can be applied to data obtained from a clinical scoring instrument either manually or by using statistical syntax provided by SPSS.
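The abstract does not name the specific multi-rater κ formula used; Fleiss' kappa is one common choice for agreement among more than two raters on a categorical scale, and the hand calculation described in Methods can be sketched as below. The rating counts in the example are illustrative only, not the study's actual data; the layout (subjects × score categories, four raters per subject, three score levels 0/1/2) mirrors the design described in the abstract.

```python
def fleiss_kappa(counts):
    """Fleiss' multi-rater kappa (a hedged sketch, not the study's code).

    counts[i][j] = number of raters who assigned subject i to category j.
    Every subject must be rated by the same number of raters n.
    """
    N = len(counts)            # number of subjects (17 in the study)
    n = sum(counts[0])         # raters per subject (4 in the study)
    k = len(counts[0])         # score categories (3: scores 0, 1, 2)

    # Per-subject observed agreement P_i
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N       # mean observed agreement

    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Toy example: 2 subjects, 4 raters, 3 score levels.
# All four raters agree on each subject, so kappa is 1.0.
print(fleiss_kappa([[4, 0, 0], [0, 4, 0]]))  # -> 1.0
```

Disagreement among raters pulls the statistic toward (or below) zero; for instance, `fleiss_kappa([[2, 2, 0], [0, 2, 2]])` is slightly negative, since observed agreement falls below chance.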