Class imbalance situations, in which one class is rare relative to the other, arise frequently in machine learning applications. It is well known that the usual misclassification error is ill-suited for measuring performance in such settings, and a wide range of alternative performance measures have been proposed. However, despite the large number of studies in this area, little is understood about the statistical consistency of the proposed algorithms with respect to the performance measures of interest. In this paper, we study consistency with respect to one such performance measure, namely the arithmetic mean of the true positive and true negative rates (AM), and establish that some practically popular approaches, such as applying an empirically determined threshold to a suitable class probability estimate or performing an empirically balanced form of risk minimization, are in fact consistent with respect to the AM (under mild conditions on the underlying distribution). Experimental results confirm our consistency theorems.
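The thresholding approach described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' code: the synthetic data, the use of scikit-learn's LogisticRegression as the class probability estimator, and the grid of candidate thresholds are all assumptions made for illustration. It defines the AM measure (the average of the true positive and true negative rates) and selects the threshold on the estimated class probabilities that maximizes the empirical AM on a validation set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def am_score(y_true, y_pred):
    """Arithmetic mean of the true positive and true negative rates."""
    pos, neg = y_true == 1, y_true == 0
    tpr = np.mean(y_pred[pos] == 1) if pos.any() else 0.0
    tnr = np.mean(y_pred[neg] == 0) if neg.any() else 0.0
    return 0.5 * (tpr + tnr)

# Synthetic imbalanced data (illustrative only): roughly 5% positives.
rng = np.random.default_rng(0)
n = 5000
y = (rng.random(n) < 0.05).astype(int)
X = rng.normal(size=(n, 2)) + 1.5 * y[:, None]

# Split into training and validation halves.
X_tr, X_val = X[: n // 2], X[n // 2:]
y_tr, y_val = y[: n // 2], y[n // 2:]

# Step 1: fit a class probability estimate (here, logistic regression).
cpe = LogisticRegression().fit(X_tr, y_tr)
p_val = cpe.predict_proba(X_val)[:, 1]

# Step 2: pick the threshold that maximizes the empirical AM on the
# validation set, then classify by thresholding the probabilities.
thresholds = np.linspace(0.01, 0.99, 99)
best_t = max(thresholds,
             key=lambda t: am_score(y_val, (p_val >= t).astype(int)))
print(f"selected threshold: {best_t:.2f}, "
      f"validation AM: {am_score(y_val, (p_val >= best_t).astype(int)):.3f}")
```

Note that thresholding at the empirical positive-class proportion (rather than at 0.5) is the kind of "empirically determined threshold" the paper analyzes; the grid search above is just one simple way to realize it.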
Number of pages: 9
State: Published - 2013
Event: 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, United States
Duration: June 16 – June 21, 2013