Balancing the Scales: A Theoretical and Algorithmic Framework for Learning from Imbalanced Data

Bibliographic Details
Main Authors: Cortes, Corinna; Mao, Anqi; Mohri, Mehryar; Zhong, Yutao
Format: Journal Article
Language: English
Published: 25.06.2025
DOI: 10.48550/arxiv.2502.10381

Summary: Class imbalance remains a major challenge in machine learning, especially in multi-class problems with long-tailed distributions. Existing methods, such as data resampling, cost-sensitive techniques, and logistic loss modifications, though popular and often effective, lack solid theoretical foundations. As an example, we demonstrate that cost-sensitive methods are not Bayes-consistent. This paper introduces a novel theoretical framework for analyzing generalization in imbalanced classification. We then propose a new class-imbalanced margin loss function for both binary and multi-class settings, prove its strong $H$-consistency, and derive corresponding learning guarantees based on empirical loss and a new notion of class-sensitive Rademacher complexity. Leveraging these theoretical results, we devise novel and general learning algorithms, IMMAX (Imbalanced Margin Maximization), which incorporate confidence margins and are applicable to various hypothesis sets. While our focus is theoretical, we also present extensive empirical results demonstrating the effectiveness of our algorithms compared to existing baselines.
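The summary does not spell out the class-imbalanced margin loss itself. As a minimal sketch of the general idea of a class-dependent margin loss, the code below subtracts a per-class margin from the correct-class logit before a cross-entropy, in the spirit of margin-based imbalance losses; it is not the paper's actual IMMAX objective. The helper name class_margin_loss and the margin schedule (larger margins for rarer classes) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def class_margin_loss(logits, targets, margins, scale=1.0):
    """Cross-entropy with a per-class margin subtracted from the
    true-class logit; classes with larger margins must be separated
    more confidently. Illustrative sketch, not the paper's IMMAX loss."""
    margin_per_example = margins[targets]                  # shape: (batch,)
    adjusted = logits.clone()
    # Penalize the correct-class score by that class's margin.
    adjusted[torch.arange(logits.size(0)), targets] -= margin_per_example
    return F.cross_entropy(scale * adjusted, targets)

# Illustrative usage: assign rarer classes larger margins (hypothetical schedule).
counts = torch.tensor([1000.0, 100.0, 10.0])   # examples per class
margins = 0.5 / counts.pow(0.25)               # rarer class -> larger margin
logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
loss = class_margin_loss(logits, targets, margins)
print(loss.item())
```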