Improve robustness of machine learning via efficient optimization and conformal prediction


Bibliographic Details
Published in: The AI Magazine, Vol. 45, No. 2, pp. 270-279
Main Author: Yan, Yan
Format: Journal Article
Language: English
Published: 01.06.2024
ISSN: 0738-4602, 2371-9621
DOI: 10.1002/aaai.12173


Summary: The advance of machine learning (ML) systems into real-world scenarios usually demands safe deployment in high-stakes applications (e.g., medical diagnosis) for critical decision-making processes. To this end, provable robustness of ML is usually required to measure and understand how reliable a deployed ML system is and how trustworthy its predictions can be. Many studies in recent years have enhanced robustness from different angles, such as variance-regularized robust objective functions and conformal prediction (CP) for uncertainty quantification on test data. Although these tools provably improve the robustness of ML models, there is still an inevitable gap in integrating them into an end-to-end deployment. For example, robust objectives usually require carefully designed optimization algorithms, while CP treats ML models as black boxes. This paper is a brief introduction to our recent research on filling this gap. Specifically, for learning robust objectives, we designed sample-efficient stochastic optimization algorithms that achieve the optimal convergence rates (or faster rates than existing algorithms). Moreover, for CP-based uncertainty quantification, we established a framework to analyze the expected prediction-set size (a smaller size means greater efficiency) of CP methods in both standard and adversarial settings. This paper elaborates on the key challenges and our exploration toward efficient algorithms, with details of background methods, notions of robustness measures, concepts of algorithmic efficiency, and our proposed algorithms and results. All of these further motivate our future research on risk-aware ML, which can be critical for AI–human collaborative systems. The future work mainly targets designing conformal robust objectives and their efficient optimization algorithms.
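The black-box, prediction-set flavor of conformal prediction that the summary describes can be illustrated with standard split CP for classification: calibrate a score threshold on held-out data, then include every label whose nonconformity score falls below it. This is a minimal sketch of the generic technique, not the paper's specific method; the synthetic data, the `1 - probability` score, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate softmax outputs of a black-box 3-class classifier on a
# held-out calibration set (hypothetical data for illustration).
n_cal, n_classes = 500, 3
cal_probs = rng.dirichlet(np.ones(n_classes) * 2.0, size=n_cal)
cal_labels = np.array([rng.choice(n_classes, p=p) for p in cal_probs])

# Nonconformity score: 1 minus the probability assigned to the true label.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Split-conformal calibration: the (ceil((n+1)(1-alpha))/n)-quantile of the
# calibration scores gives marginal coverage >= 1 - alpha on exchangeable data.
alpha = 0.1  # target miscoverage: true label in the set >= 90% of the time
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")

# Prediction set for a new test point: keep every label whose score <= qhat.
test_probs = rng.dirichlet(np.ones(n_classes) * 2.0)
prediction_set = [k for k in range(n_classes) if 1.0 - test_probs[k] <= qhat]
print(prediction_set)  # set size varies with how confident the model is
```

The classifier only enters through its output probabilities, which is what makes CP model-agnostic; the expected size of `prediction_set` is exactly the efficiency quantity the summary says the authors analyze.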