Towards algorithms and models that we can trust: A theoretical perspective

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 592, p. 127798
Main Authors: Oneto, Luca; Ridella, Sandro; Anguita, Davide
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2024
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2024.127798


More Information
Summary: Over the last decade, it has become increasingly apparent that technical metrics such as accuracy, sustainability, and non-regressiveness cannot fully characterize the behavior of intelligent systems. These systems are now also expected to meet ethical requirements such as explainability, fairness, robustness, and privacy, increasing our trust in their use in the wild. Technical and ethical metrics are often in tension with one another, but the final goal is to develop a new generation of more responsible and trustworthy machine learning. In this paper, we focus on machine learning algorithms and their associated predictive models, asking for the first time, from a theoretical perspective, whether it is possible to simultaneously guarantee their performance in terms of both technical and ethical metrics, moving towards machine learning algorithms that we can trust. In particular, we investigate for the first time both the theory and practice of deterministic and randomized algorithms and their associated predictive models, showing the advantages and disadvantages of the different approaches. For this purpose, we leverage the most recent advances in statistical learning theory: Complexity-Based Methods, Distribution Stability, PAC-Bayes, and Differential Privacy. Our results show that it is possible to develop consistent algorithms that generate predictive models with guarantees on multiple trustworthiness metrics.

• AI is nowadays requested to optimize both technical and ethical metrics.
• Focus on ML algorithms and associated models (both deterministic and randomized).
• We prove that it is possible to develop consistent algorithms and models.
• We bound generalization in terms of both technical and ethical metrics.
• We leverage the most recent advances in statistical learning theory.
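As a concrete illustration of the kind of randomized algorithm with a formal trustworthiness guarantee that the summary refers to, the sketch below implements the classic Laplace mechanism from differential privacy. This is a minimal, generic example, not code from the paper itself; the function name and the toy dataset are illustrative assumptions.

```python
import math
import random


def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise: a randomized algorithm whose
    privacy guarantee holds by construction, regardless of the data."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    # Sample Laplace noise via inverse transform sampling.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise


# Toy usage: privately release the mean of a dataset bounded in [0, 1].
data = [0.2, 0.9, 0.4, 0.7]
true_mean = sum(data) / len(data)
# Changing one record shifts the mean by at most 1/len(data),
# so that is the sensitivity of the query.
private_mean = laplace_mechanism(true_mean,
                                 sensitivity=1.0 / len(data),
                                 epsilon=1.0,
                                 rng=random.Random(0))
```

Smaller `epsilon` means stronger privacy but noisier (less accurate) releases, which is exactly the technical-versus-ethical tension the abstract describes.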