AI, Opacity, and Personal Autonomy

Bibliographic Details
Published in: Philosophy & Technology, Vol. 35, No. 4, p. 88
Main Author: Vaassen, Bram
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer Nature B.V.), 01.12.2022
ISSN: 2210-5433, 2210-5441
DOI: 10.1007/s13347-022-00577-5

Summary: Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a systematic treatment in the literature: when such algorithms are used in life-changing decisions, they can obstruct us from effectively shaping our lives according to our goals and preferences, thus undermining our autonomy. I argue that this concern deserves closer attention as it furnishes the call for transparency in algorithmic decision-making with both new tools and new challenges.