Algorithms that remember: model inversion attacks and data protection law

Bibliographic Details
Published in: Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical, and Engineering Sciences, Vol. 376, no. 2133, p. 20180083
Main Authors: Veale, Michael; Binns, Reuben; Edwards, Lilian
Format: Journal Article
Language: English
Published: England: The Royal Society Publishing, 15.10.2018
ISSN: 1364-503X, 1471-2962
DOI: 10.1098/rsta.2018.0083


More Information
Summary: Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU's recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around 'model inversion' and 'membership inference' attacks, which indicates that the process of turning training data into machine-learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation. This article is part of the theme issue 'Governing artificial intelligence: ethical, legal, and technical opportunities and challenges'.
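The 'membership inference' attacks the summary refers to can be illustrated with a minimal loss-threshold sketch: an attacker who can query a trained model guesses that a record was in the training set if the model's loss on it is unusually low. This is a simplified construction from the security literature, not the authors' own method; the dataset, model, and threshold below are illustrative assumptions.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
# A model tends to fit its training records ("members") better than unseen
# records ("non-members"), so per-example loss can leak membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: half used for training (members), half held out (non-members).
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attacker's rule: guess "member" when loss falls below a threshold
# (here, the median loss over all queried records).
all_losses = np.concatenate([loss_mem, loss_non])
threshold = np.median(all_losses)
guesses = all_losses < threshold
truth = np.concatenate([np.ones(len(loss_mem), dtype=bool),
                        np.zeros(len(loss_non), dtype=bool)])
attack_accuracy = (guesses == truth).mean()

# Accuracy meaningfully above 0.5 would indicate the model leaks membership.
print(f"attack accuracy: {attack_accuracy:.2f}")
```

It is leakage of this kind that underpins the article's argument: if membership in the training data can be recovered from a model's behaviour, the model itself may carry personal data.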
Bibliography: Theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’ compiled and edited by Corinne Cath, Sandra Wachter, Brent Mittelstadt, Luciano Floridi
One contribution of 9 to a theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.