FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems

Bibliographic Details
Published in: arXiv.org
Main Authors: Sokol, Kacper; Hepburn, Alexander; Poyiadzi, Rafael; Clifford, Matthew; Santos-Rodriguez, Raul; Flach, Peter
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 08.09.2022
ISSN: 2331-8422
DOI: 10.48550/arxiv.2209.03805

More Information
Summary: Predictive systems, in particular machine learning algorithms, can take important, and sometimes legally binding, decisions about our everyday life. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, their qualities such as fairness, accountability and transparency (FAT) are of paramount importance. To ensure high-quality, fair, transparent and reliable predictive systems, we developed an open source Python package called FAT Forensics. It can inspect important fairness, accountability and transparency aspects of predictive algorithms to automatically and objectively report them back to engineers and users of such systems. Our toolbox can evaluate all elements of a predictive pipeline: data (and their features), models and predictions. Published under the BSD 3-Clause open source licence, FAT Forensics is opened up for personal and commercial usage.
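
The summary describes checks spanning data, models and predictions. As a rough illustration of the kind of fairness check such a toolbox automates, the sketch below computes a demographic-parity gap by hand with numpy and scikit-learn; it does not use FAT Forensics' own fatf API, and the toy data, variable names and chosen measure are assumptions made purely for illustration.

# Minimal, illustrative sketch (not FAT Forensics' own API) of the kind of
# fairness check such a toolbox automates: train a model, then compare
# positive-prediction rates across groups defined by a protected feature
# (demographic parity).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: two numeric features plus a binary protected attribute (column 2).
n = 1000
X = np.column_stack([
    rng.normal(size=n),
    rng.normal(size=n),
    rng.integers(0, 2, size=n),  # protected group membership
])
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model: trained on all columns, including the protected attribute.
model = LogisticRegression().fit(X_train, y_train)

# Predictions: compare positive-prediction rates per protected group.
preds = model.predict(X_test)
group = X_test[:, 2].astype(int)
rates = {g: preds[group == g].mean() for g in (0, 1)}
print(f"positive rate, group 0: {rates[0]:.2f}")
print(f"positive rate, group 1: {rates[1]:.2f}")
print(f"demographic parity gap: {abs(rates[0] - rates[1]):.2f}")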