An Explainable Model-Agnostic Algorithm for CNN-Based Biometrics Verification


Bibliographic Details
Published in: IEEE International Workshop on Information Forensics and Security (Print), pp. 1 - 6
Main Authors: Alonso-Fernandez, Fernando; Hernandez-Diaz, Kevin; Buades, Jose M.; Tiwari, Prayag; Bigun, Josef
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2023
ISSN: 2157-4774
DOI: 10.1109/WIFS58808.2023.10374866

Summary: This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting. LIME was initially proposed for networks whose output classes are the same as those used for training, and it employs the softmax probability to determine which regions of the image contribute the most to classification. However, in a verification setting, the classes to be recognized have not been seen during training. In addition, instead of using the softmax output, face descriptors are usually obtained from a layer before the classification layer. The model is adapted to achieve explainability via cosine similarity between feature vectors of perturbed versions of the input image. The method is showcased for face biometrics with two CNN models based on MobileNetv2 and ResNet50.
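
The summary outlines the core adaptation: replace the softmax probability used by classification LIME with the cosine similarity between the descriptors of perturbed probe images and a reference descriptor, then fit the usual weighted linear surrogate to rank image regions. The sketch below illustrates that idea only, under stated assumptions: `embed` is a hypothetical stand-in for the CNN descriptor extractor (e.g. the layer before the classification layer of a face model), and the rectangular grid segmentation, grey fill value, sample count, and exponential locality kernel are illustrative choices, not details taken from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def explain_verification(probe, reference, embed, grid=(4, 4),
                         n_samples=500, kernel_width=0.25, seed=0):
    """Attribute a verification score to regions of the probe image.

    probe, reference: HxWx3 float images in [0, 1].
    embed: callable mapping an image to a 1-D descriptor (hypothetical
           stand-in for the CNN feature extractor).
    Returns a (grid rows x grid cols) map of region importances.
    """
    rng = np.random.default_rng(seed)
    h, w = probe.shape[:2]
    gh, gw = grid
    n_regions = gh * gw
    ref_feat = embed(reference)

    # Binary design matrix: each row says which grid cells stay visible.
    Z = rng.integers(0, 2, size=(n_samples, n_regions))
    Z[0] = 1  # keep one unperturbed sample

    scores = np.empty(n_samples)
    for i, z in enumerate(Z):
        perturbed = probe.copy()
        for r in np.flatnonzero(z == 0):  # grey out the disabled cells
            y0, y1 = (r // gw) * h // gh, (r // gw + 1) * h // gh
            x0, x1 = (r % gw) * w // gw, (r % gw + 1) * w // gw
            perturbed[y0:y1, x0:x1] = 0.5
        # The verification score replaces classification LIME's softmax:
        # cosine similarity between probe and reference descriptors.
        scores[i] = cosine(embed(perturbed), ref_feat)

    # LIME-style locality kernel: samples with fewer occluded cells
    # (closer to the original image) get larger weights.
    dist = 1.0 - Z.mean(axis=1)
    sw = np.exp(-(dist ** 2) / kernel_width ** 2)

    # Weighted least-squares linear surrogate; its coefficients rank
    # how much each region contributes to the similarity score.
    X = np.hstack([Z, np.ones((n_samples, 1))])  # intercept column
    Xw = X * np.sqrt(sw)[:, None]
    yw = scores * np.sqrt(sw)
    coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return coef[:-1].reshape(gh, gw)
```

In practice `embed` could wrap the penultimate layer of a pretrained MobileNetv2 or ResNet50 face model, and the returned map can be upsampled and overlaid on the probe image as a heat map; a superpixel segmentation, as in the original LIME, could replace the fixed grid used here for simplicity.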