Quantitative Explainable AI For Face Recognition

Bibliographic Details
Published in: Proceedings (International Conference on Engineering of Complex Computer Systems. Online), pp. 32 - 41
Main Authors: Peng, Shu; Dong, Naipeng; Bai, Guangdong
Format: Conference Proceeding
Language: English
Published: IEEE, 14.06.2023
ISSN: 2770-8535
DOI: 10.1109/ICECCS59891.2023.00014

More Information
Summary: Face recognition has been widely adopted in daily life in recent years. It usually relies on sophisticated techniques to achieve high accuracy in identifying or verifying the identities of given face images. Artificial intelligence (AI), especially deep learning, is a popular technique for face recognition due to its high accuracy, an approach known as deep face recognition. However, the reliability of deep face recognition models is a concern, especially in security-critical applications. The main challenge is the "black-box" nature of the models' sophisticated internal structure. Explainable AI has emerged as a solution that provides meaningful explanations to help humans understand the complicated internal structure of deep learning models and increases the transparency and interpretability of the "black box". However, this often comes at the cost of model accuracy. In this paper, we propose an approach that increases both the accuracy and the interpretability of quantitatively explainable AI models for face recognition. It increases the accuracy of explainable face recognition models by applying improved loss functions and enhances quantitative interpretability by adding a new visualisation feature. The proposed approach is validated on advanced deep face recognition models and compared with existing approaches to demonstrate its better performance.
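The summary does not specify which loss functions the authors improve. As a rough illustration only (an assumption, not the paper's method), the minimal PyTorch sketch below shows an additive angular margin loss of the kind commonly used to raise deep face recognition accuracy; the class name AngularMarginLoss and the hyperparameters s and m are illustrative.

# Minimal sketch of an ArcFace-style additive angular margin loss.
# This is an assumption for illustration; the paper's loss functions are not named in the record.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMarginLoss(nn.Module):
    def __init__(self, embedding_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        # One learnable class centre per identity, compared with embeddings on the unit hypersphere.
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.s = s  # feature scale
        self.m = m  # additive angular margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the margin only to the angle of the ground-truth class, then rescale.
        one_hot = F.one_hot(labels, num_classes=self.weight.shape[0]).float()
        logits = self.s * torch.cos(theta + self.m * one_hot)
        return F.cross_entropy(logits, labels)

# Usage: loss_fn = AngularMarginLoss(512, 1000); loss = loss_fn(embeddings, labels)

The record likewise does not describe the new visualisation feature. The sketch below assumes an occlusion-based saliency grid that scores each face patch by the drop in verification similarity when it is masked, which is one common way to make explanations quantitative; the function name, patch size, and model interface are assumptions.

# Minimal sketch (an assumption, not the paper's visualisation feature):
# occlusion-based quantitative saliency for face verification.
import torch
import torch.nn.functional as F

def occlusion_saliency(model, img_a, img_b, patch=16):
    """Return a (H/patch, W/patch) grid of similarity drops for img_a (shape C x H x W)."""
    model.eval()
    with torch.no_grad():
        emb_a = F.normalize(model(img_a.unsqueeze(0)), dim=1)
        emb_b = F.normalize(model(img_b.unsqueeze(0)), dim=1)
        base = (emb_a * emb_b).sum().item()  # baseline cosine similarity
        _, h, w = img_a.shape
        grid = torch.zeros(h // patch, w // patch)
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                masked = img_a.clone()
                masked[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
                emb_m = F.normalize(model(masked.unsqueeze(0)), dim=1)
                # Larger values mean the patch contributed more to the match.
                grid[i, j] = base - (emb_m * emb_b).sum().item()
    return grid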