Progress report on influencing factors and models (v2)

D2.1

Face recognition has become a fundamental technology in many real-world applications, using pattern recognition-based tools to offer an increased level of security. Moreover, face recognition solutions increasingly rely on sophisticated artificial intelligence (AI) techniques, which have achieved significantly improved performance in recent years. However, these performance gains often come with higher complexity and systems that are harder to understand; consequently, modern AI-based face recognition systems are sometimes referred to as 'black-box' systems. This raises the risk that AI-based facial recognition technology will not be trusted if its results cannot be at least minimally explained, especially since this technology also brings risks in terms of privacy, which is a very sensitive societal issue. Thus, facial recognition explainability has become a pivotal step towards understanding the behaviour of AI-based facial recognition systems and increasing trust in the technology used. Explainability can be the decisive factor enabling the use of face recognition systems that conform with the European initiative entitled 'Artificial Intelligence Act', where most biometric recognition systems are considered 'high-risk AI' and the requirements for their authorization will include the need for transparency and information to the user.

The idea underlying AI-based facial recognition explainability is to offer insights into why a specific face probe is matched with one identity instead of another. The explainability process starts by identifying the influencing factors of AI-based face recognition and understanding their impact on the overall performance of such systems. In this context, the XAIface project aims to contribute to a better understanding and explanation of the working mechanisms of AI-based facial recognition systems, in order to enhance their effectiveness and the social acceptance of AI-based facial recognition technology.
The main objective of this report is to provide a survey of the influencing factors that impact the recognition performance of AI-based facial recognition systems in general, and of deep learning (DL)-based facial recognition systems in particular. This report also reviews existing publications that study the impact of the various influencing factors on face recognition performance and attempt to model the nature and strength of their effects when compared to each other, as well as any interference between them. Based on the review included in this report, additional studies may be performed to obtain further models, considering appropriate criteria, metrics and protocols, to better understand the impact of the identified influencing factors on DL-based face recognition performance. Presenting the consortium's own work is out of the scope of this document; it will instead be presented in Deliverable 2.2, 'Progress Report on Implementation of Evaluation Metrics and Protocols'.