Caroline Mazini-Rodrigues

Unsupervised discovery of interpretable visual concepts

Abstract Providing non-experts with interpretability of deep-learning models, while fundamental for responsible real-world usage, is challenging. Attribution maps from xAI techniques, such as Integrated Gradients, are a typical example of a visualization technique that carries a high level of information but is difficult to interpret. In this paper, we propose two methods, Maximum Activation Groups Extraction (MAGE) and Multiscale Interpretable Visualization (Ms-IV), to explain the model’s decision and enhance global interpretability. MAGE finds, for a given CNN, combinations of features which, globally, carry a semantic meaning; we call these combinations concepts.

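The abstract names the extracted feature groups only at a high level; the MAGE procedure itself is detailed in the paper. As a rough, hypothetical illustration of the general idea, grouping CNN channels whose activations co-vary into candidate concepts, the sketch below clusters the channel activation profiles of a ResNet-18 layer with k-means. The model, layer, and clustering choice are all assumptions for illustration, not the authors’ implementation.

```python
# Hypothetical illustration (not the MAGE algorithm): cluster the channels
# of a CNN feature map by the similarity of their activation patterns, so
# that channels firing on the same images form a candidate "concept".
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

model = models.resnet18(weights="IMAGENET1K_V1").eval()

activations = {}
def hook(_, __, output):
    activations["feat"] = output.detach()

# Capture the last convolutional block's output (an assumption; any layer works).
model.layer4.register_forward_hook(hook)

images = torch.randn(32, 3, 224, 224)   # stand-in for a real image batch
with torch.no_grad():
    model(images)

feat = activations["feat"]              # (N, C, H, W)
n_images, n_channels = feat.shape[:2]
# Describe each channel by its mean activation per image: a C x N profile matrix.
profiles = feat.mean(dim=(2, 3)).T      # (C, N)

# Channels with similar profiles tend to respond to the same visual patterns;
# each k-means cluster is one candidate group of features (a "concept").
groups = KMeans(n_clusters=8, n_init=10).fit_predict(profiles.numpy())
for k in range(8):
    print(f"concept {k}: channels {[i for i in range(n_channels) if groups[i] == k]}")
```

Inspecting the images that most strongly activate each group would then be one plausible way to attach a visual meaning to it.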

Bridging human concepts and computer vision for explainable face verification

By Miriam Doh, Caroline Mazini-Rodrigues, Nicolas Boutry, Laurent Najman, Matei Mancas, Hugues Bersini

2023-10-10

In 2nd International Workshop on Emerging Ethical Aspects of AI (BEWARE-23)

Abstract With Artificial Intelligence (AI) influencing the decision-making process of sensitive applications such as Face Verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important to make these decisions interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations of a face verification algorithm.

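For context, the decision that such explanations target usually reduces to comparing deep embeddings of two face crops. The sketch below assumes a generic embedding network and a cosine-similarity threshold; both are placeholders, a common baseline rather than the specific system studied in the paper.

```python
# Generic face verification by embedding similarity (a common baseline,
# not the specific verification system studied in the paper).
import torch
import torch.nn.functional as F

def verify(embed_net, face_a, face_b, threshold=0.5):
    """Return True if the two face crops are judged to be the same person."""
    with torch.no_grad():
        za = F.normalize(embed_net(face_a), dim=-1)  # unit-norm embedding of face A
        zb = F.normalize(embed_net(face_b), dim=-1)  # unit-norm embedding of face B
    similarity = (za * zb).sum(dim=-1)               # cosine similarity in [-1, 1]
    return similarity.item() > threshold

# XAI techniques then ask: which facial regions drove the similarity up or down?
```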

Gradients intégrés renforcés (Reinforced Integrated Gradients)

Abstract The visualizations provided by Explainable Artificial Intelligence (xAI) techniques to explain convolutional neural networks (CNNs) are sometimes difficult to interpret. The richness of the patterns given as input (the pixels of an image) leads to complex correlations between classes. Gradient-based techniques, such as Integrated Gradients, highlight the importance of these features. However, when they are visualized as images, the result can contain excessive noise, making the provided explanations difficult to interpret.

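Since this abstract centers on Integrated Gradients, a minimal sketch of the published formulation (Sundararajan et al., 2017) may be useful: the attribution of input feature i is (x_i - x'_i) times the average gradient along the straight path from a baseline x' to the input x, approximated below with a Riemann sum. The model and step count are placeholders.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients for one input x (C, H, W)."""
    if baseline is None:
        baseline = torch.zeros_like(x)            # black-image baseline (common default)
    # Interpolation points alpha in (0, 1] along the straight path baseline -> x.
    alphas = torch.linspace(1.0 / steps, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)     # (steps, C, H, W)
    path.requires_grad_(True)
    scores = model(path)[:, target].sum()         # target-class logits along the path
    grads = torch.autograd.grad(scores, path)[0]  # d(score)/d(input) at each point
    avg_grad = grads.mean(dim=0)                  # approximates the path integral
    return (x - baseline) * avg_grad              # IG attribution, same shape as x

# The resulting map is what the abstract observes to be noisy when shown as an image.
```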