Abstract
Most explainability methods for graph neural networks focus on identifying important features but fail to capture the reasoning behind a model's decisions. Logic-based approaches instead aim to generate explanations in the form of logic rules that reflect the model's underlying behavior. In this talk, I present two of my recent contributions, accepted at ECML-PKDD 2025 and NeurIPS 2025, both centered on logic-based explanations for graph neural networks. The first introduces a post-hoc approach that extracts logic rules based on the presence of subgraphs, while the second proposes a novel self-explainable model capable of deriving logic rules that transparently describe its own decision process.
Bio
Alessio Ragno is a postdoctoral researcher specializing in explainable AI and graph neural networks. After completing his PhD on topology-based explanations for neural networks at Sapienza University of Rome, he joined the LIRIS laboratory at INSA Lyon. His research interests include self-explainable neural networks, logic-based explanations, graph neural networks, and AI-aided scientific discovery.