What Does My GNN Really Capture? Exploring the Internal Representations of a GNN

Abstract

While existing GNN explanation methods explain a decision by studying the output layer, we propose a method that analyzes the hidden layers to identify the neurons that are co-activated for a given class. We then associate a graph with these co-activated neurons.
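As a rough illustration of the idea, the sketch below identifies neurons that are co-activated for a class from a hidden-layer activation matrix. Everything here is hypothetical (the activation matrix `H`, the labels `y`, the activation threshold, and the co-activation rate are illustrative choices, not the paper's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: hidden activations of one GNN layer (nodes x neurons)
# and a class label per node. Neurons 0-3 are artificially boosted for class 1.
num_nodes, num_neurons = 100, 16
H = rng.random((num_nodes, num_neurons))
y = rng.integers(0, 2, size=num_nodes)
H[y == 1, :4] += 1.0

def co_activated_neurons(H, y, target_class, threshold=0.5, rate=0.9):
    """Return indices of neurons whose activation exceeds `threshold`
    on at least a fraction `rate` of the nodes of `target_class`."""
    class_mask = y == target_class
    active = H[class_mask] > threshold            # boolean activation map
    activation_rate = active.mean(axis=0)         # per-neuron activation frequency
    return np.flatnonzero(activation_rate >= rate)

neurons = co_activated_neurons(H, y, target_class=1)
print(neurons)
```

The set returned here could then serve as the node set of the graph associated with the class, e.g. by linking neurons that frequently fire together on the same input nodes.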