Ataollah Kamal

In pursuit of the hidden features of GNN’s internal representations

Abstract We consider the problem of explaining Graph Neural Networks (GNNs). While most attempts aim at explaining the final decision of the model, we focus on the hidden layers to examine what the GNN actually captures and to shed light on the hidden features it builds. To that end, we first extract activation rules that identify sets of exceptionally co-activated neurons when classifying graphs of the same category. These rules define internal representations that have a strong impact on the classification process.

Continue reading
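The abstract above describes the core mechanism: binarize a hidden layer's graph embeddings and search for sets of neurons that fire together far more often on graphs of one class than on the others. Here is a minimal NumPy sketch of that idea; the greedy frequency-contrast search and every name in it (`rule_support`, `greedy_rule`, the thresholds) are illustrative assumptions, not the paper's actual mining criterion.

```python
import numpy as np

def rule_support(A, y, neurons, target):
    """Fraction of graphs, inside and outside class `target`, whose
    binarized embedding activates *all* neurons of the candidate rule."""
    fires = A[:, sorted(neurons)].all(axis=1)
    return fires[y == target].mean(), fires[y != target].mean()

def greedy_rule(H, y, target, thresh=0.0, min_inside=0.3):
    """Greedily grow a set of co-activated neurons that stays frequent
    inside the target class while widening the gap with other classes.

    H: (n_graphs, n_neurons) hidden-layer embeddings, y: class labels.
    """
    A = H > thresh                              # binarize each neuron
    rule, best_gap = set(), 0.0
    for j in np.argsort(-A[y == target].mean(axis=0)):  # most-active first
        cand = rule | {int(j)}
        p_in, p_out = rule_support(A, y, cand, target)
        if p_in >= min_inside and (p_in - p_out) > best_gap:
            rule, best_gap = cand, p_in - p_out
    return rule, best_gap
```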

On GNN explainability with activation rules

Abstract GNNs are powerful models based on node representation learning that perform particularly well on many graph-related machine learning problems. The major obstacle to the deployment of GNNs is mostly a problem of societal acceptability and trustworthiness, properties that require making the internal functioning of such models explicit. Here, we propose to mine activation rules in the hidden layers to understand how GNNs perceive the world. The problem is not to discover activation rules that are individually highly discriminating for an output of the model.

Continue reading
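The abstract breaks off mid-argument, but the contrast it sets up, rules that matter jointly rather than individually, points to selecting rules as a set. A generic greedy set-cover sketch of that reading, with all names hypothetical: `rules_fire[r]` is assumed to be a boolean mask telling which input graphs trigger rule `r`.

```python
import numpy as np

def covering_rules(rules_fire):
    """Greedy set cover: pick a small subset of rules so that every
    input graph activates at least one selected rule."""
    n_graphs = len(rules_fire[0])
    covered = np.zeros(n_graphs, dtype=bool)
    chosen = []
    while not covered.all():
        # rule that covers the most still-uncovered graphs
        gains = [np.sum(f & ~covered) for f in rules_fire]
        best = int(np.argmax(gains))
        if gains[best] == 0:        # remaining graphs trigger no rule
            break
        chosen.append(best)
        covered |= rules_fire[best]
    return chosen
```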

Improving the quality of rule-based GNN explanations

By Ataollah Kamal, Elouan Vincent, Marc Plantevit, Céline Robardet

2022-09-12

In Workshop on eXplainable Knowledge Discovery in Data Mining, Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2022, Grenoble, France, September 19-23, 2022, Proceedings, Part I

Abstract Recent works have proposed to explain GNNs using activation rules. Activation rules capture specific configurations in the embedding space of a given layer that are discriminant for the GNN decision. These rules also capture hidden features of the input graphs, which requires associating each rule with a representative graph. In this paper, we propose, on the one hand, an analysis of heuristic-based algorithms to extract the activation rules and, on the other hand, the use of transport-based optimal graph distances to associate each rule with the most specific graph that triggers it.

Continue reading
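The association step can be sketched with the POT library's Gromov-Wasserstein discrepancy standing in for the transport-based graph distance; whether the paper uses this exact distance, and how it defines "most specific", are assumptions here, and the medoid criterion below is only one plausible choice.

```python
import numpy as np
import ot  # POT: pip install pot

def gw_distance(A1, A2):
    """Gromov-Wasserstein discrepancy between two graphs, represented by
    their (float) adjacency matrices, with uniform node weights."""
    p, q = ot.unif(len(A1)), ot.unif(len(A2))
    return ot.gromov.gromov_wasserstein2(A1, A2, p, q, 'square_loss')

def representative_graph(adjacencies):
    """Medoid of the graphs triggering a rule: the graph minimizing the
    sum of GW distances to all the others."""
    n = len(adjacencies)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = gw_distance(adjacencies[i], adjacencies[j])
    return int(np.argmin(D.sum(axis=1)))
```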

What does my GNN really capture? On exploring internal GNN representations

By Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, Céline Robardet

2022-07-23

In International Joint Conference on Artificial Intelligence 2022

Abstract GNNs are efficient at classifying graphs, but their internal workings are opaque, which limits their field of application. Existing methods for explaining GNNs focus on disclosing the relationships between input graphs and the model's decision. In contrast, the method we propose isolates internal features, hidden in the network layers, that the GNN automatically identifies in order to classify graphs. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than state-of-the-art methods, and therefore to provide confidence in the decision process.

Continue reading
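One way to read "the parts of the input graphs used by the GNN" concretely: if a rule is a set of hidden-layer components and per-node embeddings are available at that layer, the nodes whose embeddings activate every component can be highlighted. A small sketch under those assumptions, with all names hypothetical:

```python
import numpy as np

def nodes_supporting_rule(node_embeddings, rule_neurons, thresh=0.0):
    """Indices of nodes whose layer-k embedding activates every neuron
    of the rule, a crude highlight of the graph parts it relies on.

    node_embeddings: (n_nodes, n_neurons) embeddings of a single graph.
    """
    fires = node_embeddings[:, sorted(rule_neurons)] > thresh
    return np.where(fires.all(axis=1))[0]
```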

Qu’est-ce que mon GNN capture vraiment ? Exploration des représentations internes d’un GNN

By Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, Céline Robardet

2022-03-24

In Extraction et Gestion des Connaissances, EGC 2022, Blois, France, January 24-28, 2022

Abstract While existing GNN explanation methods explain the decision by studying the output layer, we propose a method that analyzes the hidden layers to identify the neurons that are co-activated for a given class, and we associate a representative graph with them.

Continue reading