The application of artificial intelligence to protect high-risk environments (medical systems, critical infrastructure, education, essential services), on the one hand, and the auditing of AI-based cybersecurity solutions, on the other, both require so-called trusted AI: algorithms that guarantee transparency, traceability and supervision. An essential prerequisite is the availability of explainable algorithms that combine the performance and scalability of ‘black box’ approaches, such as neural networks, with semantically rich representations, i.e. rules or, in our case, graphs. This is particularly true in the context of medical data.
This project therefore aims to define new graph algorithms that support the explainability of analyses, together with a dedicated query language for expressing the operations performed. The research will focus on evolving environments, and therefore on dynamic graphs. Two approaches are considered promising candidates for achieving this objective: Laplacian (spectral) analysis and inductive graph convolutional neural networks.
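To make the two candidate techniques concrete, the sketch below is a minimal, illustrative Python example of ours, not the project's method: it measures the drift of the normalized Laplacian spectrum between two snapshots of a toy dynamic graph, and applies one inductive, GraphSAGE-style aggregation layer. All function names, weights and toy data are hypothetical.

```python
# Illustrative sketches of the two candidate techniques on toy snapshots of a
# dynamic graph; the names and data here are assumptions, not the project's code.
import numpy as np

def normalized_laplacian(adj: np.ndarray) -> np.ndarray:
    """L_sym = I - D^{-1/2} A D^{-1/2} for an undirected adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = 1.0 / np.sqrt(deg[nz])
    return np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def spectral_drift(adj_t: np.ndarray, adj_t1: np.ndarray) -> float:
    """Distance between the Laplacian spectra of two consecutive snapshots;
    a large drift flags a structural change in the evolving graph."""
    ev_t = np.linalg.eigvalsh(normalized_laplacian(adj_t))
    ev_t1 = np.linalg.eigvalsh(normalized_laplacian(adj_t1))
    return float(np.linalg.norm(ev_t1 - ev_t))

def sage_layer(adj, feats, w_self, w_neigh):
    """One inductive, GraphSAGE-style layer: mean-aggregate neighbour features,
    combine them with the node's own features, then apply ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    neigh_mean = (adj @ feats) / np.maximum(deg, 1.0)
    return np.maximum(feats @ w_self + neigh_mean @ w_neigh, 0.0)

# Toy dynamic graph: a 4-cycle, then the same graph with one edge rewired.
A0 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
A1 = A0.copy()
A1[0, 1] = A1[1, 0] = 0.0   # drop edge 0-1
A1[0, 2] = A1[2, 0] = 1.0   # add edge 0-2
print(f"spectral drift between snapshots: {spectral_drift(A0, A1):.3f}")

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))          # 3 input features per node
emb = sage_layer(A1, feats, rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
print("inductive node embeddings:\n", emb)
```

The spectral drift yields an explainable signal, since the associated eigenvectors localize which vertices drive a structural change, while the inductive layer, being defined over local neighbourhoods only, transfers unchanged to nodes that appear as the graph evolves.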
This research will represent a significant breakthrough in the field of explainable AI for the cybersecurity of sensitive dynamic systems and will demonstrate the benefits of this approach for eHealth environments. The work will strive to adhere to best practices in academic research: reproducible research, and the use and publication of both open data and open-source software.