Dictionnaire des sciences du jeu
Abstract
Are gambling games games like any others? Judging by the definition of the word jeu in the dictionary, one could safely bet that they are not: "An activity in which one engages to amuse or entertain oneself, without anything being at stake" (Académie française, 2022). Given that, according to the same dictionary, gambling games are defined as "entertainments in which money is risked in the hope of winning the game", the presence of a monetary stake should, strictly speaking, exclude them from the domain of play. Yet, after studying this domain at length, notably from the standpoint of language, the philosopher Jacques Henriot (1989: 213) maintains that "[g]ambling games do not […] form a category apart", while also positing that "it is not by reading the entry 'Jeu' in a dictionary that one will learn, if one does not already know it, what playing is" (ibid.: 13). Three arguments justify this inclusion of gambling games in the domain of play: 1) "it is not part of the definition of play [beyond the dictionary's] that the acrobat must always work with a net" (ibid.: 213); in other words, play is not necessarily free of risk or consequence, much less free of stakes; 2) "by everyone's admission, gambling games are games" (ibid.), if only because the same word is used to name them; 3) "any game, whatever it may be, can give rise to a bet and be doubled by a gambling game" (ibid.), which presupposes a link between the two…
Metrics for evaluating interface explainability models for cyberattack detection in IoT data
In Complex computational ecosystems 2023 (CCE’23)
Abstract
The importance of machine learning (ML) in detecting cyberattacks lies in its ability to efficiently process and analyze large volumes of IoT data, which is critical for ensuring the security and privacy of sensitive information transmitted between connected devices. However, the lack of explainability of ML algorithms has become a significant concern in the cybersecurity community. Explainable techniques are therefore developed to make ML algorithms more transparent, improving trust in attack detection systems by allowing cybersecurity analysts to understand the reasons behind model predictions and to identify any limitations or errors in the model. Key artifacts of explainability are interface explainability models such as impurity and permutation feature importance analysis, Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP). However, these models do not provide enough quantitative information (metrics) to build complete trust and confidence in the explanations they generate. In this paper, we propose and evaluate metrics such as reliability and latency to quantify the trustworthiness of explanations and to establish confidence in the model's decisions to accurately detect and explain cyberattacks in IoT data during the ML process.
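To give an idea of what such metrics could look like in practice, the sketch below times a permutation-importance explanation (latency) and compares feature rankings across two runs (a crude reliability proxy). This is a minimal sketch under assumed definitions, not the paper's actual metrics; the model, the synthetic data, and the Spearman-correlation reliability score are illustrative choices.

```python
# Illustrative sketch (not the paper's metric definitions): measure the
# latency of one explanation and a rank-stability proxy for reliability.
import time
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain(seed):
    """One permutation-importance explanation, with its wall-clock latency."""
    start = time.perf_counter()
    result = permutation_importance(model, X, y, n_repeats=5, random_state=seed)
    return result.importances_mean, time.perf_counter() - start

imp_a, latency_a = explain(seed=1)
imp_b, latency_b = explain(seed=2)

# Reliability proxy: rank correlation of feature importances across runs.
reliability, _ = spearmanr(imp_a, imp_b)
print(f"latency: {latency_a:.3f}s / {latency_b:.3f}s, reliability: {reliability:.2f}")
```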
Towards attack detection in traffic data based on spectral graph analysis
Abstract
Nowadays, cyberattacks have become a significant concern for individuals, organizations, and governments. These attacks can take many forms, and their consequences can be severe. To protect ourselves from these threats, it is essential to employ a range of strategies, such as pattern detection, classification of system behaviors against previously known attacks, and anomaly detection, the latter making it possible to identify unknown forms of attacks. Few of these existing techniques seem to fully exploit the potential of mathematical approaches such as spectral graph analysis. This domain provides tools that extract important topological features of a graph by computing its Laplacian matrix and the corresponding spectrum. Such a framework can yield valuable insights into the underlying structure of a network, which can be used to detect cyberthreats: significant changes in the topology of the graph result in significant changes in the spectrum of the Laplacian matrix. For this reason, we propose to model the network as a dynamic graph composed of nodes (devices) and edges (requests between devices), to study the evolution of its Laplacian spectrum, and to compute metrics on this evolving spectrum. This way, we should be able to detect suspicious behaviors that may indicate an ongoing attack.
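The core mechanism can be sketched in a few lines: compute the Laplacian spectrum of successive traffic-graph snapshots and flag large spectral shifts. The toy graphs, the Euclidean spectral distance, and the fixed-length padding below are illustrative assumptions, not the metrics studied in the paper.

```python
# Illustrative sketch: Laplacian spectra of two network snapshots, and a
# spectral distance that grows sharply when the topology changes.
import numpy as np
import networkx as nx

def laplacian_spectrum(graph, k):
    """Sorted Laplacian eigenvalues, zero-padded/truncated to length k."""
    eigs = np.sort(nx.laplacian_spectrum(graph))
    return np.pad(eigs, (0, max(0, k - len(eigs))))[:k]

def spectral_distance(g_prev, g_curr, k=32):
    """Euclidean distance between the two snapshots' Laplacian spectra."""
    return np.linalg.norm(laplacian_spectrum(g_prev, k) - laplacian_spectrum(g_curr, k))

# Two snapshots: normal traffic vs. a sudden hub (e.g., a scanning device).
normal = nx.erdos_renyi_graph(30, 0.1, seed=0)
attacked = normal.copy()
attacked.add_edges_from((0, v) for v in range(1, 30))  # node 0 contacts everyone

baseline = spectral_distance(normal, normal)
shift = spectral_distance(normal, attacked)
print(f"baseline: {baseline:.2f}, under attack: {shift:.2f}")  # large shift -> alert
```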
A systematic mapping of methods and tools for performance analysis of data streaming with containerized microservices architecture
In 18th iberian conference on information systems and technologies (CISTI’2023)
Abstract
With the growth of the Internet of Things (IoT) and rising customer expectations, the importance of data streaming and stream processing has increased. Data streaming refers to the concept whereby data is processed and transmitted continuously and in real time, without necessarily being stored in a physical location. Personal health monitors and home security systems are examples of data streaming sources. This paper presents a systematic mapping study of the performance analysis of data streaming systems in the context of containerization and microservices. The research aimed to identify the main methods, tools, and techniques used in the last five years for this type of study. The results show that there are still few performance evaluation studies for this niche of systems, and that gaps remain to be filled, such as the lack of analytical modeling and the neglect of the influence of communication protocols.
Modern vectorization and alignment of historical maps: An application to the Paris atlas (1789-1950)
Abstract
Maps have been a unique source of knowledge for centuries. Such historical documents provide invaluable information for analyzing complex spatial transformations over long time frames. This is particularly true for urban areas, which lie at the intersection of multiple interleaved research domains: humanities, social sciences, etc. The large amount and significant diversity of map sources call for automatic image processing techniques to extract the relevant objects as vector features. For decades, the complexity of maps (text, noise, digitization artifacts, etc.) has hindered the development of versatile and efficient raster-to-vector approaches. In this thesis, we propose a learnable, reproducible, and reusable solution for the automatic transformation of raster maps into vector objects (building blocks, streets, rivers), focusing on the extraction of closed shapes. Our approach builds on the complementary strengths of convolutional neural networks, which excel at filtering edges but yield outputs with poor topological properties, and mathematical morphology, which offers solid guarantees regarding closed-shape extraction while being very sensitive to noise. To improve the robustness of deep edge filters to noise, we review several topology-preserving loss functions and propose new ones, which improve the topological properties of the results. We also introduce a new contrast convolution (CConv) layer to investigate how architectural changes can affect such properties. Finally, we investigate the different approaches that can be used to implement each stage, and how to combine them most efficiently. Building on our shape extraction pipeline, we propose a new alignment procedure for historical map images, and begin to leverage the redundancies contained in map sheets with similar contents to propagate annotations, improve vectorization quality, and eventually detect evolution patterns for later analysis or to automatically assess vectorization quality. To evaluate the performance of all the methods mentioned above, we released a new dataset of annotated historical map images; it is the first public and open dataset targeting the task of historical map vectorization. We hope that, thanks to our publications and the public, open release of datasets, code, and results, our work will benefit a wide range of historical map-related applications.
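To make the two-stage idea concrete, here is a minimal sketch in which a noisy edge-probability map (standing in for a CNN output) is repaired by morphological closing so that closed shapes can be extracted by connected-component labeling. The synthetic map, threshold, and structuring element are illustrative; the thesis's actual CNN, CConv layer, and topology-preserving losses are not reproduced here.

```python
# Illustrative sketch: morphology repairs a broken contour in a noisy
# edge map, then closed shapes fall out as connected components.
import numpy as np
from skimage.morphology import closing
from skimage.measure import label

rng = np.random.default_rng(0)
edges = np.zeros((64, 64))
edges[10, 10:50] = edges[50, 10:50] = 1.0   # a rectangle's border...
edges[10:50, 10] = edges[10:51, 50] = 1.0
edges[10, 30] = 0.0                          # ...with a small gap
edges += 0.1 * rng.random(edges.shape)       # CNN-like noise

# Morphological closing bridges the small gap, restoring a closed contour.
mask = closing(edges > 0.5, np.ones((3, 3), dtype=bool))

# Connected components of the non-edge pixels are candidate closed shapes.
shapes = label(~mask)
print(f"{shapes.max()} regions extracted")  # inside + outside of the rectangle
```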
A Myhill-Nerode theorem for higher-dimensional automata
In Proceedings of the 44th international conference on application and theory of petri nets and concurrency (PN’23)
Abstract
We establish a Myhill-Nerode type theorem for higher-dimensional automata (HDAs), stating that a language is regular precisely if it has finite prefix quotient. HDAs extend standard automata with additional structure, making it possible to distinguish between interleavings and concurrency. We also introduce deterministic HDAs and show that not all HDAs are determinizable, that is, there exist regular languages that cannot be recognised by a deterministic HDA. Using our theorem, we develop an internal characterisation of deterministic languages.
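For orientation, the statement has the same shape as the classical Myhill-Nerode criterion, written here with prefix quotients; the HDA-specific notions (the languages involved and the precise quotient construction) are as defined in the paper, so the display below is only the classical template:
$$L \text{ is regular} \iff \{\, w \backslash L \mid w \,\} \text{ is finite}, \qquad \text{where } w \backslash L = \{\, v \mid wv \in L \,\}.$$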
Catoids and modal convolution algebras
In Algebra Universalis
Abstract
We show how modal quantales arise as convolution algebras $Q^X$ of functions from catoids $X$, that is, multisemigroups with a source map $\ell$ and a target map $r$, into modal quantales $Q$, which can be seen as weight or value algebras. In the tradition of boolean algebras with operators we study modal correspondences between algebraic laws in $X$, $Q$ and $Q^X$. The class of catoids we introduce generalises Schweizer and Sklar’s function systems and object-free categories to a setting isomorphic to algebras of ternary relations, as they are used for boolean algebras with operators and substructural logics. Our results provide a generic construction of weighted modal quantales from such multisemigroups. It is illustrated by many examples. We also discuss how these results generalise to a setting that supports reasoning with stochastic matrices or probabilistic predicate transformers.
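The construction rests on the usual convolution product on $Q^X$, combining the multioperation $\odot$ of the catoid $X$ with the multiplication and suprema of the quantale $Q$; the formula below is the standard convolution-algebra form and is meant as orientation only, with the precise axioms on $\ell$, $r$ and the modal operators given in the paper:
$$(f \ast g)(x) \;=\; \bigvee_{x \,\in\, y \odot z} f(y) \cdot g(z), \qquad f, g \colon X \to Q.$$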
Non-fungible tokens: A review
In IEEE Internet of Things Magazine
Abstract
Non-Fungible Tokens (NFTs) are among the most promising technologies to have emerged in recent years. NFTs enable the efficient verification and ownership management of digital assets and therefore offer the means to secure them. NFTs are similar to blockchain, which was first used by cryptocurrencies and then by numerous other technologies. At first, the NFT concept attracted the attention of the digital art community. However, NFTs have the potential to enable a plethora of different applications and scenarios. We present a review of the NFT technology. We describe the basic components of NFTs and how NFTs work. Then, we present and discuss the different applications of NFTs. Finally, we discuss various challenges that the NFT technology must address in the future.
Why is the winner the best?
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR)
Abstract
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study covering all 80 competitions conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their rank, as well as on the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the metrics in the method design and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on the open research questions revealed by this work.