Publications

Weakly well-composed cell complexes over $n$D pictures

Abstract

In previous work we proposed a combinatorial algorithm to “locally repair” the cubical complex $Q(I)$ that is canonically associated with a given 3D picture $I$. The algorithm constructs a 3D polyhedral complex $P(I)$ which is homotopy equivalent to $Q(I)$ and whose boundary surface is a 2D manifold. A polyhedral complex satisfying these properties is called well-composed. In the present paper we extend these results to higher dimensions. We prove that for a given $n$-dimensional picture the obtained cell complex is well-composed in a weaker sense but is still homotopy equivalent to the initial cubical complex.

Continue reading

Improving parallel state-space exploration using genetic algorithms

By Étienne Renault

2018-06-14

In Proceedings of the 12th international conference on verification and evaluation of computer and communication systems (VECOS’18)

Abstract

The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a depth-first search (DFS) and can be carried out by multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by generating artificial initial states using genetic algorithms. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation runs 10% faster than state-of-the-art algorithms. These results demonstrate that this novel approach is worth considering as a way to overcome existing limitations.
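The genetic-algorithm loop mentioned above can be sketched schematically. Everything below is illustrative and not taken from the paper: the bitstring encoding, the toy one-max fitness, the truncation selection, and the `evolve` helper are all assumptions; in the paper's setting, the fitness would instead score how promising a synthetic state is as a starting point for a DFS thread.

```python
import random

def evolve(fitness, length=16, pop_size=20, generations=50, seed=0):
    """Minimal GA sketch: truncation selection, one-point crossover,
    single-bit mutation, elitist survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)             # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# toy fitness: number of 1 bits ("one-max")
best = evolve(sum)
```

Since the survivors are kept unchanged, the best individual's fitness is monotonically non-decreasing across generations.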

Continue reading

Reactive synthesis from LTL specification with Spot

By Thibaud Michaud, Maximilien Colange

2018-06-07

In Proceedings of the 7th workshop on synthesis, SYNT@CAV 2018

Abstract

We present ltlsynt, a new tool for reactive synthesis from LTL specifications. It relies on the efficiency of Spot to translate the input LTL specification into a deterministic parity automaton. The latter yields a parity game, which we solve with Zielonka’s recursive algorithm. The approach taken in ltlsynt was widely believed to be impractical, due to the double-exponential size of the parity game and to the open status of the complexity of parity game solving. ltlsynt ranked second in its track at the 2017 edition of the SYNTCOMP competition. This demonstrates the practical applicability of the parity game approach when backed by efficient manipulations of $\omega$-automata such as those provided by Spot. We present our approach and report on our experimental evaluation of ltlsynt to better understand its strengths and weaknesses.
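Zielonka’s recursive algorithm, mentioned above, can be sketched in its textbook form. This is an illustrative Python formulation over explicit node sets with `owner`/`priority` dicts and an `attractor` helper, not Spot’s implementation; the game is assumed total (every node has a successor).

```python
def attractor(nodes, owner, edges, target, player):
    """Nodes from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [w for w in edges[v] if w in nodes]
            if (owner[v] == player and any(w in attr for w in succs)) or \
               (owner[v] != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, owner, priority, edges):
    """Zielonka's recursion: returns the winning regions (W0, W1)."""
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                                  # player who likes priority d
    a = attractor(nodes, owner, edges,
                  {v for v in nodes if priority[v] == d}, p)
    win = zielonka(nodes - a, owner, priority, edges)
    if not win[1 - p]:                         # opponent wins nothing below
        return (set(nodes), set()) if p == 0 else (set(), set(nodes))
    b = attractor(nodes, owner, edges, win[1 - p], 1 - p)
    w0, w1 = zielonka(nodes - b, owner, priority, edges)
    return (w0, w1 | b) if p == 0 else (w0 | b, w1)

# tiny game with no real choice: the unique play cycles v0 -> v1 -> v0 and
# sees maximal priority 2 (even) infinitely often, so player 0 wins both nodes
owner, priority = {0: 0, 1: 1}, {0: 1, 1: 2}
edges = {0: [1], 1: [0]}
w0, w1 = zielonka({0, 1}, owner, priority, edges)   # ({0, 1}, set())
```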

Continue reading

A formally-proved algorithm to compute the correct average of decimal floating-point numbers

By Sylvie Boldo, Florian Faissole, Vincent Tourneur

2018-06-01

In 25th IEEE symposium on computer arithmetic

Abstract

Some modern processors include decimal floating-point units, with a conforming implementation of the IEEE-754 2008 standard. Unfortunately, many algorithms from the computer arithmetic literature are no longer correct when computations are done in radix 10. This is in particular the case for the computation of the average of two floating-point numbers. Several radix-2 algorithms are available, including one that provides the correct rounding, but none of them carries over to radix 10. This paper presents a new radix-10 algorithm that computes the correctly-rounded average. To guarantee a higher level of confidence, we also provide a Coq formal proof of our theorems that takes gradual underflow into account. Note that our formal proof was generalized to ensure this algorithm is correct when computations are done in any even radix.
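The core difficulty can be illustrated with Python’s `decimal` module in a toy 3-digit decimal format (this is only an illustration of why naive averaging misrounds, not the paper’s algorithm): the intermediate sum `a + b` is itself rounded, so rounding twice can land on the wrong result.

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

ctx = Context(prec=3, rounding=ROUND_HALF_EVEN)  # toy 3-digit decimal format

a, b = Decimal("1.00"), Decimal("0.007")

# Naive average: every operation rounds to 3 digits, so the sum
# 1.007 is first rounded to 1.01, then halved.
naive = ctx.divide(ctx.add(a, b), Decimal(2))

# Reference: compute the exact average, then round only once.
exact = (a + b) / Decimal(2)          # exact here: 0.5035
correct = ctx.plus(exact)             # single final rounding: 0.504

print(naive, correct)                 # 0.505 0.504 -- they disagree
```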

Continue reading

Real-time document detection in smartphone videos

By Élodie Puybareau, Thierry Géraud

2018-05-10

In Proceedings of the 24th IEEE international conference on image processing (ICIP)

Abstract

Smartphones are more and more used to capture photos of all kinds of important documents in many different situations, yielding new image processing needs. One of these is the ability to detect documents in real time in a smartphone’s video stream while being robust to classical defects such as low contrast, blurry images, flares, shadows, etc. This feature is interesting to help users capture their documents in the best conditions and to guide the capture (evaluating appropriate distance, centering, and tilt). In this paper we propose a solution to detect documents in real time, making very few assumptions about their content and background. The method is based on morphological operators, which contrasts with classical line detectors or gradient-based thresholds. The use of such invariant operators makes our method robust to the defects encountered in video streams and suitable for real-time document detection on smartphones.
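The kind of contrast-invariant morphological operator the method builds on can be illustrated with a toy morphological gradient (dilation minus erosion). This pure-Python sketch with a 3x3 structuring element is only an illustration of the operator family, not the actual operators used in the paper.

```python
def _neighborhood(img, y, x):
    """3x3 neighborhood of (y, x), clipped at the image border."""
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

def morphological_gradient(img):
    """Dilation minus erosion with a 3x3 structuring element: high values
    mark object boundaries, whatever the absolute gray levels are."""
    return [[max(_neighborhood(img, y, x)) - min(_neighborhood(img, y, x))
             for x in range(len(img[0]))]
            for y in range(len(img))]

# a single bright pixel: every 3x3 neighborhood sees both 0 and 5,
# so the gradient lights up everywhere around the transition
img = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
grad = morphological_gradient(img)
```

Scaling all gray levels by a constant scales the gradient accordingly but leaves its maxima in place, which is what makes such operators robust to contrast changes.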

Continue reading

The tree of shapes turned into a max-tree: A simple and efficient linear algorithm

By Edwin Carlinet, Thierry Géraud, Sébastien Crozet

2018-05-10

In Proceedings of the 24th IEEE international conference on image processing (ICIP)

Abstract

The Tree of Shapes (ToS) is a morphological tree-based representation of an image that encodes the inclusion of its level lines. It features many invariances to image changes, which makes it well-suited for many applications in image processing and pattern recognition. In this paper, we propose a way of turning the computation of this tree into a Max-Tree computation. The latter has been widely studied, and many efficient algorithms (including parallel ones) have been developed. Furthermore, we develop a specific optimization to speed up the common 2D case. The result is a simple and efficient algorithm, running in linear time with a low memory footprint, that outperforms other current algorithms. For reproducible research purposes, we distribute our code as free software.
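The Max-Tree that the approach reduces to can be illustrated with a minimal union-find construction: pixels are processed from brightest to darkest, and each connected component of an upper level set is attached below the pixel that merges it. This is a textbook sketch over an explicit adjacency, not the paper’s optimized linear-time algorithm.

```python
def max_tree(values, adjacency):
    """Union-find Max-Tree sketch: returns a parent array encoding the
    hierarchy of connected components of upper level sets."""
    n = len(values)
    order = sorted(range(n), key=lambda p: values[p], reverse=True)
    parent = [None] * n
    root = list(range(n))                 # union-find representatives

    def find(p):
        while root[p] != p:
            root[p] = root[root[p]]       # path halving
            p = root[p]
        return p

    seen = [False] * n
    for p in order:                       # from brightest to darkest pixel
        parent[p] = p
        seen[p] = True
        for q in adjacency[p]:
            if seen[q]:
                r = find(q)
                if r != p:
                    parent[r] = p         # attach brighter component under p
                    root[r] = p
    return parent

# 1-D toy signal: the peak {1} sits inside component {1, 2}, inside the root
parent = max_tree([0, 2, 1], {0: [1], 1: [0, 2], 2: [1]})   # [0, 2, 0]
```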

Continue reading

Segmentation des hyperintensités de la matière blanche en quelques secondes à l’aide d’un réseau de neurones convolutif et de transfert d’apprentissage

By Élodie Puybareau, Yongchao Xu, Joseph Chazalon, Isabelle Bloch, Thierry Géraud

2018-05-04

In Actes du congrès reconnaissance des formes, image, apprentissage et perception (RFIAP), session spéciale “deep learning, deep in france”

Abstract

In this article, we propose a fast, automatic method to segment white matter hyperintensities (WMH) in 3D brain MRI images, using a fully convolutional neural network (FCN) and transfer learning. The FCN is the Visual Geometry Group (VGG) neural network, pre-trained on ImageNet for natural image classification and fine-tuned on the training data of the MICCAI WMH challenge. For each slice of the volume to segment, we consider three images: the T1 acquisition, the FLAIR acquisition, and the result of a morphological operator applied to the FLAIR, the top-hat, which highlights small structures of high intensity. These three 2D images are assembled into a 2D three-channel image, interpreted as a color image and passed to the FCN to obtain the 2D segmentation of the corresponding slice. All slices are processed this way to form the 3D output segmentation. With this technique, WMH segmentation of a 3D brain volume takes about 10 seconds, pre-processing included. Our technique ranked 6th out of 20 participants in the MICCAI WMH challenge.
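The per-slice channel-stacking step can be sketched with NumPy. The names `t1_vol`, `flair_vol`, and `tophat_vol` are hypothetical, the top-hat computation itself is omitted, and this is only a sketch of the data layout, not the paper’s pipeline.

```python
import numpy as np

def make_channels(t1_vol, flair_vol, tophat_vol, z):
    """Stack the z-th axial slice of each modality as the three channels of
    a single pseudo-color 2D image, ready for a 2D FCN."""
    return np.stack([t1_vol[z], flair_vol[z], tophat_vol[z]], axis=-1)

# toy volumes of shape (slices, height, width)
t1 = np.zeros((4, 5, 6))
rgb = make_channels(t1, t1, t1, z=2)   # one (height, width, 3) slice
```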

Continue reading

Un algorithme de complexité linéaire pour le calcul de l’arbre des formes

By Edwin Carlinet, Sébastien Crozet, Thierry Géraud

2018-05-04

In Actes du congrès reconnaissance des formes, image, apprentissage et perception (RFIAP)

Abstract

The Tree of Shapes (ToS) is a hierarchical morphological representation of an image that encodes the inclusion of its level lines. It is invariant to certain changes of the image, which makes it an ideal tool for developing pattern recognition applications. In this article, we propose a method to turn its construction into a Max-Tree computation. The latter has been widely studied in recent years, and efficient algorithms (some of them parallel) already exist. We also propose an optimization that speeds up its computation in the classical case of 2D images. This leads to a simple, efficient algorithm, running in linear time with respect to the number of pixels, with a low memory footprint, that outperforms state-of-the-art algorithms.

Continue reading

Parallel model checking algorithms for linear-time temporal logic

Abstract

Model checking is a fully automated, formal method for demonstrating the absence of bugs in reactive systems. Here, bugs are violations of properties expressed in Linear-time Temporal Logic (LTL). A fundamental challenge to its application is the exponential explosion of the number of system states. This chapter discusses the use of parallelism to overcome this challenge. We review the textbook automata-theoretic approach, which reduces the model checking problem to the graph problem of finding cycles. We discuss several parallel algorithms that attack this problem in various ways, each with different characteristics: depth-first search (DFS) based algorithms rely on heuristics for good parallelization, but exhibit a low complexity and good on-the-fly behavior; breadth-first search (BFS) based approaches, on the other hand, offer good parallel scalability and support distributed parallelism. In addition, we present various simpler model checking tasks, which still solve a large and important subset of the LTL model checking problem, and show how these can be exploited to yield more efficient algorithms. In particular, we provide simplified DFS-based search algorithms and show that the BFS-based algorithms exhibit optimal runtimes in certain cases.
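The reduction to cycle finding mentioned above can be illustrated with the classic sequential nested-DFS algorithm for detecting an accepting cycle in a Büchi automaton: an outer (blue) DFS starts, in postorder, an inner (red) DFS from each accepting state to check whether a cycle closes through it. This is a textbook sketch, not one of the parallel variants the chapter discusses.

```python
def nested_dfs(succ, init, accepting):
    """Nested DFS: is some accepting cycle reachable from init?"""
    blue, red = set(), set()

    def red_dfs(s, seed):
        # inner search: try to close a cycle back through the accepting seed
        for t in succ(s):
            if t == seed:
                return True
            if t not in red:
                red.add(t)
                if red_dfs(t, seed):
                    return True
        return False

    def blue_dfs(s):
        blue.add(s)
        for t in succ(s):
            if t not in blue and blue_dfs(t):
                return True
        # in postorder, each accepting state triggers the inner search
        return s in accepting and red_dfs(s, s)

    return blue_dfs(init)

# state 2 loops back to the accepting state 1, so an accepting cycle exists
graph = {0: [1], 1: [2], 2: [1]}
found = nested_dfs(lambda s: graph[s], init=0, accepting={1})   # True
```

The persistent `red` set is what keeps the whole procedure linear in the size of the graph: each state is visited at most once by each of the two searches.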

Continue reading

Strategies in typecase optimization

By Jim Newton, Didier Verna

2018-04-05

In ELS 2018, the 11th european lisp symposium

Abstract

We contrast two approaches to optimizing the Common Lisp typecase macro expansion. The first approach is based on heuristics intended to estimate run time performance of certain type checks involving Common Lisp type specifiers. The technique may, depending on code size, exhaustively search the space of permutations of the type checks, intent on finding the optimal order. With the second technique, we represent a typecase form as a type specifier, encapsulating the side-effecting non-Boolean parts so as to appear compatible with the Common Lisp type algebra operators. The encapsulated expressions are specially handled so that the Common Lisp type algebra functions preserve them, and we can unwrap them after a process of Boolean reduction into efficient Common Lisp code, maintaining the appropriate side effects but eliminating unnecessary type checks. Both approaches allow us to identify unreachable code, test for exhaustiveness of the clauses and eliminate type checks which are calculated to be redundant.
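One of the analyses mentioned above, unreachable-clause detection, has a rough Python analogue: since `typecase` tries its clauses in order, a clause whose type is a subtype of an earlier clause’s type can never fire. The helper below is hypothetical and only mirrors the idea with `isinstance`-style type checks, not the Common Lisp type algebra used in the paper.

```python
def unreachable_clauses(clauses):
    """clauses: ordered (type, handler) pairs; return the indices of clauses
    shadowed by an earlier, more general type."""
    return [i for i, (t, _) in enumerate(clauses)
            if any(issubclass(t, u) for u, _ in clauses[:i])]

# `bool` is a subtype of `int` in Python, so the bool clause can never fire
clauses = [(int, "int-handler"), (bool, "bool-handler"), (str, "str-handler")]
dead = unreachable_clauses(clauses)   # [1]
```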

Continue reading