Publications

High throughput automated detection of axial malformations in fish embryo

Abstract

Fish embryo models are widely used as screening tools to assess the efficacy and/or toxicity of chemicals. This assessment involves analysing embryo morphological abnormalities. In this article, we propose a multi-scale pipeline for the automated classification of fish embryos (Medaka: Oryzias latipes) based on the presence or absence of spine malformations. The proposed pipeline relies on the acquisition of 2D images of fish embryos, on feature extraction using mathematical morphology operators, and on machine learning classification. After image acquisition, segmentation tools are used to focus on the embryo before several morphological features are analysed. A machine learning approach is then applied to these features to automatically classify embryos according to the detection of axial malformations. We built and validated our learning model on 1,459 images with a 10-fold cross-validation, by comparison with the gold standard of 3D observations performed under a microscope by a trained operator. Our pipeline correctly classifies 85% of the cases included in the database. This percentage is similar to the success rate of a trained human operator working on 2D images; indeed, most of the errors are due to the inherent limitations of 2D images compared to 3D observations. The key benefit of our approach is the low computational cost of our image analysis pipeline, which guarantees optimal throughput analysis.
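
As a rough illustration of the pipeline's overall shape (segmentation mask, morphological feature extraction, then supervised classification with 10-fold cross-validation), here is a minimal Python sketch; the features and classifier below are illustrative placeholders, not the descriptors used in the paper.

```python
# Sketch: segmentation mask -> morphological features -> 10-fold CV.
# The three features below are illustrative placeholders, not the paper's.
import numpy as np
from skimage import measure, morphology
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def spine_features(embryo_mask):
    """Simple shape descriptors computed on a binary embryo segmentation."""
    skeleton = morphology.skeletonize(embryo_mask)        # rough medial axis
    props = measure.regionprops(embryo_mask.astype(int))[0]
    return [skeleton.sum(), props.eccentricity, props.solidity]

# X = np.array([spine_features(m) for m in masks])  # one row per embryo image
# y = labels                                        # 1 = malformed, 0 = normal
# scores = cross_val_score(RandomForestClassifier(), X, y, cv=10)
```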

Continue reading

Intervertebral disc segmentation using mathematical morphology—A CNN-free approach

By Edwin Carlinet, Thierry Géraud

2018-11-26

In Proceedings of the 5th MICCAI workshop & challenge on computational methods and clinical applications for spine imaging (CSI)

Abstract

In the context of the challenge of “automatic InterVertebral Disc (IVD) localization and segmentation from 3D multi-modality MR images” that took place at MICCAI 2018, we have proposed a segmentation method based on simple image processing operators. Most of these operators come from the mathematical morphology framework. Driven by some prior knowledge on IVDs (basic information about their shape and the distance between them), and on their contrast in the different modalities, we were able to correctly segment almost every IVD. The most interesting feature of our method is that it relies on the morphological structure called the Tree of Shapes, which is another way to represent the image contents. This structure arranges all the connected components of an image obtained by thresholding into a tree, where each node represents a particular region. Such a structure is powerful and versatile for pattern recognition tasks in medical imaging.
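
The Tree of Shapes itself is non-trivial to implement; as a loose illustration of the underlying idea (connected components of threshold sets nested into a tree), here is a plain component-tree sketch, which is our assumption and not the authors' code:

```python
# Sketch: nest connected components of upper threshold sets into a tree.
import numpy as np
from scipy import ndimage

def component_tree(image, levels):
    """Map each (level, component id) to its parent at the next lower level."""
    parent = {}
    prev_labels = prev_n = prev_lvl = None
    for lvl in sorted(levels, reverse=True):      # upper level sets shrink
        labels, n = ndimage.label(image >= lvl)
        if prev_labels is not None:
            for comp in range(1, prev_n + 1):
                # each component at the higher threshold lies entirely
                # inside exactly one component at the lower threshold
                containing = labels[prev_labels == comp][0]
                parent[(prev_lvl, comp)] = (lvl, int(containing))
        prev_labels, prev_n, prev_lvl = labels, n, lvl
    return parent
```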

Continue reading

Segmentation of gliomas and prediction of patient overall survival: A simple and fast procedure

By Élodie Puybareau, Guillaume Tochon, Joseph Chazalon, Jonathan Fabrizio

2018-11-05

In Proceedings of the workshop on brain lesions (BrainLes), in conjunction with MICCAI

Abstract

In this paper, we propose a fast automatic method that segments gliomas without any manual assistance, using a fully convolutional network (FCN) and transfer learning. From this segmentation, we predict the patient's overall survival using only the results of the segmentation and a home-made atlas. The FCN is the base network of VGG-16, pre-trained on ImageNet for natural image classification, and fine-tuned with the training dataset of the MICCAI 2018 BraTS Challenge. It relies on the “pseudo-3D” method published at ICIP 2017, which allows for segmenting objects from 2D color images that contain 3D information from MRI volumes. For each nth slice of the volume to segment, we consider three images, corresponding to the (n-1)th, nth, and (n+1)th slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN to obtain a 2D segmentation of the nth slice. We process all slices, then stack the results to form the 3D output segmentation. With such a technique, the segmentation of a 3D volume takes only a few seconds. The prediction is based on Random Forests, and has the advantage of not depending on the acquisition modality, making it robust to inter-database variability.
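
The pseudo-3D slice assembly is easy to state in code; below is a minimal sketch (border handling by clamping is our assumption, as the abstract does not specify it):

```python
# Sketch: turn a gray-level volume into per-slice RGB "pseudo-3D" inputs.
import numpy as np

def pseudo_3d_slices(volume):
    """Yield (n, rgb) where channels are slices n-1, n, n+1 of the volume."""
    last = volume.shape[0] - 1
    for n in range(last + 1):
        lo, hi = max(n - 1, 0), min(n + 1, last)   # clamp at the borders
        yield n, np.stack([volume[lo], volume[n], volume[hi]], axis=-1)
```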

Continue reading

Representing and computing with types in dynamically typed languages

Abstract

In this report, we present code generation techniques related to run-time type checking of heterogeneous sequences. Traditional regular expressions can be used to recognize well-defined sets of character strings called rational languages, or sometimes regular languages. Newton et al. present an extension whereby a dynamic programming language may recognize a well-defined set of heterogeneous sequences, such as lists and vectors. As with the analogous string-matching regular expression theory, matching these regular type expressions can also be achieved using a finite state machine (a deterministic finite automaton, DFA). Constructing such a DFA can be time-consuming. The approach we chose uses meta-programming to intervene at compile-time, generating efficient functions specific to each DFA, and allowing the compiler to further optimize the functions if possible. The functions are made available for use at run-time. Without this use of meta-programming, the program might otherwise be forced to construct the DFA at run-time. The excessively high cost of such a construction would likely far outweigh the time needed to match a string against the expression. Our technique involves hooking into the Common Lisp type system via the DEFTYPE macro. The first time the compiler encounters a relevant type specifier, the appropriate DFA is created, which may be an $\Omega(2^n)$ operation, from which specific low-level code is generated to match that specific expression. Thereafter, when the type specifier is encountered again, the same pre-generated function can be used. The generated code has $\Theta(n)$ complexity at run-time. A complication of this approach, which we explain in this report, is that to build the DFA we must calculate a disjoint type decomposition, which is time-consuming and also leads to sub-optimal use of TYPECASE in machine-generated code. To handle this complication, we use our own macro OPTIMIZED-TYPECASE in our machine-generated code. Uses of this macro are also implicitly expanded at compile time. Our macro expansion uses BDDs (Binary Decision Diagrams) to optimize the OPTIMIZED-TYPECASE into low-level code, maintaining the TYPECASE semantics but eliminating redundant type checks. In the report we also describe an extension of BDDs to accommodate subtyping in the Common Lisp type system, as well as an in-depth analysis of worst-case sizes of BDDs.
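
To make the TYPECASE issue concrete, here is a toy Python analogue (our illustration, not the authors' Lisp macro) of the redundancy that OPTIMIZED-TYPECASE removes: a naive cascade may test the same type predicate several times, while the decision-diagram form evaluates each predicate at most once along any path.

```python
# Toy illustration of redundant vs. decision-diagram-ordered type checks.
def naive_dispatch(x):
    if isinstance(x, int) and not isinstance(x, bool):
        return "int"
    if isinstance(x, (int, float)):          # re-tests int
        return "number"
    return "other"

def bdd_style_dispatch(x):                   # same semantics, no re-tests
    if isinstance(x, int):
        return "number" if isinstance(x, bool) else "int"
    return "number" if isinstance(x, float) else "other"
```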

Continue reading

An image processing library in modern C++: Getting simplicity and efficiency with generic programming

By Michaël Roynard, Edwin Carlinet, Thierry Géraud

2018-10-25

In Proceedings of the 2nd workshop on reproducible research in pattern recognition (RRPR 2018)

Abstract

As there are as many usages of an Image Processing library as there are clients, each one may expect different services from it. Some clients may look for efficient and production-quality algorithms, some may look for a large tool set, while others may look for extensibility and genericity to inter-operate with their own code base… but in most cases, they want a simple-to-use and stable product. For a C++ Image Processing library designer, it is difficult to reconcile genericity, efficiency and simplicity at the same time. Modern C++ (post-2011) brings new features for library developers that help design a software solution combining those three points. In this paper, we develop a method using these facilities to abstract the library components and augment the genericity of the algorithms. Furthermore, this method is not specific to image processing; it can be applied to any C++ scientific library.

Continue reading

Deep neural networks for aberrations compensation in digital holographic imaging of the retina

By Julie Rivet, Guillaume Tochon, Serge Meimon, Michel Pâques, Thierry Géraud, Michael Atlan

2018-10-25

In Proceedings of the SPIE conference on adaptive optics and wavefront control for biological systems V

Abstract

In computational imaging by digital holography, the lateral resolution of retinal images is limited to about 20 microns by the aberrations of the eye. To overcome this limitation, the aberrations have to be canceled. Digital aberration compensation can be performed by post-processing of full-field digital holograms. Aberration compensation was demonstrated from wavefront measurement by reconstruction of digital holograms in subapertures, and by measurement of a guide star hologram. Yet these wavefront measurement methods have limited accuracy in practice. For holographic tomography of the human retina, image reconstruction was demonstrated by iterative digital aberration compensation, by minimization of the local entropy of speckle-averaged tomographic volumes. However, image-based aberration compensation is time-consuming, preventing real-time image rendering. We are investigating a new digital aberration compensation scheme with a deep neural network to circumvent the limitations of these aberration correction methods. To train the network, 28,000 anonymized images of eye fundus from patients of the Quinze-Vingts hospital in Paris have been collected, and synthetic interferograms have been reconstructed digitally by simulating the propagation of eye fundus images recorded with standard cameras. With a U-Net architecture, we demonstrate defocus correction of these complex-valued synthetic interferograms. Other aberration orders will be corrected with the same method, to improve lateral resolution up to the diffraction limit in digital holographic imaging of the retina.
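
The synthetic-data step (simulating propagation to defocus fundus images) can be approximated with a paraxial Fresnel transfer function applied in the Fourier domain; the sketch below is our assumption of one plausible simulation, not the authors' pipeline:

```python
# Sketch: simulate defocus of a complex field via Fresnel propagation.
import numpy as np

def simulate_defocus(field, wavelength, distance, pixel_pitch):
    """Propagate a complex 2D field by `distance` (paraxial approximation)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function: quadratic phase in spatial frequencies.
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```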

Continue reading

Left atrial segmentation in a few seconds using fully convolutional network and transfer learning

By Élodie Puybareau, Zhou Zhao, Younes Khoudli, Edwin Carlinet, Yongchao Xu, Jérôme Lacotte, Thierry Géraud

2018-10-25

In Proceedings of the workshop on statistical atlases and computational modelling of the heart (STACOM 2018), in conjunction with MICCAI

Abstract

In this paper, we propose a fast automatic method that segments the left atrial cavity from 3D GE-MRIs without any manual assistance, using a fully convolutional network (FCN) and transfer learning. This FCN is the base network of VGG-16, pre-trained on ImageNet for natural image classification, and fine-tuned with the training dataset of the MICCAI 2018 Atrial Segmentation Challenge. It relies on the “pseudo-3D” method published at ICIP 2017, which allows for segmenting objects from 2D color images that contain 3D information from MRI volumes. For each $n^{\text{th}}$ slice of the volume to segment, we consider three images, corresponding to the $(n-1)^{\text{th}}$, $n^{\text{th}}$, and $(n+1)^{\text{th}}$ slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN to obtain a 2D segmentation of the $n^{\text{th}}$ slice. We process all slices, then stack the results to form the 3D output segmentation. With such a technique, the segmentation of the left atrial cavity of a 3D volume takes only a few seconds. We obtain a Dice score of 0.92 both on the training set, in our experiments before the challenge, and on the test set of the challenge.
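
For reference, the Dice score reported above compares two binary masks; a minimal implementation:

```python
# Sketch: Dice score between a predicted and a ground-truth binary mask.
import numpy as np

def dice_score(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```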

Continue reading

Document detection in videos captured by smartphones using a saliency-based method

By Minh Ôn Vũ Ngọc, Jonathan Fabrizio, Thierry Géraud

2018-09-20

In International conference on document analysis and recognition workshops (ICDARW)

Abstract

Smartphones are now widely used to digitize paper documents. Document detection is the first important step of the digitization process. Whereas many methods extract lines from contours as candidates for the document boundary, we present in this paper a region-based approach. A key feature of our method is that it relies on visual saliency, using a distance recently introduced in mathematical morphology. We show that the performance of our method is competitive with state-of-the-art methods on the ICDAR Smartdoc 2015 Competition dataset.
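
The paper's saliency relies on a specific morphological distance that we do not reproduce here; as a very loose sketch of the region-based idea only (saliency map, largest region, page location), under those stated assumptions:

```python
# Rough sketch: crude saliency proxy -> largest region -> bounding box.
import numpy as np
from scipy import ndimage
from skimage import color, filters, morphology

def detect_document(frame_rgb):
    gray = color.rgb2gray(frame_rgb)
    smooth = filters.gaussian(gray, sigma=3)           # crude saliency proxy
    mask = smooth > filters.threshold_otsu(smooth)
    mask = morphology.remove_small_objects(mask, min_size=1024)
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    ys, xs = np.nonzero(labels == (np.argmax(sizes) + 1))
    return xs.min(), ys.min(), xs.max(), ys.max()      # page bounding box
```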

Continue reading

Recognizing heterogeneous sequences by rational type expression

By Jim Newton, Didier Verna

2018-09-14

In Proceedings of the META'18 workshop on meta-programming techniques and reflection

Abstract

We summarize a technique for writing functions which recognize types of heterogeneous sequences in Common Lisp. The technique employs sequence recognition functions, generated at compile-time and executed at run-time. The technique we demonstrate extends the Common Lisp type system, exploiting the theory of rational languages, Binary Decision Diagrams, and the Turing-complete macro facility of Common Lisp. The resulting system uses meta-programming to move an exponential-complexity operation from run-time to compile-time, leaving a highly optimized linear-complexity operation for run-time.
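
A minimal Python analogue of that shift (our sketch, not the authors' Common Lisp implementation): the matcher for a given "rational type expression" is built once and cached, standing in for compile-time generation, and every later check is a single linear pass.

```python
# Sketch: build a matcher once per pattern; reuse it in linear time.
from functools import lru_cache

@lru_cache(maxsize=None)                 # stands in for compile-time caching
def compile_rte(pattern):
    """Cyclic DFA for (T1 T2 ... Tk)*, given as a tuple of types."""
    k = len(pattern)
    def match(seq):
        state = 0
        for item in seq:
            if not isinstance(item, pattern[state]):
                return False
            state = (state + 1) % k
        return state == 0                # accept only whole repetitions
    return match

match_str_int = compile_rte((str, int))  # matcher built once
print(match_str_int(["a", 1, "b", 2]))   # True
print(match_str_int(["a", "b"]))         # False
```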

Continue reading

A theoretical and numerical analysis of the worst-case size of reduced ordered binary decision diagrams

By Jim Newton, Didier Verna

2018-08-28

In ACM Transactions on Computational Logic

Abstract

Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced Ordered BDDs), are a common data structure for manipulating Boolean expressions, used in integrated circuit design, type inferencers, model checkers, and many other applications. Although the ROBDD is a lightweight data structure to implement, its behavior in terms of memory allocation may not be obvious to the program architect. We explore experimentally, numerically, and theoretically the typical and worst-case ROBDD sizes in terms of the number of nodes and residual compression ratios, as compared to unreduced BDDs. While our theoretical results are not surprising, as they are in keeping with previously known results, we believe our method contributes to the current body of research through our experimental and statistical treatment of ROBDD sizes. In addition, we provide an algorithm to calculate the worst-case size. Finally, we present an algorithm for constructing a worst-case ROBDD of a given number of variables. Our approach may be useful to projects deciding whether the ROBDD is the appropriate data structure to use, and in building worst-case examples to test their code.
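
The classical counting argument behind worst-case ROBDD sizes is short enough to compute directly; the sketch below is our rendering of that standard bound, not the paper's own algorithm. At level i, the node count is limited both by the number of paths from the root and by the number of Boolean functions of the remaining variables that depend on the variable tested there.

```python
# Sketch: worst-case ROBDD node count over all Boolean functions of n vars.
def worst_case_robdd_size(n):
    total = 2                                # the two terminal nodes
    for i in range(1, n + 1):
        k = n - i + 1                        # variables still to be tested
        from_above = 2 ** (i - 1)            # paths reaching level i
        # Boolean functions of k variables that depend on the first one:
        from_below = 2 ** (2 ** k) - 2 ** (2 ** (k - 1))
        total += min(from_above, from_below)
    return total

print([worst_case_robdd_size(n) for n in range(1, 7)])  # 3, 5, 7, 11, 19, ...
```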

Continue reading