Abstract
Digital Breast Tomosynthesis (DBT) is an X-ray imaging technique commonly used in breast cancer screening. Current DBT systems acquire multiple images at different angles, typically every 1 to 3 degrees, over an angular range of 15° to 40°. These images are then processed by reconstruction algorithms to generate a 3D volume that radiologists can examine for diagnostic purposes.
However, the limited angular range constrains the quality of the reconstruction, particularly its resolution along the depth axis, perpendicular to the detector. As a result, slices orthogonal to the detector plane are difficult to interpret: out-of-plane structures produce artifacts that obscure objects in adjacent planes.
The use of deep learning in this context presents several challenges. The paired real-world data needed for supervised learning are not available, and volumes from other imaging modalities cannot be used directly. Furthermore, deep learning–based reconstructions may not faithfully reflect the original measurements, and their uncertainty is left unquantified.
We propose a post-processing approach based on neural networks to enhance conventionally reconstructed volumes. By adapting volumes from another medical imaging modality and simulating their X-ray acquisition, we generate a realistic synthetic dataset that serves as ground truth for supervised training of a 3D convolutional model. This strategy significantly improves the quality of orthogonal slices.
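The simulated acquisition above can be illustrated with a deliberately toy forward model: summing a 2D phantom along slanted rays, with a depth-dependent lateral shift standing in for the X-ray geometry at each angle. This is a hypothetical sketch for intuition only (the function name, phantom, and shift-based ray approximation are illustrative, not the actual simulation pipeline used in the work):

```python
import math

def simulate_projections(volume, angles_deg):
    """Toy limited-angle forward model on a 2D slice: each projection sums
    the volume along slanted rays, approximated by integer lateral shifts
    proportional to depth. Illustrative only, not the authors' simulator."""
    depth, width = len(volume), len(volume[0])
    projections = []
    for theta in angles_deg:
        t = math.tan(math.radians(theta))
        proj = [0.0] * width
        for z in range(depth):
            shift = round(z * t)  # lateral displacement of the ray at depth z
            for x in range(width):
                xs = x + shift
                if 0 <= xs < width:
                    proj[x] += volume[z][xs]
        projections.append(proj)
    return projections

# A phantom with a single bright voxel: over a +/-15 degree sweep its shadow
# only shifts by one detector pixel, which is why depth resolution is poor
# and why limited-angle reconstruction is ill-posed.
phantom = [[0.0] * 9 for _ in range(5)]
phantom[2][4] = 1.0
projections = simulate_projections(phantom, [-15, 0, 15])
```

Applying such a forward model to adapted volumes from another modality yields (projection, ground-truth volume) pairs, which is what makes supervised training of the 3D convolutional model possible despite the absence of paired real data.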
We also aim to assess the reliability of the predicted volumes by adopting a Bayesian perspective. We distinguish between uncertainties due to model optimization (epistemic) and those arising from the ill-posed nature of the problem (aleatoric). The former is modeled by approximating the posterior predictive distribution, while the latter is captured using a Laplace distribution. By addressing known convergence issues from the literature, we obtain uncertainty maps that closely approximate the true reconstruction error.
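The two uncertainty terms can be sketched in a few lines: the aleatoric part via the negative log-likelihood of a Laplace distribution whose scale the network would predict per voxel, and the epistemic part via the spread of several stochastic forward passes approximating the posterior predictive distribution. All names and numbers below are illustrative assumptions, not the authors' implementation:

```python
import math

def laplace_nll(y, mu, b):
    # Negative log-likelihood of Laplace(mu, b): log(2b) + |y - mu| / b.
    # Trained with this loss, a network predicting both mu and b lets b
    # absorb the irreducible (aleatoric) error at each voxel.
    return math.log(2.0 * b) + abs(y - mu) / b

def epistemic_std(predictions):
    # Spread of several stochastic predictions for the same voxel (e.g.
    # from MC dropout or an ensemble), approximating epistemic uncertainty.
    n = len(predictions)
    mean = sum(predictions) / n
    var = sum((p - mean) ** 2 for p in predictions) / n
    return math.sqrt(var)

# With a fixed error |y - mu| = 0.1, the loss rewards widening the scale b
# toward the error magnitude rather than keeping it overconfidently small.
loss_tight = laplace_nll(1.0, 0.9, 0.05)
loss_matched = laplace_nll(1.0, 0.9, 0.1)
```

In practice, jointly learning the mean and the scale with such a loss is known to be unstable, which is one of the convergence issues the work addresses before the uncertainty maps track the true reconstruction error.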
Additional Information
The seminar will be given in 🇫🇷 French 🇫🇷, and is open to everyone, either in person in room KB 201 (Paris campus) or online via Teams.