Quantifying Model Uncertainty in Inverse Problems via Bayesian Deep Gradient Descent

Bibliographic Details
Published in: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 1392-1399
Main Authors: Barbano, Riccardo; Zhang, Chen; Arridge, Simon; Jin, Bangti
Format: Conference Proceeding
Language: English
Published: IEEE, 10.01.2021
DOI: 10.1109/ICPR48806.2021.9412521

Summary: Recent advances in reconstruction methods for inverse problems leverage powerful data-driven models, e.g., deep neural networks. These techniques have demonstrated state-of-the-art performance on several imaging tasks, but they often do not provide uncertainty estimates for the obtained reconstruction. In this work, we develop a scalable, data-driven, knowledge-aided computational framework to quantify model uncertainty via Bayesian neural networks. The approach builds on and extends deep gradient descent, a recently developed greedy iterative training scheme, and recasts it within a probabilistic framework. Scalability is achieved by a hybrid architecture, in which only the last layer of each block is Bayesian while the others remain deterministic, and by greedy training. The framework is showcased on one representative medical imaging modality, viz. computed tomography with either sparse-view or limited-view data, and exhibits competitive performance with respect to state-of-the-art benchmarks, e.g., total variation, deep gradient descent and learned primal-dual.
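The hybrid design described in the summary — deterministic layers within each block, with only the last layer Bayesian — can be illustrated with a minimal sketch. All names here are illustrative, and the mean-field Gaussian weight parameterization with reparameterized sampling is an assumption for demonstration, not the authors' implementation; uncertainty is read off from the spread of repeated stochastic forward passes.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_layer(x, W, b):
    # Ordinary affine layer with ReLU; weights are fixed point estimates.
    return np.maximum(W @ x + b, 0.0)

class BayesianLinear:
    """Mean-field Gaussian last layer: each weight has a mean and a
    log-variance, and every forward pass samples a fresh weight matrix
    via the reparameterization trick (illustrative assumption)."""
    def __init__(self, in_dim, out_dim):
        self.mu = rng.normal(0.0, 0.1, size=(out_dim, in_dim))
        self.log_var = np.full((out_dim, in_dim), -4.0)

    def forward(self, x):
        eps = rng.standard_normal(self.mu.shape)
        W = self.mu + np.exp(0.5 * self.log_var) * eps  # sampled weights
        return W @ x

def hybrid_block(x, W_det, b_det, bayes_last):
    # Hybrid block: all layers deterministic except the Bayesian last layer.
    h = deterministic_layer(x, W_det, b_det)
    return bayes_last.forward(h)

# Monte Carlo uncertainty: repeat the stochastic forward pass and
# summarize the per-output spread.
x = rng.standard_normal(8)
W_det = rng.normal(0.0, 0.3, size=(8, 8))
b_det = np.zeros(8)
layer = BayesianLinear(8, 4)
samples = np.stack([hybrid_block(x, W_det, b_det, layer) for _ in range(64)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

Because only the last layer's weights are random variables, each Monte Carlo pass re-samples a small weight matrix rather than the whole network, which is what keeps the scheme scalable; the standard deviation across samples serves as a simple pixel- or feature-wise uncertainty estimate.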