Compressive Image Reconstruction in Reduced Union of Subspaces

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 34, No. 2, pp. 33-44
Main Authors: Miandji, Ehsan; Kronander, Joel; Unger, Jonas
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2015
ISSN: 0167-7055 (print); 1467-8659 (online)
DOI: 10.1111/cgf.12539

More Information
Summary: We present a new compressed sensing framework for the reconstruction of incomplete and possibly noisy images and their higher-dimensional variants, e.g. animations and light fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that the problem of 2D sparse signal recovery can be converted to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union-of-subspaces model while allowing sparsity in each subspace. Such a model leads to a much sparser representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light fields). We show that our method outperforms state-of-the-art algorithms in the computer graphics and image processing literature.
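The 2D-to-1D conversion mentioned in the summary can be illustrated with the standard Kronecker-product identity vec(U X Vᵀ) = (V ⊗ U) vec(X): a 2D synthesis model with a left and a right dictionary becomes an ordinary 1D sparse-recovery problem over the Kronecker dictionary. The sketch below is a minimal NumPy illustration under assumed settings; the random 8x8 dictionaries U and V stand in for the paper's trained dictionary ensemble, and a plain orthogonal matching pursuit routine stands in for whichever 1D sparse solver one plugs in. It is not the authors' implementation.

```python
import numpy as np

# Sketch of the generic 2D -> 1D reduction: for patch = U @ X @ V.T,
# vec(patch) = kron(V, U) @ vec(X), where vec() stacks columns.
# U, V, and the OMP routine are illustrative stand-ins, NOT the
# paper's trained dictionaries or its specific solver.

rng = np.random.default_rng(0)
m = 8                                   # patch side length (assumed)
U = rng.standard_normal((m, m))         # left (row-space) dictionary
V = rng.standard_normal((m, m))         # right (column-space) dictionary

# A sparse 2D coefficient matrix X (at most 4 nonzeros)
X = np.zeros((m, m))
X[rng.integers(0, m, 4), rng.integers(0, m, 4)] = rng.standard_normal(4)

patch = U @ X @ V.T                     # synthesized 2D patch
D = np.kron(V, U)                       # equivalent 1D dictionary
y = patch.flatten(order="F")            # vec(): column-major stacking
assert np.allclose(y, D @ X.flatten(order="F"))   # the identity holds

def omp(A, b, k):
    """Plain orthogonal matching pursuit: greedily select k atoms of A."""
    residual, support, coef = b.copy(), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Simulate an incomplete patch: only 40 of the 64 pixels are observed,
# mirroring the "reconstruction of incomplete images" setting.
keep = rng.choice(m * m, size=40, replace=False)
x_hat = omp(D[keep], y[keep], k=4)      # any 1D sparse solver works here
patch_hat = (D @ x_hat).reshape((m, m), order="F")
print("reconstruction error:", np.linalg.norm(patch_hat - patch))
```

Because the identity is exact, any off-the-shelf 1D solver (OMP, basis pursuit, etc.) can operate on the Kronecker dictionary directly; when X is sparse enough relative to the number of observed pixels, the missing pixels are recovered from the estimated coefficients.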
Bibliography: Supporting Information
ArticleID: CGF12539
istex: 719C6E09B9806C60A2BBFDDD910E24F84661E5CB
ark:/67375/WNG-WGL2BL7H-X