Anchored Neighborhood Regression for Fast Example-Based Super-Resolution

Bibliographic Details
Published in: 2013 IEEE International Conference on Computer Vision, pp. 1920 - 1927
Main Authors: Timofte, Radu; De Smet, Vincent; Van Gool, Luc
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.12.2013
ISSN: 1550-5499
DOI: 10.1109/ICCV.2013.241

More Information
Summary: Recently there have been significant advances in image upscaling, or image super-resolution, based on a dictionary of low- and high-resolution exemplars. The running time of these methods is often ignored, despite being a critical factor for real applications. This paper proposes fast super-resolution methods without compromising quality. First, we support the use of sparse learned dictionaries in combination with neighbor embedding methods. In this case, the nearest neighbors are computed using the correlation with the dictionary atoms rather than the Euclidean distance. Moreover, we show that most of the current approaches reach top performance for the right parameters. Second, we show that using global collaborative coding has considerable speed advantages, reducing the super-resolution mapping to a precomputed projective matrix. Third, we propose anchored neighborhood regression: we anchor the neighborhood embedding of a low-resolution patch to the nearest atom in the dictionary and precompute the corresponding embedding matrix. These proposals are contrasted with current state-of-the-art methods on standard images. We obtain similar or improved quality and one or two orders of magnitude speed improvements.
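The summary describes the anchored neighborhood regression idea in enough detail for a small sketch to make it concrete. The following NumPy illustration is not the paper's implementation: the function names, the ridge parameter lam, and the neighborhood size of 40 are assumptions made for the example; only the structure, per-atom projection matrices computed offline and an online nearest-atom lookup by correlation followed by one matrix multiplication per patch, follows the summary above.

    import numpy as np

    def precompute_anchored_projections(Dl, Dh, neighborhood_size=40, lam=0.01):
        # Dl: (d_l, K) low-resolution dictionary, atoms assumed l2-normalized
        # Dh: (d_h, K) corresponding high-resolution dictionary
        K = Dl.shape[1]
        corr = Dl.T @ Dl                       # atom-to-atom correlations
        projections = []
        for j in range(K):
            # neighborhood of atom j: its most correlated dictionary atoms
            nn = np.argsort(-corr[j])[:neighborhood_size]
            Nl, Nh = Dl[:, nn], Dh[:, nn]
            # ridge regression projection: P_j = Nh (Nl^T Nl + lam*I)^-1 Nl^T
            P = Nh @ np.linalg.solve(Nl.T @ Nl + lam * np.eye(len(nn)), Nl.T)
            projections.append(P)
        return projections

    def upscale_patch(y, Dl, projections):
        # anchor: the dictionary atom most correlated with the LR patch feature y
        j = int(np.argmax(Dl.T @ y))
        # one precomputed matrix-vector product gives the HR patch estimate
        return projections[j] @ y

Because all regression matrices are computed offline, upscaling a patch at run time costs only a correlation lookup and a single matrix-vector product, which is consistent with the one to two orders of magnitude speed improvement reported in the summary.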