Low-rank approximation pursuit for matrix completion

Bibliographic Details
Published in: Mechanical Systems and Signal Processing, Vol. 95, pp. 77-89
Main Authors: Xu, An-Bao; Xie, Dongxiu
Format: Journal Article
Language: English
Published: Berlin: Elsevier Ltd (Elsevier BV), 01.10.2017
ISSN: 0888-3270, 1096-1216
DOI: 10.1016/j.ymssp.2017.03.024

More Information
Summary:
• We propose a computationally more efficient greedy algorithm for matrix completion that extends orthogonal rank-one matrix pursuit from selecting a single candidate per iteration to selecting multiple candidates, all of which are added to the basis set.
• We further reduce the storage complexity of the basic algorithm by using an economic weight-updating rule, and we show that both versions of the algorithm achieve linear convergence.
• We count the floating-point operations of our LRAP algorithm and of its more economical version, ELRAP, to show that both scale well to large problems.
• To verify efficiency, we compare LRAP and ELRAP with three state-of-the-art matrix completion algorithms on large-scale data sets such as Jester and MovieLens.

We consider the matrix completion problem, which aims to construct a low-rank matrix X that approximates a given large matrix Y from partially known sample data in Y. In this paper we introduce an efficient greedy algorithm for this problem. It generalizes the orthogonal rank-one matrix pursuit method (OR1MP) by creating s ⩾ 1 candidates per iteration via low-rank matrix approximation; because s ⩾ 1 candidates are selected in each iteration step, our approach needs fewer iterations than OR1MP to achieve the same results. The algorithm comes in two forms: the standard one, which uses the Lanczos algorithm to compute partial SVDs, and a variant that replaces this step with a randomized low-rank approximation, making it computationally inexpensive. The storage complexity can be reduced further by using a weight-updating rule, yielding an economic version of the algorithm. We prove that all our algorithms converge linearly. Numerical experiments on image reconstruction and recommendation problems illustrate the accuracy and efficiency of our algorithms.
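To make the pursuit idea in the summary concrete, the following is a minimal sketch of a greedy low-rank pursuit for matrix completion in the spirit described above: each iteration takes the top s singular directions of the (zero-filled) residual on the observed entries as new rank-one bases, then refits all basis weights by least squares on the known entries. This is an illustrative reconstruction, not the authors' exact LRAP/ELRAP implementation; the function name, the full (non-partial) SVD, and the dense basis storage are simplifying assumptions.

```python
import numpy as np

def greedy_pursuit_sketch(Y, mask, rank_budget=10, s=2):
    """Hedged sketch of a greedy rank-one pursuit for matrix completion.

    Y    : full matrix (only entries where mask is True are treated as known)
    mask : boolean array marking observed entries
    Each iteration appends s rank-one bases u_i v_i^T from an SVD of the
    zero-filled residual, then refits all weights on the observed entries.
    """
    residual = np.where(mask, Y, 0.0)
    bases = []                          # rank-one basis matrices M_i = u v^T
    X = np.zeros_like(Y, dtype=float)
    while len(bases) < rank_budget:
        # top-s singular triplets of the residual (a partial/randomized SVD
        # would be used here in a scalable implementation)
        U, S, Vt = np.linalg.svd(residual, full_matrices=False)
        for i in range(s):
            bases.append(np.outer(U[:, i], Vt[i, :]))
        # refit the weights theta by least squares on the observed entries
        A = np.stack([B[mask] for B in bases], axis=1)
        theta, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
        X = sum(t * B for t, B in zip(theta, bases))
        residual = np.where(mask, Y - X, 0.0)
    return X
```

Because the full weight vector is re-solved every iteration, the observed-entry residual is nonincreasing; the "economic" variant mentioned in the summary would instead update the weights incrementally to avoid storing all previous bases at full precision.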