Block-Term Tensor Decomposition Via Constrained Matrix Factorization

Bibliographic Details
Published in: 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6
Main Authors: Fu, Xiao; Huang, Kejun
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2019
DOI: 10.1109/MLSP.2019.8918708


More Information
Summary: In this work, we consider the problem of factoring a third-order tensor into multilinear rank-(L_r, L_r, 1) terms. This model, referred to as the rank-(L_r, L_r, 1) block-term decomposition (BTD), finds many applications in signal processing, especially blind separation of smooth sources and unmixing of spectral-spatial data (e.g., hyperspectral images). On the other hand, finding the latent factors of the rank-(L_r, L_r, 1) BTD poses a very challenging optimization problem. Some computational tools designed for canonical polyadic decomposition (CPD), e.g., alternating least squares (ALS) and Levenberg-Marquardt (LM) based algorithms, can be modified to handle rank-(L_r, L_r, 1) BTD. Nonetheless, these methods essentially treat rank-(L_r, L_r, 1) BTD as a special CPD problem. This raises a number of challenges, since rank-(L_r, L_r, 1) BTD can be viewed as a CPD problem with rank-deficient latent factors and high CP rank, and these are known to be hard cases for CPD algorithms. In this work, we reformulate the rank-(L_r, L_r, 1) BTD problem as a matrix rank-constrained matrix factorization problem. We propose a simple algorithm that combines alternating optimization and projected gradient, so that the per-iteration complexity is much smaller than those of the ALS- and LM-based algorithms. We also show that the algorithm converges to a stationary point of the problem of interest at a sublinear rate, although nonconvex constraints are involved. Numerical experiments are conducted to showcase the effectiveness of our algorithm.
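The alternating-optimization-plus-projected-gradient idea in the summary can be sketched in a few lines of NumPy. This is an illustrative sketch assembled from the abstract alone, not the authors' implementation; all function and variable names (`btd_lr_lr_1`, `project_rank`, `E`, `C`) are hypothetical. It fits the K x (I*J) mode-3 unfolding Y as C @ S^T, where column r of S is vec(E_r): the unconstrained factor C is updated by exact least squares, and each block E_r takes one projected-gradient step, with the nonconvex constraint rank(E_r) <= L enforced by truncated-SVD projection (Eckart-Young).

```python
import numpy as np

def project_rank(M, L):
    """Nearest matrix of rank <= L in Frobenius norm (Eckart-Young),
    computed via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :L] * s[:L]) @ Vt[:L]

def btd_lr_lr_1(Y, I, J, R, L, n_iter=300, seed=0):
    """Hypothetical sketch: fit the K x (I*J) mode-3 unfolding Y as
    C @ S.T, where S[:, r] = vec(E_r) and rank(E_r) <= L for each r."""
    rng = np.random.default_rng(seed)
    K = Y.shape[0]
    C = rng.standard_normal((K, R))
    E = [rng.standard_normal((I, J)) for _ in range(R)]
    for _ in range(n_iter):
        S = np.column_stack([Er.ravel() for Er in E])      # (I*J) x R
        # Block 1: exact least squares for the unconstrained factor C.
        C = np.linalg.lstsq(S, Y.T, rcond=None)[0].T
        # Block 2: one projected-gradient step per rank-constrained term.
        G = C.T @ (C @ S.T - Y)                            # grad of 0.5||Y - C S^T||^2 wrt S^T
        step = 1.0 / (np.linalg.norm(C.T @ C, 2) + 1e-12)  # 1/Lipschitz step size
        for r in range(R):
            E[r] = project_rank(E[r] - step * G[r].reshape(I, J), L)
    return C, E
```

Both block updates are cheap (one small least-squares solve and R thin SVDs per iteration), which is consistent with the abstract's claim of low per-iteration complexity relative to ALS- or LM-style solvers; the 1/Lipschitz step makes each projected-gradient step non-increasing in the objective.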