On Gradient Based Descent Algorithms for Joint Diagonalization of Matrices

Bibliographic Details
Published in: 2024 32nd European Signal Processing Conference (EUSIPCO), pp. 2632-2636
Main Authors: Troedsson, Erik; Carlsson, Marcus; Wendt, Herwig
Format: Conference Proceeding
Language: English
Published: European Association for Signal Processing (EURASIP), 26.08.2024
ISSN: 2076-1465
DOI: 10.23919/EUSIPCO63174.2024.10715124

Summary: Joint diagonalization of collections of matrices, i.e., the problem of finding a joint set of approximate eigenvectors, is an important problem that appears in many application contexts. It is commonly formulated as finding the minimizer, over the set of all possible bases, of a certain non-convex functional that measures the size of the off-diagonal elements. Many approaches have been studied in the literature, some of the most popular ones working with approximations of this cost functional. In this work, we deviate from this philosophy and instead attempt to directly find a minimizer using the gradient and Hessian of the original functional. Our main contributions are as follows. First, we design and study gradient descent and conjugate gradient algorithms. Second, we show that the intricate geometry of the functional makes it beneficial to change basis at each iteration, leading to faster convergence. Third, we conduct extensive numerical experiments indicating that the proposed descent methods yield competitive results compared to popular methods such as WJDTE.
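
To make the formulation concrete, the sketch below illustrates the kind of functional the summary describes: given matrices A_1, ..., A_K and a basis V, minimize f(V) = sum_k ||off(V^{-1} A_k V)||_F^2, where off() zeroes the diagonal. This is a minimal illustration only, under the simplifying assumption of symmetric matrices and orthogonal bases V (so V^{-1} = V^T); the closed-form gradient, the fixed step size, and the QR re-orthonormalization are illustrative choices and not the authors' algorithm, which handles general bases and changes basis at each iteration.

import numpy as np

def off(M):
    # Off-diagonal part of M: zero out the diagonal.
    return M - np.diag(np.diag(M))

def jd_cost(V, As):
    # f(V) = sum_k || off(V^T A_k V) ||_F^2  (orthogonal-V special case).
    return sum(np.linalg.norm(off(V.T @ A @ V), 'fro')**2 for A in As)

def jd_gradient_descent(As, n_iter=500, step=1e-3):
    # Plain gradient descent on f over orthogonal V, for symmetric A_k.
    # In that case the Euclidean gradient is
    #     grad f(V) = 4 * sum_k A_k V off(V^T A_k V).
    # The QR factorization after each step re-orthonormalizes the iterate
    # (an illustrative retraction; step size may need tuning per problem).
    n = As[0].shape[0]
    V = np.eye(n)
    for _ in range(n_iter):
        G = 4 * sum(A @ V @ off(V.T @ A @ V) for A in As)
        V, _ = np.linalg.qr(V - step * G)
    return V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, K = 5, 10
    # Build symmetric matrices sharing a common eigenbasis U, plus noise,
    # so they are approximately (not exactly) jointly diagonalizable.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    As = []
    for _ in range(K):
        D = np.diag(rng.standard_normal(n))
        N = 1e-3 * rng.standard_normal((n, n))
        As.append(U @ D @ U.T + (N + N.T) / 2)
    V = jd_gradient_descent(As)
    print("cost before:", jd_cost(np.eye(n), As))
    print("cost after: ", jd_cost(V, As))

On the test data above, the recovered V should drive the off-diagonal cost down toward the noise floor; with exactly commuting matrices it would reach (numerical) zero.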