Proximal Multitask Learning Over Networks With Sparsity-Inducing Coregularization

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 64, No. 23, pp. 6329–6344
Main Authors: Nassif, Roula; Richard, Cedric; Ferrari, Andre; Sayed, Ali H.
Format: Journal Article
Language: English
Published: IEEE (Institute of Electrical and Electronics Engineers), 01.12.2016
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/TSP.2016.2601282

Summary: In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vector. Cooperation among clusters is beneficial when the optimal models of adjacent clusters have a good number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by nondifferentiable terms to promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced. Then, it is specialized to the case of sparsity promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of ℓ1-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
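In its simplest per-entry-weighted instance, the proximal operator of a weighted ℓ1-norm mentioned in the summary reduces to entrywise soft-thresholding. The sketch below illustrates only this basic building block (function name and example values are illustrative, not taken from the paper, which treats the more general weighted *sum* of ℓ1-norms):

```python
import numpy as np

def prox_weighted_l1(v, w):
    """Proximal operator of x -> sum_i w_i * |x_i| evaluated at v.

    Closed form is entrywise soft-thresholding:
        sign(v_i) * max(|v_i| - w_i, 0).
    """
    return np.sign(v) * np.maximum(np.abs(v) - w, 0.0)

# Entries whose magnitude falls below the corresponding weight are
# driven exactly to zero, which is what promotes sparsity.
v = np.array([3.0, -0.5, 1.2, -2.0])
w = np.array([1.0, 1.0, 0.5, 0.5])
result = prox_weighted_l1(v, w)
```

In a diffusion forward-backward scheme of the kind described above, each node would apply such a proximal step after its gradient (forward) update.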