A Proximal Dual Consensus Method for Linearly Coupled Multi-Agent Non-Convex Optimization

Bibliographic Details
Published in Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (2020), pp. 5740 - 5744
Main Authors Zhang, Jiawei; Ge, Songyang; Chang, Tsung-Hui; Luo, Zhi-Quan
Format Conference Proceeding
Language English
Published IEEE 01.05.2020
ISSN 2379-190X
DOI 10.1109/ICASSP40776.2020.9054236

More Information
Summary: Motivated by large-scale signal processing and machine learning applications, this paper considers the distributed multi-agent optimization problem for a linearly constrained non-convex problem. Each agent owns a local cost function and a local variable, but the agents are coupled with one another through the linear constraint. Most existing methods are either applicable only to convex problems or are developed under the non-convex setting subject to a specific type of linear constraint; a distributed method for solving the linearly constrained problem in the general non-convex setting is still lacking. This paper proposes such a method, called the proximal dual consensus (PDC) method, which combines a proximal technique with the dual consensus method. Theoretical analysis shows that the proposed PDC method yields a Karush-Kuhn-Tucker (KKT) solution of the linearly constrained non-convex problem with an O(1/ε) iteration complexity, where ε is the solution accuracy. The practical behavior of the proposed method is examined through numerical results.
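To make the problem class concrete, the sketch below sets up a toy linearly coupled two-agent problem, min f1(x1) + f2(x2) subject to x1 + x2 = b, and runs a generic proximal primal step followed by a dual (price) update on the coupling constraint. This is an illustrative assumption, not the paper's actual PDC algorithm: the quadratic costs f_i(x) = ½||x − c_i||², the step sizes, and the closed-form proximal update are all chosen here for simplicity.

```python
import numpy as np

def proximal_dual_sketch(c1, c2, b, rho=1.0, alpha=0.2, iters=200):
    """Hypothetical proximal-primal / dual-ascent loop for
    min 0.5*||x1 - c1||^2 + 0.5*||x2 - c2||^2  s.t.  x1 + x2 = b."""
    x1 = np.zeros_like(c1)
    x2 = np.zeros_like(c2)
    lam = np.zeros_like(b)  # dual variable (price) for x1 + x2 = b
    for _ in range(iters):
        # Proximal primal step for each agent:
        #   argmin_x 0.5*||x - c_i||^2 + lam.x + (rho/2)*||x - x_prev||^2,
        # which has the closed form (c_i + rho*x_prev - lam) / (1 + rho).
        x1 = (c1 + rho * x1 - lam) / (1.0 + rho)
        x2 = (c2 + rho * x2 - lam) / (1.0 + rho)
        # Dual ascent on the coupling constraint residual.
        lam = lam + alpha * (x1 + x2 - b)
    return x1, x2, lam

c1, c2, b = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
x1, x2, lam = proximal_dual_sketch(c1, c2, b)
residual = np.linalg.norm(x1 + x2 - b)
```

In this convex toy instance the dual variable settles at a value that drives the constraint residual to zero; in the paper's non-convex setting the proximal term and the consensus mechanism are what make an analogous scheme provably reach a KKT point.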