Distributed Learning for Stochastic Generalized Nash Equilibrium Problems


Bibliographic Details
Published in IEEE Transactions on Signal Processing, Vol. 65, No. 15, pp. 3893-3908
Main Authors Yu, Chung-Kai; van der Schaar, Mihaela; Sayed, Ali H.
Format Journal Article
Language English
Published New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2017
ISSN 1053-587X, 1941-0476
DOI 10.1109/TSP.2017.2695451


More Information
Summary: This paper examines a stochastic formulation of the generalized Nash equilibrium problem where agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully distributed online learning by agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution is able to approach the Nash equilibrium in a stable manner within O(μ_max), for small step-size value μ_max and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
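
The following is a minimal sketch, not the paper's exact strategies, of the kind of penalized, constant step-size stochastic-gradient learning the summary describes, applied to a Cournot-style game with a shared capacity constraint. All numerical values (number of firms, capacity CAP, penalty parameter RHO, cost coefficients, and the heterogeneous step-sizes MU) are illustrative assumptions, and the noisy demand intercept stands in for the environment randomness of unknown distribution.

import numpy as np

rng = np.random.default_rng(0)

N = 5                                  # number of firms (agents); assumed value
CAP = 10.0                             # shared market capacity (coupled constraint)
RHO = 50.0                             # penalty parameter (chosen large)
B = 1.0                                # price sensitivity: price = a - B * total supply
COSTS = rng.uniform(1.0, 3.0, size=N)  # marginal production costs (illustrative)
MU = rng.uniform(0.001, 0.005, size=N) # heterogeneous constant step-sizes

x = np.zeros(N)                        # production quantities

for _ in range(20000):
    a = 20.0 + rng.normal(scale=2.0)   # noisy demand intercept, unknown distribution
    total = x.sum()
    price = a - B * total
    # Gradient of each agent's penalized cost:
    #   J_i(x) = COSTS[i]*x_i - price*x_i + (RHO/2) * max(0, total - CAP)^2
    violation = max(0.0, total - CAP)
    grad = COSTS - price + B * x + RHO * violation
    # Constant step-size stochastic-gradient step, keeping quantities nonnegative
    x = np.maximum(0.0, x - MU * grad)

print("penalized equilibrium estimate:", np.round(x, 3))
print("total supply:", round(x.sum(), 3), "capacity:", CAP)

With sufficiently large RHO and small step-sizes, the iterates settle near a point where the coupled capacity constraint is approximately respected, mirroring the O(μ_max) neighborhood result stated in the summary.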