Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities


Bibliographic Details
Published in: Journal of Global Optimization, Vol. 89, No. 1, pp. 143-170
Main Authors: Yang, Zhen-Ping; Zhao, Yong; Lin, Gui-Hua
Format: Journal Article
Language: English
Published: New York: Springer US, 01.05.2024
ISSN: 0925-5001, 1573-2916
DOI: 10.1007/s10898-023-01346-0

More Information
Summary: In this paper, we propose a variable sample-size optimistic mirror descent algorithm under the Bregman distance for a class of stochastic mixed variational inequalities. Unlike conventional variable sample-size extragradient algorithms, which evaluate the expected mapping twice at each iteration, our algorithm requires only one evaluation of the expected mapping and hence can significantly reduce the computational load. In the monotone case, the proposed algorithm achieves an O(1/t) ergodic convergence rate in terms of the expected restricted gap function, and, under the strongly generalized monotonicity condition, it attains a locally linear convergence rate of the Bregman distance between iterates and solutions when the sample size increases geometrically. Furthermore, we derive some results on stochastic local stability under the generalized monotonicity condition. Numerical experiments indicate that the proposed algorithm compares favorably with some existing methods.
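To make the summary concrete, the following is a minimal sketch of the kind of single-evaluation optimistic update with geometrically increasing sample sizes that the abstract describes. It is not the authors' implementation: it assumes the Euclidean Bregman distance (so the mirror step reduces to a prox/projection step), takes g to be the indicator of a box (a hypothetical choice), and uses a synthetic affine operator F(x) = Ax + b with Gaussian sampling noise whose variance shrinks as 1/N_k. The names `prox_box`, `sample_F`, and `osmd` are illustrative, not from the paper.

```python
import numpy as np

def prox_box(y, lo=-1.0, hi=1.0):
    # Prox of the indicator of the box [lo, hi]^n, i.e. projection
    # (a hypothetical choice of the nonsmooth term g).
    return np.clip(y, lo, hi)

def sample_F(x, rng, n_samples, A, b, sigma=0.1):
    # Sample-average estimate of the expected mapping F(x) = A x + b:
    # averaging n_samples noisy evaluations shrinks the noise by 1/sqrt(n_samples).
    return A @ x + b + sigma * rng.standard_normal(x.shape) / np.sqrt(n_samples)

def osmd(x0, A, b, step=0.1, iters=200, n0=4, growth=1.05, seed=0):
    # Optimistic mirror descent with variable (geometric) sample sizes:
    # only ONE evaluation of the expected mapping per iteration; the previous
    # estimate g_prev serves as the "optimistic" prediction.
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    g_prev = sample_F(x, rng, n0, A, b)      # estimate at the initial point
    x_avg = np.zeros_like(x)
    for k in range(iters):
        n_k = int(n0 * growth ** k)          # geometrically increasing sample size
        g = sample_F(x, rng, n_k, A, b)      # the single evaluation this iteration
        # optimistic step: extrapolate with 2*g_k - g_{k-1}, then prox
        x = prox_box(x - step * (2.0 * g - g_prev))
        g_prev = g
        x_avg += (x - x_avg) / (k + 1)       # ergodic (running) average
    return x, x_avg
```

For example, with A the identity and b = -x*, the operator F(x) = x - x* is strongly monotone, and both the last iterate and the ergodic average drift toward x*, mirroring the linear and O(1/t) ergodic rates mentioned in the summary.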