
\DeclareMathOperator{\Reg}{Reg}
\DeclareMathOperator{\Ham}{Ham}
\DeclareMathOperator{\Gap}{Gap}
\DeclareMathOperator{\GD}{GD}
\DeclareMathOperator{\GDA}{GDA}
\DeclareMathOperator{\EG}{EG}
\DeclareMathOperator{\OGDA}{OGDA}
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\IC}{IC}
\DeclareMathOperator{\bil}{bil}
\DeclareMathOperator{\funct}{Func}
\DeclareMathOperator{\id}{Id}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\KL}{KL}
\DeclareMathOperator{\BSS}{BSS}
\DeclareMathOperator{\BES}{BES}
\DeclareMathOperator{\BGS}{BGS}
\DeclareMathOperator{\poly}{poly}

\coltauthor{%
\Name{Noah Golowich}\thanks{Supported by a Fannie \& John Hertz Foundation Fellowship, an MIT Akamai Fellowship, and an NSF Graduate Fellowship.} \Email{[email protected]}\\
\addr Department of EECS, Massachusetts Institute of Technology
\AND
\Name{Sarath Pattathil} \Email{[email protected]}\\
\addr Department of EECS, Massachusetts Institute of Technology
\AND
\Name{Constantinos Daskalakis}\thanks{Supported by NSF Awards IIS-1741137, CCF-1617730 and CCF-1901292, by a Simons Investigator Award, by the DOE PhILMs project (No. DE-AC05-76RL01830), by the DARPA award HR00111990021, by a Google Faculty award, and by the MIT Frank Quick Faculty Research and Innovation Fellowship.} \Email{[email protected]}\\
\addr Department of EECS, Massachusetts Institute of Technology
\AND
\Name{Asuman Ozdaglar} \Email{[email protected]}\\
\addr Department of EECS, Massachusetts Institute of Technology
}

Last Iterate is Slower than Averaged Iterate in Smooth Convex-Concave Saddle Point Problems

Abstract

In this paper we study the smooth convex-concave saddle point problem. Specifically, we analyze the last iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of $O(1/T)$ ([nemirovski_prox-method_2004]). In this paper, we show that the last iterate of EG converges at a rate of $O(1/\sqrt{T})$. To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG for the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of $\Omega(1/\sqrt{T})$ for the last iterate. This lower bound therefore establishes a quadratic separation between the convergence rates of the ergodic and last iterates in smooth convex-concave saddle point problems.

keywords:
Minimax optimization, Extragradient algorithm, Last iterate convergence

1 Introduction

In this paper we study the following saddle-point problem:

\begin{equation}
\min_{\mathbf{x}\in\mathbb{R}^{m}}\;\max_{\mathbf{y}\in\mathbb{R}^{n}} f(\mathbf{x},\mathbf{y}), \tag{1}
\end{equation}

where the function $f$ is smooth, convex in $\mathbf{x}$, and concave in $\mathbf{y}$. This problem is equivalent ([facchinei_finite-dimensional_2003]) to finding a global saddle point of the function $f$, i.e., a point $(\mathbf{x}^{*},\mathbf{y}^{*})$ such that: