Diffusion Stochastic Optimization for Min-Max Problems
Abstract
The optimistic gradient method is useful in addressing minimax optimization problems. Motivated by the observation that the conventional stochastic version suffers from the need for a large batch size on the order of $\varepsilon^{-2}$ to achieve an $\varepsilon$-stationary solution, we introduce and analyze a new formulation termed Diffusion Stochastic Same-Sample Optimistic Gradient (DSS-OG). We prove its convergence and resolve the large batch issue by establishing a tighter upper bound, under the more general setting of nonconvex Polyak-Lojasiewicz (PL) risk functions. We also extend the applicability of the proposed method to the distributed scenario, where agents communicate with their neighbors via a left-stochastic protocol. To implement DSS-OG, the stochastic gradient oracles can be queried in parallel with some extra memory overhead, resulting in a complexity comparable to that of its conventional counterpart. To demonstrate the efficacy of the proposed algorithm, we conduct tests by training generative adversarial networks.
Index Terms:
Minimax optimization, nonconvex Polyak-Lojasiewicz, distributed stochastic optimization, stochastic optimistic gradient, diffusion strategy
I Introduction
Minimax optimization takes center stage in many machine learning applications, including generative adversarial networks (GANs) [2], adversarial machine learning [3], and reinforcement learning [4].
Minimax problems take the form:
$\min_{x}\;\max_{y}\; J(x,y)$    (1)
where $x$ and $y$ are the primal and dual variables, respectively, and $J(x,y)$ is the risk (objective) function. This problem is typically approached from various perspectives. In the optimization literature, it is prevalent to assume certain structure for the risk function over its variables. For instance, the risk function can be assumed (strongly) convex in the primal variable and (strongly) concave in the dual variable [5, 6], or it can be assumed one-sided nonconvex/Polyak-Lojasiewicz (PL) [7, 8, 9] in the primal variable and one-sided strongly concave/PL in the dual variable. Some works assume a two-sided PL condition [10]. It is noted that the gradient operator of a (strongly) convex or (strongly) concave objective is also (strongly) monotone; therefore, the optimization problem is generalized to a variational inequality (VI) problem in some settings. Solving VI problems relies heavily on the concept of “operators” and their associated properties. The goal of a VI problem is to determine the solution at which the gradient operator satisfies variational stability conditions, such as the Stampacchia variational inequality (SVI), which is a first-order necessary optimality condition [11, 12]. In a similar vein, the Minty variational inequality (MVI) condition and its weak version, which cover a subclass of nonmonotone problems, have also gained popularity in recent years [13, 14, 15, 16, 17]. The settings mentioned above represent the tractable conditions for which convergent algorithms are known to exist. On the other hand, the nonconvex-nonconcave formulation, which is at the core of a wide class of minimax objectives, remains intractable [18]. This fact corroborates the significant difficulty encountered in training GANs, as they often involve nonconvex-nonconcave objectives.
The optimistic gradient (OG) method and its variants have attracted significant interest due to their proven convergence for wide formulations of (1) [19, 15, 14, 7, 13, 20]. However, a good part of the theory from this body of literature breaks down when the stochastic setting is taken into consideration, where only a noisy random estimate of the gradient is available. For instance, the work [7] showed that employing a batch size on the order of $\varepsilon^{-2}$ is necessary to estimate the gradient in order to control the gradient noise of the stochastic OG method in a nonconvex-strongly-concave (SC) setting. Here, the scalar $\varepsilon$ denotes the desired level of accuracy in the solution, which is generally a very small number. Similarly, the work [20] established comparable results, also requiring a large batch size to achieve convergence in the stochastic MVI setting. These are not isolated cases; a similar need for large batch sizes appears in the works [14, 13, 9, 15]. The reliance on batch sizes that are inversely proportional to $\varepsilon^{2}$ is a serious theoretical limitation since it rules out the freedom of using small batches. Meanwhile, empirical results have shown that stochastic OG may not be subject to such a restrictive requirement [21, 20].
In this work, we investigate distributed stochastic minimax problems where a network of $K$ agents cooperates to solve the following problem:
$\min_{x}\;\max_{y}\; J(x,y) \;\triangleq\; \sum_{k=1}^{K} p_k\, J_k(x,y)$    (2a)
$J_k(x,y) \;\triangleq\; \mathbb{E}_{\xi_k}\big[\,Q_k(x,y;\xi_k)\,\big]$    (2b)
Here, $x$ and $y$ are the global primal and dual variables, and $Q_k(\cdot,\cdot;\xi_k)$ is the local stochastic loss evaluated at the sample $\xi_k$. The scalars $p_k$ are positive weights satisfying $\sum_{k=1}^{K} p_k = 1$. We assume that each agent only has access to its local data set. We consider a fully distributed (decentralized) setting where the agents are connected by a graph topology and each agent can only share information with its immediate neighbors [22]. We approach (2a)–(2b) from the perspective of optimization, adopting assumptions that are more general than the nonconvex-SC formulation. Instead of assuming the risk function to be SC in the dual variable, we consider the weaker PL condition [23]. Specifically, we adopt the following assumption.
Assumption 1 (Risk function).
We assume each local risk function $J_k(x,y)$ is nonconvex in $x$ but $\mu$-PL in $y$, i.e., for any $x$ and $y$ in the domain of $J_k$, it holds that
$\|\nabla_y J_k(x,y)\|^2 \;\ge\; 2\mu\big(\max_{y'} J_k(x,y') - J_k(x,y)\big)$    (3)
where $\mu$ is some strictly positive constant.
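As a side illustration of Assumption 1, a classical scalar example adapted from [23] (negated here to match the maximization convention over the dual variable) is
\[
J(x,y) \;=\; -\,y^{2} - 3\sin^{2}(y),
\]
whose second derivative in $y$, namely $-2 - 6\cos(2y)$, changes sign, so $J(x,\cdot)$ is not concave; nevertheless, [23] shows that $y^{2} + 3\sin^{2}(y)$ satisfies the PL inequality, so the negated function satisfies (3) for a suitable $\mu > 0$.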
Under this assumption, we are interested in answering the following questions: Q1: Can the reliance on large batch sizes be eliminated in the stochastic OG method? Q2: What are the performance guarantees of the resulting batch-flexible stochastic OG over networks?
I-A Related works
Minimax optimization problems have largely been studied in the single-node setting [6, 24, 5, 9, 25, 26, 27, 28, 29, 30]. The fundamental setting assumed within these works is strong convexity-strong concavity/strong monotonicity. For this setting, the work [6] established the tight complexity lower bound for first-order algorithms in finding a near-optimal solution. The work [5] devised the first optimal algorithm that matches this bound. The work [24] showed that the alternating gradient descent-ascent (AGDA) method converges faster than its simultaneous counterpart. The work [25] proposed a stochastic variance-reduced extragradient (EG) method, which showed improved performance in training GANs. Quasi-Newton methods have also been proposed to achieve a faster convergence rate in finding the saddle point [26]. Dropping the “strongly” qualifier from both sides of strong convexity-strong concavity brings about convergence issues: simultaneous GDA suffers from divergent rotation, while AGDA gets stuck in a limit cycle rather than converging to a limit point [24, 31]. For the convex-concave setting, the work [29] established a sublinear rate for the best-iterate of the stochastic OG method, namely the best iterate produced over the running iterations. Achieving last-iterate convergence is more difficult; this aspect of OG was not addressed until the work [28]. Furthermore, the work [27] showed that the average-iterate of EG, namely the average of all iterates produced over the iterations, converges at a rate of $O(1/T)$, which is faster than the $O(1/\sqrt{T})$ rate of its last-iterate. The work [30] also showed that anchoring EG yields a convergence rate of $O(1/T^2)$ on the squared gradient norm, without considering stochasticity. There is another line of research that investigates structured nonmonotone problems, particularly under the premise of the (weak) Minty variational inequality; see [16, 17, 15].
In recent years, there has been increasing interest in stochastic nonconvex (one-sided) minimax optimization problems. The work [9] established the convergence of stochastic GDA (SGDA) in finding the primal solution under a nonconvex-SC formulation, with the use of a batch size on the order of $\varepsilon^{-2}$ to reach $\varepsilon$ accuracy. The work [32] proposed to incorporate a concave maximizer and variance reduction into SGDA, which is computationally costly since it performs multi-loop steps when updating the dual variable. The work [10] studied the two-sided PL setting and established the linear convergence of variance-reduced stochastic AGDA (SAGDA). Additionally, the work [8] showed that employing two iid samples per iteration for SAGDA yields convergence to the primal solution in the nonconvex-PL setting. However, vanilla AGDA does not converge in bilinear scenarios [31], which casts doubt on its ability to handle more complex settings. Establishing the convergence of backbone minimax algorithms, such as OG/EG, across various settings is of independent interest. Unfortunately, the recent work [7] showed a less-than-ideal result under the nonconvex-SC formulation, as the large batch issue persists. This fact strongly motivates us to pursue better results that theoretically address this limitation.
Over the past decade, distributed optimization has become an increasingly important field due to its flexibility in processing large volumes of physically dispersed data [22, 33]. The distributed minimax optimization scenario remains an open field to be explored since there are many diverse settings. Some distributed nonconvex minimax works appear in [34, 35, 36, 37, 14, 20]. The work [34] showed a convergence rate for the primal solution using a STORM-type momentum-based SGDA [38], under a nonconvex-SC setting. The work [37] extended the analysis to the nonconvex-PL setting. Other works [35, 36] introduced variance-reduced SGDA for the nonconvex-SC setting. However, variance reduction requires an excessively large batch size to estimate the gradient at the checkpoint, making it hard to tune. Moreover, the works [34, 38, 37, 36, 35] rely on the Lipschitz continuity of the stochastic loss gradient to carry out their analyses; relaxing this condition limits the applicability of their results [39]. The work [14] proposed a distributed version of stochastic EG for solving stochastic MVI problems over a time-varying network. The work [20] proposed a single-call EG [40] for distributed parallel training of GANs. However, both results face the same large batch size issue.
Works | Assumptions | Primal Rate | Dual Rate | Batch size |
DPOSG [20] | Stochastic MVI, bounded gradient, D-S | |||
Stochastic EG [14] | Stochastic MVI, Lipschitz risk gradient, D-S | |||
DM-HSGD [34] | Stochastic nonconvex-SC, Lipschitz loss gradient, D-S | ——– | ||
DM-GDA [37] | Stochastic nonconvex-PL, Lipschitz loss gradient, D-S | ——– | ||
DSGDA [35] | Finite-sum, nonconvex-SC, Lipschitz loss gradient, D-S | ——– | ||
DSS-OG (this work) | Stochastic nonconvex-PL, Lipschitz risk gradient, L-S |
I-B Main Contributions
Our main contributions in this work are as follows:
-
•
We introduce and analyze a new variant of stochastic OG and establish its convergence rate for finding the primal solution under the nonconvex-PL setting. Our analysis improves upon the existing performance bound by addressing the issue of large batch sizes [7]. We also provide insights into the theoretical limitations that prevent the conventional stochastic OG from achieving better results.
-
•
We extend the applicability of stochastic OG to a fully distributed setting. We consider left-stochastic combination matrices, which are more general than the doubly-stochastic setting considered in [34, 35, 36, 37, 14, 20]. We also carry out convergence analysis without assuming the smoothness of the stochastic loss gradient.
-
•
We establish convergence results under more relaxed conditions, ensuring convergence for both the primal and dual solutions. Previous works primarily focused on the convergence of the primal objective [34, 35, 36, 37]. We show that the dual solution of our method converges at a faster rate than the primal solution. Please refer to Table I for a comparison of our results with existing works.
Notations. We use lowercase normal font to represent deterministic scalars or vectors, and lowercase bold font for stochastic scalars or vectors. We use uppercase letters for matrices and calligraphic letters for network quantities. The symbol $\mathcal{N}_k$ denotes the set of neighbors of agent $k$, and $a_{\ell k}$ represents the scaling weight for information flowing from agent $\ell$ to agent $k$. The symbol $\mathbb{1}_K$ denotes the $K$-dimensional vector of all ones. The identity matrix of size $M$ is denoted by $I_M$. The Kronecker product operator is denoted by $\otimes$, $\|\cdot\|$ stands for the $\ell_2$-norm, and $\langle\cdot,\cdot\rangle$ denotes the inner product operator.
II Algorithm Description
II-A Centralized Scenario
We start with a centralized scenario where the data processing and model training are carried out at a fusion center. Specifically, we initially consider the minimax problem with $K=1$, formulated at a stand-alone agent or fusion center. Several variants of the stochastic OG method are revisited here to motivate our new formulation. It is noted that these variants were originally proposed in [19, 40] in the deterministic setting, whereas we consider their stochastic versions.
The extragradient (EG) method traces back to the seminal work [19] and is based on the heuristic idea of “extrapolation”: it utilizes the gradient at an extrapolated point, rather than the gradient at the current point, to carry out the update. Doing so leads to increased stability by alleviating the rotational force. Such a scheme has also been applied to other optimization problems [41]. The stochastic EG (S-EG) method was studied in the works [7, 14]. This algorithm starts from iteration $t=0$, say, with randomly initialized variables and recursively performs the following steps:
S-EG | (4a) | ||||
S-EG | (4b) | ||||
S-EG | (4c) | ||||
S-EG | (4d) |
Here, the barred iterates denote the extrapolated point, and the last two steps produce the output iterates. The samples used in the different steps and for the different variables are drawn iid; we use subscripts and the bar notation on these samples to highlight at which point and for which variable they are used. For instance, a barred sample denotes an iid sample drawn at iteration $t$ for constructing the stochastic gradient at the extrapolated point and for updating the primal variable. Additionally, the scalars $\mu_x$ and $\mu_y$ are the step sizes or learning rates. S-EG needs to query the stochastic gradient oracle twice to update each variable. It is also executed in a sequential manner, i.e., the extrapolated point must be determined before the update can be performed.
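To make the two-call, sequential structure of S-EG concrete, the following minimal Python sketch implements one iteration; the oracle names grad_x, grad_y, and sample, as well as the use of one fresh sample per oracle call, are illustrative assumptions rather than the paper's exact notation.

```python
def seg_step(x, y, grad_x, grad_y, sample, mu_x, mu_y):
    """One stochastic extragradient (S-EG) iteration (illustrative sketch).

    grad_x(x, y, xi) and grad_y(x, y, xi) return stochastic gradients of the
    loss at (x, y) on the sample xi; sample() draws a fresh iid sample.
    """
    # First oracle call per variable: extrapolate from the current point.
    x_bar = x - mu_x * grad_x(x, y, sample())
    y_bar = y + mu_y * grad_y(x, y, sample())

    # Second oracle call per variable: update from (x, y) using gradients
    # evaluated at the extrapolated point (x_bar, y_bar).
    x_next = x - mu_x * grad_x(x_bar, y_bar, sample())
    y_next = y + mu_y * grad_y(x_bar, y_bar, sample())
    return x_next, y_next
```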
To reduce the computational complexity, a single-call version of EG, known as past EG (PEG), was introduced in the work [40]. It proposed to reuse the past gradient evaluated at the previous extrapolated point. The stochastic form of PEG is characterized by the following steps [42, 20]:
S-PEG | (5a) | ||||
S-PEG | (5b) | ||||
S-PEG | (5c) | ||||
S-PEG | (5d) |
PEG can also be derived from the forward-reflected-backward framework [43]. It has attracted wide theoretical interest and has achieved empirical success in practical applications [28, 21, 20].
The standard form of S-PEG, also referred to as stochastic OG (S-OG), is often utilized for theoretical analysis [7]. The updates in Eqs. (5a)–(5d) can be simplified by rewriting them in terms of the extrapolated iterates at each iteration [7, 42]:
S-OG | |||||
S-OG | (6a) | ||||
S-OG | |||||
S-OG | (6b) |
S-OG involves two different samples, drawn at the current and the previous iterations, for updating the primal variable in different steps (and similarly for the dual variable). We observe that these two samples approximate two different losses whose landscapes differ. The gradient constructed from the past sample can deviate from the gradient at the same point when the latter is evaluated with the current sample. In other words, S-OG does not take the “optimistic” direction under the same landscape in a strict sense. When the sample variance is high, the deviation between the loss landscapes constructed from the two samples can be large, leading to a poor approximation of the update direction [44]. On the other hand, the stochastic gradients involved in (6a)–(6b) are not martingale processes, which causes difficulty in the analysis (see Section III-C for discussion).
We instead introduce a new formulation based on a same-sample approximation. For each variable, it utilizes gradients from the same loss landscape to perform the “optimistic” scheme; these losses are also unbiased estimators of the true risk, which can be verified by taking expectations over the distribution of the current samples. This approach involves recomputing the stochastic gradient at the previous point using the current samples:
SS-OG | |||||
SS-OG | (7a) | ||||
SS-OG | |||||
SS-OG | (7b) |
SS-OG requires two calls to the stochastic gradient oracle for each variable. However, these calls can be executed in parallel at the cost of some extra memory, making SS-OG preferable to the sequential approaches, e.g., S-EG and S-PEG.
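For contrast with S-OG, the following minimal Python sketch shows one SS-OG iteration under our reading of (7a)–(7b): the optimistic direction is written in the standard "2 × (current) − (past)" form, and both oracle calls for a given variable reuse the same freshly drawn sample; the function and variable names are illustrative assumptions.

```python
def ssog_step(x, y, x_prev, y_prev, grad_x, grad_y, sample, mu_x, mu_y):
    """One same-sample optimistic gradient (SS-OG) iteration (illustrative sketch)."""
    xi_x, xi_y = sample(), sample()   # one fresh iid sample per variable

    # Both gradients for a given variable use the SAME sample, so the
    # optimistic correction is formed on a single loss landscape.
    gx_curr = grad_x(x, y, xi_x)
    gx_past = grad_x(x_prev, y_prev, xi_x)   # recomputed, not reused from memory
    gy_curr = grad_y(x, y, xi_y)
    gy_past = grad_y(x_prev, y_prev, xi_y)

    # Optimistic direction in the 2*(current) - (past) form; the two oracle
    # calls per variable are independent and can be issued in parallel.
    x_next = x - mu_x * (2.0 * gx_curr - gx_past)
    y_next = y + mu_y * (2.0 * gy_curr - gy_past)
    return x_next, y_next, x, y   # current iterates become the next "previous" pair
```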
II-B Distributed Scenario
In this section, we extend SS-OG to the distributed setting (2a)–(2b). Under this scenario, each agent utilizes local samples to approximate its true local gradient through the corresponding loss values. Several distributed learning strategies for minimization problems have been investigated in [22, 33] and the references therein. Here, we integrate (7a)–(7b) with the adapt-then-combine (ATC) diffusion learning strategy from [22, 33]. This strategy has been shown in these references to outperform the consensus strategy, which has also been applied to minimax problems in works such as [20, 35]. For this reason, we focus on diffusion learning and refer to the proposed algorithm as the Diffusion Stochastic Same-Sample Optimistic Gradient (DSS-OG). It is listed in Algorithm 1. In this algorithm, $a_{\ell k}$ is the scalar weight used by agent $k$ to scale information received from agent $\ell$, and $a_{\ell k} = 0$ for $\ell \notin \mathcal{N}_k$. The algorithm starts from $t=0$ with randomly initialized local iterates. Each agent first collects two iid samples to approximate the OG directions for updating the local primal and dual variables, respectively. The intermediate local models of each agent are then sent to its immediate neighbors for aggregation using the weights $a_{\ell k}$.
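A minimal sketch of one network-wide DSS-OG iteration is given below; it assumes the same optimistic direction as in the SS-OG sketch above and indexes the left-stochastic weights as A[l][k] for information flowing from agent l to agent k, mirroring Algorithm 1 only schematically.

```python
def dssog_network_step(X, Y, X_prev, Y_prev, A, grads, samples, mu_x, mu_y):
    """One network-wide DSS-OG iteration (illustrative sketch).

    X, Y, X_prev, Y_prev: lists of current/previous local iterates, one per agent.
    A[l][k]: left-stochastic weight scaling information from agent l to agent k
             (columns sum to one; A[l][k] = 0 if l is not a neighbor of k).
    grads[k]: pair (grad_x_k, grad_y_k) of stochastic gradient oracles of agent k.
    samples[k]: callable drawing a fresh iid sample from agent k's local data.
    """
    K = len(A)
    psi_x, psi_y = [None] * K, [None] * K

    # Adapt: each agent forms its same-sample optimistic direction locally.
    for k in range(K):
        gx, gy = grads[k]
        xi_x, xi_y = samples[k](), samples[k]()
        dx = 2.0 * gx(X[k], Y[k], xi_x) - gx(X_prev[k], Y_prev[k], xi_x)
        dy = 2.0 * gy(X[k], Y[k], xi_y) - gy(X_prev[k], Y_prev[k], xi_y)
        psi_x[k] = X[k] - mu_x * dx
        psi_y[k] = Y[k] + mu_y * dy

    # Combine: each agent aggregates its neighbors' intermediate iterates (ATC).
    X_next = [sum(A[l][k] * psi_x[l] for l in range(K) if A[l][k] != 0.0)
              for k in range(K)]
    Y_next = [sum(A[l][k] * psi_y[l] for l in range(K) if A[l][k] != 0.0)
              for k in range(K)]
    return X_next, Y_next
```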
For convenience of the analysis, we introduce the following network quantities:
(8a) | ||||
(8b) | ||||
(8c) | ||||
(8d) | ||||
(8e) |
and similarly for the dual quantities. Additionally, we use the corresponding normal-font symbols to refer to deterministic realizations of the random quantities. If we further introduce the network combination matrices:
(9a) | ||||
(9b) | ||||
(9c) |
then the DSS-OG recursions can be expressed globally via the following recursion:
(10a) | ||||
(10b) |
III Convergence Analysis
We now proceed to analyze the convergence properties of DSS-OG. We do so under the following assumptions, which are generally more relaxed than similar conditions in the prior literature.
III-A Assumptions
Assumption 2.
We assume the gradients associated with each local risk function are $L$-Lipschitz, i.e.,
(11) | ||||
Remark 1.
In this work, we only adopt the above assumption for the true gradients. This is in contrast to the works [34, 35, 36, 37], where smoothness of the local stochastic loss is assumed instead. It should be noted that if we only assume Assumption 2, the computational benefits of the momentum techniques used in [34, 37] are nullified, with the gradient evaluation complexity degrading to $O(\varepsilon^{-4})$, which is the tight lower bound for solving nonconvex stochastic problems established in [39]. Moreover, the primal objective introduced in (12) is essentially nonconvex.
Since the primal problem is to minimize over $x$ in (2a), we introduce the primal objective [34, 35, 36, 37]:
$P(x) \;\triangleq\; \max_{y}\, J(x,y)$    (12)
Assumption 3.
We assume the primal objective $P(x)$ is lower bounded, i.e., $\min_{x} P(x) > -\infty$.
Assumption 4.
We denote the filtration generated by the random processes up to iteration $t$ by $\mathcal{F}_t$. We assume the stochastic gradients are unbiased with bounded variance when taking expectations over the random samples conditioned on $\mathcal{F}_t$, i.e.,
(13) | ||||
Moreover, we assume the data samples are independent of each other across all agents and iterations.
Assumption 5.
We assume the disagreement between the local and global gradients is bounded, i.e.,
(14) |
The above assumptions can be relaxed by adopting more sophisticated distributed learning strategies, e.g., exact diffusion and gradient tracking [45, 46, 47]; we leave these extensions for future works.
Assumption 6.
The graph is strongly connected and the combination matrix $A$ is left-stochastic. As a result, it holds that $\mathbb{1}^\top A = \mathbb{1}^\top$ and that $A^{j}$ has strictly positive entries for some integer $j > 0$.
This assumption ensures that $A$ has a single maximum eigenvalue at one, with a Perron eigenvector $\pi$ of strictly positive entries satisfying $A\pi = \pi$ and $\mathbb{1}^\top \pi = 1$. Furthermore, the combination matrix admits a Jordan-type decomposition of the form [22]:
(15) |
Here, the submatrix consists of Jordan blocks with arbitrarily small values on the lower diagonal, and it holds that , , . For convenience of the analysis, we let . It is not difficult to verify .
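The following short NumPy check illustrates the Perron properties invoked above for a small, hypothetical left-stochastic combination matrix (the matrix itself is not from the paper):

```python
import numpy as np

# Hypothetical left-stochastic (columns sum to one) combination matrix A
# for a strongly connected 3-agent network with self-loops.
A = np.array([[0.6, 0.3, 0.0],
              [0.4, 0.4, 0.5],
              [0.0, 0.3, 0.5]])

assert np.allclose(A.sum(axis=0), 1.0)           # left-stochastic: 1^T A = 1^T

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmax(eigvals.real)                    # spectral radius equals one
print("largest eigenvalue:", eigvals[idx].real)  # -> 1.0 (up to rounding)

# Perron eigenvector: positive entries, normalized so its entries sum to one.
p = np.abs(eigvecs[:, idx].real)
p = p / p.sum()
print("Perron vector:", p)
print("A @ p == p ?", np.allclose(A @ p, p))
```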
III-B Main Results
In Theorem 1, we prove the convergence rate of the best-iterate of the primal objective, which is the standard criterion for convergence analysis of nonconvex problems. In Theorem 2, we show the last-iterate convergence of the dual optimality gap. Given the conditions from these theorems, we establish the complexity of DSS-OG in finding an $\varepsilon$-stationary point.
Theorem 1.
Under Assumptions 1–6, choosing step sizes
|
(16) |
|
(17) |
the non-asymptotic convergence rate of the primal objective is given by:
(18) |
where is the Lipschitz constant associated with and
|
(19) |
are constants. Furthermore, DSS-OG outputs a primal -stationary point after iterations and gradient evaluation complexity, i.e.,
(20) | ||||
See Appendix C for proof.
Theorem 1 states that by adopting a two-time-scale scheme for the primal and dual step sizes, DSS-OG is able to output an $\varepsilon$-stationary point for the primal objective at a sublinear rate. The two-time-scale condition reflects the asymmetric assumptions on the risk function over the primal and dual variables. Similar conditions were also established in the works [7, 9, 8], where asymmetric conditions are likewise assumed over the primal and dual variables.
Theorem 2.
Under Assumptions 1–6, and after iteration numbers , choosing step size
|
(21) |
the non-asymptotic bound of the dual optimality gap regarding the last-iterate is given by:
(22) |
That is, DSS-OG outputs a dual -stationary point after iterations and gradient evaluation complexity, i.e.,
(23) |
See Appendix D for proof.
The above theorem implies that finding a dual solution that shrinks the dual optimality gap enjoys a faster sublinear convergence rate than the primal convergence rate. Eqs. (18) and (22) also show the fundamental difference in complexity between finding a solution for a nonconvex problem and for a PL problem. From this theoretical perspective, it is hard to guarantee a pair of primal and dual solutions satisfying the same accuracy level in (24a)–(24b) simultaneously. Fortunately, we can prove that the same accuracy level for the primal and dual solutions can be obtained in an asynchronous manner (see Theorem 3).
Theorem 3.
After a sufficient number of iterations of running DSS-OG, we can find network centroid points that satisfy the following first-order stationarity conditions:
(24a) | ||||
(24b) |
for an arbitrarily small constant $\varepsilon$. The solution points are found at different time instances, where their subscripts indicate the exact iterations at which they are found. See Appendix E for proof.
III-C Comparison between S-OG and SS-OG
We point out the key differences between SS-OG and S-OG:
-
•
The stochastic gradients involved in S-OG do not form a martingale process. To see this, observe that the backward gradients used at iteration $t$ are constructed from the past samples; they are biased estimators when taking expectations over the current samples. In contrast, it is easy to verify that the stochastic gradients involved in SS-OG are unbiased (a small numerical illustration is given after this list).
-
•
SS-OG strictly performs the “optimistic” scheme under the same loss landscape, and the losses utilized there are unbiased estimators of the local risk. In contrast, S-OG involves gradients from two different loss landscapes. The gradient directions at the same point in two different loss landscapes can differ notably when the data variance is high.
-
•
For DSS-OG, we bound terms involving deviations between deterministic quantities, rather than the corresponding terms for distributed S-OG. To see this, we can repeat the arguments in (I) and use the fact that, for distributed S-OG, the two terms involved in the inner product are correlated. Consequently, distributed S-OG needs to bound the deviation between deterministic and stochastic quantities. The consequence of such a deviation is that an additional constant will appear in the final bound, because the Lipschitz continuity to be used is only assumed for deterministic quantities. In contrast, we only need to bound the deviation between deterministic quantities for DSS-OG. Therefore, employing a large batch size becomes necessary for S-OG in order to control the magnitude of this term.
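The bias issue raised in the first bullet can be illustrated numerically. In the toy sketch below (our own example, not taken from the paper), the loss is $Q(x;\xi) = (x-\xi)^2$ with $\xi \sim \mathcal{N}(0,1)$: averaging the recomputed gradient at the past point over many fresh samples recovers the true gradient there, whereas the single reused past-sample gradient of S-OG is a fixed, generally biased realization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss Q(x; xi) = (x - xi)^2 with xi ~ N(0, 1); the true gradient is 2x.
grad = lambda x, xi: 2.0 * (x - xi)

x_prev = 1.5                      # past iterate
xi_prev = rng.standard_normal()   # the single past sample that S-OG would reuse

# S-OG reuses one fixed past-sample gradient: generally far from 2 * x_prev.
reused = grad(x_prev, xi_prev)

# SS-OG recomputes the gradient at x_prev with fresh samples; its average
# over many draws matches the true gradient 2 * x_prev (unbiasedness).
fresh = np.array([grad(x_prev, rng.standard_normal()) for _ in range(100000)])

print("true gradient at x_prev :", 2.0 * x_prev)   # 3.0
print("S-OG reused gradient    :", reused)          # a fixed, biased value
print("SS-OG average gradient  :", fresh.mean())    # approximately 3.0
```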
IV Computer Simulation
IV-A Wasserstein GAN
In this example, we consider the application of an $\ell_2$-regularized Wasserstein GAN (WGAN) to learning the mean and variance of a one-dimensional Gaussian [8]. The objective is to optimize the following global risk function through the cooperation of the agents:
(25) | ||||
Here, the dual variable is $\ell_2$-regularized by a constant to ensure the PL condition. We consider a setting where the generator has a single hidden layer with 5 neurons, and the discriminator is parameterized directly by the dual variable. As for the true samples, they are drawn from a Gaussian distribution with the target mean and variance.
We generate a strongly connected network of agents and use the averaging combination rule [48, 33]. In addition to the fully distributed setting, we also implement centralized SS-OG (CSS-OG). We compare DSS-OG and CSS-OG against the distributed versions of SAGDA and smoothed SAGDA, as well as against adaptive algorithms such as RMSprop and Adam, which have also been evaluated on this same application [8]. For these algorithms, we follow the optimal settings reported in [8]. For DSS-OG and CSS-OG, we tune the step sizes accordingly. We set the same random seed before running each algorithm to ensure the same generated training set. The code is available at https://github.com/MulGarFed/Minimax_DSSOG.git.
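For concreteness, a minimal sketch of how a regularized WGAN risk of the form in (25) could be estimated on a mini-batch is given below; the one-hidden-layer generator with tanh activations, the quadratic discriminator score, and the regularization weight lam are illustrative assumptions rather than the exact architecture and hyperparameters used in the experiments.

```python
import numpy as np

def wgan_risk(gen_params, disc_params, real_batch, noise_batch, lam=0.001):
    """Mini-batch estimate of an l2-regularized WGAN risk (illustrative sketch).

    Assumed setup (not taken from the paper): a one-hidden-layer generator with
    tanh activations, a quadratic discriminator score D(a) = y1*a + y2*a**2,
    and regularization weight lam on the dual (discriminator) parameters.
    """
    W1, b1, w2, b2 = gen_params          # hidden layer with 5 neurons: W1, b1 have size 5
    y1, y2 = disc_params                 # dual (discriminator) parameters

    # Generator: map scalar noise through the hidden layer to a scalar sample.
    h = np.tanh(np.outer(noise_batch, W1) + b1)      # shape (B, 5)
    fake_batch = h @ w2 + b2                         # shape (B,)

    # WGAN objective: the discriminator separates real from generated samples,
    # with an l2 penalty on the dual variable to enforce the PL condition.
    score = lambda a: y1 * a + y2 * a ** 2
    return (np.mean(score(real_batch)) - np.mean(score(fake_batch))
            - lam * (y1 ** 2 + y2 ** 2))
```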
The simulation results are demonstrated in Figs. LABEL:fig:example1_variance0001_grad–LABEL:fig:example1_variance01_mse. We observe that both DSS-OG and CSS-OG outperform the other algorithms in terms of gradient norm and mean-square error (MSE) when the data variance is 0.001. From Fig. LABEL:fig:example1_variance01_mse, the MSE of DSS-OG and CSS-OG is able to match the results of RMSprop when the variance is 0.1. We find that the adaptive algorithm RMSprop converges more rapidly than the constant step-size algorithms. Another interesting observation is that the gradient norm does not necessarily reflect the quality of a solution. For instance, despite RMSprop not having the lowest gradient norm in Fig. LABEL:fig:example1_variance0001_grad, it exhibits rapid convergence and good performance in terms of MSE, as seen in Fig. LABEL:fig:example1_variance0001_mse. This discrepancy suggests a promising research direction in identifying more practical metrics for evaluating minimax solutions.
IV-B Deep Convolutional GANs
In this example, we train deep convolutional GANs (DCGANs) using the MNIST dataset. We consider a neural network structure similar to the one described in [49].
We use the same network topology as in the previous example. To enhance performance, we implement DSS-OG with an Adam optimizer by replacing its original stochastic gradient term with the SS-OG gradients from (7a)–(7b). For this experiment, we fix the momentum factors and vary the step sizes. We plot the Fréchet Inception Distance (FID) scores [50] in Fig. 2(a). The sample images generated by a single node are shown in Fig. 2(b). It is seen that the simulated algorithms attain good FID scores with appropriately chosen hyperparameters. Moreover, a single agent is capable of generating all classes of digits, demonstrating the generative diversity of a single node in the distributed network.
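A sketch of this combination is shown below, under our own assumptions about the update: the same-sample optimistic direction is formed first and then passed to a standard Adam update in place of the raw stochastic gradient. The default hyperparameters are placeholders, and this is an illustration rather than the exact DCGAN training loop.

```python
import numpy as np

def adam_with_ssog(param, param_prev, grad_fn, sample, state,
                   lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """Adam step driven by a same-sample optimistic gradient (illustrative sketch).

    grad_fn(p, xi) returns the stochastic gradient at parameters p on sample xi.
    state: dict with keys "t" (int), "m" and "v" (arrays shaped like param),
    e.g. initialized to 0, zeros, and zeros.
    """
    xi = sample()
    # Same-sample optimistic direction: both oracle calls use the sample xi.
    g = 2.0 * grad_fn(param, xi) - grad_fn(param_prev, xi)

    # Standard Adam moment updates, applied to the optimistic direction g.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * g
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * g ** 2
    m_hat = state["m"] / (1.0 - beta1 ** state["t"])
    v_hat = state["v"] / (1.0 - beta2 ** state["t"])

    new_param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_param, param  # the current iterate becomes the next "previous" point
```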
V Conclusion
In this work, we introduced and analyzed DSS-OG, a novel distributed method for stochastic nonconvex-PL minimax optimization. This method improves the analysis of the existing works by addressing the large batch size issue. We validated the performance of DSS-OG through training experiments of two neural network models: WGAN and DCGAN. Our future work is to develop distributed, communication-efficient, and faster algorithms for variational inequality setting under cooperative or competing networks.
Appendix A Basic Lemmas
In this section, we list two basic results that will be used in our analysis.
Lemma 1 (Quadratic Growth [23]).
If a function is $\mu$-PL, then it also satisfies a quadratic growth condition, i.e.,
(26) |
where is any optimal point taken from the set .
Appendix B Key Lemmas
Here, we list the auxiliary lemmas leading to the main convergence result. The proofs of these results are deferred to later appendices.
Lemma 3.
Choosing suitable step sizes, we observe from Lemma 3 that the bound of iteration averaged network deviation is dominated by a term on the order of .
Lemma 4.
We observe that the increment is bounded by three terms, i.e., the term associated with network deviation on the order of , iteration averaged risk gradients centroid, and a bound caused by the gradient noise on the order of , respectively.
Lemma 5.
Lemma 5 implies how the deviation between the global risk function at network centroid and the centroid of local risk gradient is bounded over iterations. We observe that this bound consists of three terms, namely the iteration averaged risk gradient centroid and two terms on the order of . In the following lemmas, we will establish some useful inequalities associated with the Lipschitz continuity of primal objective (12) and risk function (2a). For simplicity, we use
(32) |
to denote the dual optimality gap.
Lemma 6.
Lemma 8.
The bound on the dual optimality gap is quantified by several terms, including an initialization term that decays with the number of iterations, the gradient disagreement terms, the risk-gradient centroid, the increment of the network centroid, and a constant term.
Appendix C Proof of Theorem 1
We now prove the best iterate convergence for the primal variable. Moving the gradient term to the left hand side in Lemma 6 and averaging the inequality over iterations, we get
(38) | ||||
(39) |
where follows from Assumption 3 and the telescoping results of in (38). Invoking Lemma 8 and regrouping the terms in (39), we obtain
(40) | ||||
where follows from several steps: inserting , using (77) and choosing step size
|
(41) |
For the sake of simplicity, we introduce the following notations for constants:
|
(42) |
Let be the initial primal optimality gap. Invoking Lemma 5 for inequality (40), we get
(43) | ||||
For the terms associated with primal gradient, we can bound them by choosing a suitable step size:
(44) |
where we choose , (b) we choose and . By doing so, we can eliminate the primal gradient terms since now it has a negative sign. For terms associated with dual gradient, we can bound it by deducing that:
(45) |
where we have used
, is due to when , we choose . Replacing these gradients with the deduced upper bound, we conclude
(46) |
Considering all the step size conditions derived above and choosing
|
(47) |
we get the non-asymptotic bound as follows:
(48) |
It is seen that the convergence rate is dominated by , therefore DSS-OG outputs -stationary point after iteration and gradient evaluation complexity.
Appendix D Proof of Theorem 2
We further prove that DSS-OG can find a solution for the dual problem by halting the update of the primal variable after the primal solution is found; this can be achieved by freezing the primal iterate from that iteration onward. For the convenience of the analysis, we use the corresponding symbols to denote the stochastic and true gradients when the primal variable is fixed at this point. Repeating the arguments in Lemma 7 at this point, we get
(49) | ||||
We now study recursion (49) for sufficiently large iteration . Furthermore, we choose the step size . In the following, we insert this expression of into the coefficient associated with in (49), and keep “” notation in the remaining terms to determine additional step size conditions. Setting in (49), we get
(50) | ||||
Multiplying both sides by , we have
(51) | ||||
We further denote and obtain a recursion regarding . Iterating the recursion of from to , we obtain
(52) |
Repeating the arguments in Lemma 4, we can bound the iteration averaged gradient deviation as follows:
(53) |
Inserting (D) into (D) and choosing to eliminate gradient terms in , we get ():
(54) |
Because , dividing both sides of (D) by , we get the following non-asymptotic bound for dual optimality gap:
(55) |
Appendix E Proof of Theorem 3
According to Theorem 1, after running iterations, we can identify the point which satisfies the following condition
(56) |
According to Lemma 2, this is equivalent to
(57) |
Furthermore, from Theorem 2, after iterations , we can find a point such that
(58) |
We keep the subscript in to indicate the exact iterations at which this point is obtained. Under condition (58), we can verify satisfies the second stationary condition (29b), i.e.,
(59) |
where follows from the -smooth property of the local risk, follows from Lemma 1, follows from (23). On the other hand, the solution also satisfies the first stationary condition (29a), i.e.,
(60) |
where follows from (57) and (23). Therefore, the overall complexity for iteration and gradient evaluation is given by .
The proof is completed here.
Appendix F Proof of Lemma 3
Taking conditional expectations of the transient term over the distribution of random samples , we get
(61) |
where follows from the Jordan decomposition of and follows from the sub-multiplicative property of norms and the fact that . Taking expectation over (61) again and averaging this inequality over iterations, we get:
(62) |
In the following, we bound the term by Eqs. (63)-(68), by (F), and deduce the final bound (F) by Eqs. (70)-(73). To bound , we inspect on the right hand side (RHS) of (61) by deducing that:
(63) |
where follows from (10a) and
Moreover, follows from the sub-multiplicative property of the norm and the expansion of the squared norm, follows from the conditional expectation over the sample and the fact that only relies on the random samples up to iteration , follows from the inequality , follows from Assumption 4 and the following inequality:
(64) |
where denotes the gradient noise
(65) | ||||
Choosing , we have and
(66) |
For the term , we can bound it as follows:
(67) |
where follows from , follows from Jensen’s inequality, the -smooth property of the local risk and Assumption 5, follows from (61). For convenience, we denote the network constant as . Combining the results of (F) and (F) and taking expectation again, we get
(68) | ||||
A similar inequality holds for . Combining these results together and choosing , we have
(69) |
Averaging (F) over iterations, we get
(70) | ||||
where in we set the initial iterates as and add terms and into the first inequality to group the summation terms. Since , we choose the step sizes such that
(71) | ||||
Solving inequality (70) gives
(72) | ||||
Combining (72) with (F), we get the final bound as follows:
(73) | ||||
Appendix G Proof of Lemma 4
Note that , we insert this expression of into and deduce that
(74) | ||||
where follows from Jensen’s inequality. The term can be bounded similarly, therefore
(75) | ||||
Taking the conditional expectation of (75) over the distribution of random samples , we get
(76) | ||||
For the term , we can bound it as follows:
(77) | ||||
where follows from Assumption 4 and the fact the local true gradient is independent of the gradient noise , follows from Assumption 4. A similar argument holds for . Substituting these results into (76) and choosing , we obtain
(78) | ||||
Taking expectations over (G) again and averaging the inequality over iterations, we get
(79) |
where is obtained by adding the positive terms into the RHS of (G) and grouping the summation terms, follows from Lemma 3.
Appendix H Proof of Lemma 5
Appendix I Proof of Lemma 6
For convenience, we denote
(82) |
with subscript “c” to denote the dual optimality gap evaluated at the network centroid . From Lemma 2, is -smooth, we have
(83) |
Taking conditional expectation of (I) over the distribution of random sample , we get
(84) |
where follows from the fact that does not rely on the random samples at iteration , follows from (77). Inserting into (I) and applying Jensen’s inequality, we get
(85) |
Taking expectations again over , we get
(86) | ||||
Here, is due to the -smoothness property of the risk function, follows from Lemma 1.
Appendix J Proof of Lemma 7
Because is -smooth, fixing at , we have
(87) |
Taking expectation of (J) over the distribution of conditioned on and , we get
(88) | ||||
where follows from (77), we insert the true gradient information and apply Jensen’s inequality, is due to the -smooth property of the risk function. Taking expectation again over (J), we obtain the result (34). The proof of (35) is similar to the above arguments.
Appendix K Proof of Lemma 8
Adding on both sides of (34):
(89) | ||||
where follows from Assumption 1, is obtained by adding and subtracting the terms and in . By the definition of , we can rewrite above inequality as:
(90) |
By Lemma 7, we can bound as:
(91) |
By Lemma 6, we can bound as
(92) | ||||
Combining the results of (K) and (K) into (K), we get
(93) | ||||
We further choose . We have , and inequality (93) becomes
(94) | ||||
Choosing and iterating (94) from to , we get
(95) |
where is due to for any . Averaging (K) over iterations, we get
(96) |
where (a) follows from the following inequality
(97) |
here, is a constant, is a positive sequence.
References
- [1] H. Cai, S. A. Sulaiman, and A. H. Sayed, “Diffusion optimistic learning for min-max optimization,” in Proc. IEEE ICASSP, Seoul, South Korea, April 2024, pp. 1–5.
- [2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 1–9.
- [3] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv:1706.06083, 2017.
- [4] B. Dai, A. Shaw, L. Li, L. Xiao, N. He, Z. Liu, J. Chen, and L. Song, “Sbeed: Convergent reinforcement learning with nonlinear function approximation,” in International Conference on Machine Learning. PMLR, 2018, pp. 1125–1134.
- [5] D. Kovalev and A. Gasnikov, “The first optimal algorithm for smooth and strongly-convex-strongly-concave minimax optimization,” in Advances in Neural Information Processing Systems, 2022, pp. 14691–14703.
- [6] J. Zhang, M. Hong, and S. Zhang, “On lower iteration complexity bounds for the convex concave saddle point problems,” Mathematical Programming, vol. 194, no. 1-2, pp. 901–935, 2022.
- [7] P. Mahdavinia, Y. Deng, H. Li, and M. Mahdavi, “Tight analysis of extra-gradient and optimistic gradient methods for nonconvex minimax problems,” in Advances in Neural Information Processing Systems, 2022, pp. 31213–31225.
- [8] J. Yang, A. Orvieto, A. Lucchi, and N. He, “Faster single-loop algorithms for minimax optimization without strong concavity,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 5485–5517.
- [9] T. Lin, C. Jin, and M. Jordan, “On gradient descent ascent for nonconvex-concave minimax problems,” in International Conference on Machine Learning. PMLR, 2020, pp. 6083–6093.
- [10] J. Yang, N. Kiyavash, and N. He, “Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems,” in Advances in Neural Information Processing Systems, 2020, pp. 1153–1165.
- [11] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, SIAM, 2000.
- [12] S. Komlósi, “On the stampacchia and minty variational inequalities,” Generalized Convexity and Optimization for Economic and Financial Decisions, pp. 231–260, 1999.
- [13] Z. Dou and Y. Li, “On the one-sided convergence of Adam-type algorithms in non-convex non-concave min-max optimization,” arXiv:2109.14213, 2021.
- [14] A. Beznosikov, P. Dvurechenskii, A. Koloskova, V. Samokhin, S. U. Stich, and A. Gasnikov, “Decentralized local stochastic extra-gradient for variational inequalities,” in Advances in Neural Information Processing Systems, 2022, pp. 38116–38133.
- [15] A. Bohm, “Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions,” Transactions on Machine Learning Research, pp. 1–21, 2023.
- [16] J. Diakonikolas, C. Daskalakis, and M. I. Jordan, “Efficient methods for structured nonconvex-nonconcave min-max optimization,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2021, pp. 2746–2754.
- [17] T. Pethick, O. Fercoq, P. Latafat, P. Patrinos, and V. Cevher, “Solving stochastic weak minty variational inequalities without increasing batch size,” arXiv:2302.09029, 2023.
- [18] C. Daskalakis, S. Skoulakis, and M. Zampetakis, “The complexity of constrained min-max optimization,” in Proc. ACM SIGACT Symposium on Theory of Computing, 2021, pp. 1466–1478.
- [19] G.M. Korpelevich, “The extragradient method for finding saddle points and other problems,” Matecon, vol. 12, pp. 747–756, 1976.
- [20] M. Liu, W. Zhang, Y. Mroueh, X. Cui, J. Ross, T. Yang, and P. Das, “A decentralized parallel algorithm for training generative adversarial nets,” in Advances in Neural Information Processing Systems, 2020, pp. 11056–11070.
- [21] C. Daskalakis, A. Ilyas, V. Syrgkanis, and H. Zeng, “Training GANs with optimism,” in International Conference on Learning Representations (ICLR), 2018.
- [22] A. H. Sayed, “Adaptation, learning, and optimization over networks,” Foundations and Trends® in Machine Learning, vol. 7, no. 4-5, pp. 311–801, 2014.
- [23] H. Karimi, J. Nutini, and M. Schmidt, “Linear convergence of gradient and proximal-gradient methods under the Polyak-Łojasiewicz condition,” in Machine Learning and Knowledge Discovery in Databases: European Conference ECML PKDD, Riva del Garda, Italy, 2016, pp. 795–811.
- [24] G. Zhang, Y. Wang, L. Lessard, and R. B. Grosse, “Near-optimal local convergence of alternating gradient descent-ascent for minimax optimization,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 7659–7679.
- [25] T. Chavdarova, G. Gidel, F. Fleuret, and S. Lacoste-Julien, “Reducing noise in GAN training with variance reduced extragradient,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- [26] C. Liu and L. Luo, “Quasi-newton methods for saddle point problems,” in Advances in Neural Information Processing Systems, 2022, pp. 3975–3987.
- [27] N. Golowich, S. Pattathil, C. Daskalakis, and A. Ozdaglar, “Last iterate is slower than averaged iterate in smooth convex-concave saddle point problems,” in Conference on Learning Theory. PMLR, 2020, pp. 1758–1784.
- [28] E. Gorbunov, A. Taylor, and G. Gidel, “Last-iterate convergence of optimistic gradient method for monotone variational inequalities,” Advances in Neural Information Processing Systems, vol. 35, pp. 21858–21870, 2022.
- [29] E.K. Ryu, K. Yuan, and W. Yin, “ODE analysis of stochastic gradient methods with optimism and anchoring for minimax problems,” arXiv:1905.10899, 2019.
- [30] T. Yoon and E.K. Ryu, “Accelerated algorithms for smooth convex-concave minimax problems with $O(1/k^2)$ rate on squared gradient norm,” in International Conference on Machine Learning. PMLR, 2021, pp. 12098–12109.
- [31] G. Gidel, R. A. Hemmat, M. Pezeshki, R. L. Priol, G. Huang, S. Lacoste-Julien, and I. Mitliagkas, “Negative momentum for improved game dynamics,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 1802–1811.
- [32] L. Luo, H. Ye, Z. Huang, and T. Zhang, “Stochastic recursive gradient descent ascent for stochastic nonconvex-strongly-concave minimax problems,” in Advances in Neural Information Processing Systems, 2020, pp. 20566–20577.
- [33] A. H. Sayed, Inference and Learning from Data, vol. 3, Cambridge University Press, 2022.
- [34] W. Xian, F. Huang, Y. Zhang, and H. Huang, “A faster decentralized algorithm for nonconvex minimax problems,” in Advances in Neural Information Processing Systems, 2021, pp. 25865–25877.
- [35] H. Gao, “Decentralized stochastic gradient descent ascent for finite-sum minimax problems,” arXiv:2212.02724, 2022.
- [36] L. Chen, H. Ye, and L. Luo, “A simple and efficient stochastic algorithm for decentralized nonconvex-strongly-concave minimax optimization,” arXiv:2212.02387, 2022.
- [37] F. Huang and S. Chen, “Near-optimal decentralized momentum method for nonconvex-PL minimax problems,” arXiv:2304.10902, 2023.
- [38] A. Cutkosky and F. Orabona, “Momentum-based variance reduction in non-convex SGD,” Advances in Neural Information Processing Systems, vol. 32, 2019.
- [39] Y. Arjevani, Y. Carmon, J. C. Duchi, J.D. Foster, N. Srebro, and B. Woodworth, “Lower bounds for non-convex stochastic optimization,” Mathematical Programming, vol. 199, no. 1-2, pp. 165–214, 2023.
- [40] L. D. Popov, “A modification of the arrow-hurwicz method for search of saddle points,” Mathematical Notes of the Academy of Sciences of the USSR, pp. 845–848, 1980.
- [41] K. Antonakopoulos, A. Kavis, and V. Cevher, “Extra-newton: A first approach to noise-adaptive accelerated second-order methods,” Advances in Neural Information Processing Systems, vol. 35, pp. 29859–29872, 2022.
- [42] Y. Hsieh, F. Iutzeler, J. Malick, and P. Mertikopoulos, “On the convergence of single-call stochastic extra-gradient methods,” in Advances in Neural Information Processing Systems, 2019, pp. 1–11.
- [43] Y. Malitsky and M. K. Tam, “A forward-backward splitting method for monotone inclusions without cocoercivity,” SIAM Journal on Optimization, pp. 1451–1472, 2020.
- [44] K. Mishchenko, D. Kovalev, E. Shulgin, P. Richtárik, and Y. Malitsky, “Revisiting stochastic extragradient,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 4573–4582.
- [45] K. Yuan, B. Ying, X. Zhao, and A. H. Sayed, “Exact diffusion for distributed optimization and learning—part i: Algorithm development,” IEEE Transactions on Signal Processing, vol. 67, no. 3, pp. 708–723, 2018.
- [46] S. A. Alghunaim and K. Yuan, “A unified and refined convergence analysis for non-convex decentralized learning,” IEEE Transactions on Signal Processing, vol. 70, pp. 3264–3279, 2022.
- [47] S. A. Alghunaim, E. K. Ryu, K. Yuan, and A. H. Sayed, “Decentralized proximal gradient algorithms with linear convergence rates,” IEEE Transactions on Automatic Control, vol. 66, no. 6, pp. 2787–2794, 2020.
- [48] A. H. Sayed, “Diffusion adaptation over networks,” in Academic Press Library in Signal Processing, vol. 3, pp. 323–453. Elsevier, 2014.
- [49] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv:1511.06434, 2015.
- [50] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “Gans trained by a two time-scale update rule converge to a local nash equilibrium,” Advances in Neural Information Processing Systems, vol. 30, 2017.
- [51] M. Nouiehed, M. Sanjabi, T. Huang, J. D. Lee, and M. Razaviyayn, “Solving a class of non-convex min-max games using iterative first order methods,” in Advances in Neural Information Processing Systems, 2019, pp. 1–9.