
A Sparse Smoothing Newton Method for Solving Discrete Optimal Transport Problems

Di Hou, Ling Liang, and Kim-Chuan Toh

Department of Mathematics, National University of Singapore, Singapore. [email protected]
(Corresponding author) Department of Mathematics, University of Maryland at College Park, USA. [email protected]
Department of Mathematics and Institute of Operations Research and Analytics, National University of Singapore, Singapore. [email protected]
Abstract

The discrete optimal transport (OT) problem, which offers an effective computational tool for comparing two discrete probability distributions, has recently attracted much attention and played essential roles in many modern applications. This paper proposes to solve the discrete OT problem by applying a squared smoothing Newton method via the Huber smoothing function for solving the corresponding KKT system directly. The proposed algorithm admits appealing convergence properties and is able to take advantage of the solution sparsity to greatly reduce computational costs. Moreover, the algorithm can be extended to solve problems with similar structures, including the Wasserstein barycenter (WB) problem with fixed supports. To verify the practical performance of the proposed method, we conduct extensive numerical experiments to solve a large set of discrete OT and WB benchmark problems. Our numerical results show that the proposed method is efficient compared to state-of-the-art linear programming (LP) solvers. Moreover, the proposed method consumes less memory than existing LP solvers, which demonstrates the potential usage of our algorithm for solving large-scale OT and WB problems.

Keywords. Optimal transport, Wasserstein barycenter, linear programming, smoothing Newton method, Huber function

1 Introduction

We are interested in the discrete optimal transport (OT) problem that takes the form of

minXm×nC,Xs.t.Xen=a,XTem=b,X0,\min_{X\in\mathbb{R}^{m\times n}}\quad\left\langle C,X\right\rangle\quad\mathrm{s.t.}\quad Xe_{n}=a,\;X^{T}e_{m}=b,\;X\geq 0, (1)

where C+m×nC\in\mathbb{R}^{m\times n}_{+} denotes a certain cost matrix, a+ma\in\mathbb{R}^{m}_{+} and b+nb\in\mathbb{R}^{n}_{+} denote two discrete probability distributions satisfying emTa=enTb=1e_{m}^{T}a=e_{n}^{T}b=1, and eme_{m} (resp. ene_{n}) denotes the vector of all ones in m\mathbb{R}^{m} (resp. n\mathbb{R}^{n}). By the standard Lagrangian duality theory, the dual problem of (1) is given as the following maximization problem:

maxfm,gn,Zm×na,f+b,gs.t.fenT+emgT+Z=C,Z0.\max_{f\in\mathbb{R}^{m},g\in\mathbb{R}^{n},Z\in\mathbb{R}^{m\times n}}\quad\left\langle a,f\right\rangle+\left\langle b,g\right\rangle\quad\mathrm{s.t.}\quad fe_{n}^{T}+e_{m}g^{T}+Z=C,\;Z\geq 0. (2)

Obviously, the feasible set of problem (1) is nonempty and compact. Hence, the primal problem (1) has a finite optimal value that is attained. Since strong duality always holds for linear programming, the following KKT optimality conditions for (1)–(2):

Xen=a,XTem=b,fenT+emgT+Z=C,X0,Z0,X,Z=0Xe_{n}=a,\quad X^{T}e_{m}=b,\quad fe_{n}^{T}+e_{m}g^{T}+Z=C,\quad X\geq 0,\;Z\geq 0,\;\left\langle X,Z\right\rangle=0 (3)

admit at least one solution. Note that the last three conditions in (3) can be reformulated as the nonsmooth system XΠ+(XσZ)=0X-\Pi_{+}(X-\sigma Z)=0, where Π+()\Pi_{+}(\cdot) is the projection operator onto the nonnegative orthant +m×n\mathbb{R}^{m\times n}_{+} and σ>0\sigma>0 is any positive constant.
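As a simple illustration, the optimality of a candidate point (X, f, g, Z) can be checked by evaluating the residuals of (3), with the complementarity conditions replaced by the projection residual just described. The following Python sketch (the helper name and the use of numpy are our own choices, not part of the paper) does exactly this:

    import numpy as np

    def kkt_residuals(X, f, g, Z, C, a, b, sigma=1.0):
        # Residuals of the KKT system (3). The conditions X >= 0, Z >= 0,
        # <X, Z> = 0 are replaced by the equivalent nonsmooth equation
        # X - Pi_+(X - sigma * Z) = 0, with Pi_+ the projection onto the
        # nonnegative orthant (elementwise max with zero).
        m, n = C.shape
        r_primal = np.concatenate([X @ np.ones(n) - a, X.T @ np.ones(m) - b])
        r_dual = np.outer(f, np.ones(n)) + np.outer(np.ones(m), g) + Z - C
        r_proj = X - np.maximum(X - sigma * Z, 0.0)
        return r_primal, r_dual, r_proj

All three residuals vanish exactly at a primal-dual optimal pair.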

The OT problem has a large number of important applications, due partly to the essential metric property of its optimal objective value. In particular, the optimal objective value of the OT problem defines a distance (i.e., the Wasserstein distance) between two probability distributions if the cost matrix CC is constructed from a distance; see for instance [1, 2, 3, 4, 5, 6, 7] and references therein for fundamental properties of OT problems together with many important applications in a wide variety of fields including imaging science, engineering, statistics and machine learning, management, finance, and beyond. For example, a representative application of the Wasserstein distance is the Wasserstein barycenter (WB) problem, which aims to compute the mean of a set of discrete probability distributions under the Wasserstein distance [8]. It has shown promising performance in many fields such as machine learning [9, 4, 10], image processing [11], and economics [12, 13]. For a set of discrete probability distributions with finite support points, a WB problem with its support points pre-specified can be reformulated as a linear programming (LP) problem [14] that shares a similar structure with the OT problem; see section 4 for a detailed description of the WB problem with fixed supports.

As real-world applications typically produce large-scale OT problems, computing the optimal transportation plans efficiently and accurately at scale has received growing attention. Indeed, the last few years have seen a blossoming of interest in developing efficient solution methods for solving OT problems. First, if we treat the OT problem as a special case of a general LP problem, it is clear that any solver for LP can be applied. In particular, it is commonly accepted (based on the benchmarks conducted by Hans D. Mittelmann; see http://plato.asu.edu/) that primal-dual interior point methods (IPMs) [15] and simplex methods [16], such as the highly optimized commercial solvers GUROBI, MOSEK and CPLEX, and the open-source solver HiGHS, are the most efficient and robust solvers for general LPs. Moreover, when an LP problem (including the OT problem) has many more variables than linear constraints, [17] provides strong evidence that the dual-based inexact augmented Lagrangian method is also highly efficient and consumes less memory than IPMs. On the other hand, it is known from the literature (see e.g., [4, Chapter 3]) that the OT problem can be viewed as an optimal network flow problem on a bipartite graph with (m+n)(m+n) nodes, where aa and bb are regarded as the sources and sinks, respectively. Then, the network simplex method [18] can also be applied to solve the OT problem exactly. In fact, as demonstrated in [19], the network simplex method (implemented in CPLEX) and its variant (such as the transportation simplex method [20]) are very efficient and robust. Last, for applications in which highly accurate solutions are not required, entropic regularized algorithms (see e.g., the Sinkhorn algorithm [21] and its stabilized variants [22]), entropic and/or Bregman proximal point methods (see e.g., [23, 24]), and multiscale approaches [25, 26, 27] are applicable and popular.

When it comes to the WB problem, the problem scale is usually much larger than that of the OT problem due to the need to compute multiple transport plans for the given discrete distributions. Thus, it is quite challenging to solve WB problems by classical methods such as IPMs or simplex methods. To overcome the computational challenges, one promising approach is to consider the entropic regularized problem [28, 9], for which scalable Sinkhorn-based algorithms can be developed for solving large-scale problems (see e.g., [29, 30, 31]). However, the entropic regularized problem typically can only produce a rough approximate solution to the original WB problem. To obtain better approximate solutions, it is necessary to choose a small regularization parameter, but this generally causes numerical instability and inefficiency in the Sinkhorn-based algorithms. Thus, in this paper, we aim to solve the OT and WB problems directly without introducing an entropic regularization.

In this paper, we focus on solving the KKT system (3) directly by Newton-type methods. We know that the OT problem (1) has many more variables than the number of linear constraints. As a consequence, an optimal solution (not necessarily unique) of an OT problem is generally highly sparse. In fact, as an optimal network flow problem, the OT problem admits an optimal solution having at most m+nm+n nonzero entries (see e.g., [4, Section 3.4]). It is well-known that a primal-dual IPM applies the Newton method for solving a sequence of perturbed KKT systems. This approach is shown to be highly efficient and robust in general. However, as we will explain in section 2, the primal-dual IPM needs to solve a sequence of dense and highly ill-conditioned linear systems which require excessive computational costs. Moreover, as the iterates are required to stay in the positive orthant, IPM is incapable of exploiting the solution sparsity when solving the OT problem.

The main issue that we want to address in this paper is: How can we exploit the solution sparsity in designing a Newton-type method for solving the KKT system (3) directly? One popular way is to combine an IPM with the column generation technique; see e.g., [32, 33] for more details. However, the column generation technique is effective only if one is able to identify the correct active set efficiently, which requires sophisticated pricing strategies that are usually highly heuristic. Another approach is to apply the semismooth Newton method [34] to solve the original KKT system directly. It is shown that, under suitable conditions, the semismooth Newton (SSN) method has a local superlinear or even quadratic convergence rate. However, there exist some practical and theoretical issues when applying the SSN method, as we shall explain later in section 2. Among these issues, the most critical one is that the global convergence of SSN cannot be guaranteed due to the lack of a suitable merit function for line search.

To address the question just raised, we propose a squared smoothing Newton method via the Huber function for solving OT problems. The proposed method admits appealing global convergence properties, and it is able to take advantage of the underlying sparsity structure of the optimal solution. In addition, the proposed method enjoys superlinear convergence to an accumulation point. Extensive numerical experiments conducted on solving OT problems generated from the DOTmark collection [19] show the promising practical performance of our proposed method when compared to highly optimized LP solvers and the network simplex method. Besides the successful application to OT problems, we further extend the method to solve WB problems with fixed supports, and provide an efficient way to solve the corresponding Newton system. Numerical results on real image datasets indicate that the proposed method outperforms both the primal-dual IPM and the dual simplex method implemented in the highly optimized commercial solver Gurobi.

The rest of this paper is organized as follows. In section 2, to motivate the development of our proposed algorithm, we briefly introduce the main ideas of the primal-dual IPM and the semismooth Newton method for solving OT problems, and explain some of the limitations of both methods. Then in section 3, we present our main algorithm, i.e., a squared smoothing Newton method via the Huber smoothing function, and conduct rigorous convergence analysis. In section 4, we extend our algorithm to solve WB problems with fixed supports, and explain how to solve the underlying Newton equations efficiently. To evaluate the practical performance of the proposed method, we conduct numerical experiments on numerous OT and WB problems with different sizes in section 5. Finally, we conclude our paper in section 6.

2 Preliminary

In this article, the following notation is used. For any finite dimensional Euclidean space 𝔼\mathbb{E} equipped with an inner product ,\left\langle\cdot,\cdot\right\rangle, we denote its induced norm by \left\lVert\cdot\right\rVert. We denote the nonnegative orthant of n\mathbb{R}^{n} as +n\mathbb{R}^{n}_{+}. For any positive integer nn, enne_{n}\in\mathbb{R}^{n} denotes the vector of all ones and Inn×nI_{n}\in\mathbb{R}^{n\times n} denotes the identity matrix. For Xm×nX\in\mathbb{R}^{m\times n}, we use vec(X)\mathrm{vec}(X) to denote the vector in mn\mathbb{R}^{mn} obtained by stacking the columns of XX. Conversely, X=Mat(x)X=\mathrm{Mat}(x) is the unique matrix such that x=vec(X)x=\mathrm{vec}(X). For given matrices AA and BB, ABA\otimes B denotes the Kronecker product of AA and BB. Additionally, for any matrix XX such that AXBAXB is well-defined, it holds that vec(AXB)=(BTA)vec(X)\mathrm{vec}(AXB)=(B^{T}\otimes A)\mathrm{vec}(X). We also use \circ to denote the element-wise multiplication operator of two vectors/matrices of the same size. For any vector xx, Diag(x)\mathrm{Diag}(x) denotes the diagonal matrix with xx on its diagonal. Conversely, for any square matrix XX, diag(X)\mathrm{diag}(X) is the vector consisting of the diagonal entries of XX. For any vector xx, the projection of xx onto +n\mathbb{R}^{n}_{+} is denoted by Π+(x):=argmin{zx2:z0}=max{x,0}\Pi_{+}(x):=\mathrm{argmin}\{\left\lVert z-x\right\rVert^{2}\;:\;z\geq 0\}=\max\{x,0\}, where the max\max operation is applied elementwise on xx.

In this section, we shall briefly discuss the primal-dual IPM and the semismooth Newton (SSN) method, which are both Newton-type methods for handling the KKT system (3). We also mention some theoretical and practical issues that we may face when applying these two methods to solving the OT problem.

We start by introducing some notation that would simplify our presentation. Let x:=vec(X)mnx:=\mathrm{vec}(X)\in\mathbb{R}^{mn}. By using the properties of the Kronecker product, we see that

Xen=(enTIm)x,XTem=(InemT)x.Xe_{n}=(e_{n}^{T}\otimes I_{m})x,\quad X^{T}e_{m}=(I_{n}\otimes e_{m}^{T})x.

Denote z:=vec(Z)mnz:=\mathrm{vec}(Z)\in\mathbb{R}^{mn}, c:=vec(C)mnc:=\mathrm{vec}(C)\in\mathbb{R}^{mn}, d:=(a;b)m+nd:=(a;b)\in\mathbb{R}^{m+n}, y:=(f;g)m+ny:=(f;g)\in\mathbb{R}^{m+n} and A:=(enTImInemT)A:=\begin{pmatrix}e_{n}^{T}\otimes I_{m}\\ I_{n}\otimes e_{m}^{T}\end{pmatrix}. Then we can rewrite the KKT systems (3) as follows:

Ax=d,ATy+z=c,xz=0,x0,z0.Ax=d,\quad A^{T}y+z=c,\quad x\circ z=0,\;x\geq 0,\;z\geq 0. (4)
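Because of the Kronecker structure of A, matrix-vector products with A and A^T never require forming A explicitly: Ax collects the row and column sums of Mat(x), while A^T y assembles f e_n^T + e_m g^T. A minimal numpy sketch (helper names are ours; the column-major reshaping matches the vec(·) convention above):

    import numpy as np

    def A_mul(x, m, n):
        # A x = (X e_n ; X^T e_m) with X = Mat(x) (column-major reshape).
        X = x.reshape((m, n), order='F')
        return np.concatenate([X.sum(axis=1), X.sum(axis=0)])

    def AT_mul(y, m, n):
        # A^T y = vec(f e_n^T + e_m g^T) with y = (f; g).
        f, g = y[:m], y[m:]
        return (f[:, None] + g[None, :]).reshape(-1, order='F')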

2.1 Primal-dual interior point methods

Recall that a primal-dual IPM solves the KKT system (3) indirectly by solving a sequence of perturbed KKT systems of the form:

Ax=d,ATy+z=c,xz=μemn,x0,z0,Ax=d,\quad A^{T}y+z=c,\quad x\circ z=\mu e_{mn},\;x\geq 0,\;z\geq 0, (5)

via the Newton method, where μ>0\mu>0 is the barrier parameter that is driven to zero. At each IPM iteration, the following Newton equation is solved:

(A000ATImnDiag(z)0Diag(x))(ΔxΔyΔz)=(rp:=dAxrd:=cATyzrc:=γμemnxz),\begin{pmatrix}A&0&0\\ 0&A^{T}&I_{mn}\\ \mathrm{Diag}(z)&0&\mathrm{Diag}(x)\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}=\begin{pmatrix}r_{p}:=d-Ax\\ r_{d}:=c-A^{T}y-z\\ r_{c}:=\gamma\mu e_{mn}-x\circ z\end{pmatrix}, (6)

where (x,y,z)++mn×m+n×++mn(x,y,z)\in\mathbb{R}^{mn}_{++}\times\mathbb{R}^{m+n}\times\mathbb{R}^{mn}_{++} is the current primal-dual iterate, (Δx,Δy,Δz)mn×m+n×mn(\Delta x,\Delta y,\Delta z)\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\times\mathbb{R}^{mn} is the Newton direction, μ\mu is set to be x,z/(mn)\langle x,z\rangle/(mn), and γ[0,1]\gamma\in[0,1] is a given parameter. To reduce the computational cost, the large-scale system of 2mn+m+n2mn+m+n linear equations is typically reduced to the so-called normal system of m+nm+n equations in Δy\Delta y through the block elimination of Δx\Delta x and Δz\Delta z. Specifically, we solve the normal equation:

AΘATΔy=rp+AΘ(rdDiag(x)1rc),A\Theta A^{T}\Delta y=r_{p}+A\Theta(r_{d}-\mathrm{Diag}(x)^{-1}r_{c}), (7)

where Θ:=Diag(θ)mn×mn\Theta:=\mathrm{Diag}(\theta)\in\mathbb{R}^{mn\times mn} is a diagonal matrix whose diagonal entries are θi=xi/zi\theta_{i}=x_{i}/z_{i}, for i=1,,mni=1,\dots,mn. After obtaining Δy\Delta y by solving (7), we can compute Δx\Delta x and Δz\Delta z accordingly. To solve (7) efficiently, the following structure of AΘATA\Theta A^{T} is useful.

Fact 1.

For any diagonal matrix Θ:=Diag(θ)mn×mn\Theta:=\mathrm{Diag}(\theta)\in\mathbb{R}^{mn\times mn}, let V:=Mat(θ)m×nV:=\mathrm{Mat}(\theta)\in\mathbb{R}^{m\times n}, then it holds that

AΘAT=(Diag(Ven)VVTDiag(VTem)).A\Theta A^{T}=\begin{pmatrix}\mathrm{Diag}(Ve_{n})&V\\ V^{T}&\mathrm{Diag}(V^{T}e_{m})\end{pmatrix}.
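Fact 1 is easy to verify numerically on small random data; the following sketch (assuming numpy, with column-major vec ordering) builds both sides explicitly:

    import numpy as np

    m, n = 4, 3
    rng = np.random.default_rng(0)
    theta = rng.random(m * n)                      # diagonal of Theta
    V = theta.reshape((m, n), order='F')           # V = Mat(theta)

    # A = [e_n^T kron I_m ; I_n kron e_m^T], formed explicitly for the test.
    A = np.vstack([np.kron(np.ones(n), np.eye(m)),
                   np.kron(np.eye(n), np.ones(m))])

    lhs = A @ np.diag(theta) @ A.T
    rhs = np.block([[np.diag(V.sum(axis=1)), V],
                    [V.T, np.diag(V.sum(axis=0))]])
    assert np.allclose(lhs, rhs)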

As we can see, the iterates xx and zz generated by an IPM must be positive because of the third equation in (5). As a consequence, the matrix V=Mat(θ)V=\mathrm{Mat}(\theta) is dense and AΘATA\Theta A^{T} contains two fully dense blocks. Therefore, factorizing AΘATA\Theta A^{T} in (7) can be very expensive when m+nm+n is large. As a result, solving the normal system by an iterative solver is the only viable option. Moreover, it is obvious that AΘATA\Theta A^{T} is always singular, since the vector (em;en)m+n(e_{m};-e_{n})\in\mathbb{R}^{m+n} is an eigenvector of AΘATA\Theta A^{T} corresponding to the zero eigenvalue. Note that the singularity of the coefficient matrix may not be an issue since one can remove the last row of AA in a preprocessing phase, which results in a normal equation having a coefficient matrix of the form

(Diag(Ven)V^V^TDiag(V^Tem))(m+n1)×(m+n1),\begin{pmatrix}\mathrm{Diag}(Ve_{n})&\hat{V}\\ \hat{V}^{T}&\mathrm{Diag}(\hat{V}^{T}e_{m})\end{pmatrix}\in\mathbb{R}^{(m+n-1)\times(m+n-1)},

where V^m×(n1)\hat{V}\in\mathbb{R}^{m\times(n-1)} is formed from the first n1n-1 columns of VV. Then, by [35, Lemma 2.3], we see that the resulting coefficient matrix is nonsingular. However, ill-conditioning remains a critical difficulty that can prevent an iterative solver from converging quickly. In view of the above discussion, IPMs may not perform as efficiently as one would expect when solving large-scale OT problems.

2.2 The semismooth Newton method

In this subsection, we shall discuss the SSN method. We first revisit the concept of semismoothness which was originally introduced in [36] for functionals and later extended for vector-valued functions in [34]. Let 𝕏\mathbb{X} and 𝕐\mathbb{Y} be two finite dimensional Euclidean spaces and F:𝕏𝕐F:\mathbb{X}\rightarrow\mathbb{Y} be a locally Lipschitz continuous function. According to Rademacher’s Theorem [37], FF is F-differentiable almost everywhere, and the Clarke’s generalized Jacobian [38] of FF at x𝕏x\in\mathbb{X}, denoted by F(x)\partial F(x), is well-defined and it holds that

F(x):=conv(BF(x)),BF(x):={limDFzxF(z)},x𝕏,\partial F(x):=\mathrm{conv}(\partial_{B}F(x)),\quad\partial_{B}F(x):=\left\{\lim_{D_{F}\ni z\rightarrow x}F^{\prime}(z)\right\},\quad\forall\;x\in\mathbb{X},

where “conv\mathrm{conv}” denotes the convex hull operation and DF𝕏D_{F}\subset\mathbb{X} is the set of points at which FF is F-differentiable.

If FF is directionally differentiable at x𝕏x\in\mathbb{X}, i.e., the limit

F(x;d):=limt0F(x+td)F(x)tF^{\prime}(x;d):=\lim_{t\downarrow 0}\frac{F(x+td)-F(x)}{t}

exists for all d𝕏d\in\mathbb{X}, and F(x+d)F(x)Vd=o(d),VF(x+d),d0,\left\lVert F(x+d)-F(x)-Vd\right\rVert=o(\left\lVert d\right\rVert),\;\forall V\in\partial F(x+d),\;\forall d\rightarrow 0, we say that FF is semismooth at xx. If FF is semismooth at x𝕏x\in\mathbb{X} and it holds that

F(x+d)F(x)Vd=O(d2),VF(x+d),d0,\left\lVert F(x+d)-F(x)-Vd\right\rVert=O(\left\lVert d\right\rVert^{2}),\quad\forall V\in\partial F(x+d),\;\forall d\rightarrow 0,

then FF is said to be strongly semismooth at xx. Moreover, FF is said to be (strongly) semismooth everywhere on 𝕏\mathbb{X}, if FF is (strongly) semismooth at any point x𝕏x\in\mathbb{X}.

Convex functions, smooth functions, and piecewise linear functions are examples of semismooth functions. In particular, one can verify that the projection operator onto the nonnegative orthant is strongly semismooth everywhere. Hence, the KKT mapping F(x,y,z)F(x,y,z) of the following equation is also strongly semismooth everywhere:

0=F(x,y,z):=(AxdATyz+cxΠ+(xσz)),(x,y,z)mn×m+n×mn,0\;=\;F(x,y,z):=\begin{pmatrix}Ax-d\\ -A^{T}y-z+c\\ x-\Pi_{+}(x-\sigma z)\end{pmatrix},\quad(x,y,z)\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\times\mathbb{R}^{mn}, (8)

where σ>0\sigma>0 is a given constant. Therefore, a natural idea is to apply the semismooth Newton (SSN) method [34], i.e., a nonsmooth version of Newton method, for solving (8). In particular, at each iteration of the SSN method, the following linear system is solved

(A000ATImnImnV0σV)(ΔxΔyΔz)=(rp:=dAxrd:=ATy+zcrc:=Π+(xσz)x),\begin{pmatrix}A&0&0\\ 0&-A^{T}&-I_{mn}\\ I_{mn}-V&0&\sigma V\end{pmatrix}\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}=\begin{pmatrix}r_{p}:=d-Ax\\ r_{d}:=A^{T}y+z-c\\ r_{c}:=\Pi_{+}(x-\sigma z)-x\end{pmatrix}, (9)

where (x,y,z)(x,y,z) denotes the current iterate and V=Diag(v)mn×mnV=\mathrm{Diag}(v)\in\mathbb{R}^{mn\times mn} is an element in the Clarke’s generalized Jacobian (see [38]) of Π+(xσz)\Pi_{+}(x-\sigma z). Specifically, for given (x,z)(x,z), we may choose

vi={1,if (xσz)i0,0,otherwise,i=1,,mn.v_{i}=\left\{\begin{array}[]{ll}1,&\textrm{if }\;(x-\sigma z)_{i}\geq 0,\\ 0,&\textrm{otherwise},\end{array}\right.\quad i=1,\dots,mn.
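For illustration, this particular choice of the diagonal of V can be computed in one vectorized line; a minimal numpy sketch (the helper name is ours):

    import numpy as np

    def ssn_jacobian_diag(x, z, sigma=1.0):
        # Diagonal v of one element V = Diag(v) of the generalized Jacobian
        # of Pi_+ at x - sigma*z: v_i = 1 if (x - sigma*z)_i >= 0, else 0.
        return (x - sigma * z >= 0).astype(float)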

By the expression of VV, we see that when (x,z)(x,z) is nearly optimal, VV could have a large number of zero diagonal elements. Thus, the sparsity structure of the optimal solution can be utilized in the SSN method, which is an attractive feature when compared with the IPM. However, there are two issues that prevent us from applying the SSN method directly:

  1. 1.

    The mapping FF is nonsmooth due to the nonsmoothness of Π+\Pi_{+}. Hence, it is difficult to find a valid merit function for which a line search scheme for the SSN method is well-defined. As a consequence, the globalization of the SSN method is a challenging issue that has yet to be resolved.

  2. 2.

    It is not clear how to reduce the large-scale linear system (9), which is typically singular, to a certain normal linear system to save computational cost as in the case of the interior point method.

3 A Squared Smoothing Newton Method

Note that the KKT equation (8) can be reduced to finding the root of the following mapping E(x,y)E(x,y):

0=E(x,y):=(AxdxΠ+(x+σ(ATyc))),(x,y)mn×m+n,0\;=\;E(x,y):=\begin{pmatrix}Ax-d\\ x-\Pi_{+}(x+\sigma(A^{T}y-c))\end{pmatrix},\quad\;(x,y)\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}, (10)

by eliminating the variable z=cATyz=c-A^{T}y. In this section, we focus on presenting our main algorithm, the squared smoothing Newton method via the Huber smoothing function, for solving the nonsmooth system (10). Existing smoothing Newton methods have been developed and studied extensively and were mainly applied for solving conic programming problems, complementarity problems, and variational inequalities. We refer the readers to [39, 40, 41, 42] for more details.

3.1 The Huber smoothing function

The key idea of a smoothing method [43] is to construct a smoothing approximation function :++×mn×m+nmn×m+n\mathcal{E}:\mathbb{R}_{++}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\rightarrow\mathbb{R}^{mn}\times\mathbb{R}^{m+n} for the mapping EE, and then apply the Newton method for solving the smoothed system (ϵ,x,y)=0\mathcal{E}(\epsilon,x,y)=0, for (ϵ,x,y)×mn×m+n(\epsilon,x,y)\in\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}. To this end, it is essential to construct a smoothing approximation of the projection operator Π+()\Pi_{+}(\cdot), namely, Φ:++×mnmn\Phi:\mathbb{R}_{++}\times\mathbb{R}^{mn}\rightarrow\mathbb{R}^{mn}, such that for any ϵ>0\epsilon>0, Φ(ϵ,)\Phi(\epsilon,\cdot) is continuously differentiable on mn\mathbb{R}^{mn} and for any xmnx\in\mathbb{R}^{mn}, it holds that Φ(ϵ,x)Π+(x)0\left\lVert\Phi(\epsilon,x)-\Pi_{+}(x)\right\rVert\rightarrow 0 as ϵ0\epsilon\downarrow 0.

Note that a smoothing approximation function of Π+()\Pi_{+}(\cdot) strongly depends on the smoothing function for the plus function

π(t)=max{0,t},t.\pi(t)=\max\{0,t\},\quad\forall\;t\in\mathbb{R}.

To the best of our knowledge, the Chen-Harker-Kanzow-Smale (CHKS) smoothing function [44, 45, 46], defined as

ξ(ϵ,t):=12(t2+4ϵ2+t),(ϵ,t)×,\xi(\epsilon,t):=\frac{1}{2}\left(\sqrt{t^{2}+4\epsilon^{2}}+t\right),\quad\forall\;(\epsilon,t)\in\mathbb{R}\times\mathbb{R},

is the most commonly used smoothing function for π()\pi(\cdot) in the literature. Readers are referred to [41] for other smoothing functions, whose properties are also well studied. It is easy to see that ξ(ϵ,t)\xi(\epsilon,t) maps any negative number tt to a positive value when ϵ0\epsilon\neq 0, while the function π()\pi(\cdot) maps any negative number tt to zero. Hence, the CHKS smoothing function does not preserve the sparsity structure of π()\pi(\cdot). As a result, a smoothing Newton method developed based on the CHKS smoothing function is not able to exploit the sparsity in the solutions of the OT problem to reduce the cost of solving the linear system at each iteration of the algorithm.
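A tiny numerical illustration of this point (a sketch assuming numpy; the plus function itself, and the Huber function introduced next, return exactly zero for negative arguments):

    import numpy as np

    def chks(eps, t):
        # Chen-Harker-Kanzow-Smale smoothing of max(0, t).
        return 0.5 * (np.sqrt(t**2 + 4.0 * eps**2) + t)

    t = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(chks(0.1, t))        # strictly positive even where t < 0
    print(np.maximum(t, 0.0))  # the plus function is exactly zero there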

To resolve the issue just mentioned, we propose to use the Huber smoothing function (a translation of the function considered in [47]) which also maps any negative number to zero. In particular, the Huber smoothing function we shall use in this paper is defined as:

h(ϵ,t):={t|ϵ|2,t|ϵ|,t22|ϵ|,0<t<|ϵ|,0,t0,(ϵ,t)\{0}×,h(0,t)=π(t),t.h(\epsilon,t):=\left\{\begin{array}[]{ll}t-\frac{\lvert\epsilon\rvert}{2},&t\geq\lvert\epsilon\rvert,\\ \frac{t^{2}}{2\lvert\epsilon\rvert},&0<t<\lvert\epsilon\rvert,\\ 0,&t\leq 0,\end{array}\right.\quad\forall\;(\epsilon,t)\in\mathbb{R}\backslash\{0\}\times\mathbb{R},\quad h(0,t)=\pi(t),\quad\forall t\in\mathbb{R}. (11)

Clearly, h(ϵ,t)h(\epsilon,t) is continuously differentiable for ϵ0\epsilon\neq 0 and tt\in\mathbb{R}. In fact, we see that for ϵ0\epsilon\neq 0,

h1(ϵ,t):=hϵ(ϵ,t)={12sign(ϵ),t|ϵ|,t22ϵ2sign(ϵ),0<t<|ϵ|,0,t0,h2(ϵ,t):=ht(ϵ,t)={1,t|ϵ|,t|ϵ|,0<t<|ϵ|,0,t0,.h_{1}^{\prime}(\epsilon,t):=\frac{\partial h}{\partial\epsilon}(\epsilon,t)=\left\{\begin{array}[]{ll}-\frac{1}{2}\mathrm{sign}(\epsilon),&t\geq\lvert\epsilon\rvert,\\ -\frac{t^{2}}{2\epsilon^{2}}\mathrm{sign}(\epsilon),&0<t<\lvert\epsilon\rvert,\\ 0,&t\leq 0,\end{array}\right.\quad h_{2}^{\prime}(\epsilon,t):=\frac{\partial h}{\partial t}(\epsilon,t)=\left\{\begin{array}[]{ll}1,&t\geq\lvert\epsilon\rvert,\\ \frac{t}{\lvert\epsilon\rvert},&0<t<\lvert\epsilon\rvert,\\ 0,&t\leq 0,\end{array}\right..
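For reference, a vectorized numpy sketch of (11) and the two partial derivatives above, valid for ϵ ≠ 0 (function names are ours):

    import numpy as np

    def huber(eps, t):
        # Huber smoothing (11) of the plus function, for eps != 0.
        ae = abs(eps)
        return np.where(t >= ae, t - 0.5 * ae,
                        np.where(t > 0.0, t**2 / (2.0 * ae), 0.0))

    def huber_d_eps(eps, t):
        # Partial derivative h_1'(eps, t) with respect to eps.
        ae, s = abs(eps), np.sign(eps)
        return np.where(t >= ae, -0.5 * s,
                        np.where(t > 0.0, -(t**2) / (2.0 * eps**2) * s, 0.0))

    def huber_d_t(eps, t):
        # Partial derivative h_2'(eps, t) with respect to t.
        ae = abs(eps)
        return np.where(t >= ae, 1.0, np.where(t > 0.0, t / ae, 0.0))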

Moreover, it is easy to check that for any tt\in\mathbb{R},

Bh(0,t)=\displaystyle\partial_{B}h(0,t)= {vv=limϵk0,tkth(ϵk,tk),if exists}={(±12,1),t>0,{(±12ξ2,ξ)ξ[0,1]},t=0,(0,0),t<0,\displaystyle\;\left\{v\mid v=\lim_{{\epsilon^{k}\rightarrow 0},t^{k}\rightarrow t}h^{\prime}(\epsilon^{k},t^{k}),\;\textrm{if exists}\right\}=\left\{\begin{array}[]{ll}\left(\pm\frac{1}{2},1\right),&t>0,\\[3.0pt] \left\{\left(\pm\frac{1}{2}\xi^{2},\xi\right)\mid\xi\in[0,1]\right\},&t=0,\\[3.0pt] (0,0),&t<0,\end{array}\right.

and consequently,

h(0,t)=conv(Bh(0,t))={T+,t>0T0,t=0,{(0,0)},t<0,\partial h(0,t)=\mathrm{conv}(\partial_{B}h(0,t))=\left\{\begin{array}[]{ll}T_{+},&t>0\\ T_{0},&t=0,\\ \{(0,0)\},&t<0,\end{array}\right. (12)

where

T+:={(ν,1)ν[12,12]},T0:={(ν,ξ)ξ[0,1],ν[12ξ,12ξ]}.\displaystyle T_{+}:=\;\left\{(\nu,1)\mid\nu\in\left[-\frac{1}{2},\frac{1}{2}\right]\right\},\quad{T_{0}:=\;\left\{\left(\nu,\xi\right)\mid\xi\in\left[0,1\right],\nu\in\left[-\frac{1}{2}\xi,\frac{1}{2}\xi\right]\right\}.}

Recall that the generalized Jacobian of the plus function π\pi has the following form

π(t)={{1},t>0,[0,1],t=0,{0},t<0,t.\partial\pi(t)=\left\{\begin{array}[]{ll}\{1\},&t>0,\\ \left[0,1\right],&t=0,\\ \{0\},&t<0,\end{array}\right.\quad\forall t\in\mathbb{R}.

Therefore, we can see that for any vh(0,t)v\in\partial h(0,t), there exists a uπ(t)u\in\partial\pi(t) such that v(0,h)u(h)v(0,h)\equiv u(h) for any hh\in\mathbb{R}. One can also verify that the converse statement is true.

3.2 The main algorithm

Using the Huber function h(,)h(\cdot,\cdot), we can construct the smoothing approximation for the projection operator Π+()\Pi_{+}(\cdot) as

Φ(ϵ,x):=(h(ϵ,x1),,h(ϵ,xmn))T,xmn,ϵ0.\Phi(\epsilon,x):=(h(\epsilon,x_{1}),\dots,h(\epsilon,x_{mn}))^{T},\quad\forall x\in\mathbb{R}^{mn},\;\epsilon\neq 0. (13)

Given the above smoothing function Φ()\Phi(\cdot) for Π+()\Pi_{+}(\cdot), a smoothing approximation of E(,)E(\cdot,\cdot) can be constructed accordingly as follows:

(ϵ,x,y):=(Ax+κpϵyd(1+κcϵ)xΦ(ϵ,x+σ(ATyc))),(ϵ,x,y)++×mn×m+n,\mathcal{E}(\epsilon,x,y):=\begin{pmatrix}Ax+\kappa_{p}\epsilon y-d\\ (1+\kappa_{c}\epsilon)x-\Phi(\epsilon,x+\sigma(A^{T}y-c))\end{pmatrix},\quad\forall(\epsilon,x,y)\in\mathbb{R}_{++}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}, (14)

where κp>0\kappa_{p}>0 and κc>0\kappa_{c}>0 are two given parameters. Note that adding the perturbation terms κpϵy\kappa_{p}\epsilon y and κcϵx\kappa_{c}\epsilon x is essential in our algorithmic design, since these perturbations will ensure that the Jacobian of \mathcal{E} at (ϵ,x,y)(\epsilon,x,y) is nonsingular when ϵ>0\epsilon>0.
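Putting the pieces together, the smoothed mapping (14) can be evaluated as in the sketch below, which reuses the hypothetical A_mul, AT_mul and huber helpers from the earlier snippets; the default values of κ_p, κ_c and σ are illustrative choices, not prescribed by the paper:

    import numpy as np

    def smoothed_E(eps, x, y, c, d, m, n, kappa_p=1.0, kappa_c=1.0, sigma=1.0):
        # Evaluate the smoothed KKT mapping (14) at (eps, x, y).
        w = x + sigma * (AT_mul(y, m, n) - c)
        top = A_mul(x, m, n) + kappa_p * eps * y - d
        bottom = (1.0 + kappa_c * eps) * x - huber(eps, w)
        return np.concatenate([top, bottom])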

In the current literature, smoothing Newton methods can be roughly divided into two groups, the Jacobian smoothing Newton methods and the squared smoothing Newton methods (see [48] for a comprehensive survey). The global convergence of a Jacobian smoothing Newton method strongly relies on the so-called Jacobian consistency property and the existence of a positive constant κ\kappa such that

(ϵ,x,y)E(x,y)κϵ,ϵ>0,(x,y)mn×m+n.\left\lVert\mathcal{E}(\epsilon,x,y)-E(x,y)\right\rVert\leq\kappa\epsilon,\quad\forall\epsilon>0,\;(x,y)\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}. (15)

Unfortunately, the property (15) does not hold for (14). Therefore, in our setting, the Jacobian smoothing Newton method is not applicable. On the other hand, the squared smoothing Newton method tries to find a solution of (ϵ,x,y)=0\mathcal{E}(\epsilon,x,y)=0 by solving the following system

^(ϵ,x,y):=(ϵ(ϵ,x,y))=0,(ϵ,x,y)×mn×m+n,\widehat{\mathcal{E}}(\epsilon,x,y):=\begin{pmatrix}\epsilon\\ \mathcal{E}(\epsilon,x,y)\end{pmatrix}=0,\quad(\epsilon,x,y)\in\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}, (16)

via the Newton method. As we shall see shortly, the global convergence of the squared smoothing Newton (SqSN) method does not rely on strong conditions such as (15). This explains why we choose the SqSN method in the present paper.

Associated with the system (16), we can define the merit function ϕ:×mn×m+n\phi:\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\rightarrow\mathbb{R}:

ϕ(ϵ,x,y):=^(ϵ,x,y)2,(ϵ,x,y)×mn×m+n,\phi(\epsilon,x,y):=\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert^{2},\quad(\epsilon,x,y)\in\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n},

to ensure that the line search procedure within the SqSN method is well-defined. We also define an auxiliary function ζ:×mn×m+n\zeta:\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\rightarrow\mathbb{R}:

ζ(ϵ,x,y):=rmin{1,^(ϵ,x,y)1+τ},(ϵ,x,y)×mn×m+n,\zeta(\epsilon,x,y):=r\min\Big{\{}1,\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert^{1+\tau}\Big{\}},\quad\forall(\epsilon,x,y)\in\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n},

which controls how the smoothing parameter ϵ\epsilon is driven to zero, where r(0,1)r\in(0,1) and τ(0,1]\tau\in(0,1] are two given constants. Using these notations, we can now describe our squared smoothing Newton method via the Huber function in Algorithm 1.

Algorithm 1 A squared smoothing Newton (SqSN) method via the Huber function
1:Initial point (x0,y0)mn×m+n(x^{0},y^{0})\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}, ϵ0>0\epsilon^{0}>0, r(0,1)r\in(0,1) such that δ:=rϵ0<1\delta:=r\epsilon^{0}<1, τ(0,1]\tau\in(0,1], ρ(0,1)\rho\in(0,1), and μ(0,1/2)\mu\in(0,1/2).
2:for k0k\geq 0 do
3:     if ^(ϵk,xk,yk)=0\widehat{\mathcal{E}}(\epsilon^{k},x^{k},y^{k})=0 then
4:         Output: (ϵk,xk,yk)(\epsilon^{k},x^{k},y^{k});
5:     else
6:         Compute ζk=ζ(ϵk,xk,yk)\zeta_{k}=\zeta(\epsilon^{k},x^{k},y^{k});
7:         Find Δk:=(Δϵk;Δxk;Δyk)\Delta^{k}:=(\Delta\epsilon^{k};\Delta x^{k};\Delta y^{k}) via solving the following linear system of equations
^(ϵk,xk,yk)+^(ϵk,xk,yk)Δk=(ζkϵ0;0;0);\widehat{\mathcal{E}}(\epsilon^{k},x^{k},y^{k})+\widehat{\mathcal{E}}^{\prime}(\epsilon^{k},x^{k},y^{k})\Delta^{k}=(\zeta_{k}\epsilon^{0};0;0); (17)
8:         Compute k\ell_{k} as the smallest nonnegative integer \ell satisfying
ϕ(ϵk+ρΔϵk,xk+ρΔxk,yk+ρΔyk)[12μ(1δ)ρ]ϕ(ϵk,xk,yk);\phi(\epsilon^{k}+\rho^{\ell}\Delta\epsilon^{k},x^{k}+\rho^{\ell}\Delta x^{k},y^{k}+\rho^{\ell}\Delta y^{k})\leq\left[1-2\mu(1-\delta)\rho^{\ell}\right]\phi(\epsilon^{k},x^{k},y^{k});
9:         Update (ϵk+1,xk+1,yk+1)=(ϵk+ρkΔϵk,xk+ρkΔxk,yk+ρkΔyk)(\epsilon^{k+1},x^{k+1},y^{k+1})=(\epsilon^{k}+\rho^{\ell_{k}}\Delta\epsilon^{k},x^{k}+\rho^{\ell_{k}}\Delta x^{k},y^{k}+\rho^{\ell_{k}}\Delta y^{k});
10:     end if
11:end for
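To make the loop structure concrete, the following Python skeleton mirrors Algorithm 1 under the assumption that a routine E_hat evaluates (16) and a routine newton_direction solves (17) (for instance via the reduction to the normal equation (18) derived below); the parameter values, the tolerance-based stopping test, and the cap on backtracking steps are illustrative implementation choices:

    import numpy as np

    def sqsn(E_hat, newton_direction, eps0, x0, y0,
             r=0.5, tau=0.5, rho=0.5, mu=1e-4, tol=1e-8, max_iter=100):
        # Skeleton of Algorithm 1 (squared smoothing Newton via Huber).
        eps, x, y = eps0, x0, y0
        delta = r * eps0                       # requires r * eps0 < 1
        for _ in range(max_iter):
            Ek = E_hat(eps, x, y)              # hat{E}(eps, x, y)
            nrm = np.linalg.norm(Ek)
            if nrm <= tol:                     # stand-in for hat{E} = 0
                break
            zeta = r * min(1.0, nrm ** (1.0 + tau))
            d_eps, dx, dy = newton_direction(eps, x, y, zeta * eps0)
            phi = nrm ** 2
            alpha = 1.0
            for _ in range(50):                # line search: alpha = rho^l
                trial = E_hat(eps + alpha * d_eps, x + alpha * dx, y + alpha * dy)
                if (np.linalg.norm(trial) ** 2
                        <= (1.0 - 2.0 * mu * (1.0 - delta) * alpha) * phi):
                    break
                alpha *= rho
            eps, x, y = eps + alpha * d_eps, x + alpha * dx, y + alpha * dy
        return eps, x, y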

Before establishing the convergence properties of Algorithm 1, let us first see how to solve the linear system (17) and also explain the potential advantage of the SqSN method compared to the IPM and the SSN method. Suppose ϵk>0\epsilon^{k}>0; then we see that

\widehat{\mathcal{E}}^{\prime}(\epsilon^{k},x^{k},y^{k})=\begin{pmatrix}1&0&0\\ \kappa_{p}y^{k}&A&\kappa_{p}\epsilon^{k}I_{m+n}\\[3.0pt] \kappa_{c}x^{k}-V_{1}^{k}&(1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}&-\sigma V_{2}^{k}A^{T}\end{pmatrix},

where V1k:=Φ1(ϵk,wk)V_{1}^{k}:=\Phi_{1}^{\prime}(\epsilon^{k},w^{k}) and V2k:=Φ2(ϵk,wk)V_{2}^{k}:=\Phi_{2}^{\prime}(\epsilon^{k},w^{k}) denote the partial derivatives of Φ\Phi with respect to the first and second arguments at the referenced point with wk:=xkσ(cATyk)w^{k}:=x^{k}-\sigma(c-A^{T}y^{k}), respectively. Note also that |(V1k)i|[0,1/2]\left\lvert(V_{1}^{k})_{i}\right\rvert\in[0,1/2] and |(V2k)ii|[0,1]\left\lvert(V_{2}^{k})_{ii}\right\rvert\in[0,1] for any i=1,,mni=1,\dots,mn. From (17), we get Δϵk=ϵk+ζkϵ0\Delta\epsilon^{k}=-\epsilon^{k}+\zeta_{k}\epsilon^{0} and

(AκpϵkIm+n(1+κcϵk)ImnV2kσV2kAT)(ΔxkΔyk)=(rpkrck),\begin{pmatrix}A&\kappa_{p}\epsilon^{k}I_{m+n}\\[3.0pt] (1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}&-\sigma V_{2}^{k}A^{T}\end{pmatrix}\begin{pmatrix}\Delta x^{k}\\ \Delta y^{k}\end{pmatrix}=\begin{pmatrix}r_{p}^{k}\\ r_{c}^{k}\end{pmatrix},

where

rpk:=\displaystyle r_{p}^{k}:= dAxkκpϵkykκpykΔϵk,\displaystyle\;d-Ax^{k}-\kappa_{p}\epsilon^{k}y^{k}-\kappa_{p}y^{k}\Delta\epsilon^{k},
rck:=\displaystyle r_{c}^{k}:= Φ(ϵk,xk+σ(ATykc))(1+κcϵk)xk(κcxkV1k)Δϵk.\displaystyle\;\Phi(\epsilon^{k},x^{k}+\sigma(A^{T}y^{k}-c))-(1+\kappa_{c}\epsilon^{k})x^{k}-(\kappa_{c}x^{k}-V_{1}^{k})\Delta\epsilon^{k}.

It then follows that Δxk=[(1+κcϵk)ImnV2k]1(rck+σV2kATΔyk).\Delta x^{k}=\left[(1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right]^{-1}\left(r_{c}^{k}+\sigma V_{2}^{k}A^{T}\Delta y^{k}\right). Furthermore, we have that

[κpϵkIm+n+σA((1+κcϵk)ImnV2k)1V2kAT]Δyk=rpkA((1+κcϵk)ImnV2k)1rck.\left[\kappa_{p}\epsilon^{k}I_{m+n}+\sigma A\left((1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right)^{-1}V_{2}^{k}A^{T}\right]\Delta y^{k}=r_{p}^{k}-A\left((1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right)^{-1}r_{c}^{k}. (18)
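A hedged sketch of one Newton direction computation along these lines is given below: compute Δϵ^k, assemble the coefficient matrix of (18) using the block structure from Fact 1, solve for Δy^k with a sparse factorization, and back-substitute for Δx^k. It reuses the hypothetical A_mul, AT_mul, huber, huber_d_eps and huber_d_t helpers sketched earlier, treats V_1^k and V_2^k through their diagonal vectors, and relies on scipy.sparse; parameter defaults are illustrative:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def newton_direction(eps, x, y, c, d, m, n, zeta_eps0,
                         kappa_p=1.0, kappa_c=1.0, sigma=1.0):
        # Solve (17) by elimination: get dy from (18), then recover dx.
        d_eps = -eps + zeta_eps0
        w = x + sigma * (AT_mul(y, m, n) - c)          # w^k
        v1 = huber_d_eps(eps, w)                       # diagonal of V_1^k
        v2 = huber_d_t(eps, w)                         # diagonal of V_2^k
        r_p = d - A_mul(x, m, n) - kappa_p * eps * y - kappa_p * y * d_eps
        r_c = huber(eps, w) - (1.0 + kappa_c * eps) * x - (kappa_c * x - v1) * d_eps
        s = v2 / (1.0 + kappa_c * eps - v2)    # diag of ((1+kc*eps)I - V2)^{-1} V2
        V = sp.csr_matrix(s.reshape((m, n), order='F'))     # sparse Mat of s
        M = sp.bmat([[sp.diags(np.asarray(V.sum(axis=1)).ravel()), V],
                     [V.T, sp.diags(np.asarray(V.sum(axis=0)).ravel())]],
                    format='csc')                           # Fact 1 structure
        M = sigma * M + kappa_p * eps * sp.eye(m + n, format='csc')
        rhs = r_p - A_mul(r_c / (1.0 + kappa_c * eps - v2), m, n)
        dy = spla.spsolve(M, rhs)
        dx = (r_c + sigma * v2 * AT_mul(dy, m, n)) / (1.0 + kappa_c * eps - v2)
        return d_eps, dx, dy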

Denote vk:=diag[((1+κcϵk)ImnV2k)1V2k]mnv^{k}:=\mathrm{diag}\left[\left((1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right)^{-1}V_{2}^{k}\right]\in\mathbb{R}^{mn} and recall that wk:=xkσ(cATyk)w^{k}:=x^{k}-\sigma(c-A^{T}y^{k}). It is clear that

vik{>0if wik>0=0if wik0,i=1,,mn.v_{i}^{k}\left\{\begin{array}[]{ll}>0&\text{if }w_{i}^{k}>0\\ =0&\text{if }w_{i}^{k}\leq 0\end{array}\right.,\quad\forall i=1,\dots,mn.

Keeping in mind that zk=cATykz^{k}=c-A^{T}y^{k} for k0k\geq 0, we see that when (xk,yk)(x^{k},y^{k}) is close to the optimal solution pair, wkw^{k} is expected to have a large number of nonpositive elements, and hence vkv^{k} is expected to be highly sparse. Moreover, let Vk:=Mat(vk)m×nV^{k}:=\mathrm{Mat}(v^{k})\in\mathbb{R}^{m\times n}. By Fact 1, we see that

A((1+κcϵk)ImnV2k)1V2kAT=(Diag(Vken)Vk(Vk)TDiag((Vk)Tem))𝕊m+n.A\left((1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right)^{-1}V_{2}^{k}A^{T}=\begin{pmatrix}\mathrm{Diag}(V^{k}e_{n})&V^{k}\\ (V^{k})^{T}&\mathrm{Diag}((V^{k})^{T}e_{m})\end{pmatrix}\in\mathbb{S}^{m+n}. (19)

When VkV^{k} is sparse, it is clear that the coefficient matrix for computing Δyk\Delta y^{k} is also sparse. This explains the appealing feature of Algorithm 1 as compared to the IPM, since the attractive sparsity structure of the optimal solution is exploited in Algorithm 1. Moreover, we are always able to reduce (17) to the smaller-scale normal system (18) in Algorithm 1, which is not the case in the SSN method. We can also see that the matrix AA is not explicitly required in the implementation of Algorithm 1, which is another advantage of the proposed method compared to an IPM. Lastly, let J:=diag(Im,In)J:=\mathrm{diag}(I_{m},-I_{n}); then one can see that

JA((1+κcϵk)ImnV2k)1V2kATJ=(Diag(Vken)Vk(Vk)TDiag((Vk)Tem)),JA\left((1+\kappa_{c}\epsilon^{k})I_{mn}-V_{2}^{k}\right)^{-1}V_{2}^{k}A^{T}J=\begin{pmatrix}\mathrm{Diag}(V^{k}e_{n})&-V^{k}\\ -(V^{k})^{T}&\mathrm{Diag}((V^{k})^{T}e_{m})\end{pmatrix}, (20)

which corresponds to the Laplacian matrix of a sparse bipartite graph defined by VkV^{k} (the bipartite graph has mm sources and nn sinks, with an edge from source ii to sink jj if and only if Vijk>0V^{k}_{ij}>0 for 1im1\leq i\leq m and 1jn1\leq j\leq n). First, one can identify all the connected components of the bipartite graph, which are disjoint from each other; the matrix entries linking different components are always zero. Next, the coefficient matrix of the Newton equation can be permuted into a block diagonal matrix. Finally, the Newton equation can be decomposed into several smaller equations, each of whose coefficient matrices has the same pattern as (20). In this way, the computational effort for computing the search direction may be further reduced.
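As a rough sketch of this decomposition step, one may feed the off-diagonal sparsity pattern of the coefficient matrix in (18) to a standard connected-components routine and then solve each diagonal block independently (assuming scipy; the helper name is ours and the splitting itself is optional):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import connected_components

    def split_normal_equation(M):
        # M: (m+n) x (m+n) sparse coefficient matrix of (18). Its off-diagonal
        # entries encode the edges of the bipartite graph (sources are the
        # first m nodes, sinks the last n), so connected components yield an
        # index partition under which M is block diagonal after permutation.
        Mcsr = sp.csr_matrix(M)
        graph = Mcsr - sp.diags(Mcsr.diagonal())
        n_comp, labels = connected_components(graph, directed=False)
        blocks = []
        for comp in range(n_comp):
            idx = np.flatnonzero(labels == comp)
            blocks.append((idx, Mcsr[idx, :][:, idx]))
        return blocks   # each (idx, M_block) can be factorized independently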

3.3 Convergence properties of Algorithm 1

For the rest of this section, we shall establish the convergence properties of Algorithm 1 which allow us to resolve the two difficulties of the SSN method as mentioned at the end of the last section. The tools for our analysis are adapted from those in the literature (see, e.g., [42]).

We first show that the coefficient matrix in (17) is nonsingular whenever ϵk>0\epsilon^{k}>0.

Lemma 1.

^(ϵ,x,y)\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y) is nonsingular for any (ϵ,x,y)++×mn×m+n(\epsilon,x,y)\in\mathbb{R}_{++}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n} .

Proof.

For any ϵ>0\epsilon>0, it is easy to see that ^\widehat{\mathcal{E}} is continuously differentiable, and it holds that

^(ϵ,x,y)=(100κpyAκpϵIm+nκcxV1(1+κcϵ)ImnV2σV2AT),\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y)=\begin{pmatrix}1&0&0\\ \kappa_{p}y&A&\kappa_{p}\epsilon I_{m+n}\\ \kappa_{c}x-V_{1}&(1+\kappa_{c}\epsilon)I_{mn}-V_{2}&-\sigma V_{2}A^{T}\end{pmatrix},

where V1:=Φ1(ϵ,w)V_{1}:=\Phi_{1}^{\prime}(\epsilon,w) and V2:=Φ2(ϵ,w)V_{2}:=\Phi_{2}^{\prime}(\epsilon,w) denote the partial derivatives of Φ\Phi with respect to the first and the second arguments at the referenced point with w:=xσ(cATy)w:=x-\sigma(c-A^{T}y), respectively. To show that ^(ϵ,x,y)\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y) is nonsingular, it suffices to show that the following linear system

^(ϵ,x,y)(Δϵ;Δx;Δy)=0\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y)(\Delta\epsilon;\Delta x;\Delta y)=0

only has a trivial solution. It is obvious that Δϵ=0\Delta\epsilon=0. Since

[(1+κcϵ)ImnV2]ΔxσV2ATΔy=0,\left[(1+\kappa_{c}\epsilon)I_{mn}-V_{2}\right]\Delta x-\sigma V_{2}A^{T}\Delta y=0,

it follows that Δx=σ[(1+κcϵ)ImnV2]1V2ATΔy\Delta x=\sigma\left[(1+\kappa_{c}\epsilon)I_{mn}-V_{2}\right]^{-1}V_{2}A^{T}\Delta y. As a consequence, we get

[κpϵIm+n+σA((1+κcϵ)ImnV2)1V2AT]Δy=0.\left[\kappa_{p}\epsilon I_{m+n}+\sigma A\left((1+\kappa_{c}\epsilon)I_{mn}-V_{2}\right)^{-1}V_{2}A^{T}\right]\Delta y=0.

Clearly, the coefficient matrix of the above equation is symmetric positive definite. It then follows that Δy=0\Delta y=0 which further implies that Δx=0\Delta x=0. Therefore, the proof is completed. ∎

Lemma 1 shows that the linear system in (17) is always solvable, and the Newton direction (Δk,Δxk,Δyk)(\Delta^{k},\Delta x^{k},\Delta y^{k}) is well-defined. Next, we shall show that the line search procedure in Algorithm 1 is well-defined, i.e., k\ell_{k} is always finite as long as ϵk\epsilon^{k} is positive.

Lemma 2.

For any (ϵ,x,y)(\epsilon,x,y) with ϵ>0\epsilon>0, there exists a positive scalar α¯(0,1]\bar{\alpha}\in(0,1] such that for any α(0,α¯]\alpha\in(0,\bar{\alpha}], it holds that

ϕ(ϵ+αΔϵ,x+αΔx,y+αΔy)[12μ(1δ)α]ϕ(ϵ,x,y),\phi(\epsilon+\alpha\Delta\epsilon,x+\alpha\Delta x,y+\alpha\Delta y)\leq\left[1-2\mu(1-\delta)\alpha\right]\phi(\epsilon,x,y),

where (Δϵ,Δx,Δy)(\Delta\epsilon,\Delta x,\Delta y) satisfies

^(ϵ,x,y)+^(ϵ,x,y)(Δϵ;Δx;Δy)=(ζ(ϵ,x,y)ϵ0;0;0),\widehat{\mathcal{E}}(\epsilon,x,y)+\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y)(\Delta\epsilon;\Delta x;\Delta y)=(\zeta(\epsilon,x,y)\epsilon^{0};0;0),

μ(0,1/2)\mu\in(0,1/2), ϵ0>0\epsilon^{0}>0, and δ:=rϵ0<1\delta:=r\epsilon^{0}<1.

Proof.

First, since ϵ>0\epsilon>0, by Lemma 1, we see that ^(ϵ,x,y)\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y) is nonsingular. Hence, (Δϵ,Δx,Δy)(\Delta\epsilon,\Delta x,\Delta y) is well-defined. Next, we can check that

ϕ(ϵ,x,y),(Δϵ,Δx,Δy)=2^(ϵ,x,y)^(ϵ,x,y),(Δϵ;Δx;Δy)\displaystyle\left\langle\nabla\phi(\epsilon,x,y),(\Delta\epsilon,\Delta x,\Delta y)\right\rangle\;=\;\left\langle 2\nabla\widehat{\mathcal{E}}(\epsilon,x,y)\widehat{\mathcal{E}}(\epsilon,x,y),(\Delta\epsilon;\Delta x;\Delta y)\right\rangle (21)
=\displaystyle= 2^(ϵ,x,y),^(ϵ,x,y)(Δϵ;Δx;Δy)=2^(ϵ,x,y),(ζ(ϵ,x,y)ϵ0;0;0)^(ϵ,x,y)\displaystyle\;\left\langle 2\widehat{\mathcal{E}}(\epsilon,x,y),\widehat{\mathcal{E}}^{\prime}(\epsilon,x,y)(\Delta\epsilon;\Delta x;\Delta y)\right\rangle\;=\;\left\langle 2\widehat{\mathcal{E}}(\epsilon,x,y),(\zeta(\epsilon,x,y)\epsilon^{0};0;0)-\widehat{\mathcal{E}}(\epsilon,x,y)\right\rangle
\displaystyle\leq 2ϕ(ϵ,x,y)+2rϵ0^(ϵ,x,y)min{1,^(ϵ,x,y)1+τ}.\;-2\phi(\epsilon,x,y)+2r\epsilon^{0}\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert\min\left\{1,\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert^{1+\tau}\right\}.

We consider two possible cases: ^(ϵ,x,y)>1\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert>1 and ^(ϵ,x,y)1\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert\leq 1. If ^(ϵ,x,y)>1\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert>1, then (21) implies that

ϕ(ϵ,x,y),(Δϵ,Δx,Δy)2ϕ(ϵ,x,y)+2rϵ0^(ϵ,x,y)2(1δ)ϕ(ϵ,x,y).\left\langle\nabla\phi(\epsilon,x,y),(\Delta\epsilon,\Delta x,\Delta y)\right\rangle\leq-2\phi(\epsilon,x,y)+2r\epsilon^{0}\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert\leq-2(1-\delta)\phi(\epsilon,x,y).

If ^(ϵ,x,y)1\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert\leq 1, then (21) implies that

ϕ(ϵ,x,y),(Δϵ,Δx,Δy)\displaystyle\left\langle\nabla\phi(\epsilon,x,y),(\Delta\epsilon,\Delta x,\Delta y)\right\rangle\leq 2ϕ(ϵ,x,y)+2rϵ0^(ϵ,x,y)2+τ\displaystyle\;-2\phi(\epsilon,x,y)+2r\epsilon^{0}\left\lVert\widehat{\mathcal{E}}(\epsilon,x,y)\right\rVert^{2+\tau}
\displaystyle\leq 2ϕ(ϵ,x,y)+2rϵ0ϕ(ϵ,x,y)\displaystyle\;-2\phi(\epsilon,x,y)+2r\epsilon^{0}\phi(\epsilon,x,y)
=\displaystyle= 2(1δ)ϕ(ϵ,x,y).\displaystyle\;-2(1-\delta)\phi(\epsilon,x,y).

Thus, in both cases, it always holds that

ϕ(ϵ,x,y),(Δϵ,Δx,Δy)2(1δ)ϕ(ϵ,x,y).\left\langle\nabla\phi(\epsilon,x,y),(\Delta\epsilon,\Delta x,\Delta y)\right\rangle\leq-2(1-\delta)\phi(\epsilon,x,y).

Now by the Taylor expansion, we get

ϕ(ϵ+αΔϵ,x+αΔx,y+αΔy)=\displaystyle\phi(\epsilon+\alpha\Delta\epsilon,x+\alpha\Delta x,y+\alpha\Delta y)= ϕ(ϵ,x,y)+αϕ(ϵ,x,y),(Δϵ,Δx,Δy)+o(α)\displaystyle\;\phi(\epsilon,x,y)+\alpha\left\langle\nabla\phi(\epsilon,x,y),(\Delta\epsilon,\Delta x,\Delta y)\right\rangle+o(\alpha)
\displaystyle\leq ϕ(ϵ,x,y)2α(1δ)ϕ(ϵ,x,y)+o(α).\displaystyle\;\phi(\epsilon,x,y)-2\alpha(1-\delta)\phi(\epsilon,x,y)+o(\alpha).

Using the fact that μ(0,1/2)\mu\in(0,1/2), there exists α¯(0,1]\bar{\alpha}\in(0,1] such that for α(0,α¯]\alpha\in(0,\bar{\alpha}], it always holds that

ϕ(ϵ+αΔϵ,x+αΔx,y+αΔy)[12μ(1δ)α]ϕ(ϵ,x,y),\phi(\epsilon+\alpha\Delta\epsilon,x+\alpha\Delta x,y+\alpha\Delta y)\leq\left[1-2\mu(1-\delta)\alpha\right]\phi(\epsilon,x,y),

which completes the proof. ∎

Lemma 2 indicates that when ϵk>0\epsilon^{k}>0, the line search procedure in Algorithm 1 is always well-defined. Additionally, we need to show that Algorithm 1 generates a sequence {ϵk}\{\epsilon^{k}\} that is always positive before termination. The next lemma shows that ϵk\epsilon^{k} is indeed lower bounded by ζkϵ0\zeta_{k}\epsilon^{0}, which is positive before termination.

Lemma 3.

Suppose at the kk-th iteration of Algorithm 1 that ϵk>0\epsilon^{k}>0 and ϵkζ(ϵk,xk,yk)ϵ0\epsilon^{k}\geq\zeta(\epsilon^{k},x^{k},y^{k})\epsilon^{0}. Then, for any α[0,1]\alpha\in[0,1] satisfying

ϕ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)[12μ(1δ)α]ϕ(ϵk,xk,yk),\phi(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\leq\left[1-2\mu(1-\delta)\alpha\right]\phi(\epsilon^{k},x^{k},y^{k}), (22)

it holds that ϵk+αΔϵkζ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)ϵ0.\epsilon^{k}+\alpha\Delta\epsilon^{k}\geq\zeta(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\epsilon^{0}.

Proof.

From (17), it is not difficult to see that Δϵk=ϵk+ζ(ϵk,xk,yk)ϵ0\Delta\epsilon^{k}=-\epsilon^{k}+\zeta(\epsilon^{k},x^{k},y^{k})\epsilon^{0}. Hence, Δϵk0\Delta\epsilon^{k}\leq 0, and for any α[0,1]\alpha\in[0,1], it holds that

ϵk+αΔϵkζ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)ϵ0\displaystyle\;\epsilon^{k}+\alpha\Delta\epsilon^{k}-\zeta(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\epsilon^{0}
\displaystyle\geq ϵk+Δϵkζ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)ϵ0\displaystyle\;\epsilon^{k}+\Delta\epsilon^{k}-\zeta(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\epsilon^{0}
=\displaystyle= ζ(ϵk,xk,yk)ϵ0ζ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)ϵ0\displaystyle\;\zeta(\epsilon^{k},x^{k},y^{k})\epsilon^{0}-\zeta(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\epsilon^{0}
\displaystyle\geq  0,\displaystyle\;0,

where in the last inequality we have used the fact that (22) implies that

ζ(ϵk,xk,yk)ζ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)\zeta(\epsilon^{k},x^{k},y^{k})\geq\zeta(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})

by the definition of the function ζ\zeta. Therefore, the proof is completed. ∎

The above lemma also explains the role of the auxiliary function ζ\zeta in the algorithmic design. In particular, when we choose rr to be small or τ\tau to be close to one, then the lower bound ζkϵ0\zeta_{k}\epsilon^{0} is small. As a result, ϵk\epsilon^{k} will also be small. However, a smaller ϵk\epsilon^{k} will worsen the conditioning of the linear system (17). On the other hand, if we choose rr to be large or τ\tau to be close to zero, then, ϵk\epsilon^{k} will be further away from zero and the algorithm would need more iterations to converge to an optimal solution. Therefore, there is a trade-off in choosing the parameters rr and τ\tau in the practical implementation of Algorithm 1.

Now, we are ready to state the first convergence result of Algorithm 1.

Theorem 1.

Algorithm 1 is well-defined and generates an infinite sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} such that ϵkζ(ϵk,xk,yk)ϵ0\epsilon^{k}\geq\zeta(\epsilon^{k},x^{k},y^{k})\epsilon^{0} with the property that any accumulation point (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}) of the sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} is a solution of the nonlinear system ^(ϵ,x,y)=0\widehat{\mathcal{E}}(\epsilon,x,y)=0.

Proof.

It follows from Lemma 1, 2 and 3 that Algorithm 1 is well-defined, and it generates an infinite sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} such that ϵkζ(ϵk,xk,yk)ϵ0\epsilon^{k}\geq\zeta(\epsilon^{k},x^{k},y^{k})\epsilon^{0}. Let (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}) be any accumulation point of the sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\}, if it exists. By taking a subsequence if necessary, we may assume without loss of generality that {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} converges to (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}). Since the line-search scheme is well-defined, it follows that

ϕ(ϵk+1,xk+1,yk+1)<ϕ(ϵk,xk,yk),k0.\phi(\epsilon^{k+1},x^{k+1},y^{k+1})<\phi(\epsilon^{k},x^{k},y^{k}),\quad\forall k\geq 0.

That is, the sequence {ϕ(ϵk,xk,yk)}\{\phi(\epsilon^{k},x^{k},y^{k})\} is monotonically decreasing. By the definition of the function ζ\zeta, we see that the sequence {ζk}\{\zeta_{k}\} is also monotonically decreasing. Hence, there exist ϕ¯\bar{\phi} and ζ¯\bar{\zeta} such that

ϕ(ϵk,xk,yk)ϕ¯,ζkζ¯,ask.\phi(\epsilon^{k},x^{k},y^{k})\rightarrow\bar{\phi},\quad\zeta_{k}\rightarrow\bar{\zeta},\quad\text{as}\;k\rightarrow\infty.

As a consequence of the continuity of the function ϕ\phi, we see that

ϕ(ϵ¯,x¯,y¯)=ϕ¯0,ζ(ϵ¯,x¯,y¯)=ζ¯0,ϵ¯ζ(ϵ¯,x¯,y¯)ϵ0.\phi(\bar{\epsilon},\bar{x},\bar{y})=\bar{\phi}\geq 0,\quad\zeta(\bar{\epsilon},\bar{x},\bar{y})=\bar{\zeta}\geq 0,\quad\bar{\epsilon}\geq\zeta(\bar{\epsilon},\bar{x},\bar{y})\epsilon^{0}.

We next show that ϕ¯=ϕ(ϵ¯,x¯,y¯)=0\bar{\phi}=\phi(\bar{\epsilon},\bar{x},\bar{y})=0 by contradiction. To this end, let us assume that ϕ¯>0\bar{\phi}>0. This implies that ζ¯>0\bar{\zeta}>0 and that ϵ¯ζ¯ϵ0>0\bar{\epsilon}\geq\bar{\zeta}\epsilon^{0}>0. Then, by Lemma 2, we see that there exist an open neighbourhood 𝒰\mathcal{U} of (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}) and α¯(0,1]\bar{\alpha}\in(0,1] such that (ϵk,xk,yk)𝒰(\epsilon^{k},x^{k},y^{k})\in\mathcal{U} with ϵk>0\epsilon^{k}>0 and for any α(0,α¯]\alpha\in(0,\bar{\alpha}], it holds that for k0k\geq 0 sufficiently large,

ϕ(ϵk+αΔϵk,xk+αΔxk,yk+αΔyk)[12μ(1δ)α]ϕ(ϵk,xk,yk).\phi(\epsilon^{k}+\alpha\Delta\epsilon^{k},x^{k}+\alpha\Delta x^{k},y^{k}+\alpha\Delta y^{k})\leq\left[1-2\mu(1-\delta)\alpha\right]\phi(\epsilon^{k},x^{k},y^{k}).

The existence of the fixed number α¯(0,1]\bar{\alpha}\in(0,1] further indicates that there exists a fixed nonnegative integer \ell such that ρ(0,α¯]\rho^{\ell}\in(0,\bar{\alpha}] and ρkρ\rho^{\ell_{k}}\geq\rho^{\ell} for all k0k\geq 0 sufficiently large. Therefore, we can check that

ϕ(ϵk+1,xk+1,yk+1)[12μ(1δ)ρk]ϕ(ϵk,xk,yk)[12μ(1δ)ρ]ϕ(ϵk,xk,yk),\phi(\epsilon^{k+1},x^{k+1},y^{k+1})\leq\left[1-2\mu(1-\delta)\rho^{\ell_{k}}\right]\phi(\epsilon^{k},x^{k},y^{k})\leq\left[1-2\mu(1-\delta)\rho^{\ell}\right]\phi(\epsilon^{k},x^{k},y^{k}),

for all k0k\geq 0 sufficiently large. Then, by letting kk\rightarrow\infty, we see that ϕ¯[12μ(1δ)ρ]ϕ¯\bar{\phi}\leq\left[1-2\mu(1-\delta)\rho^{\ell}\right]\bar{\phi}. This implies that ϕ¯0\bar{\phi}\leq 0, which contradicts the assumption that ϕ¯>0\bar{\phi}>0. Therefore, we conclude that ϕ¯=0\bar{\phi}=0, i.e., ^(ϵ¯,x¯,y¯)=0\widehat{\mathcal{E}}(\bar{\epsilon},\bar{x},\bar{y})=0. This completes the proof. ∎

From the preceding convergence analysis, we see how the natural merit function ϕ\phi helps us to establish the convergence properties. Without such a continuously differentiable merit function, stronger assumptions and a much more complicated analysis may be required. Therefore, by employing the SqSN method via the Huber function, we are able to design a Newton-type method that exploits the solution sparsity and admits global convergence properties. Following the classical results on the local convergence rate of Newton-type methods, we next show that Algorithm 1 has a local superlinear convergence rate under the assumption that every element of ^(ϵ¯,x¯,y¯)\partial\widehat{\mathcal{E}}(\bar{\epsilon},\bar{x},\bar{y}) is nonsingular.

Theorem 2.

Let (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}) be any accumulation point of the sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} (if it exists) generated by Algorithm 1. Suppose that every element of ^(ϵ¯,x¯,y¯)\partial\widehat{\mathcal{E}}(\bar{\epsilon},\bar{x},\bar{y}) is nonsingular. Then the whole sequence {(ϵk,xk,yk)}\{(\epsilon^{k},x^{k},y^{k})\} converges to (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}) superlinearly. In particular, it holds that

(ϵk+1ϵ¯xk+1x¯yk+1y¯)=O((ϵkϵ¯xkx¯yky¯)1+τ).\left\lVert\begin{pmatrix}\epsilon^{k+1}-\bar{\epsilon}\\ x^{k+1}-\bar{x}\\ y^{k+1}-\bar{y}\end{pmatrix}\right\rVert=O\left(\left\lVert\begin{pmatrix}\epsilon^{k}-\bar{\epsilon}\\ x^{k}-\bar{x}\\ y^{k}-\bar{y}\end{pmatrix}\right\rVert^{1+\tau}\right).
Proof.

The proof is given in Appendix A. ∎

It turns out that the conditions ensuring the local convergence rate of Algorithm 1 are highly related to those for the SSN method. Classical results (see e.g., [34, Theorem 3.2]) show that the SSN method admits a local quadratic convergence rate under the conditions that FF is strongly semismooth and every element of F(x¯,y¯,z¯)\partial F(\bar{x},\bar{y},\bar{z}) is nonsingular, where (x¯,y¯,z¯)(\bar{x},\bar{y},\bar{z}) is a solution to the KKT system F(x,y,z)=0F(x,y,z)=0 in (8). The following lemma shows the connection between the nonsingularity of F(x¯,y¯,z¯)\partial F(\bar{x},\bar{y},\bar{z}) and the nonsingularity of ^(0,x¯,y¯)\partial\widehat{\mathcal{E}}(0,\bar{x},\bar{y}).

Lemma 4.

Let (x¯,y¯,z¯)(\bar{x},\bar{y},\bar{z}) be such that F(x¯,y¯,z¯)=0F(\bar{x},\bar{y},\bar{z})=0. Then the following two statements are equivalent.

  1. 1.

    Every element of F(x¯,y¯,z¯)\partial F(\bar{x},\bar{y},\bar{z}) is nonsingular.

  2. 2.

    Every element of ^(0,x¯,y¯)\partial\widehat{\mathcal{E}}(0,\bar{x},\bar{y}) is nonsingular.

Proof.

We first show statement 1 implies statement 2. Suppose that U^(0,x¯,y¯)U\in\partial\widehat{\mathcal{E}}(0,\bar{x},\bar{y}). Then there exists VΦ(0,x¯σ(cATy¯))V\in{\partial\Phi(0,\bar{x}-\sigma(c-A^{T}\bar{y}))} such that

U(\Delta\epsilon,\Delta x,\Delta y)=\begin{pmatrix}\Delta\epsilon\\ \kappa_{p}\bar{y}\Delta\epsilon+A\Delta x\\ \kappa_{c}\bar{x}\Delta\epsilon+\Delta x-V(\Delta\epsilon,\Delta x+\sigma A^{T}\Delta y)\end{pmatrix},\quad\forall(\Delta\epsilon,\Delta x,\Delta y)\in\mathbb{R}\times\mathbb{R}^{mn}\times\mathbb{R}^{m+n}.

To show that UU is nonsingular, it suffices to show that U(Δϵ,Δx,Δy)=0U(\Delta\epsilon,\Delta x,\Delta y)=0 implies that (Δϵ,Δx,Δy)=0(\Delta\epsilon,\Delta x,\Delta y)=0. To this end, we assume that U(Δϵ,Δx,Δy)=0U(\Delta\epsilon,\Delta x,\Delta y)=0. Then clearly, it holds that Δϵ=0\Delta\epsilon=0 and

(AΔxATΔyΔzΔxV(0,ΔxσΔz))=(000),\begin{pmatrix}A\Delta x\\ -A^{T}\Delta y-\Delta z\\ \Delta x-V(0,\Delta x-\sigma\Delta z)\end{pmatrix}=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}, (23)

by denoting that Δz:=ATΔy\Delta z:=-A^{T}\Delta y. By the expression of h(0,t)\partial h(0,t) given in (12), we see that there exists WΠ+(x¯σ(cATy¯))W\in\partial\Pi_{+}(\bar{x}-\sigma(c-A^{T}\bar{y})) such that V(0,h)=W(h)V(0,h)=W(h) for any hmnh\in\mathbb{R}^{mn}. Then the last equation of linear system (23) can be rewritten as ΔxW(ΔxσΔz)=0\Delta x-W(\Delta x-\sigma\Delta z)=0, and the resulting linear system only has the trivial solution since statement 1 holds true. Therefore, UU is nonsingular. Similarly, one can show that statement 2 implies statement 1. The proof is completed. ∎

By the above lemma, we can see that the conditions for ensuring the local convergence rate for Algorithm 1 are not stronger than those for the SSN method. Furthermore, the nonsingularity assumption is equivalent to the primal and dual linear independence constraint qualification (LICQ) and the local Lipschitz homeomorphism of F(x,y,z)F(x,y,z) near (x¯,y¯,z¯)(\bar{x},\bar{y},\bar{z}) (see e.g., [49, 50]). These equivalent conditions are quite strong and may not hold generally. Surprisingly, in our numerical experiments, we indeed observe such local superlinear convergence empirically. This is reasonable because the nonsingularity assumption may hold in the selected experiments. Moreover, the nonsingularity assumption is sufficient but may not be necessary for the superlinear convergence. For example, the superlinear convergence of the projected SSN method is established in [35] under the local smoothness and the error bound conditions on a submanifold, without the nonsingularity assumption.

4 Extension to the Wasserstein Barycenter Problem

In this section, we proceed to extend our Algorithm 1 to solve the Wasserstein barycenter problem.

4.1 Problem Statement

First, we briefly recall the Wasserstein distance and describe the problem of computing a Wasserstein barycenter for a set of discrete probability distributions with finite support points. Given two discrete distributions 𝒫(1)={(ai(1),qi(1)):i=1,,m1}\mathcal{P}^{(1)}=\{(a_{i}^{(1)},\,{q}_{i}^{(1)}):i=1,\dots,m_{1}\} and 𝒫(2)={(ai(2),qi(2)):i=1,,m2}\mathcal{P}^{(2)}=\{(a_{i}^{(2)},{q}_{i}^{(2)}):i=1,\dots,m_{2}\}, the pp-Wasserstein distance 𝒲p(𝒫(1),𝒫(2))\mathcal{W}_{p}(\mathcal{P}^{(1)},\mathcal{P}^{(2)}) is defined by the following OT problem:

\left(\mathcal{W}_{p}\left(\mathcal{P}^{(1)},\mathcal{P}^{(2)}\right)\right)^{p}:=\min_{X\in\mathbb{R}^{m_{1}\times m_{2}}}\Big\{\left\langle X,\mathcal{D}\left(\mathcal{P}^{(1)},\mathcal{P}^{(2)}\right)\right\rangle\mid Xe_{m_{2}}=a^{(1)},\;X^{\top}e_{m_{1}}=a^{(2)},\;X\geq 0\Big\},

where 𝒟(𝒫(1),𝒫(2))m1×m2\mathcal{D}(\mathcal{P}^{(1)},\mathcal{P}^{(2)})\in\mathbb{R}^{m_{1}\times m_{2}} is the distance matrix with 𝒟(𝒫(1),𝒫(2))ij=qi(1)qj(2)pp\mathcal{D}(\mathcal{P}^{(1)},\mathcal{P}^{(2)})_{ij}=\lVert q_{{i}}^{(1)}-q_{j}^{(2)}\rVert_{p}^{p} and p1p\geq 1. Then, given a set of discrete probability distributions {𝒫(t)}t=1N\{\mathcal{P}^{(t)}\}_{t=1}^{N} with 𝒫(t)={(ai(t),qi(t)):1in}\mathcal{P}^{(t)}=\{(a_{i}^{(t)},\,{q}_{i}^{(t)}):1\leq i\leq n\}, a pp-Wasserstein barycenter 𝒫:={(wi,qi):i=1,,m}\mathcal{P}:=\{(w_{i},\,{q}_{i}):i=1,\dots,m\} with mm support points is an optimal solution to the following problem:

min{t=1Nγt(𝒲p(𝒫,𝒫(t)))pw+m,emw=1,q1,,qmd}\min\Big{\{}\sum_{t=1}^{N}\gamma_{t}\left(\mathcal{W}_{p}\left(\mathcal{P},\mathcal{P}^{(t)}\right)\right)^{p}\mid w\in\mathbb{R}^{m}_{+},\;e_{m}^{\top}w=1,\;q_{1},\ldots,q_{m}\in\mathbb{R}^{d}\Big{\}} (24)

for given weights (γ1,,γN)\left(\gamma_{1},\dots,\gamma_{N}\right) satisfying t=1Nγt=1\sum_{t=1}^{N}\gamma_{t}=1 and γt>0,t=1,,N\gamma_{t}>0,\;t=1,\dots,N. Note that problem (24) is a non-convex optimization problem, in which one needs to find the optimal support q:={q1,,qm}{q}:=\{q_{1},\dots,q_{m}\} and the optimal weight vector w:=(w1,,wm){w}:=(w_{1},\dots,w_{m}) of a barycenter simultaneously. In many real applications, the support q{q} of a barycenter is pre-specified. Thus, one only needs to find the weight vector w{w} of a barycenter. In view of this, from now on, we assume that the support q{q} is given. Consequently, the problem (24) reduces to the WB problem with fixed support points:

minw,{Π(t)}t=1ND(t),Π(t)\displaystyle\min\limits_{{w},\,\{\Pi^{(t)}\}}~{}{\textstyle\sum^{N}_{t=1}}\langle D^{(t)},\,\Pi^{(t)}\rangle (25)
s.t.Π(t)emt=w,(Π(t))em=a(t),Π(t)0,t=1,,N,\displaystyle\hskip 14.22636pt\mathrm{s.t.}\hskip 14.22636pt\Pi^{(t)}{e}_{m_{t}}={w},~{}(\Pi^{(t)})^{\top}{e}_{m}={a}^{(t)},~{}\Pi^{(t)}\geq 0,~{}~{}t=1,\dots,N,
emw=1,w0,\displaystyle\hskip 42.67912pt{e}^{\top}_{m}{w}=1,~{}{w}\geq 0,

where D^{(t)}:=\gamma_{t}\mathcal{D}(\mathcal{P},\mathcal{P}^{(t)}) for t=1,\dots,N. For convenience, we assume that every sampling distribution a^{(t)} has dimension n (i.e., m_{t}=n for all t); the case with different dimensions can be handled in a straightforward manner. Since the last two constraints e_{m}^{\top}w=1,\;w\geq 0 are redundant, we remove them and reformulate problem (25) as the following linear program (a small construction sketch is given after the list below):

minx{c,xAx=b,x0},\begin{array}[]{cl}\min\limits_{x}\big{\{}\langle c,x\rangle\;\mid\;{A}x={b},\;x\geq 0\big{\}},\end{array} (26)

where

  • x=\left(\operatorname{vec}(\Pi^{(1)});\dots;\operatorname{vec}(\Pi^{(N)});w\right)\in\mathbb{R}^{Nmn+m};

  • b=(a(1);a(2);;a(N);0m;;0m)Nn+Nmb=\left({a}^{(1)};{a}^{(2)};\dots;{a}^{(N)};0_{m};\dots;0_{m}\right)\in\mathbb{R}^{Nn+Nm};

  • c=(vec(D(1));;vec(D(N));0m)Nmn+mc=\left(\operatorname{vec}\left(D^{(1)}\right);\dots;\operatorname{vec}\left(D^{(N)}\right);0_{m}\right)\in\mathbb{R}^{Nmn+m} ;

  • A=(A10A2A3)(Nn+Nm)×(Nmn+m)A=\left(\begin{array}[]{cc}A_{1}&0\\ A_{2}&A_{3}\end{array}\right)\in\mathbb{R}^{(Nn+Nm)\times(Nmn+m)},   A1=Diag(Inem,,Inem)Nn×Nmn,A_{1}=\operatorname{Diag}\left(I_{n}\otimes{e}_{m}^{\top},\dots,I_{n}\otimes{e}_{m}^{\top}\right)\in\mathbb{R}^{Nn\times Nmn},

    A2=Diag(enIm,,enIm)Nm×NmnA_{2}=\operatorname{Diag}\left({e}_{n}^{\top}\otimes I_{m},\dots,{e}_{n}^{\top}\otimes I_{m}\right)\in\mathbb{R}^{Nm\times Nmn}, and A3=eNImNm×m.A_{3}=-{e}_{N}\otimes I_{m}\in\mathbb{R}^{Nm\times m}.
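To make the block structure above concrete, the following minimal Julia sketch assembles A, b and c from given cost matrices D^{(t)} and marginals a^{(t)} using sparse Kronecker products. It is our own illustration (not the authors' implementation); the function name build_wb_lp and the column-major vec ordering of each \Pi^{(t)} are assumptions.

```julia
using SparseArrays, LinearAlgebra

# Minimal sketch (our own, not the authors' code): assemble the LP data
# (A, b, c) of problem (26) from cost matrices D[t] (m x n) and marginals
# a[t] (length n), assuming m_t = n for all t and column-major vec.
function build_wb_lp(D::Vector{Matrix{Float64}}, a::Vector{Vector{Float64}})
    N = length(D); m, n = size(D[1])
    In, Im = sparse(1.0I, n, n), sparse(1.0I, m, m)
    em, en = ones(m), ones(n)

    A1 = blockdiag([kron(In, sparse(em')) for _ in 1:N]...)   # (Nn) x (Nmn): vec(Pi) -> Pi' * e_m
    A2 = blockdiag([kron(sparse(en'), Im) for _ in 1:N]...)   # (Nm) x (Nmn): vec(Pi) -> Pi * e_n
    A3 = -vcat([Im for _ in 1:N]...)                          # (Nm) x m block -e_N ⊗ I_m

    A = [A1 spzeros(N * n, m); A2 A3]                         # (Nn+Nm) x (Nmn+m)
    b = vcat(vcat(a...), zeros(N * m))
    c = vcat(vcat([vec(Dt) for Dt in D]...), zeros(m))
    return A, b, c
end
```

For instance, build_wb_lp([D1, D2], [a1, a2]) would return the LP data of a two-distribution barycenter instance; keeping A sparse is essential here, since it has Nmn+m columns.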

4.2 Newton Equations

Algorithm 1 can be directly extended to solve the WB problem (26). Similar to the linear system (18) for OT problems, the most expensive step is to solve a structured system of linear equations in \Delta y of the following form:

(λI+AΘAT)Δy=R,\left(\lambda I+A\Theta A^{T}\right)\Delta y=R, (27)

where λ=κpϵk>0\lambda=\kappa_{p}\epsilon^{k}>0 is a scalar, and

Θ:=((1+κcϵk)IV2k)1V2k(Nmn+m)×(Nmn+m),R:=rpkA((1+κcϵk)IV2k)1rckNm+Nn.\begin{array}[]{ll}\Theta:=\left(\left(1+\kappa_{c}\epsilon^{k}\right)I-V_{2}^{k}\right)^{-1}V_{2}^{k}&\in\mathbb{R}^{(Nmn+m)\times(Nmn+m)},\\[2.0pt] R:=r_{p}^{k}-A\left(\left(1+\kappa_{c}\epsilon^{k}\right)I-V_{2}^{k}\right)^{-1}r_{c}^{k}&\in\mathbb{R}^{Nm+Nn}.\end{array}

We aim to accelerate the computation by exploiting the special structure of the matrix A. Recall that for the OT problem, we can leverage Fact 1 to simplify the coefficient matrix of \Delta y to (19). For the WB problem, we use a similar technique to simplify the coefficient matrix in (27) as follows. First, one can verify that \Theta is a nonnegative diagonal matrix. In particular, \Theta takes the following form:

Θ:=(Diag(θ(1))Diag(θ(N))Diag(θ¯))\displaystyle\Theta:=\left(\begin{array}[]{cccc}\operatorname{Diag}(\theta^{(1)})&&&\\ &\ddots&&\\ &&\operatorname{Diag}(\theta^{(N)})&\\ &&&\operatorname{Diag}(\bar{\theta})\end{array}\right)

where θ(t)+mn\theta^{(t)}\in\mathbb{R}^{mn}_{+} for t=1,,Nt=1,\ldots,N and θ¯+m\bar{\theta}\in\mathbb{R}^{m}_{+}. Define Vt:=Mat(θ(t))m×nV_{t}:=\mathrm{Mat}(\theta^{(t)})\in\mathbb{R}^{m\times n} for t=1,,Nt=1,\dots,N. Then the system of linear equations (27) can be equivalently written as

(λI+AΘAT)Δy=(E1E2E2TE3)(Δy1Δy2)=(R1R2),(\lambda I+A\Theta A^{T})\Delta y=\left(\begin{array}[]{cc}E_{1}&E_{2}\\[2.0pt] E_{2}^{T}&E_{3}\end{array}\right)\left(\begin{array}[]{l}\Delta y_{1}\\[2.0pt] \Delta y_{2}\end{array}\right)=\left(\begin{array}[]{c}R_{1}\\[2.0pt] R_{2}\end{array}\right), (28)

where Δy:=(Δy1;Δy2)Nn+Nm,R:=(R1;R2)Nn+Nm\Delta y:=(\Delta y_{1};\Delta y_{2})\in\mathbb{R}^{Nn+Nm},R:=(R_{1};R_{2})\in\mathbb{R}^{Nn+Nm}, and

E1:=Diag((V1Tem;;VNTem))+λINn×Nn,E2:=Diag(V1T,,VNT)Nn×Nm,E3:=Diag((V1en;;VNen))+(eNeNT)Diag(θ¯)+λINm×Nm.\begin{array}[]{ll}E_{1}:={\operatorname{Diag}\left(\left(V_{1}^{T}e_{m};\dots;V_{N}^{T}e_{m}\right)\right)}+\lambda I&\in\mathbb{R}^{Nn\times Nn},\\[5.0pt] E_{2}:=\operatorname{Diag}\left({V}_{1}^{T},\dots,{V}_{N}^{T}\right)&\in\mathbb{R}^{Nn\times Nm},\\[5.0pt] E_{3}:={\mathrm{Diag}\left(\left({V}_{1}e_{n};\dots;{V}_{N}e_{n}\right)\right)}+(e_{N}e_{N}^{T})\otimes\mathrm{Diag}({\bar{\theta}})+\lambda I&\in\mathbb{R}^{{Nm\times Nm}}.\end{array}

To reduce the size of the linear system, we next eliminate Δy1\Delta y_{1} and compute Δy2\Delta y_{2} by

SΔy2=R3,{S}\Delta y_{2}={R}_{3}, (29)

where R3:=R2E2TE11R1Nm{R}_{3}:=R_{2}-E_{2}^{T}E_{1}^{-1}R_{1}\in\mathbb{R}^{Nm} and S:=E3E2TE11E2S:=E_{3}-E_{2}^{T}E_{1}^{-1}E_{2} is the Schur complement matrix such that

S:=Diag(S1,,SN)+(eNDiag(θ¯))(eNDiag(θ¯))Nm×Nm,St:=Diag(Vten+λem)VtDiag(VtTem+λen)1VtTm×m,t=1,,N.\begin{array}[]{lll}{S}&:=\mathrm{Diag}\left({S}_{1},\dots,{S}_{N}\right){+\big{(}e_{N}\otimes\mathrm{Diag}(\sqrt{\bar{\theta}})\big{)}\big{(}e_{N}\otimes\mathrm{Diag}(\sqrt{\bar{\theta}})\big{)}^{\top}}&\in\mathbb{R}^{Nm\times Nm},\\ {S}_{t}&:={\mathrm{Diag}({V}_{t}e_{n}+{\lambda e_{m}})-{V_{t}}\mathrm{Diag}\left(V_{t}^{T}e_{m}+{\lambda e_{n}}\right)^{-1}{V}_{t}^{T}}&\in\mathbb{R}^{m\times m},\;{t=1,\dots,N}.\end{array}

It is clear that the coefficient matrix S is sparse and positive definite, so (29) can be solved efficiently either directly by a sparse Cholesky decomposition or iteratively by the preconditioned conjugate gradient (PCG) method. Note that when forming the matrices S_{t} for t=1,\dots,N, matrix inversions are only applied to diagonal matrices. Therefore, the cost of computing the coefficient matrix is relatively modest. Finally, it follows from (28) that

Δy1=E11(R1E2Δy2).\Delta y_{1}=E_{1}^{-1}\left(R_{1}-E_{2}\Delta y_{2}\right).

We can also compute Δy1\Delta y_{1} efficiently thanks to the diagonal block structure of E1E_{1} and E2E_{2}. To see this, we denote Δy1:=(Δy11;;Δy1N)Nn\Delta y_{1}:=(\Delta y_{1}^{1};\dots;\Delta y_{1}^{N})\in\mathbb{R}^{Nn}, Δy2:=(Δy21;;Δy2N)Nm\Delta y_{2}:=(\Delta y_{2}^{1};\dots;\Delta y_{2}^{N})\in\mathbb{R}^{Nm} and R1:=(R11;;R1N)NnR_{1}:=(R_{1}^{1};\dots;R_{1}^{N})\in\mathbb{R}^{Nn}, then

Δy1t=(R1tVtTΔy2t)(VtTem+λen),t=1,,N,\Delta y_{1}^{t}=(R_{1}^{t}-{V}_{t}^{T}\Delta y_{2}^{t})\oslash({V}_{t}^{T}e_{m}+\lambda e_{n}),\quad t=1,\dots,N,

where “\oslash” denotes element-wise division. Moreover, observe that the coefficient matrix in (29) is the sum of a symmetric positive definite block-diagonal matrix and a low-rank matrix. Thus, one can also solve (29) efficiently via the Sherman-Morrison-Woodbury formula. In our numerical experiments, we solve (29) iteratively by the PCG method, with an incomplete Cholesky factorization as the preconditioner, for the first few steps; we then switch to a direct solve via sparse Cholesky decomposition once the number of PCG steps exceeds 80.
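To illustrate the reduction (28)–(29), the Julia sketch below (our own illustration; all function and variable names are hypothetical) forms the diagonal blocks S_t, solves for \Delta y_2 via the Sherman-Morrison-Woodbury identity applied to S = Diag(S_1,\dots,S_N) + UU^T with U = e_N \otimes \mathrm{Diag}(\sqrt{\bar{\theta}}), and then recovers \Delta y_1 block-wise; a practical implementation would instead use sparse Cholesky or PCG as described above.

```julia
using LinearAlgebra

# Minimal sketch (our own): solve (29) for Δy2 with the Sherman-Morrison-Woodbury
# formula, then recover Δy1 block-wise. V[t] = Mat(θ^{(t)}) is m×n, θbar is the
# last diagonal block of Θ, λ > 0, and R1[t] ∈ R^n, R2[t] ∈ R^m are the blocks
# of the right-hand side.
function solve_wb_newton(V, θbar, λ, R1, R2)
    N = length(V); m, n = size(V[1])
    rowsum = [V[t] * ones(n) .+ λ for t in 1:N]        # V_t e_n + λ e_m
    colsum = [V[t]' * ones(m) .+ λ for t in 1:N]       # V_t' e_m + λ e_n

    # Diagonal blocks S_t and reduced right-hand side R3 = R2 - E2' E1^{-1} R1
    S  = [Diagonal(rowsum[t]) - V[t] * Diagonal(1 ./ colsum[t]) * V[t]' for t in 1:N]
    R3 = [R2[t] - V[t] * (R1[t] ./ colsum[t]) for t in 1:N]

    # SMW: (B + U U')^{-1} r = B^{-1} r - B^{-1} U (I + U' B^{-1} U)^{-1} U' B^{-1} r,
    # where B = Diag(S_1,…,S_N) and the t-th block of U is Diagonal(sqrt.(θbar)).
    F   = [cholesky(Symmetric(Matrix(S[t]))) for t in 1:N]
    u   = sqrt.(θbar)
    w   = [F[t] \ R3[t] for t in 1:N]                  # B^{-1} r, block-wise
    BiU = [F[t] \ diagm(u) for t in 1:N]               # blocks of B^{-1} U
    K   = I + sum(Diagonal(u) * BiU[t] for t in 1:N)   # I + U' B^{-1} U
    s   = K \ sum(Diagonal(u) * w[t] for t in 1:N)
    Δy2 = [w[t] - BiU[t] * s for t in 1:N]

    # Δy1^t = (R1^t - V_t' Δy2^t) ⊘ (V_t' e_m + λ e_n)
    Δy1 = [(R1[t] - V[t]' * Δy2[t]) ./ colsum[t] for t in 1:N]
    return Δy1, Δy2
end
```

Note that the only dense factorizations involved are of m×m blocks, which is one way to see why the per-iteration cost of the reduced system stays modest.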

5 Numerical Experiments

In this section, we conduct a set of numerical experiments and present the corresponding computational results to demonstrate the efficiency of the proposed Algorithm 1.

5.1 Experimental settings

We implement our algorithm purely in Julia (version 1.8.2) and compare it with highly optimized commercial and open-source LP solvers, namely Gurobi (version 9.5.1), HiGHS (version 1.3.0) and CPLEX (version 22.1.0.0), by calling their Julia interfaces through JuMP (Julia: https://julialang.org/; Gurobi.jl: https://github.com/jump-dev/Gurobi.jl; HiGHS.jl: https://github.com/jump-dev/HiGHS.jl; CPLEX.jl: https://github.com/jump-dev/CPLEX.jl; JuMP.jl: https://github.com/jump-dev/JuMP.jl). For the commercial solvers Gurobi and CPLEX, we use their academic licenses. In particular, we compare our algorithm with the barrier methods of these powerful LP solvers and with the network simplex method implemented in CPLEX. (Based on the benchmark in [19], the network simplex method and its variants are the most efficient algorithms for solving OT problems to very high accuracy; however, to the best of our knowledge, only the network simplex method implemented in CPLEX has a friendly interface in JuMP, so we only compare Algorithm 1 with the network simplex method provided by CPLEX.) Our experiments are run on a Linux PC with Intel Xeon E5-2680 v3 CPUs and 96 GB of RAM.

Given an approximate KKT solution (x,y,z)mn×m+n×mn(x,y,z)\in\mathbb{R}^{mn}\times\mathbb{R}^{m+n}\times\mathbb{R}^{mn}, we define the maximal relative KKT residue as follows

ηmax:=max{ηp:=Axd1+d,ηd:=ATy+zc1+c,ηc:=xΠ+(xz)1+x+z}.\eta_{\rm max}:=\max\left\{\eta_{p}:=\frac{\left\lVert Ax-d\right\rVert}{1+\left\lVert d\right\rVert},\eta_{d}:=\frac{\left\lVert A^{T}y+z-c\right\rVert}{1+\left\lVert c\right\rVert},\eta_{c}:=\frac{\left\lVert x-\Pi_{+}(x-z)\right\rVert}{1+\left\lVert x\right\rVert+\left\lVert z\right\rVert}\right\}.

Note that ηd\eta_{d} is always zero in our case based on our algorithmic design. To measure the duality gap, we also define the following relative gap

ηg:=|c,xd,y|1+|c,x|+|d,y|.\eta_{g}:=\frac{\left\lvert\left\langle c,x\right\rangle-\left\langle d,y\right\rangle\right\rvert}{1+\left\lvert\left\langle c,x\right\rangle\right\rvert+\left\lvert\left\langle d,y\right\rangle\right\rvert}.
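For concreteness, these quantities can be evaluated as in the following Julia sketch (the function name and the plain-array data layout are our own illustrative choices):

```julia
using LinearAlgebra

# Relative KKT residues and relative duality gap for a candidate (x, y, z)
# of the LP  min { <c, x> : A x = d, x ≥ 0 }  (illustrative sketch).
function kkt_residues(A, d, c, x, y, z)
    ηp = norm(A * x - d) / (1 + norm(d))
    ηd = norm(A' * y + z - c) / (1 + norm(c))
    ηc = norm(x - max.(x - z, 0)) / (1 + norm(x) + norm(z))   # x - Π₊(x - z)
    ηg = abs(dot(c, x) - dot(d, y)) / (1 + abs(dot(c, x)) + abs(dot(d, y)))
    return (ηmax = max(ηp, ηd, ηc), ηg = ηg)
end
```

These residues feed directly into the stopping test described next.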

For a user-specified stopping tolerance \tt{tol}>0, we terminate the algorithm when the smoothing parameter \epsilon is smaller than \tt{tol}\times 10^{-2}, or \max\{\eta_{\rm max},\eta_{g}\}\leq\tt{tol}, or the maximum time \tt{TimeLimit} or the maximum number of iterations \tt{maxIter} is reached. In our implementation, we set \tt{tol}=10^{-8}, \tt{TimeLimit}=24\,(hrs), and \tt{maxIter}=1000. For the barrier methods and the network simplex method, we also set the related stopping tolerance to \tt{tol} and compute the corresponding KKT residues \eta_{\rm max} from the returned approximate solutions. For OT problems, we turn off the crossover phase for all LP solvers, as it is too time-consuming and in general does not improve the quality of the output solutions. On the other hand, we turn on the presolving phase for the barrier methods, since it usually enhances numerical stability and improves convergence. However, for the network simplex method in CPLEX, we turn off the presolving phase since it consumes considerable computational time without improving the performance of the network simplex method. We also note that the barrier method implemented in HiGHS only uses one thread when solving the problem. For WB problems, we turn off the presolving phase for the barrier method, since it does not improve the performance in our numerical tests, but we turn on the presolving phase for the dual simplex method since it dramatically decreases the computational time in our experience. To ensure fair comparisons, we restrict all methods to use only one thread for OT problems, while allowing multi-threading for WB problems.

We also mention that for the matrix A generated from an OT problem, the last row of A is always redundant. Thus, to improve the practical performance of all the solvers, we always drop the last equality constraint when passing the input data to a solver. Moreover, we notice that applying a suitable scaling to the problem data can improve the efficiency of Algorithm 1. Therefore, in our implementation, for given input data d and c, Algorithm 1 is executed on \hat{d} and \hat{c}, where \hat{d}=d/\left\lVert d\right\rVert and \hat{c}=c/\left\lVert c\right\rVert; the relative KKT residues and the relative duality gap are, however, evaluated on the original data. Finally, in our implementation, we set \epsilon^{0}=1, r=0.75, \tau=0.25, \rho=0.5, \mu=10^{-8}, \sigma=\min\{10^{3},\left\lVert c\right\rVert\}, and \kappa_{p}=\kappa_{c}=1.
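The scaling and the default parameter choices just listed can be collected in a small helper, sketched below (the names are ours and purely illustrative):

```julia
using LinearAlgebra

# Illustrative sketch of the data scaling and default parameters stated above.
scale_data(d, c) = (d ./ norm(d), c ./ norm(c))   # Algorithm 1 runs on the scaled data

default_params(c) = (ϵ0 = 1.0, r = 0.75, τ = 0.25, ρ = 0.5, μ = 1e-8,
                     σ = min(1e3, norm(c)), κp = 1.0, κc = 1.0)
```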

5.2 Computational results on DOTmark for OT problems

In our experiments, the cost matrix C is chosen based on the squared Euclidean distance. In particular, for 1\leq i\leq m and 1\leq j\leq n, C_{ij}:=(\ell_{1}-\ell_{2})^{2}+(k_{1}-k_{2})^{2}, where the indices i and j correspond to the (\ell_{1},k_{1}) and (\ell_{2},k_{2}) pixel positions in the two images, respectively. Since C contains some very large elements, we normalize C by dividing it by its maximum value. The discrete probability distributions a and b are obtained by normalizing any two images. We consider images from the DOTmark collection [19] (a benchmark dataset for discrete OT problems), which contains ten classes of images arising from different scenarios; see Figure 1 for examples of images from each class. Within a specific class that contains ten different images, we can pair any two different images and compute the optimal transport between them. Thus, each class of images generates 45 OT problems, resulting in 450 problems for the ten classes. The DOTmark collection offers images at different resolutions, ranging from 32\times 32 to 512\times 512. However, due to limited available memory, we only present the results for the cases with 32\times 32 and 64\times 64 resolutions, together with partial results for the 128\times 128 resolution. As a consequence, 450+450+10=910 OT problems are tested in our experiments. Table 1 summarizes the problem sizes with respect to different image resolutions. We see that for OT problems generated from 128\times 128 images, the problem sizes are typically too large to be handled by the LP solvers on our machine.
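As a concrete illustration of this setup, the following Julia snippet (our own sketch; the column-major pixel enumeration and all names are assumptions) builds the normalized squared-Euclidean cost matrix for r×r images and the two distributions a and b.

```julia
# Illustrative sketch: normalized squared-Euclidean cost between the pixel
# grids of two r×r images, with pixels enumerated column by column.
function grid_cost(r::Int)
    coords = [(ℓ, k) for k in 1:r for ℓ in 1:r]                      # (row, column) of each pixel
    C = [(p[1] - q[1])^2 + (p[2] - q[2])^2 for p in coords, q in coords]
    return C ./ maximum(C)                                           # normalize by the largest entry
end

img1, img2 = rand(32, 32), rand(32, 32)   # stand-ins for two 32×32 DOTmark images
a = vec(img1) ./ sum(img1)                # discrete distributions obtained by
b = vec(img2) ./ sum(img2)                # normalizing the two images
C = grid_cost(32)
```

The resulting triple (C, a, b) defines the OT instance passed to each solver.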

Figure 1: Example images from the DOTmark collection [19]. Panels: (a) Cauchy, (b) Classic, (c) GRFm, (d) GRFr, (e) GRFs, (f) LogGRF, (g) LogitGRF, (h) Microscopy, (i) Shapes, (j) WhiteNoise.
Resolution #constraints #variables
32×3232\times 32 2048 1,048,576
64×6464\times 64 8192 16,777,216
128×128128\times 128 32768 268,435,456
Table 1: OT problem sizes with different image resolutions.

Notice that for most images from the “Shapes” and “MicroscopyImages” classes, the generated probability distributions may contain zero entries. In particular, if the i-th (1\leq i\leq m) element of a is zero, then the i-th row of the optimal transport plan X is a zero vector; similarly, if the j-th (1\leq j\leq n) element of b is zero, then the j-th column of X is a zero vector. Therefore, one can drop those zero rows and/or columns in a preprocessing phase to obtain a smaller-scale OT problem. As a consequence, we always solve these reduced OT problems.
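This preprocessing can be done in a few lines; the sketch below (hypothetical names, our own illustration) removes the zero-mass rows and columns before the reduced OT problem is solved.

```julia
# Illustrative preprocessing sketch: drop zero entries of a and b together with
# the corresponding rows/columns of C; the optimal plan of the reduced problem
# is afterwards embedded back into the zero rows/columns of the full plan.
function reduce_ot(C, a, b)
    rows = findall(>(0), a)
    cols = findall(>(0), b)
    return C[rows, cols], a[rows], b[cols], rows, cols
end
```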

The detailed computational results for the instances with 64\times 64 resolution are presented in Table 2. (For brevity, we do not present the results for the instances with 32\times 32 resolution, but we note that the relative performance of the various methods is similar to that observed in Table 2.) In the table, the barrier method and the network simplex method in CPLEX are denoted by “CPLEX-Bar” and “CPLEX-Net”, respectively. For each image class, we present the mean values over the 45 OT problems. The column “Iter” reports the mean number of iterations taken by each solver. Note that CPLEX-Net performs network simplex iterations, whose number is usually very large; moreover, we currently do not know how to extract the total number of network simplex iterations through JuMP, so we report “-” in the Iter column for CPLEX-Net. The column “Time” reports the mean computational time of each solver for comparing their practical efficiency. The remaining four columns report the mean relative KKT residues (i.e., \eta_{p}, \eta_{d}, \eta_{c} and \eta_{g}) in order to compare the accuracy of the solutions.

From the computational results, we observe that all the solvers except HiGHS are able to solve all the OT problems accurately; the solutions returned by HiGHS are usually less accurate than those of the other solvers. Among all the solvers, CPLEX-Net shows the best performance, which coincides with the results presented in [19]. Comparing the barrier methods, we observe that CPLEX-Bar and Gurobi have comparable performance, while HiGHS takes more iterations and computational time than CPLEX-Bar and Gurobi. Although Algorithm 1 takes more iterations than the barrier methods, it is about 2–4 times faster than CPLEX-Bar and Gurobi. This indicates that Algorithm 1 has a lower per-iteration cost than the barrier methods, which is exactly a consequence of exploiting the solution sparsity.

Table 2: Computational results for 64×6464\times 64 images.
Image Solver Time (s) Iter ηp\eta_{p} ηd\eta_{d} ηc\eta_{c} ηg\eta_{g}
CauchyDensity HiGHS 5.0e+02 29 2.3e-06 8.2e-10 2.1e-10 1.8e-06
Gurobi 3.0e+02 16 5.7e-11 1.6e-16 2.7e-09 2.2e-09
CPLEX-Bar 6.0e+02 26 1.5e-10 3.4e-16 1.3e-10 1.5e-09
CPLEX-Net 3.6e+01 - 1.3e-18 6.2e-17 0.0e+00 7.6e-18
SmoothNewton 7.8e+01 57 3.1e-12 0.0e+00 3.9e-09 4.9e-09
GRFmoderate HiGHS 4.4e+02 27 1.1e-02 9.0e-02 8.4e-10 1.7e-02
Gurobi 2.8e+02 15 2.5e-12 1.5e-16 1.6e-09 1.8e-09
CPLEX-Bar 5.9e+02 25 5.2e-10 3.1e-15 3.4e-10 5.8e-09
CPLEX-Net 3.3e+01 - 6.8e-19 4.6e-17 0.0e+00 2.6e-18
SmoothNewton 7.1e+01 54 1.3e-12 0.0e+00 3.2e-09 4.9e-09
GRFsmooth HiGHS 4.6e+02 28 1.8e-06 1.5e-12 7.5e-10 1.1e-06
Gurobi 3.0e+02 16 1.3e-11 1.6e-16 2.8e-09 2.9e-09
CPLEX-Bar 6.2e+02 27 4.4e-10 3.0e-15 4.6e-10 4.8e-09
CPLEX-Net 3.8e+01 - 7.5e-19 5.6e-17 0.0e+00 3.1e-18
SmoothNewton 7.8e+01 58 1.9e-12 0.0e+00 3.0e-09 5.0e-09
LogitGRF HiGHS 5.2e+02 30 1.8e-05 1.2e-12 6.4e-10 2.2e-05
Gurobi 3.0e+02 16 2.7e-12 1.5e-16 2.4e-09 2.6e-09
CPLEX-Bar 6.8e+02 30 3.6e-10 2.4e-15 4.1e-10 2.9e-09
CPLEX-Net 3.5e+01 - 9.1e-19 5.0e-17 0.0e+00 2.9e-18
SmoothNewton 7.7e+01 59 1.3e-12 0.0e+00 3.3e-09 5.1e-09
Shapes HiGHS 9.2e+01 21 6.4e-07 1.6e-09 3.1e-10 7.0e-08
Gurobi 6.2e+01 13 1.7e-11 9.2e-15 4.2e-10 1.8e-09
CPLEX-Bar 1.0e+02 20 7.8e-10 9.6e-16 2.4e-10 4.5e-09
CPLEX-Net 7.5e+00 - 3.3e-18 3.8e-17 0.0e+00 8.2e-18
SmoothNewton 2.1e+01 52 6.3e-11 0.0e+00 8.8e-09 3.6e-09
ClassicImages HiGHS 5.1e+02 30 1.4e-06 1.0e-09 2.2e-10 2.5e-08
Gurobi 3.0e+02 15 2.6e-12 1.4e-16 3.2e-09 2.5e-09
CPLEX-Bar 5.9e+02 25 2.3e-10 2.0e-15 1.7e-10 2.4e-09
CPLEX-Net 3.5e+01 - 7.6e-19 4.1e-17 0.0e+00 2.0e-18
SmoothNewton 7.1e+01 55 1.2e-12 0.0e+00 3.2e-09 5.0e-09
GRFrough HiGHS 4.9e+02 29 5.5e-04 2.0e-02 5.6e-10 6.1e-04
Gurobi 2.9e+02 15 1.5e-12 1.4e-16 1.8e-09 2.2e-09
CPLEX-Bar 5.6e+02 24 2.5e-10 6.1e-15 5.0e-10 3.5e-09
CPLEX-Net 3.4e+01 - 7.3e-19 2.5e-17 0.0e+00 5.7e-19
SmoothNewton 7.4e+01 56 8.8e-13 0.0e+00 2.7e-09 4.9e-09
LogGRF HiGHS 9.1e+02 47 2.1e-03 5.5e-02 8.9e-10 5.3e-03
Gurobi 3.1e+02 16 3.0e-12 1.7e-16 1.5e-09 2.3e-09
CPLEX-Bar 5.9e+02 25 4.7e-10 1.9e-14 5.4e-10 6.1e-09
CPLEX-Net 4.4e+01 - 1.6e-18 6.3e-17 0.0e+00 8.7e-18
SmoothNewton 7.6e+01 58 3.2e-12 0.0e+00 3.8e-09 5.1e-09
MicroscopyImages HiGHS 1.1e+02 40 1.1e-06 3.0e-09 3.6e-10 8.7e-08
Gurobi 3.7e+01 15 3.6e-12 1.5e-16 5.3e-09 1.2e-09
CPLEX-Bar 6.1e+01 25 6.6e-10 1.1e-14 4.6e-10 3.4e-09
CPLEX-Net 4.1e+00 - 3.6e-18 5.1e-17 0.0e+00 6.4e-18
SmoothNewton 9.8e+00 46 1.2e-10 0.0e+00 9.1e-09 2.0e-09
WhiteNoise HiGHS 4.8e+02 27 1.0e-02 9.5e-02 5.0e-10 6.4e-03
Gurobi 3.0e+02 16 8.4e-13 1.4e-16 1.8e-09 1.9e-09
CPLEX-Bar 6.1e+02 27 3.4e-10 4.3e-15 5.2e-10 4.5e-09
CPLEX-Net 3.5e+01 - 1.1e-18 1.6e-17 0.0e+00 2.8e-19
SmoothNewton 6.9e+01 56 7.9e-13 0.0e+00 3.3e-09 5.1e-09

We also present some computational results for images with 128\times 128 resolution. Since solving each OT problem of such a huge size is very time-consuming, we only conduct numerical experiments on the first two images in each class; thus, only partial results are presented in Table 3. Note also that, except for Algorithm 1, all the other solvers usually require more than 96 GB of RAM to handle these OT problems and hence run into out-of-memory issues. Therefore, only the numerical results for Algorithm 1 are presented in Table 3. We can see that the accuracy of the computed solutions is still very high. Moreover, Algorithm 1 takes less than 40 minutes to converge, even though each problem contains more than 268 million nonnegative variables. This indicates that Algorithm 1 is quite efficient and robust; more importantly, it consumes less memory than state-of-the-art solvers.

Table 3: Partial computational results for Algorithm 1 on 128×128128\times 128 images.
Image Time (s) Iter ηp\eta_{p} ηd\eta_{d} ηc\eta_{c} ηg\eta_{g}
CauchyDensity 2.21e+03 96 1.1e-12 0.0e+00 2.8e-09 3.4e-09
GRFmoderate 2.17e+03 97 6.8e-13 0.0e+00 4.0e-10 5.4e-09
GRFsmooth 2.18e+03 93 8.7e-13 0.0e+00 3.7e-09 5.2e-09
LogitGRF 1.85e+03 86 8.0e-13 0.0e+00 4.1e-10 5.6e-09
Shapes 4.92e+02 48 3.1e-12 0.0e+00 2.6e-09 4.1e-09
ClassicImages 2.22e+03 101 6.4e-13 0.0e+00 4.3e-10 5.7e-09
GRFrough 2.30e+03 97 7.6e-13 0.0e+00 4.4e-10 5.8e-09
LogGRF 2.28e+03 108 9.2e-13 0.0e+00 4.1e-10 5.5e-09
MicroscopyImages 5.74e+02 88 1.4e-12 0.0e+00 1.2e-09 5.4e-09
WhiteNoise 2.21e+03 99 8.0e-13 0.0e+00 4.4e-10 5.3e-09

To further evaluate the memory consumption of each solver, we also measure the peak memory used by each solver when solving the problems, using the program “/usr/bin/time” provided by the Linux system. The output of the “time” program contains the “maximum resident set size” (maximum RSS), which provides a reasonable estimate of the peak memory usage. Since obtaining these measurements is rather time-consuming, we only measure the memory usage of each solver on the OT problems derived from the first two images in each image class. The results on memory usage are summarized in Table 4. We can see that Algorithm 1 indeed uses much less memory than the other solvers, which further shows its potential for solving OT problems at scale.

Table 4: Maximum RSS (MB) for the first two images in each class.
Image Resolution HiGHS Gurobi CPLEX-Bar CPLEX-Net SmoothNewton
CauchyDensity 64×6464\times 64 22282 22604 19489 20224 3456
GRFmoderate 64×6464\times 64 22281 22503 19500 20227 3751
GRFsmooth 64×6464\times 64 22281 24152 19486 20225 3550
LogitGRF 64×6464\times 64 22331 21639 19492 20217 3850
Shapes 64×6464\times 64 7524 9236 9515 8924 2254
ClassicImages 64×6464\times 64 22282 21668 19470 20228 3744
GRFrough 64×6464\times 64 22281 24155 19499 20237 3541
LogGRF 64×6464\times 64 22330 24158 19491 20226 3803
MicroscopyImages 64×6464\times 64 2632 3049 2305 2471 919
WhiteNoise 64×6464\times 64 22282 24158 19489 20229 3572

5.3 Computational results on real data for WB problems

In this subsection, we test Algorithm 1 on the computation of Wasserstein barycenters of real image data. For a given set of images, we first rescale them to a pre-specified resolution, then vectorize and normalize them so that each resulting vector sums to one. We generate the distance matrices D^{(t)}, t=1,\dots,N, in the same manner as before. In our experiments, we set m=n so that the barycenter has the same dimension as the given images.
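A possible realization of this data preparation in Julia is sketched below (the helper name is ours; the images are assumed to be already loaded and rescaled to a common resolution, and the ground cost D is built as in Section 5.2):

```julia
# Illustrative sketch of the WB data preparation: vectorize and normalize the
# images, and form the scaled distance matrices D^{(t)} = γ_t * D, where D is a
# common ground cost on the shared pixel grid (e.g. the normalized
# squared-Euclidean grid cost) and γ_t = 1/N, p = 2 as in our experiments.
function wb_data(images::Vector{Matrix{Float64}}, D::Matrix{Float64})
    N = length(images)
    a = [vec(img) ./ sum(img) for img in images]   # sampling distributions a^{(t)}
    Dt = [D ./ N for _ in 1:N]                     # D^{(t)} with uniform weights γ_t = 1/N
    return Dt, a
end
```

The resulting pairs (D^{(t)}, a^{(t)}) then enter the LP construction of problem (26).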

We conduct experiments on the MNIST dataset [51], the Coil20 dataset [52] and the Yale-Face dataset [53]. For the MNIST dataset, we randomly select 10 images for each digit (0,\dots,9) with the original resolution 28\times 28. For the Coil20 dataset, we select 3 representative objects: Car, Duck and Pig; for each object, we choose 10 images and rescale them to 32\times 32. For the Yale-Face dataset, we select three human subjects: Yale01, Yale02 and Yale03; for each subject, we randomly choose 10 images and rescale them to 30\times 40, 36\times 48 and 54\times 72, respectively. See Figure 2 for example images from each class. A summary of the problem sizes is shown in Table 5. We set the weights \gamma_{t}=1/N for all t=1,\dots,N and p=2.

Figure 2: Example images from different databases. Panels: (a) MNIST, (b) Car, (c) Duck, (d) Pig, (e) Yale01, (f) Yale02, (g) Yale03.
Database N Resolution #constraints #variables
MNIST 10 28×2828\times 28 15680 6,147,344
Car 10 32×3232\times 32 20480 10,486,784
Duck 10 32×3232\times 32 20480 10,486,784
Pig 10 32×3232\times 32 20480 10,486,784
Yale01 10 30×4030\times 40 24000 14,401,200
Yale02 10 36×4836\times 48 34560 29,861,568
Yale03 10 54×7254\times 72 77760 151,169,328
Table 5: WB problem sizes with different image resolutions.

The results are summarized in Table 6, where “Gurobi-Spx” and “Gurobi-Bar” denote the dual simplex method and the barrier method in Gurobi, respectively. From the computational results, we observe that SmoothNewton and Gurobi-Bar are able to solve all the WB problems on image data successfully, whereas Gurobi-Spx cannot solve the largest Yale-Face problem within the 24-hour limit, since the corresponding LP is of very large size. On larger problems, Gurobi-Bar is faster than Gurobi-Spx, but SmoothNewton outperforms both Gurobi-Spx and Gurobi-Bar on all the image WB problems. For the WB instances on the Duck, Pig, and Yale-Face images, SmoothNewton is about ten times faster than both Gurobi-Spx and Gurobi-Bar.

Table 6: Computational results on real data for WB problems.
Image Solver Time(s) Iter ηp\eta_{p} ηd\eta_{d} ηc\eta_{c} ηg\eta_{g}
MNIST Gurobi-Spx 4.9e+01 103070 2.8e-16 8.1e-17 7.3e-19 3.5e-18
Gurobi-Bar 1.1e+02 30 5.0e-11 5.1e-15 6.1e-11 1.2e-09
SmoothNewton 1.3e+01 91 1.3e-13 0.0e+00 2.1e-10 8.2e-09
Car Gurobi-Spx 1.2e+02 38608 1.6e-16 3.0e-17 2.0e-19 1.7e-18
Gurobi-Bar 2.2e+02 33 8.6e-12 8.4e-14 6.7e-13 6.1e-11
SmoothNewton 2.6e+01 111 6.2e-14 0.0e+00 1.6e-11 6.2e-10
Duck Gurobi-Spx 3.2e+02 536738 2.6e-16 8.2e-17 1.8e-11 5.1e-18
Gurobi-Bar 2.6e+02 43 3.0e-11 4.0e-15 1.6e-13 2.5e-12
SmoothNewton 2.6e+01 100 4.5e-13 0.0e+00 2.7e-10 9.1e-09
Pig Gurobi-Spx 5.5e+02 1306882 1.6e-16 8.0e-17 4.2e-18 1.7e-17
Gurobi-Bar 2.4e+02 38 3.8e-11 5.2e-14 4.4e-12 7.4e-09
SmoothNewton 2.3e+01 96 1.2e-12 0.0e+00 1.8e-10 8.9e-09
Yale01 Gurobi-Spx 3.7e+03 6126017 1.9e-16 8.7e-17 3.1e-17 1.9e-17
Gurobi-Bar 3.0e+02 32 3.6e-11 1.1e-15 4.1e-10 8.3e-09
SmoothNewton 3.1e+01 97 7.0e-13 0.0e+00 1.6e-10 6.6e-09
Yale02 Gurobi-Spx 2.0e+04 18892197 2.6e-16 8.5e-17 1.9e-11 2.5e-17
Gurobi-Bar 2.1e+03 45 1.4e-10 1.3e-13 2.2e-11 5.3e-09
SmoothNewton 1.8e+02 115 1.0e-12 0.0e+00 1.7e-10 9.7e-09
Yale03 Gurobi-Spx (exceed 24 hours) - - - -
Gurobi-Bar 3.3e+04 26 1.9e-13 2.4e-12 4.1e-14 2.2e-09
SmoothNewton 2.5e+03 177 2.3e-13 0.0e+00 1.2e-10 8.6e-09

6 Conclusions

In this paper, we have proposed a squared smoothing Newton method via the Huber smoothing function for solving the KKT system of the discrete optimal transport (OT) problem directly. Besides having appealing convergence properties, the proposed method is able to exploit the solution sparsity of the OT problem to greatly reduce the computational costs. We have also extended the method to solve WB problems. Our numerical experiments on solving a large set of OT and WB instances have demonstrated the excellent practical performance of the proposed method when compared to the state-of-the-art solvers.

There are some issues concerning the proposed method that deserve further investigation. For example, the condition ensuring the local superlinear convergence rate is rather strong and cannot be verified a priori in general. Since we have observed fast local convergence empirically in our numerical tests, there may exist more relaxed conditions ensuring such strong convergence properties, which deserve further study. Moreover, the method can clearly also be applied to solving general linear programming (LP) problems, but its efficiency and robustness in that setting need further investigation.

Acknowledgement

The research of Kim-Chuan Toh is supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 3 grant call (MOE-2019-T3-1-010).

References

  • [1] Anthony Y Fu, Liu Wenyin, and Xiaotie Deng. Detecting phishing web pages with visual similarity assessment based on earth mover’s distance (EMD). IEEE transactions on dependable and secure computing, 3(4):301–311, 2006.
  • [2] Kristen Grauman and Trevor Darrell. Fast contour matching using approximate earth mover’s distance. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., volume 1, pages I–I. IEEE, 2004.
  • [3] Dave Kendal, Cindy E. Hauser, Georgia E. Garrard, Sacha Jellinek, Katherine M. Giljohann, and Joslin L. Moore. Quantifying plant colour and colour difference as perceived by humans using digital images. PLOS ONE, 8(8):e72296, 2013.
  • [4] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.
  • [5] Yossi Rubner, Carlo Tomasi, and Leonidas J. Guibas. The earth mover’s distance as a metric for image retrieval. International J. of Computer Vision, 40(2):99–121, 2000.
  • [6] Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, 2015.
  • [7] Cédric Villani. Optimal transport: old and new, volume 338. Springer, 2009.
  • [8] Martial Agueh and Guillaume Carlier. Barycenters in the Wasserstein space. SIAM J. Mathematical Analysis, 43(2):904–924, 2011.
  • [9] Marco Cuturi and Arnaud Doucet. Fast computation of Wasserstein barycenters. In International Conference on Machine Learning, pages 685–693. PMLR, 2014.
  • [10] Jianbo Ye and Jia Li. Scaling up discrete distribution clustering using ADMM. In 2014 IEEE International Conference on Image Processing (ICIP), pages 5267–5271. IEEE, 2014.
  • [11] Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision: Third International Conference, SSVM 2011, Ein-Gedi, Israel, May 29–June 2, 2011, Revised Selected Papers 3, pages 435–446. Springer, 2012.
  • [12] Pierre-André Chiappori, Robert J McCann, and Lars P Nesheim. Hedonic price equilibria, stable matching, and optimal transport: equivalence, topology, and uniqueness. Economic Theory, 42:317–354, 2010.
  • [13] Alfred Galichon. Optimal transport methods in economics. Princeton University Press, 2018.
  • [14] Ethan Anderes, Steffen Borgwardt, and Jacob Miller. Discrete Wasserstein barycenters: Optimal transport for discrete data. Mathematical Methods of Operations Research, 84:389–409, 2016.
  • [15] Stephen J Wright. Primal-dual interior-point methods. SIAM, 1997.
  • [16] George Dantzig. Linear programming and extensions. Princeton University Press, 2016.
  • [17] Xudong Li, Defeng Sun, and Kim-Chuan Toh. An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for linear programming. SIAM J. Optimization, 30(3):2410–2440, 2020.
  • [18] James B Orlin. A polynomial time primal network simplex algorithm for minimum cost flows. Mathematical Programming, 78(2):109–129, 1997.
  • [19] Jörn Schrieber, Dominic Schuhmacher, and Carsten Gottschlich. DOTmark–a benchmark for discrete optimal transport. IEEE Access, 5:271–282, 2016.
  • [20] David G Luenberger and Yinyu Ye. Linear and nonlinear programming, volume 2. Springer, 1984.
  • [21] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013.
  • [22] Bernhard Schmitzer. Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM J. Scientific Computing, 41(3):A1443–A1481, 2019.
  • [23] Hong T. M. Chu, Ling Liang, Kim-Chuan Toh, and Lei Yang. An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems. Computational Optimization and Applications, 85:107–146, 2020.
  • [24] Lei Yang and Kim-Chuan Toh. Bregman proximal point algorithm revisited: A new inexact version and its inertial variant. SIAM J. Optimization, 32(3):1523–1554, 2022.
  • [25] Yiyang Liu, Zaiwen Wen, and Wotao Yin. A multiscale semi-smooth Newton method for optimal transport. Journal of Scientific Computing, 91(2):39, 2022.
  • [26] Quentin Mérigot. A multiscale approach to optimal transport. In Computer Graphics Forum, volume 30, pages 1583–1592. Wiley Online Library, 2011.
  • [27] Bernhard Schmitzer. A sparse multiscale algorithm for dense optimal transport. J. of Mathematical Imaging and Vision, 56(2):238–259, 2016.
  • [28] Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. Iterative Bregman projections for regularized transportation problems. SIAM J. Scientific Computing, 37(2):A1111–A1138, 2015.
  • [29] Lei Yang, Jia Li, Defeng Sun, and Kim-Chuan Toh. A fast globally linearly convergent algorithm for the computation of Wasserstein barycenters. J. of Machine Learning Research, 22(1):984–1020, 2021.
  • [30] Jianbo Ye, Panruo Wu, James Z Wang, and Jia Li. Fast discrete distribution clustering using Wasserstein barycenter with sparse support. IEEE Transactions on Signal Processing, 65(9):2317–2332, 2017.
  • [31] Guojun Zhang, Yancheng Yuan, and Defeng Sun. An efficient HPR algorithm for the Wasserstein barycenter problem with O(Dim(P)/ε){O}(\mathrm{Dim(P)}/\varepsilon) computational complexity. arXiv preprint arXiv:2211.14881, 2022.
  • [32] Yinyu Ye. A potential reduction algorithm allowing column generation. SIAM J. Optimization, 2(1):7–20, 1992.
  • [33] André L Tits, Pierre-Antoine Absil, and William P Woessner. Constraint reduction for linear programs with many inequality constraints. SIAM J. Optimization, 17(1):119–146, 2006.
  • [34] Liqun Qi and Jie Sun. A nonsmooth version of Newton’s method. Mathematical programming, 58(1):353–367, 1993.
  • [35] Hao Hu, Haesol Im, Xinxin Li, and Henry Wolkowicz. A semismooth Newton-type method for the nearest doubly stochastic matrix problem. Mathematics of Operations Research, in print, 2023.
  • [36] Robert Mifflin. Semismooth and semiconvex functions in constrained optimization. SIAM J. Control and Optimization, 15(6):959–972, 1977.
  • [37] Hans Rademacher. Über partielle und totale differenzierbarkeit von funktionen mehrerer variabeln und über die transformation der doppelintegrale. Mathematische Annalen, 79(4):340–359, 1919.
  • [38] Frank H Clarke. Optimization and nonsmooth analysis. SIAM, 1990.
  • [39] Xiaojun Chen, Liqun Qi, and Defeng Sun. Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities. Mathematics of Computation, 67(222):519–540, 1998.
  • [40] Lingchen Kong, Jie Sun, and Naihua Xiu. A regularized smoothing Newton method for symmetric cone complementarity problems. SIAM J. Optimization, 19(3):1028–1047, 2008.
  • [41] Liqun Qi, Defeng Sun, and Guanglu Zhou. A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Mathematical programming, 87(1):1–35, 2000.
  • [42] Jie Sun, Defeng Sun, and Liqun Qi. A squared smoothing Newton method for nonsmooth matrix equations and its applications in semidefinite optimization problems. SIAM J. Optimization, 14(3):783–806, 2004.
  • [43] Defeng Sun and Liqun Qi. Solving variational inequality problems via smoothing-nonsmooth reformulations. J. of Computational and Applied Mathematics, 129(1-2):37–62, 2001.
  • [44] Bintong Chen and Patrick T Harker. A non-interior-point continuation method for linear complementarity problems. SIAM J. on Matrix Analysis and Applications, 14(4):1168–1190, 1993.
  • [45] Christian Kanzow. Some noninterior continuation methods for linear complementarity problems. SIAM J. Matrix Analysis and Applications, 17(4):851–868, 1996.
  • [46] Steve Smale. Algorithms for solving equations. In The Collected Papers of Stephen Smale: Volume 3, pages 1263–1286. World Scientific, 2000.
  • [47] Israel Zang. A smoothing-out technique for min—max optimization. Mathematical Programming, 19(1):61–77, 1980.
  • [48] Liqun Qi and Defeng Sun. A survey of some nonsmooth equations and smoothing Newton methods. In Progress in Optimization, pages 121–146. Springer, 1999.
  • [49] Lingchen Kong, Levent Tunçel, and Naihua Xiu. Equivalent conditions for Jacobian nonsingularity in linear symmetric cone programming. Journal of Optimization Theory and Applications, 148(2):364–389, 2011.
  • [50] Zi Xian Chan and Defeng Sun. Constraint nondegeneracy, strong regularity, and nonsingularity in semidefinite programming. SIAM J. Optimization, 19(1):370–396, 2008.
  • [51] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [52] Sameer A Nene, Shree K Nayar, Hiroshi Murase, et al. Columbia object image library (coil-20). 1996.
  • [53] Peter N. Belhumeur, Joao P Hespanha, and David J. Kriegman. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.

Appendix A Proofs

Proof of Theorem 2.

Since (\bar{\epsilon},\bar{x},\bar{y}) is an accumulation point of the sequence \{(\epsilon^{k},x^{k},y^{k})\}, Theorem 1 implies that \widehat{\mathcal{E}}(\bar{\epsilon},\bar{x},\bar{y})=0. In particular, it holds that \bar{\epsilon}=0 and \zeta_{k}=r\lVert\widehat{\mathcal{E}}(\epsilon^{k},x^{k},y^{k})\rVert^{1+\tau} for k\geq 0 sufficiently large. Since \epsilon^{k}>0 and every element of \partial\widehat{\mathcal{E}}(\bar{\epsilon},\bar{x},\bar{y}) is nonsingular, it follows from [34, Proposition 3.1] that

^(ϵk,xk,yk)1=O(1),\left\lVert\widehat{\mathcal{E}}^{\prime}(\epsilon^{k},x^{k},y^{k})^{-1}\right\rVert=O(1), (30)

for k0k\geq 0 sufficiently large. For notational simplicity, we let wk=(ϵk;xk;yk)w^{k}=(\epsilon^{k};x^{k};y^{k}), w¯=(ϵ¯;x¯;y¯)\bar{w}=(\bar{\epsilon};\bar{x};\bar{y}), and Δwk=(Δϵk;Δxk;Δyk)\Delta w^{k}=(\Delta\epsilon^{k};\Delta x^{k};\Delta y^{k}). Using (30), we see that

wk+Δwkw¯=wk+^(wk)1((ζkϵ000)^(wk))w¯\displaystyle\;\left\|w^{k}+\Delta w^{k}-\bar{w}\right\|=\;\left\|w^{k}+\widehat{\mathcal{E}}^{\prime}(w^{k})^{-1}\left(\begin{pmatrix}\zeta_{k}\epsilon^{0}\\ 0\\ 0\end{pmatrix}-\widehat{\mathcal{E}}(w^{k})\right)-\bar{w}\right\| (31)
=\displaystyle= ^(wk)1(^(wk)^(wk)(wkw¯)(ζkϵ000))\displaystyle\;\left\lVert-\widehat{\mathcal{E}}^{\prime}(w^{k})^{-1}\left(\widehat{\mathcal{E}}(w^{k})-\widehat{\mathcal{E}}^{\prime}(w^{k})(w^{k}-\bar{w})-\begin{pmatrix}\zeta_{k}\epsilon^{0}\\ 0\\ 0\end{pmatrix}\right)\right\rVert
=\displaystyle= O(^(wk)^(wk)(wkw¯))+O(ζkϵ0)\displaystyle\;O\left(\left\|\widehat{\mathcal{E}}(w^{k})-\widehat{\mathcal{E}}^{\prime}(w^{k})(w^{k}-\bar{w})\right\|\right)+O\left(\zeta_{k}\epsilon^{0}\right)
=\displaystyle= O(^(wk)^(wk)(wkw¯))+O(^(ϵk,xk,yk)1+τ).\displaystyle\;O\left(\left\|\widehat{\mathcal{E}}(w^{k})-\widehat{\mathcal{E}}^{\prime}(w^{k})(w^{k}-\bar{w})\right\|\right)+O\left(\left\lVert\widehat{\mathcal{E}}(\epsilon^{k},x^{k},y^{k})\right\rVert^{1+\tau}\right).

Since ^\widehat{\mathcal{E}} is strongly semismooth everywhere, we have that, for k0k\geq 0 sufficiently large,

^(wk)^(wk)(wkw¯)=O(wkw¯2).\left\|\widehat{\mathcal{E}}(w^{k})-\widehat{\mathcal{E}}^{\prime}(w^{k})(w^{k}-\bar{w})\right\|=O\left(\left\|w^{k}-\bar{w}\right\|^{2}\right). (32)

Moreover, since \widehat{\mathcal{E}} is locally Lipschitz continuous at \bar{w}, it follows that

^(wk)1+τ=^(wk)^(w¯)1+τ=O(wkw¯1+τ).\left\lVert\widehat{\mathcal{E}}(w^{k})\right\rVert^{1+\tau}=\left\lVert\widehat{\mathcal{E}}(w^{k})-\widehat{\mathcal{E}}(\bar{w})\right\rVert^{1+\tau}=O\left(\left\|w^{k}-\bar{w}\right\|^{1+\tau}\right). (33)

By combining (31), (32) and (33), we get for k0k\geq 0 sufficiently large,

wk+Δwkw¯=\displaystyle\left\|w^{k}+\Delta w^{k}-\bar{w}\right\|= O(wkw¯2)+O(wkw¯1+τ)=O(wkw¯1+τ).\displaystyle\;O\left(\left\|w^{k}-\bar{w}\right\|^{2}\right)+O\left(\left\|w^{k}-\bar{w}\right\|^{1+\tau}\right)\;=\;O\left(\left\|w^{k}-\bar{w}\right\|^{1+\tau}\right). (34)

On the other hand, by (30), the smallest singular value of \widehat{\mathcal{E}}^{\prime}(w^{k}) is bounded away from zero for k\geq 0 sufficiently large. Therefore, it follows from (32) that, for k\geq 0 sufficiently large,

wkw¯=O(^(ϵk,xk,yk)).\left\|w^{k}-\bar{w}\right\|=O\left(\left\lVert\widehat{\mathcal{E}}(\epsilon^{k},x^{k},y^{k})\right\rVert\right). (35)

Using (34), (35) and the fact that ϕ\phi and ^\widehat{\mathcal{E}} are both locally Lipschitz continuous at (ϵ¯,x¯,y¯)(\bar{\epsilon},\bar{x},\bar{y}), we get for k0k\geq 0 sufficiently large that

ϕ(wk+Δwk)=^(wk+Δwk)^(w¯)2=O(wk+Δwkw¯2)=O(wkw¯2(1+τ))\displaystyle\>\phi(w^{k}+\Delta w^{k})=\;\left\lVert\widehat{\mathcal{E}}(w^{k}+\Delta w^{k})-\widehat{\mathcal{E}}(\bar{w})\right\rVert^{2}=\;O\left(\left\|w^{k}+\Delta w^{k}-\bar{w}\right\|^{2}\right)=\;O\left(\left\|w^{k}-\bar{w}\right\|^{2(1+\tau)}\right)
=\displaystyle= O(^(wk)2(1+τ))=O(ϕ(wk)1+τ)=o(ϕ(wk)).\displaystyle\;O\left(\left\lVert\widehat{\mathcal{E}}(w^{k})\right\rVert^{2(1+\tau)}\right)=\;O\left(\phi(w^{k})^{1+\tau}\right)=\;o\left(\phi(w^{k})\right).

This shows that, for k0k\geq 0 sufficiently large, (ϵk+1,xk+1,yk+1)=(ϵk,xk,yk)+(Δϵk,Δxk,Δyk)(\epsilon^{k+1},x^{k+1},y^{k+1})=(\epsilon^{k},x^{k},y^{k})+(\Delta\epsilon^{k},\Delta x^{k},\Delta y^{k}), i.e., the unit step size is eventually accepted. Therefore, the proof is completed. ∎