
On the optimal objective value of random linear programs

Marzieh Bakhshi¹, James Ostrowski¹, Konstantin Tikhomirov²

¹Department of Industrial & System Engineering, University of Tennessee, Knoxville, TN, USA. Emails: [email protected], [email protected].
²Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA, USA. Email: [email protected].
Abstract

We consider the problem of maximizing $\langle c,x\rangle$ subject to the constraints $Ax\leq\mathbf{1}$, where $x\in\mathbb{R}^{n}$, $A$ is an $m\times n$ matrix with mutually independent centered subgaussian entries of unit variance, and $c$ is a cost vector of unit Euclidean length. In the asymptotic regime $n\to\infty$, $\frac{m}{n}\to\infty$, and under some additional assumptions on $c$, we prove that the optimal objective value $z^{*}$ of the linear program satisfies

\[\lim_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}=1\quad\text{almost surely}.\]

In the context of high-dimensional convex geometry, our findings imply sharp asymptotic bounds on the spherical mean width of the random convex polyhedron $P=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\}$. We provide numerical experiments as supporting data for the theoretical predictions. Further, we carry out numerical studies of the limiting distribution and the standard deviation of $z^{*}$.

The authors’ names are in alphabetical order.

1 Introduction

1.1 Problem setup and motivation

Consider the linear program (LP)

\[\begin{aligned}&\text{maximize}\ \ \langle c,x\rangle\\ &\text{subject to}\ \ Ax\leq\mathbf{1}\\ &x\in\mathbb{R}^{n}\end{aligned}\tag{1}\]

where the entries of the coefficient matrix $A$ are mutually independent random variables, and $\mathbf{1}$ denotes the vector of all ones. Note that $x$ is a vector of free variables. In this note we deal with two settings: either $c$ is a non-random unit vector or $c$ is random, uniformly distributed on the unit Euclidean sphere. For any realization of $A$ the above LP is feasible since the zero vector satisfies all the constraints. We denote the value of the objective function of (1) by $z=z(x)=\langle c,x\rangle$, the optimal objective value by $z^{*}$, and any optimal solution by $x^{*}$.
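To make the setup concrete, the following sketch (not from the paper; the parameter values and the choice of solver are ours) samples an instance of (1) with Rademacher entries and solves it with an off-the-shelf LP solver:

```python
# A minimal sketch of LP (1), assuming scipy is available. Since x is free
# and the LP is a maximization, we pass -c to linprog and flip the sign back.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 50, 5000                             # m >> n, the regime studied here
A = rng.choice([-1.0, 1.0], size=(m, n))    # centered, unit-variance, subgaussian
c = rng.standard_normal(n)
c /= np.linalg.norm(c)                      # unit cost vector

res = linprog(-c, A_ub=A, b_ub=np.ones(m),
              bounds=[(None, None)] * n, method="highs")
z_star = -res.fun
print(np.sqrt(2 * np.log(m / n)) * z_star)  # should be close to 1
```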


The main problem we would like to address here is the following: Under the assumption that the number of constraints is significantly larger than the number of variables, what is the magnitude of the optimal objective value?


We provide theoretical guarantees on the optimal objective value $z^{*}$ in the asymptotic regime $n\to\infty$, $\frac{m}{n}\to\infty$, and carry out empirical studies which confirm that medium-size random LPs (with the number of constraints on the order of a few thousand) closely follow the theoretical predictions. The numerical experiments further allow us to formulate conjectures related to the distribution of the optimal objective value, which we hope to address in future works.


Our interest in LP (1) is twofold.

  • (i)

    The LP (1) with i.i.d. random coefficients can be viewed as the simplest model of an “average-case” linear program. LPs of this form have been considered in the mathematical literature in the context of estimating the average-case complexity of the simplex method [7, 8, 9] (see Subsection 1.4 below for a more comprehensive literature review). Despite significant progress in that direction, fundamental questions regarding properties of random LPs remain open [44, Section 6]. The results obtained in this note contribute to a better understanding of the geometry of random feasible regions defined as intersections of independent random half-spaces. We view our work as a step towards investigating random LPs allowing for sparse, inhomogeneous and correlated constraints, which would serve as more realistic models of linear programs occurring in practice.

  • (ii)

    Assuming that the cost vector $c$ is uniformly distributed on the unit sphere and is independent of $A$, the quantity

    \[2\,\mathbb{E}_{c}\,z^{*}=2\,\mathbb{E}_{c}\,\max\big\{\langle c,x\rangle:\;Ax\leq\mathbf{1}\big\}\]

    (where $\mathbb{E}_{c}$ is the expectation taken with respect to the randomness of $c$) is the spherical mean width $\mathcal{W}(P)$ of the random polyhedron $P=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\}$. The mean width is a classical parameter associated with a convex set; in particular, $\mathcal{W}(P)=2M(P^{\circ})$, where $P^{\circ}$ is the polar body of $P$ defined as the convex hull of the rows of $A$, and $M(\cdot)$ is the average value of the Minkowski functional of $P^{\circ}$ on the unit Euclidean sphere,

    \[M(P^{\circ})=\mathbb{E}_{c}\,\|c\|_{P^{\circ}}=\mathbb{E}_{c}\,\inf\big\{\lambda>0:\,\lambda^{-1}c\in P^{\circ}\big\}.\]

    The mean width plays a key role in the study of projections of convex bodies and volume estimates (see, in particular, [50, Chapters 9–11] and [40]). The existing body of work (which we review in Subsection 1.4) has focused on the problem of estimating the mean width up to a constant multiple. In this paper, we provide sharp asymptotics of $\mathcal{W}(P)$ in the regime of parameters mentioned above.

1.2 Notation

In the course of the note we use the letter $K$ (with subscripts) for constants and $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, $\omega(\cdot)$ for the standard asymptotic notations. Specifically, for positive functions $f$ and $g$ defined on positive real numbers, $f(x)=O\big(g(x)\big)$ if there exist $K>0$ and a real number $x_{0}$ such that $f(x)\leq Kg(x)$ for all $x\geq x_{0}$, and $f(x)=o\big(g(x)\big)$ if $\lim_{x\to\infty}\frac{f(x)}{g(x)}=0$. Further,

\[f(x)=\Omega\big(g(x)\big)\ \ \text{if and only if}\ \ g(x)=O\big(f(x)\big),\]
\[f(x)=\omega\big(g(x)\big)\ \ \text{if and only if}\ \ g(x)=o\big(f(x)\big).\]

Definition 1.1.

A random variable $\xi$ is $K$–subgaussian if

\[\mathbb{E}\,\exp(\xi^{2}/K^{2})\leq 2.\]

The smallest $K>0$ satisfying the above inequality is the subgaussian moment of $\xi$ (see, for example, [50, Chapter 2]). We will further call a random vector with mutually independent components $K$–subgaussian if every component is $K$–subgaussian. Examples of subgaussian variables include Rademacher (i.e. symmetric $\pm 1$) random variables and, more generally, any bounded random variables. The Rademacher distribution is of interest to us since a matrix with i.i.d. $\pm 1$ entries provides the simplest model of a normalized system of constraints with discrete bounded coefficients.
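For a Rademacher variable the defining inequality can be solved in closed form, giving the subgaussian moment $K=1/\sqrt{\log 2}$; the short sketch below (illustrative only, not part of the paper) recovers this value numerically:

```python
# A sketch of Definition 1.1 for a Rademacher variable: for xi = +/-1,
# E exp(xi^2/K^2) = exp(1/K^2), so the condition E exp(xi^2/K^2) <= 2
# holds iff K >= 1/sqrt(log 2); we find the smallest such K by bisection.
import math

def mgf_rademacher(K: float) -> float:
    # E exp(xi^2 / K^2) for xi = +/-1 with equal probability
    return math.exp(1.0 / K**2)

lo, hi = 0.5, 5.0
for _ in range(60):                 # bisection for the smallest feasible K
    mid = 0.5 * (lo + hi)
    if mgf_rademacher(mid) <= 2.0:
        hi = mid
    else:
        lo = mid
print(hi, 1.0 / math.sqrt(math.log(2)))   # both ~1.2011
```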

1.3 Theoretical guarantees

In this paper, we study asymptotics of the optimal objective value of (1) in the regime when the number of free variables $n$ tends to infinity. We will further assume that the number of constraints $m=m(n)$ grows super-linearly in $n$, i.e. $\lim_{n\to\infty}\frac{m}{n}=\infty$ (while the setting $\frac{m}{n}=O(1)$ is of significant interest, it is not covered by our argument).

The main result of this note is

Theorem 1.2 (Main result: asymptotics of $z^{*}$ in subgaussian setting).

Let $K>0$. Assume that $\lim_{n\to\infty}\frac{m}{n}=\infty$, and that for each $n$, the entries of the $m\times n$ coefficient matrix $A=A(n)$ are mutually independent centered $K$–subgaussian variables of unit variance. Assume further that the non-random unit cost vectors $c=c(n)\in\mathbb{R}^{n}$ satisfy $\lim_{n\to\infty}\log^{3/2}(m/n)\,\|c\|_{\infty}=0$. Then for any constant $\varepsilon>0$, we have

\[\mathbb{P}\big\{1-\varepsilon\leq\sqrt{2\log(m/n)}\,z^{*}\leq 1+\varepsilon\big\}\geq 1-n^{-\omega(1)},\]

and, in particular, $\lim_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}=1$ almost surely.

Remark.

The result is obtained as a combination of two separate statements. We refer to Theorem 3.1 and Corollary 3.6 for the lower bound on $z^{*}$ and Theorem 4.1 for the upper bound.

Distributional invariance under rotations makes the analysis of Gaussian random polytopes considerably simpler. In the next theorem, we avoid any structural assumptions on the cost vectors, and thereby remove any implicit upper bounds on $\frac{m}{n}$:

Theorem 1.3 (Asymptotics of $z^{*}$ in Gaussian setting).

Assume that $\lim_{n\to\infty}\frac{m}{n}=\infty$ and that for each $n$, the entries of the $m\times n$ coefficient matrix $A=A(n)$ are mutually independent standard Gaussian variables. For each $n$, let $c(n)$ be a non-random unit cost vector. Then for any constant $\varepsilon>0$ we have

\[\mathbb{P}\big\{1-\varepsilon\leq\sqrt{2\log(m/n)}\,z^{*}\leq 1+\varepsilon\big\}\geq 1-n^{-\omega(1)}.\]

In particular, $\lim_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}=1$ almost surely.
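The Gaussian statement is straightforward to probe numerically; the following sketch (our illustration, with arbitrary parameter choices, assuming scipy) compares $\sqrt{2\log(m/n)}\,z^{*}$ with 1 for a fixed unit cost vector:

```python
# A quick numerical sanity check of Theorem 1.3 (a sketch, not the paper's
# experimental setup): for Gaussian A and a fixed unit cost vector, the
# rescaled optimum sqrt(2 log(m/n)) z* should concentrate near 1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 40
for m in [2_000, 20_000]:
    A = rng.standard_normal((m, n))
    c = np.ones(n) / np.sqrt(n)     # any unit vector works in the Gaussian case
    res = linprog(-c, A_ub=A, b_ub=np.ones(m),
                  bounds=[(None, None)] * n, method="highs")
    print(m, np.sqrt(2 * np.log(m / n)) * (-res.fun))
```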

As we mentioned before, the setting when the cost vector $c$ is uniform on the Euclidean sphere is of special interest in the field of convex geometry since the quantity $\mathcal{W}=2\,\mathbb{E}_{c}\,z^{*}$ (where $\mathbb{E}_{c}$ denotes the expectation taken with respect to $c$) is the spherical mean width of the random polyhedron $P=P(n)=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\}$. As corollaries of Theorems 1.2 and 1.3, we obtain

Corollary 1.4 (The mean width in subgaussian setting).

Let $\lim_{n\to\infty}\frac{m}{n}=\infty$, let the matrices $A=A(n)$ be as in Theorem 1.2, and assume additionally that $\log^{3/2}m=o(\sqrt{n/\log n})$. Then the spherical mean width $\mathcal{W}(P)$ of the polyhedron $P=P(n)=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\}$ satisfies

\[\lim_{n\to\infty}\sqrt{2\log(m/n)}\,\mathcal{W}(P)=2\quad\text{almost surely.}\]
Corollary 1.5 (The mean width in Gaussian setting).

Let $m=m(n)$ and $A=A(n)$ be as in Theorem 1.3. Then the spherical mean width $\mathcal{W}(P)$ of the polyhedron $P=P(n)=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\}$ satisfies

\[\lim_{n\to\infty}\sqrt{2\log(m/n)}\,\mathcal{W}(P)=2\quad\text{almost surely.}\]
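The mean width asymptotics can likewise be probed by Monte Carlo, averaging the LP optimum over independent uniform unit cost vectors; the sketch below is our illustration (single moderate-size Gaussian instance, arbitrary parameters):

```python
# A Monte Carlo sketch of Corollary 1.5: estimate the spherical mean width
# W(P) = 2 E_c z* by averaging the LP optimum over random unit cost vectors,
# then compare sqrt(2 log(m/n)) W(P) with 2.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m, trials = 30, 10_000, 20
A = rng.standard_normal((m, n))          # one fixed Gaussian polyhedron P
vals = []
for _ in range(trials):
    c = rng.standard_normal(n)
    c /= np.linalg.norm(c)               # c uniform on the unit sphere
    res = linprog(-c, A_ub=A, b_ub=np.ones(m),
                  bounds=[(None, None)] * n, method="highs")
    vals.append(-res.fun)
width = 2 * np.mean(vals)                # empirical mean width
print(np.sqrt(2 * np.log(m / n)) * width)    # should be close to 2
```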

1.4 Review of related literature

1.4.1 Convex feasibility problems

In the setting of Theorem 1.2, the existence of a feasible vector $x$ with $\sqrt{2\log(m/n)}\,\langle c,x\rangle\geq 1-\varepsilon$ is equivalent to the statement that the intersection of the random convex sets

\[C_{i}:=\bigg\{y\in c^{\perp}:\;\langle{\rm row}_{i}(A),y\rangle\leq 1-\frac{1-\varepsilon}{\sqrt{2\log(m/n)}}\langle{\rm row}_{i}(A),c\rangle\bigg\},\quad 1\leq i\leq m,\]

is non-empty. The sets can be naturally viewed as independent random affine halfspaces of the hyperplane $c^{\perp}$ and, with that interpretation, the above question is a relative of the spherical perceptron problem, in which the object of interest is the intersection of random affine halfspaces of the form $\{y:\,\langle R_{i},y\rangle\leq-\kappa\}$, for independent random vectors $R_{1},\dots,R_{m}$ and a constant parameter $\kappa\in\mathbb{R}$ (see [46, 47] for a comprehensive discussion). Our proof of the lower bound on $z^{*}$ (equivalently, showing that $\bigcap_{i\leq m}C_{i}\neq\emptyset$) is based on an iterative process which falls under the umbrella of block-iterative projection methods.

The problem of constructing or approximating a vector in the non-empty intersection of a given collection of convex sets has been actively studied in optimization literature. The special setting where the convex sets are affine hyperplanes or halfspaces, corresponding to solving systems of linear equations and inequalities, was first addressed by Kaczmarz [28], Agmon [1], Motzkin and Schoenberg [37]. We refer, among others, to papers [48, 2, 45, 17, 38, 11, 34, 25, 31, 27, 14, 5, 35, 39, 24, 33, 51, 36, 52, 53] for more recent developments and further references. In particular, the work [2] introduced the block-iterative projection methods for solving the feasibility problems, when each update is computed as a function of projections onto a subset of constraints.

In view of the numerous earlier results cited above, we would like to emphasize that the goal of our construction is to verify that the intersection $\bigcap_{i\leq m}C_{i}$ is non-empty with high probability, rather than to approximate a solution to a problem which is known in advance to be feasible. Since we aim to get asymptotically sharp bounds on $z^{*}$, we have to rely on a high-precision analysis of constraint violations, not allowing for any significant losses in our estimates. To the best of our knowledge, no existing result on block-iterative projection methods addresses the problem considered here.

1.4.2 Random linear programs

Linear programs with either the coefficient matrix or the right-hand side generated randomly have been actively studied in various contexts:

  • Average-case analysis. As we mentioned above, the setting of random LPs with the Gaussian coefficient matrix (or, more generally, a matrix with independent rows having spherically symmetric distributions) was considered by Borgwardt [7, 8, 9] in the context of estimating the average-case complexity of the shadow simplex method. The average-case complexity of the Dantzig algorithm was addressed by Smale in [43] (see also [6]).

  • Smoothed analysis. The smoothed analysis of the simplex algorithm, when the coefficient matrix is a sum of an arbitrary deterministic matrix and a small Gaussian perturbation, was first carried out by Spielman and Teng in [44], with improved estimates obtained in later works [49, 12, 26].

  • Distribution of the optimal objective value under small random perturbations. The setting when the coefficient matrix, the RHS or the cost vector is subject to small random perturbations was considered, in particular, in [4, 41, 29, 32].

  • Random integer feasibility problems. We refer to [10] for the problem of integer feasibility in the randomized setting.

  • The optimal objective value of LPs with random cost vectors [16].

1.4.3 The mean width of random polyhedra

As we have mentioned at the beginning of the introduction, the mean width of random polytopes has been studied in the convex geometry literature, although much of the work deals with the mean width of convex hulls of random vectors, i.e. of the polars of the random polyhedra considered in the present work. We refer, in particular, to [13, 3, 21] for related results (see also [22, 23]).

A two-sided bound on the mean width in the i.i.d. subgaussian setting was derived in [30] (see also [20] for earlier results); specifically, it was shown that under the assumption that the entries of $A$ are i.i.d. subgaussian variables of zero mean and unit variance, the mean width of the symmetric polyhedron $P=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\;\text{and}\;-Ax\leq\mathbf{1}\}$ satisfies

\[\frac{K_{1}}{\sqrt{\log(m/n)}}\leq\mathcal{W}(P)\leq\frac{K_{2}}{\sqrt{\log(m/n)}}\tag{2}\]

with high probability, where the constants $K_{1},K_{2}$ depend on the subgaussian moment of the matrix entries. The argument in [30] heavily relies on results which introduce suboptimal constant multiples into the estimate of $\mathcal{W}(P)$, and we believe that it cannot be adapted to derive an asymptotically sharp version of (2), which is the main goal of the present work.

1.5 Organization of the paper

We derive Theorem 1.2 and Corollary 1.4 as a combination of two separate statements: a lower bound on $z^{*}$ obtained in Section 3, and a matching upper bound treated in Section 4. The lower bound on $z^{*}$ (Theorem 3.1) is in turn obtained by combining the moderate deviations estimate of Proposition 2.5 with an iterative block-projection algorithm for constructing a feasible solution, which was briefly mentioned before. The upper bound on $z^{*}$ (Theorem 4.1) proved in Section 4 relies on the moderate deviations estimate of Proposition 2.6 and a special version of a covering argument.

The Gaussian setting (Theorem 1.3 and Corollary 1.5) is considered in Section 5. Finally, results of numerical simulations are presented in Section 6.

The diagrams below provide a high-level structure of the proofs of the main results of the paper.

Figure 1: Structure of the proof of Theorem 1.2. (Lower bound, Theorem 3.1: moderate deviations estimate of Proposition 2.5 and an iterative block projection algorithm. Upper bound, Theorem 4.1: moderate deviations estimate of Proposition 2.6 and a covering argument with a union bound estimate.)

Figure 2: Structure of the proof of Theorem 1.3. (Lower bound: Theorem 3.1 for $m=n^{O(1)}$; standard bounds on the $\|\cdot\|_{\infty}$–norm of a Gaussian vector for $m=n^{\omega(1)}$. Upper bound: outer radius estimate of Proposition 5.1.)

2 Preliminaries

2.1 Compressible and incompressible vectors

Definition 2.1 (Sparse vectors).

A vector $v$ in $\mathbb{R}^{n}$ is $s$–sparse for some $s>0$ if the number of non-zero components of $v$ is at most $s$.

Definition 2.2 (Compressible and incompressible vectors, [42]).

Let $v$ be a unit vector in $\mathbb{R}^{n}$, and let $\delta,\rho\in(0,1)$. The vector $v$ is called $(\delta,\rho)$–compressible if the Euclidean distance from $v$ to the set of $\delta n$–sparse vectors is at most $\rho$, that is, if there is a $\delta n$–sparse vector $z$ such that $\|v-z\|_{2}\leq\rho$. Otherwise, $v$ is $(\delta,\rho)$–incompressible.

Observe that a vector $v$ is $(\delta,\rho)$–compressible if and only if the sum of squares of its $\lfloor\delta n\rfloor$ largest (by absolute value) components is at least $1-\rho^{2}$.
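This characterization is easy to test numerically; the following sketch (ours, for illustration) implements the check verbatim:

```python
# A direct transcription of Definition 2.2 via the observation above:
# v is (delta, rho)-compressible iff the squared mass of its
# floor(delta * n) largest-magnitude entries is at least 1 - rho^2.
import numpy as np

def is_compressible(v: np.ndarray, delta: float, rho: float) -> bool:
    n = v.size
    s = int(np.floor(delta * n))         # allowed sparsity level
    top = np.sort(np.abs(v))[::-1][:s]   # s largest entries by absolute value
    return np.sum(top**2) >= 1 - rho**2

v = np.ones(100) / 10.0                  # flat unit vector: incompressible
print(is_compressible(v, delta=0.05, rho=0.9))    # False
e1 = np.zeros(100); e1[0] = 1.0          # basis vector: compressible
print(is_compressible(e1, delta=0.05, rho=0.9))   # True
```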

2.2 Standard concentration and anti-concentration for subgaussian variables

The following result is well known, see for example [50, Section 3.1].

Proposition 2.3.

Let $V$ be a vector in $\mathbb{R}^{k}$ with mutually independent centered $K$–subgaussian components of unit variance. Then for every $t>0$,

\[\mathbb{P}\big\{\big|\|V\|_{2}-\sqrt{k}\big|\geq t\big\}\leq 2\exp(-K_{1}t^{2}),\]

where the constant $K_{1}>0$ depends only on $K$.

The next proposition is an example of Paley–Zygmund–type inequalities, and its proof is standard (we provide the proof for completeness).

Proposition 2.4.

For any $K>0$ there is $K^{\prime}>0$ depending only on $K$ with the following property. Let $\xi$ be a centered $K$–subgaussian variable of unit variance. Then

\[\mathbb{P}\big\{\xi\geq K^{\prime}\big\}\geq K^{\prime}.\]
Proof.

Since $\xi$ is $K$–subgaussian, all moments of $\xi$ are bounded and, in particular, $\mathbb{E}\,\xi^{4}\leq\tilde{K}$ for some $\tilde{K}<\infty$ depending only on $K$. Applying the Paley–Zygmund inequality, we get

\[\mathbb{P}\Big\{\xi\geq\frac{1}{\sqrt{2}}\Big\}+\mathbb{P}\Big\{\xi\leq-\frac{1}{\sqrt{2}}\Big\}=\mathbb{P}\Big\{\xi^{2}\geq\frac{1}{2}\Big\}\geq\frac{1}{4\,\mathbb{E}\xi^{4}}\geq\frac{1}{4\tilde{K}}.\tag{3}\]

Let $\varepsilon$ be the largest number in $[0,1/\sqrt{2}]$ such that

\[\varepsilon+\sqrt{\varepsilon}+2\int_{\varepsilon^{-1/2}}^{\infty}\exp(-s^{2}/K^{2})\,ds\leq\frac{1}{16\tilde{K}}.\]

We will show by contradiction that $\mathbb{P}\{\xi\geq\varepsilon\}\geq\varepsilon$. Assume the contrary. Then $\mathbb{P}\big\{\xi\geq\frac{1}{\sqrt{2}}\big\}<\frac{1}{16\tilde{K}}$, implying, in view of (3),

\[-\mathbb{E}\,\xi{\bf 1}_{\{\xi<0\}}\geq\frac{1}{\sqrt{2}}\,\mathbb{P}\Big\{\xi\leq-\frac{1}{\sqrt{2}}\Big\}>\frac{1}{16\tilde{K}}.\]

On the other hand,

\[\mathbb{E}\,\xi{\bf 1}_{\{\xi\geq 0\}}\leq\varepsilon+\int_{\varepsilon}^{\varepsilon^{-1/2}}\mathbb{P}\{\xi\geq s\}\,ds+\int_{\varepsilon^{-1/2}}^{\infty}\mathbb{P}\{\xi\geq s\}\,ds\leq\varepsilon+\sqrt{\varepsilon}+2\int_{\varepsilon^{-1/2}}^{\infty}\exp(-s^{2}/K^{2})\,ds\leq\frac{1}{16\tilde{K}}\]

(here the middle integral is at most $\varepsilon\cdot\varepsilon^{-1/2}=\sqrt{\varepsilon}$ by the contrary assumption, and the tail integral is bounded via subgaussianity). We conclude that $\mathbb{E}\,\xi=\mathbb{E}\,\xi{\bf 1}_{\{\xi\geq 0\}}+\mathbb{E}\,\xi{\bf 1}_{\{\xi<0\}}<0$, leading to a contradiction. ∎

Remark.

Instead of the requirement that the variable $\xi$ be subgaussian in the above proposition, we could use the assumption $\mathbb{E}\,|\xi|^{2+\delta}<\infty$ for a fixed $\delta>0$.

2.3 Moderate deviations for linear combinations

In this section we are interested in upper and lower bounds on the probabilities

\[\mathbb{P}\bigg\{\sum_{i=1}^{n}y_{i}\xi_{i}\geq t\bigg\},\]

where $y=(y_{1},\dots,y_{n})$ is a fixed vector and $\xi_{1},\dots,\xi_{n}$ are mutually independent centered subgaussian variables of unit variance. Under certain assumptions on $y$ and $t$, the probability bounds match (up to a $1\pm o(1)$ term in the power of the exponent) those in the Gaussian setting. The estimates below are examples of moderate deviations results for sums of mutually independent variables (see, in particular, [15, Section 3.7]), and can be obtained with the help of standard Bernstein-type bounds and the change of measure method. However, we were not able to locate the specific statements that we need in the literature, and provide proofs for completeness.

Proposition 2.5 (Moderate deviations: upper bound).

Fix any $K\geq 1$. Let $n\to\infty$, and for each $n$ let $c=c(n)$ be a non-random unit vector in $\mathbb{R}^{n}$, and let $\delta=\delta(n)\in(0,1)$ be a sequence of numbers such that $\delta n$ converges to infinity. Assume that for every $\nu>0$ and all large $n$, the vector $c(n)$ is $(\nu^{-1}\delta,1-\nu)$–incompressible. For each $n$, let $\xi_{1},\xi_{2},\dots,\xi_{n}$ be mutually independent $K$–subgaussian variables of zero mean and unit variance. Then for every $\varepsilon>0$ and all large $n$ we have

\[\mathbb{P}\bigg\{\sum_{i=1}^{n}c_{i}\xi_{i}\geq(1+\varepsilon)\sqrt{\delta n}\bigg\}\leq\exp\big(-\delta n/2\big).\]
Proof.

Throughout the proof, we assume that $K,\varepsilon>0$ are fixed. Let $\lambda:=\sqrt{\delta n}$. We have, by Markov's inequality,

\[\mathbb{P}\bigg\{\sum_{i=1}^{n}c_{i}\xi_{i}\geq(1+\varepsilon)\sqrt{\delta n}\bigg\}=\mathbb{P}\bigg\{\exp\Big(\lambda\sum_{i=1}^{n}c_{i}\xi_{i}\Big)\geq\exp\big(\lambda(1+\varepsilon)\sqrt{\delta n}\big)\bigg\}\leq\frac{\prod_{i=1}^{n}\mathbb{E}\,\exp(\lambda c_{i}\xi_{i})}{\exp(\lambda(1+\varepsilon)\sqrt{\delta n})}.\]

Since $\xi_{i}$ has zero mean and unit variance, for every $i\leq n$ we have

\[\mathbb{E}\,\exp(\lambda c_{i}\xi_{i})=1+\frac{1}{2}\lambda^{2}c_{i}^{2}+\sum_{j=3}^{\infty}\frac{(\lambda c_{i})^{j}\,\mathbb{E}\,\xi_{i}^{j}}{j!}.\]

Hence, for all $i\leq n$ such that $|\lambda c_{i}|\leq 1$,

\[\mathbb{E}\,\exp(\lambda c_{i}\xi_{i})=1+\frac{1}{2}\lambda^{2}c_{i}^{2}+O(\lambda^{3}c_{i}^{3}),\]

where the implicit constant in $O(\dots)$ depends on $K$ only. On the other hand, for every $i\leq n$ we can write (see, in particular, [50, Section 2.5])

\[\mathbb{E}\,\exp(\lambda c_{i}\xi_{i})\leq\exp(\tilde{K}\lambda^{2}c_{i}^{2}),\]

for some $\tilde{K}>0$ depending only on $K$. Observe that, by the assumptions on the cost vectors $c=c(n)$, there is a sequence of numbers $\kappa=\kappa(n)$ converging to zero as $n\to\infty$, such that

\[\sum_{i:\,|\lambda c_{i}|>\kappa}c_{i}^{2}=o(1).\]

Then, combining the above estimates, we obtain

\[\mathbb{P}\bigg\{\sum_{i=1}^{n}c_{i}\xi_{i}\geq(1+\varepsilon)\sqrt{\delta n}\bigg\}\leq\frac{\prod_{i:\,|\lambda c_{i}|\leq\kappa}\big(1+\frac{1}{2}\lambda^{2}c_{i}^{2}+O(\lambda^{3}c_{i}^{3})\big)\prod_{i:\,|\lambda c_{i}|>\kappa}\exp(\tilde{K}\lambda^{2}c_{i}^{2})}{\exp(\lambda(1+\varepsilon)\sqrt{\delta n})}=\exp\big(-\lambda(1+\varepsilon)\sqrt{\delta n}\big)\exp\Big(\frac{1}{2}\lambda^{2}+o(\lambda^{2})\Big)\leq\exp\Big(-\frac{1+\varepsilon}{2}\,\delta n+o(\delta n)\Big).\]

The result follows. ∎

Proposition 2.6 (Moderate deviations: lower bound).

Fix any $K\geq 1$. Let $k\to\infty$, and for each $k$ let $y=y(k)$ be a non-random unit vector in $\mathbb{R}^{k}$, and let $\delta=\delta(k)\in(0,1)$ be a sequence of numbers such that $\delta k$ converges to infinity. Assume that $\lim_{k\to\infty}\big(\sqrt{\delta k}\,\|y\|_{\infty}\big)=0$. For each $k$, let $\xi_{1},\xi_{2},\dots,\xi_{k}$ be mutually independent $K$–subgaussian variables of zero mean and unit variance. Then for every $\varepsilon>0$ and all large $k$ we have

\[\mathbb{P}\bigg\{\sum_{i=1}^{k}y_{i}\xi_{i}\geq(1-\varepsilon)\sqrt{\delta k}\bigg\}\geq\exp\big(-\delta k/2\big).\]
Proof.

We shall apply the standard technique of change of measure. Define $s_{0}:=(1-\varepsilon/4)\sqrt{\delta k}$, and for each $i\leq k$ set

\[B_{i}:=\mathbb{E}\,\exp(s_{0}\,y_{i}\xi_{i})=1+(1+o(1))\frac{s_{0}^{2}y_{i}^{2}}{2}.\tag{4}\]

For every $i\leq k$, let $Z_{i}$ be the random variable defined via its cumulative distribution function

\[\mathbb{P}\big\{Z_{i}\leq\tau\big\}=\int_{-\infty}^{\tau}\frac{\exp(s_{0}z)}{B_{i}}\,d\mu_{y_{i}\xi_{i}}(z),\quad\tau\in\mathbb{R},\]

where $\mu_{y_{i}\xi_{i}}$ is the probability measure on $\mathbb{R}$ induced by $y_{i}\xi_{i}$, and such that $Z_{1},Z_{2},\dots,Z_{k}$ are mutually independent. Denote by $T$ the set of all $(z_{1},\dots,z_{k})$ in $\mathbb{R}^{k}$ such that $(1-\varepsilon/2)\sqrt{\delta k}>\sum_{i=1}^{k}z_{i}>(1-\varepsilon)\sqrt{\delta k}$. We have

\[\mathbb{P}\bigg\{\sum_{i=1}^{k}y_{i}\xi_{i}\geq(1-\varepsilon)\sqrt{\delta k}\bigg\}\geq\int_{T}d\mu_{y_{1}\xi_{1}}(z_{1})\dots d\mu_{y_{k}\xi_{k}}(z_{k})\geq\Big(\prod_{i=1}^{k}B_{i}\Big)\exp\big(-s_{0}(1-\varepsilon/2)\sqrt{\delta k}\big)\int_{T}\Big(\prod_{i=1}^{k}\frac{\exp(s_{0}z_{i})}{B_{i}}\Big)\,d\mu_{y_{1}\xi_{1}}(z_{1})\dots d\mu_{y_{k}\xi_{k}}(z_{k})=\Big(\prod_{i=1}^{k}B_{i}\Big)\exp\big(-s_{0}(1-\varepsilon/2)\sqrt{\delta k}\big)\,\mathbb{P}\big\{(Z_{1},\dots,Z_{k})\in T\big\},\]

where the second inequality holds since $\sum_{i=1}^{k}z_{i}<(1-\varepsilon/2)\sqrt{\delta k}$ everywhere on $T$.

To estimate $\mathbb{P}\{(Z_{1},\dots,Z_{k})\in T\}$, we compute the means and the variances of $Z_{1},\dots,Z_{k}$. We get

\[\mathbb{E}\,Z_{i}=\int_{\mathbb{R}}\frac{z\exp(s_{0}z)}{B_{i}}\,d\mu_{y_{i}\xi_{i}}(z)=\frac{1}{B_{i}}\bigg(s_{0}y_{i}^{2}+\sum_{j=2}^{\infty}\frac{s_{0}^{j}y_{i}^{j+1}\,\mathbb{E}\,\xi_{i}^{j+1}}{j!}\bigg)=\frac{s_{0}y_{i}^{2}(1+o(1))}{B_{i}},\]

and

\[\mathbb{E}\,Z_{i}^{2}=\int_{\mathbb{R}}\frac{z^{2}\exp(s_{0}z)}{B_{i}}\,d\mu_{y_{i}\xi_{i}}(z)=\frac{1}{B_{i}}\bigg(y_{i}^{2}+\sum_{j=1}^{\infty}\frac{s_{0}^{j}y_{i}^{j+2}\,\mathbb{E}\,\xi_{i}^{j+2}}{j!}\bigg)=\frac{y_{i}^{2}(1+o(1))}{B_{i}}.\]

Thus, in view of (4),

\[\mathbb{E}\,\sum_{i=1}^{k}Z_{i}=(1-\varepsilon/4\pm o(1))\sqrt{\delta k},\]

and

\[{\rm Var}\Big(\sum_{i=1}^{k}Z_{i}\Big)\leq 1+o(1).\]

It follows, by Chebyshev's inequality, that $\mathbb{P}\{(Z_{1},\dots,Z_{k})\in T\}=1-o(1)$. To complete the proof, it remains to note that

\[\Big(\prod_{i=1}^{k}B_{i}\Big)\exp\big(-s_{0}(1-\varepsilon/2)\sqrt{\delta k}\big)=\exp\Big(\frac{1\pm o(1)}{2}\,s_{0}^{2}-s_{0}(1-\varepsilon/2)\sqrt{\delta k}\Big)=\omega\big(\exp(-\delta k/2)\big).\]

∎

3 The lower bound on zz^{*} in subgaussian setting

The main result of this section is

Theorem 3.1 (Lower bound on $z^{*}$).

Let $K>0$. Assume that $\lim_{n\to\infty}\frac{m}{n}=\infty$ and that for each $n$, the entries of the $m\times n$ coefficient matrix $A=A(n)$ are mutually independent centered $K$–subgaussian variables of unit variance. Assume further that the non-random unit cost vectors $c=c(n)$ satisfy:

for every $\kappa>0$ there is $n_{0}(\kappa)\in\mathbb{N}$ such that for all $n\geq n_{0}(\kappa)$, $c=c(n)$ is $(\kappa^{-1}\log(m/n)/n,\,1-\kappa)$–incompressible.

Let $z^{*}=z^{*}(n)$ be the optimal objective value of (1). Then for any constants $\varepsilon,K^{\prime}>0$ and all sufficiently large $n$, we have

\[\mathbb{P}\big\{\sqrt{2\log(m/n)}\,z^{*}\geq 1-\varepsilon\big\}\geq 1-n^{-K^{\prime}}.\]

In particular, $\liminf_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}\geq 1$ almost surely.

Although the assumptions on the cost vectors in Theorem 3.1 may appear excessively technical, we conjecture that they are optimal in the following sense: whenever there is $\kappa>0$ such that $c(n)$ is $(o(\log(m/n)/n),1-\kappa)$–compressible for infinitely many $n\in\mathbb{N}$, there are $K>0$ (independent of $n$) and a sequence of random matrices $A(n)$ with mutually independent centered entries of unit variance and subgaussian moments bounded by $K$, such that $\liminf_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}<1$ almost surely. We refer to Section 6.2 for numerical experiments regarding the assumptions on the cost vectors.

As we mentioned in the introduction, the main idea of our proof of Theorem 3.1 is to construct, via an iterative process, a feasible solution $x$ to the LP (1) satisfying $\sqrt{2\log(m/n)}\,\langle c,x\rangle\geq 1-\varepsilon$. We start by defining a vector $x_{0}$ as the $(1-\varepsilon)(2\log(m/n))^{-1/2}$ multiple of the cost vector $c$. Our goal is to add a perturbation to $x_{0}$ which restores its feasibility. We shall select a vector $x_{1}\in c^{\perp}$ which repairs the constraints violated by $x_{0}$, i.e. $\langle{\rm row}_{i}(A),x_{0}+x_{1}\rangle\leq 1$ for all $i\in[m]$ with $\langle{\rm row}_{i}(A),x_{0}\rangle>1$. The vector $x_{1}$ may itself introduce some constraint violations; to repair those constraints, we will find another vector $x_{2}\in c^{\perp}$, and consider $x_{0}+x_{1}+x_{2}$, etc. At the $k$-th iteration, the vector $x_{k}$ will be chosen as a linear combination of the matrix rows having large scalar products with $x_{k-1}$. The process continues for some finite number of steps $r$ which depends on the values of $m$ and $n$. We shall prove that the resulting vector $x:=x_{0}+x_{1}+\dots+x_{r}$ is feasible with high probability. To carry out this argument, we start with a number of preliminary estimates concerning subgaussian vectors, as well as certain estimates on the condition numbers of random matrices.
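The following simplified sketch (our illustration; it replaces the discretized net-based corrections of the actual proof with plain least-squares corrections and comes with no convergence guarantee) shows the repair iteration in action:

```python
# A simplified numerical sketch of the repair iteration described above.
# Starting from x0 = (1 - eps) c / sqrt(2 log(m/n)), each round finds a
# correction in c^perp, supported on the span of the projected violated
# rows, that cancels the violating scalar products.
import numpy as np

rng = np.random.default_rng(3)
n, m, eps = 60, 20_000, 0.2
A = rng.standard_normal((m, n))
c = rng.standard_normal(n); c /= np.linalg.norm(c)

x = (1 - eps) * c / np.sqrt(2 * np.log(m / n))
P = np.eye(n) - np.outer(c, c)            # projection onto c^perp
for _ in range(40):                       # repair rounds
    viol = np.where(A @ x > 1.0)[0]
    if viol.size == 0:
        break
    R = A[viol]                           # violated rows
    B = R @ P                             # so <row_i, P w> = (B w)_i
    # least-squares w with <row_i, x + P w> = 0 on the violated rows;
    # adding P w does not change <c, x> since P w lies in c^perp
    w, *_ = np.linalg.lstsq(B, -(R @ x), rcond=None)
    x = x + P @ w
print("feasible:", np.all(A @ x <= 1.0 + 1e-9),
      "rescaled objective:", np.sqrt(2 * np.log(m / n)) * (c @ x))  # = 1 - eps
```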


We write $[m]$ for the set of integers $\{1,2,\dots,m\}$. Further, given a finite subset of integers $I$, we write $\mathbb{R}^{I}$ for the linear real space of $|I|$–dimensional vectors with components indexed over $I$. Given a set or an event $S$, we will write $\chi_{S}$ for the indicator of the set/event.

3.1 Auxiliary results

Proposition 2.3 and a simple union bound argument yield

Corollary 3.2.

For every $K\geq 1$ there is a number $K_{1}=K_{1}(K)>0$ depending only on $K$ with the following property. Let $k\leq m/e$, let $I\subset[m]$ be a $k$–set, and let $V$ be a random vector in $\mathbb{R}^{[m]\setminus I}$ with mutually independent centered $K$–subgaussian components of unit variance. Then the event

\[\bigg\{\text{the number of indices }i\in[m]\setminus I\text{ with }V_{i}\geq K_{1}\sqrt{\log(m/k)}\text{ is at most }k/2,\text{ and }\Big(\sum_{i\in[m]\setminus I}V_{i}^{2}\,\chi_{\{V_{i}\geq K_{1}\sqrt{\log(m/k)}\}}\Big)^{1/2}\leq K_{1}\sqrt{k\log(m/k)}\bigg\}\]

has probability at least $1-2\big(\frac{k}{m}\big)^{2k}$.

Proof.

Set

\[L:=K_{1}\sqrt{\log(m/k)},\]

where $K_{1}=K_{1}(K)\geq 1$ is chosen sufficiently large (the specific requirements on $K_{1}$ can be extracted from the argument below). Since the vector $V$ is subgaussian, every component of $V$ satisfies

\[\mathbb{P}\big\{V_{i}\geq L\big\}\leq\exp(-K_{2}L^{2})\]

for some $K_{2}>0$ depending only on $K$. Hence, the probability that more than $k/2$ components of $V$ are greater than $L$ can be estimated from above by

\[{m\choose\lceil k/2\rceil}\exp(-K_{2}L^{2}\,\lceil k/2\rceil)\leq\bigg(\frac{em}{\lceil k/2\rceil}\bigg)^{\lceil k/2\rceil}\exp(-K_{2}L^{2}\,\lceil k/2\rceil).\tag{5}\]

If $K_{1}$ is chosen so that $\frac{em}{\lceil k/2\rceil}\exp(-K_{2}L^{2})\leq\big(\frac{k}{m}\big)^{4}$ then the last expression in (5) is bounded above by $\big(\frac{k}{m}\big)^{2k}$.

Similarly to the above and assuming $K_{1}$ is sufficiently large, in view of Proposition 2.3, for any choice of a $\lfloor k/2\rfloor$–subset $J\subset[m]\setminus I$, the probability that the vector $(V_{i})_{i\in J}$ has Euclidean norm greater than $L\sqrt{k}$ is bounded above by

\[{m\choose\lfloor k/2\rfloor}^{-1}\Big(\frac{k}{m}\Big)^{2k}.\]

Combining the two observations, we get the result. ∎


Our next observation concerns random matrices with subgaussian entries. Recall that the largest and smallest singular values of an $n\times k$ ($k\leq n$) matrix $M$ are defined by

\[s_{\max}(M)=\|M\|:=\sup_{y:\,\|y\|_{2}=1}\|My\|_{2}\]

and

\[s_{\min}(M):=\inf_{y:\,\|y\|_{2}=1}\|My\|_{2}.\]
Proposition 3.3.

For every $K>0$ there are $K_{1},K_{2}>0$ depending only on $K$ with the following property. Let $\frac{n}{k}\geq K_{1}$ and let $M$ be an $n\times k$ random matrix with mutually independent centered subgaussian entries of unit variance and subgaussian moment at most $K$. Further, let $P$ be a non-random $n\times n$ projection operator of rank $n-1$, i.e. $P$ is the orthogonal projection onto an $(n-1)$–dimensional subspace of $\mathbb{R}^{n}$. Then

\[\mathbb{P}\bigg\{\frac{s_{\max}(PM)}{s_{\min}(PM)}\leq 2\quad\text{and}\quad s_{\min}(PM)\geq\sqrt{n}/2\bigg\}\geq 1-2\exp(-K_{2}\,n).\]
Proof.

Let $w$ be the unit vector in the null space of $P$ (note that it is determined uniquely up to a sign change).

Fix any unit vector $y\in\mathbb{R}^{k}$. Then the random vector $My$ has mutually independent components of unit variance, and is subgaussian, with the subgaussian moment of the components depending only on $K$ (see [50, Section 2.6]). Applying Proposition 2.3, we get

\[\mathbb{P}\big\{\big|\|My\|_{2}-\sqrt{n}\big|\geq t\big\}\leq 2\exp(-K_{1}t^{2}),\quad t>0,\tag{6}\]

where $K_{1}>0$ only depends on $K$. Since the projection operator $P$ acts as a contraction, we have $\|PMy\|_{2}\leq\|My\|_{2}$ everywhere on the probability space. On the other hand, $\|PMy\|_{2}\geq\|My\|_{2}-|\langle w,My\rangle|$, where, again according to [50, Section 2.6], $\langle w,My\rangle$ is a subgaussian random variable with the subgaussian moment only depending on $K$, and therefore

\[\mathbb{P}\big\{\|My\|_{2}-\|PMy\|_{2}\geq t\big\}\leq 2\exp(-K_{2}t^{2}),\quad t>0.\tag{7}\]

Combining (6) and (7), we get

\[\mathbb{P}\big\{\big|\|PMy\|_{2}-\sqrt{n}\big|\geq t\big\}\leq 2\exp(-K_{3}t^{2}),\quad t>0,\]

for some $K_{3}>0$ depending only on $K$.

The rest of the proof of the proposition is based on the standard covering argument (see, for example, [50, Chapter 4]), and we only sketch the idea. Consider a Euclidean $\delta$–net $\mathcal{N}$ on $S^{k-1}$ (for a sufficiently small constant $\delta\in(0,1/2]$), i.e. a non-random discrete subset of the Euclidean sphere such that for every vector $u$ in $S^{k-1}$ there is some vector in $\mathcal{N}$ at distance at most $\delta$ from $u$. It is known that $\mathcal{N}$ can be chosen to have cardinality at most $\big(\frac{2}{\delta}\big)^{k}$ [50, Section 4.2]. In view of the last probability estimate, the event

\[\big\{\big|\|PMy\|_{2}-\sqrt{n}\big|\leq 0.1\sqrt{n}\mbox{ for all }y\in\mathcal{N}\big\}\]

has probability at least $1-2\big(\frac{2}{\delta}\big)^{k}\exp(-K_{3}n/100)$. Everywhere on that event, we have (see [50, Section 4.4])

\[s_{\max}(PM)=\|PM\|\leq\frac{1}{1-\delta}\max_{y\in\mathcal{N}}\|PMy\|_{2}\leq\frac{1.1\sqrt{n}}{1-\delta},\]

and

\[s_{\min}(PM)\geq\min_{y\in\mathcal{N}}\|PMy\|_{2}-\delta\|PM\|\geq 0.9\sqrt{n}-\frac{1.1\delta\sqrt{n}}{1-\delta},\]

and, in particular,

\[\frac{s_{\max}(PM)}{s_{\min}(PM)}\leq 2\quad\text{and}\quad s_{\min}(PM)\geq\sqrt{n}/2,\]

assuming that $\delta$ is a sufficiently small positive constant. It remains to note that, as long as the aspect ratio $\frac{n}{k}$ is sufficiently large, the quantity $2\big(\frac{2}{\delta}\big)^{k}\exp(-K_{3}n/100)$ is exponentially small in $n$. ∎

Remark.

It can be shown that $\frac{s_{\max}(PM)}{s_{\min}(PM)}\leq 1+\varepsilon$ and $s_{\min}(PM)\geq(1-\varepsilon)\sqrt{n}$ with probability exponentially close to one for an arbitrary constant $\varepsilon>0$, as long as $K_{1}=K_{1}(\varepsilon)$ is sufficiently large.
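The conclusion of Proposition 3.3 is easy to probe empirically; the following sketch (our illustration, with arbitrary dimensions) checks the condition number of $PM$ for a tall Rademacher matrix:

```python
# An empirical look at Proposition 3.3 (a sketch): for a tall random matrix
# M and P a rank n-1 projection, the spectrum of PM is well conditioned once
# n/k is large: smax/smin stays well below 2 and smin above sqrt(n)/2.
import numpy as np

rng = np.random.default_rng(4)
n, k = 2_000, 50                          # large aspect ratio n/k
M = rng.choice([-1.0, 1.0], size=(n, k))  # Rademacher entries
w = rng.standard_normal(n); w /= np.linalg.norm(w)
P = np.eye(n) - np.outer(w, w)            # rank n-1 orthogonal projection
s = np.linalg.svd(P @ M, compute_uv=False)
print(s[0] / s[-1], s[-1] / np.sqrt(n))   # ratio << 2, smin/sqrt(n) > 1/2
```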

Corollary 3.4.

For every $K>0$ there are $K_{1},K_{2}>0$ depending only on $K$ with the following property. Let $\frac{n}{k}\geq K_{1}$, and let $P$ be a non-random $n\times n$ projection operator of rank $n-1$. Further, let $V_{1},V_{2},\dots,V_{k}$ be independent random vectors in $\mathbb{R}^{n}$ with mutually independent centered $K$–subgaussian components of unit variance. Let $U$ be any non-random vector in $\mathbb{R}^{k}$. Then with probability at least $1-\exp(-K_{2}n)$ there is a unique vector $y$ in the linear span of $P(V_{1}),P(V_{2}),\dots,P(V_{k})$ such that

\[\langle V_{j},y\rangle=U_{j},\quad 1\leq j\leq k,\tag{8}\]

and, moreover, that vector satisfies $\|y\|_{2}\leq 4\|U\|_{2}/\sqrt{n}$.

Proof.

Denote by $M$ the $n\times k$ matrix with columns $V_{1},V_{2},\dots,V_{k}$, and let the random vector $v$ in $\mathbb{R}^{k}$ satisfy $(PM)^{\top}PMv=U$. We then set $y:=PMv$. Note that $y$ is in the linear span of $P(V_{1}),P(V_{2}),\dots,P(V_{k})$ and satisfies (8). Further, we can write

\[\|y\|_{2}\leq s_{\max}(PM)\,\|v\|_{2}\leq\frac{s_{\max}(PM)\,\|U\|_{2}}{s_{\min}((PM)^{\top}PM)}=\frac{s_{\max}(PM)\,\|U\|_{2}}{s_{\min}(PM)^{2}}.\]

It remains to apply Proposition 3.3 to get the result. ∎

The next lemma provides a basis for a discretization argument which we will apply in the proof of the theorem.

Lemma 3.5.

There is a universal constant $K_{1}\geq 1$ with the following property. Let $k\geq 1$. Then there is a set $\mathcal{N}_{k}$ of vectors in $\mathbb{R}^{k}$ satisfying all of the following:

  • All vectors in $\mathcal{N}_{k}$ have non-negative components taking values in $\frac{1}{\sqrt{k}}\mathbb{N}_{0}$;

  • The Euclidean norm of every vector in $\mathcal{N}_{k}$ is in the interval $[1,2]$;

  • For every unit vector $u\in\mathbb{R}^{k}$ there is a vector $y=y(u)\in\mathcal{N}_{k}$ such that $u_{j}\leq y_{j}$, $j\leq k$;

  • The size of $\mathcal{N}_{k}$ is at most $K_{1}^{k}$.

Proof.

Define $\mathcal{N}_{k}$ as the collection of all vectors in $\mathbb{R}^{k}$ with non-negative components taking values in $\frac{1}{\sqrt{k}}\mathbb{N}_{0}$ and with Euclidean norm in the interval $[1,2]$. Observe that for every unit vector $u\in\mathbb{R}^{k}$, by defining $y$ coordinate-wise as

\[y_{j}:=\frac{1}{\sqrt{k}}\,\big\lceil\sqrt{k}\,|u_{j}|\big\rceil,\quad j\leq k,\]

we get a vector with Euclidean norm in the interval $[1,2]$, i.e. a vector from $\mathcal{N}_{k}$. Thus, $\mathcal{N}_{k}$ constructed this way satisfies the first three properties listed above. It remains to verify the upper bound on $|\mathcal{N}_{k}|$.

To estimate the size of $\mathcal{N}_{k}$ we apply the standard volumetric argument. Observe that $|\mathcal{N}_{k}|$ can be bounded from above by the number of integer lattice points in the rescaled Euclidean ball $\sqrt{k}B_{2}^{k}$ of radius $\sqrt{k}$. That, in turn, is bounded above by the maximal number of non-intersecting parallel translates of the unit cube $(0,1)^{k}$ which can be placed inside the ball $2\sqrt{k}B_{2}^{k}$. Comparison of the Lebesgue volumes of $2\sqrt{k}B_{2}^{k}$ and $(0,1)^{k}$ confirms that the latter number is of order $\exp(O(k))$, completing the proof. ∎
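The rounding map used in the proof is easy to state in code; the sketch below (illustrative only) checks the majorization and norm properties on a random unit vector:

```python
# A sketch of the majorizing net vector from the proof of Lemma 3.5: round
# each |u_j| up to the grid (1/sqrt(k)) N_0, then check the norm is in [1,2].
import numpy as np

def majorize(u: np.ndarray) -> np.ndarray:
    k = u.size
    return np.ceil(np.sqrt(k) * np.abs(u)) / np.sqrt(k)

rng = np.random.default_rng(5)
k = 200
u = rng.standard_normal(k); u /= np.linalg.norm(u)   # unit vector
y = majorize(u)
# y dominates u coordinate-wise, and each entry grows by < 1/sqrt(k),
# so ||y|| <= ||u|| + 1 = 2 while y >= |u| forces ||y|| >= 1
print(np.all(y >= np.abs(u)), 1.0 <= np.linalg.norm(y) <= 2.0)   # True True
```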


Equipped with Corollaries 3.2 and 3.4, and Lemma 3.5, we can complete the proof of the theorem.

3.2 Proof of Theorem 3.1

From now on, we fix a small constant $\varepsilon>0$.

First, suppose that $m=n^{\omega(1)}$. Observe that in this case $\sqrt{2\log(m/n)}=(1-o(1))\sqrt{2\log m}$. Define $\delta=\delta(n):=2(1+\varepsilon/2)(\log m)/n$, so that by the assumptions of the theorem, for every constant $\kappa\in(0,1)$ and all large enough $n$, the cost vectors $c(n)$ are $(\kappa^{-1}\delta,1-\kappa)$–incompressible. Hence, applying Proposition 2.5, we get for every $i\leq m$ and all large enough $n$,

\[\mathbb{P}\big\{(Ac)_{i}\geq(1+\varepsilon/2)\sqrt{\delta n}\big\}\leq\exp(-\delta n/2),\]

implying

\[\mathbb{P}\big\{\|Ac\|_{\infty}\geq(1+\varepsilon/2)\sqrt{\delta n}\big\}\leq m\exp(-\delta n/2)=n^{-\omega(1)}.\]

We infer that, by considering $x:=\frac{c}{(1+\varepsilon/2)\sqrt{2(1+\varepsilon/2)\log m}}$, the optimal objective value of the LP satisfies

\[\sqrt{2\log(m/n)}\,z^{*}\geq\frac{\sqrt{2\log(m/n)}}{(1+\varepsilon/2)\sqrt{2(1+\varepsilon/2)\log m}}>1-\varepsilon\]

with probability $1-n^{-\omega(1)}$, and the statement of the theorem follows. In view of the above, to complete the proof it is sufficient to verify the statement under the assumption $m(n)=n^{O(1)}$. In what follows, we assume that $m(n)\leq n^{K^{\prime}}$ for some number $K^{\prime}>0$ independent of $n$.


We shall construct a “good” event of high probability, condition on a realization of the matrix $A$ from that event, and then follow the iterative scheme outlined at the beginning of the section to construct a feasible solution. Set

\[x_{0}=x_{0}(n):=\frac{(1-\varepsilon)c}{\sqrt{2\log(m/n)}},\]

and define an integer $k_{0}=k_{0}(n)$ as

\[k_{0}:=\big\lceil\max\big(\sqrt{n},\,m(n/m)^{1+\varepsilon/4}\big)\big\rceil.\]

Definition of a “good” event. Define

\[\mathcal{E}_{c}:=\big\{\text{at most }k_{0}\text{ components of the vector }Ax_{0}\text{ are greater than }1-\varepsilon/2,\text{ and the sum of squares of those components is bounded above by }\tilde{K}^{2}k_{0}\big\},\]

where $\tilde{K}>0$ is a sufficiently large universal constant whose value can be extracted from the argument below. Observe that, by our definition of $k_{0}$ and by our assumption that $\varepsilon$ is small,

\[(1+\varepsilon/16)\sqrt{2(1+\varepsilon/16)\log(m/k_{0})}\leq\frac{1-\varepsilon/2}{1-\varepsilon}\sqrt{2\log(m/n)},\]

whence for every $i\leq m$, in view of the definition of $x_{0}$,

\[\mathbb{P}\big\{(Ax_{0})_{i}\geq 1-\varepsilon/2\big\}\leq\mathbb{P}\big\{(Ac)_{i}\geq(1+\varepsilon/16)\sqrt{2(1+\varepsilon/16)\log(m/k_{0})}\big\}.\]

Define $\delta=\delta(n):=2(1+\varepsilon/16)\log(m/k_{0})/n$, and observe that, by the assumptions of the theorem, for every constant $\kappa\in(0,1)$ and all large enough $n$, the cost vectors $c(n)$ are $(\kappa^{-1}\delta,1-\kappa)$–incompressible. Hence, we can apply Proposition 2.5 to get, for all large enough $n$,

\[\mathbb{P}\big\{(Ac)_{i}\geq(1+\varepsilon/16)\sqrt{\delta n}\big\}\leq\exp\big(-\delta n/2\big).\]

In particular, the probability that $(Ax_{0})_{i}\geq 1-\varepsilon/2$ for more than $k_{0}$ indices $i$ is bounded above by

\[{m\choose k_{0}}\exp\big(-\delta nk_{0}/2\big)\leq\bigg(\frac{em\exp(-\delta n/2)}{k_{0}}\bigg)^{k_{0}}=o\bigg(\bigg(\frac{k_{0}}{m}\bigg)^{\Omega(\varepsilon^{2})k_{0}}\bigg).\]

On the other hand, for every $t\geq\frac{2\sqrt{k_{0}}}{\sqrt{2\log(m/n)}}$, by Proposition 2.3 and by the union bound over $k_{0}$–subsets of $[m]$, the probability that the sum of squares of the largest $k_{0}$ components of $Ax_{0}$ is greater than $t^{2}$ is bounded above by

\[{m\choose k_{0}}\cdot 2\exp(-K_{3}t^{2}\log(m/n))\leq 2\,\Big(\frac{em}{k_{0}}\Big)^{k_{0}}\exp(-K_{3}t^{2}\log(m/n)),\]

where $K_{3}>0$ is a universal constant. Taking $t$ to be a sufficiently large constant multiple of $\sqrt{k_{0}}$ and combining the above estimates, we obtain that the event $\mathcal{E}_{c}$ has probability $1-n^{-\omega(1)}$.


Fix for a moment any subset $I\subset[m]$ of size at most $k_{0}$ and at least $\log n$. Further, denote by $\mathcal{N}_{I}$ the non-random discrete subset of vectors in $\mathbb{R}^{I}$ with the properties listed in Lemma 3.5. Denote by $P$ the projection onto the orthogonal complement of the cost vector $c$. Define the event

\[\mathcal{E}_{I}:=\big\{\text{for every }u\in\mathcal{N}_{I}\text{ there is a unique vector }y\text{ in the linear span of }P({\rm row}_{i}(A)),\ i\in I,\text{ such that }\langle y,{\rm row}_{i}(A)\rangle=u_{i},\ i\in I;\text{ moreover, that vector satisfies:}\]
(a) $\|y\|_{2}\leq 8/\sqrt{n}$;
(b) the number of indices $i\in[m]\setminus I$ with $(Ay)_{i}\geq K^{\prime\prime}\sqrt{\log(m/|I|)}/\sqrt{n}$ is at most $|I|/2$;
(c) $\Big(\sum_{i\in[m]\setminus I}(Ay)_{i}^{2}\,\chi_{\{(Ay)_{i}\geq K^{\prime\prime}\sqrt{\log(m/|I|)}/\sqrt{n}\}}\Big)^{1/2}\leq K^{\prime\prime}\sqrt{|I|\log(m/|I|)}/\sqrt{n}\big\}$,

where $K^{\prime\prime}=K^{\prime\prime}(K)>0$ is a sufficiently large constant depending only on $K$. By combining Corollaries 3.2 and 3.4, we obtain that the event $\mathcal{E}_{I}$ has probability at least $1-2(|I|/m)^{2|I|}-\exp(-\Omega(n))$, whence the intersection $\bigcap_{I\in\mathcal{I}}\mathcal{E}_{I}$ taken over the collection $\mathcal{I}$ of all subsets of $[m]$ of size at least $\log n$ and at most $k_{0}$ has probability $1-n^{-\omega(1)}$.


Define the “good” event

\[\mathcal{E}_{good}:=\mathcal{E}_{c}\;\cap\;\bigcap_{I\in\mathcal{I}}\mathcal{E}_{I},\]

so that $\mathbb{P}(\mathcal{E}_{good})=1-n^{-\omega(1)}$. Our goal is to show that, conditioned on any realization of $A$ from $\mathcal{E}_{good}$, the optimal objective value of the LP is at least $\frac{1-\varepsilon}{\sqrt{2\log(m/n)}}$. From now on, we assume that the matrix $A$ is non-random and satisfies the conditions from the definition of $\mathcal{E}_{good}$.


Definition of $x_{1}$. Let $I_{0}$ be the $k_{0}$–subset of $[m]$ corresponding to the $k_{0}$ largest components of the vector $Ax_{0}$ (we resolve ties arbitrarily). In view of our conditioning on $\mathcal{E}_{c}$, for every $i\in[m]\setminus I_{0}$ we have

\[\langle{\rm row}_{i}(A),x_{0}\rangle\leq 1-\varepsilon/2.\]

Denote by $z_{0}\in\mathbb{R}^{I_{0}}$ the restriction of the vector $Ax_{0}$ to $\mathbb{R}^{I_{0}}$, and let $u_{0}\in\mathcal{N}_{I_{0}}$ be a vector which majorizes $z_{0}/\|z_{0}\|_{2}$ coordinate-wise. Note that, in view of the definition of $\mathcal{E}_{c}$, the Euclidean norm of $z_{0}$ is of order $O(\sqrt{k_{0}})$. Further, let $y_{1}$ be the unique vector in the linear span of $P({\rm row}_{i}(A))$, $i\in I_{0}$, such that $\langle y_{1},{\rm row}_{i}(A)\rangle=(u_{0})_{i}$, $i\in I_{0}$, satisfying the conditions (a), (b), (c) from the definition of $\mathcal{E}_{I_{0}}$. Define $x_{1}:=-\|z_{0}\|_{2}\,y_{1}$, so that $\langle{\rm row}_{i}(A),x_{1}\rangle=-\|z_{0}\|_{2}\,(u_{0})_{i}$, $i\in I_{0}$. Observe that with such a definition of $x_{1}$, the sum $x_{0}+x_{1}$ satisfies

\[\langle{\rm row}_{i}(A),x_{0}+x_{1}\rangle=(z_{0})_{i}-\|z_{0}\|_{2}\,(u_{0})_{i}\leq 0,\quad i\in I_{0},\]

whereas $\langle x_{0}+x_{1},c\rangle=\langle x_{0},c\rangle=\frac{1-\varepsilon}{\sqrt{2\log(m/n)}}$. Applying the definition of the event $\mathcal{E}_{I_{0}}$ and the bound $\|z_{0}\|_{2}=O(\sqrt{k_{0}})$, we get that the vector $x_{1}$ satisfies:

the number of indices $i\in[m]\setminus I_{0}$ with $(Ax_{1})_{i}\geq\tilde{K}\sqrt{(k_{0}/n)\log(m/k_{0})}$ is at most $k_{0}/2$;
\[\Big(\sum_{i\in[m]\setminus I_{0}}(Ax_{1})_{i}^{2}\,\chi_{\{(Ax_{1})_{i}\geq\tilde{K}\sqrt{(k_{0}/n)\log(m/k_{0})}\}}\Big)^{1/2}\leq\tilde{K}\sqrt{(k_{0}^{2}/n)\log(m/k_{0})},\]

where $\tilde{K}=\tilde{K}(K)>0$ depends only on $K$. We will simplify the conditions on $x_{1}$. First, observe that in view of the definition of $k_{0}$ and since $m=n^{O(1)}$, we can assume that $\tilde{K}\sqrt{(k_{0}/n)\log(m/k_{0})}\leq\varepsilon/16$. Further, we can estimate $\tilde{K}\sqrt{(k_{0}^{2}/n)\log(m/k_{0})}$ from above by $\sqrt{\lfloor k_{0}/2\rfloor}$. Thus, the vector $x_{1}$ satisfies:

the number of indices $i\in[m]\setminus I_{0}$ with $(Ax_{1})_{i}\geq\varepsilon/16$ is at most $\lfloor k_{0}/2\rfloor$;
\[\Big(\sum_{i\in[m]\setminus I_{0}}(Ax_{1})_{i}^{2}\,\chi_{\{(Ax_{1})_{i}\geq\varepsilon/16\}}\Big)^{1/2}\leq\sqrt{\lfloor k_{0}/2\rfloor}.\]

Definition of $x_{2},x_{3},\dots$ The vectors $x_{2},x_{3},\dots$ are constructed via an inductive process. Let $s:=\lceil\log_{2}k_{0}-\frac{1}{4}\log_{2}n\rceil$. We define numbers $k_{1},k_{2},\dots,k_{s}$ recursively by the formula

\[k_{j}:=\lfloor k_{j-1}/2\rfloor,\quad 1\leq j\leq s.\]

With $n$ large, we then have

\[k_{s}=n^{1/4\pm o(1)}.\]

Further, assuming the vector $x_{j}$ has been defined, we let $I_{j}$ be a $k_{j}$–subset of $[m]\setminus I_{j-1}$ corresponding to the $k_{j}$ largest components of $Ax_{j}$ (with ties resolved arbitrarily).

Let $1\leq h\leq s-1$, and assume that a vector $x_{h}$ has been constructed that satisfies:

the number of indices $i\in[m]\setminus I_{h-1}$ with $(Ax_{h})_{i}\geq(3/4)^{h-1}\varepsilon/16$ is at most $k_{h}$;
\[\Big(\sum_{i\in[m]\setminus I_{h-1}}(Ax_{h})_{i}^{2}\,\chi_{\{(Ax_{h})_{i}\geq(3/4)^{h-1}\varepsilon/16\}}\Big)^{1/2}\leq\sqrt{k_{h}}\]

(observe that $x_{1}$ satisfies the above assumptions, which provides the basis of induction). We denote by $z_{h}\in\mathbb{R}^{I_{h}}$ the restriction of the vector $Ax_{h}$ to $\mathbb{R}^{I_{h}}$ (note that $z_{h}$ has Euclidean norm at most $\sqrt{k_{h}}$ by the induction hypothesis), and let $u_{h}\in\mathcal{N}_{I_{h}}$ be a vector which majorizes $z_{h}/\|z_{h}\|_{2}$ coordinate-wise. Similarly to the above, we let $y_{h+1}$ be the unique vector in the linear span of $P({\rm row}_{i}(A))$, $i\in I_{h}$, such that $\langle{\rm row}_{i}(A),y_{h+1}\rangle=(u_{h})_{i}$, $i\in I_{h}$, satisfying the conditions (a), (b), (c) from the definition of $\mathcal{E}_{I_{h}}$, and define $x_{h+1}:=-\|z_{h}\|_{2}\,y_{h+1}$, so that $\langle{\rm row}_{i}(A),x_{h+1}\rangle=-\|z_{h}\|_{2}\,(u_{h})_{i}$, $i\in I_{h}$. By the definition of $\mathcal{E}_{I_{h}}$, the vector $Ax_{h+1}$ satisfies:

the number of indices $i\in[m]\setminus I_{h}$ with $(Ax_{h+1})_{i}\geq\tilde{K}\sqrt{k_{h}\log(m/k_{h})}/\sqrt{n}$ is at most $k_{h+1}$;
\[\Big(\sum_{i\in[m]\setminus I_{h}}(Ax_{h+1})_{i}^{2}\,\chi_{\{(Ax_{h+1})_{i}\geq\tilde{K}\sqrt{k_{h}\log(m/k_{h})}/\sqrt{n}\}}\Big)^{1/2}\leq\tilde{K}\sqrt{k_{h}^{2}\log(m/k_{h})}/\sqrt{n},\tag{9}\]

for some constant $\tilde{K}=\tilde{K}(K)>0$. It is not difficult to check that, with our definition of $k_{1},k_{2},\dots$ and the assumption $m=n^{O(1)}$, and as long as $n$ is large, the quantity $\tilde{K}\sqrt{k_{h}\log(m/k_{h})}/\sqrt{n}$ is (much) less than $(3/4)^{h}\varepsilon/16$; further, $\tilde{K}\sqrt{k_{h}^{2}\log(m/k_{h})}/\sqrt{n}$ is dominated by $\sqrt{k_{h+1}}$. Thus, $x_{h+1}$ satisfies:

the number of indices $i\in[m]\setminus I_{h}$ with $(Ax_{h+1})_{i}\geq(3/4)^{h}\varepsilon/16$ is at most $k_{h+1}$;
\[\Big(\sum_{i\in[m]\setminus I_{h}}(Ax_{h+1})_{i}^{2}\,\chi_{\{(Ax_{h+1})_{i}\geq(3/4)^{h}\varepsilon/16\}}\Big)^{1/2}\leq\sqrt{k_{h+1}},\]

completing the induction step.


Restoring feasibility. The above process produces vectors $x_{1},x_{2},\dots,x_{r}$, where $x_{r}$ satisfies, in view of (9) and the choice of $r$,

\[\Big(\sum_{i\in[m]\setminus I_{r-1}}(Ax_{r})_{i}^{2}\,\chi_{\{(Ax_{r})_{i}\geq(3/4)^{r-1}\varepsilon/16\}}\Big)^{1/2}<\varepsilon/16,\]

so that, in particular, $(Ax_{r})_{i}\leq\varepsilon/16$ for every $i\in[m]\setminus I_{r-1}$. Note also that for every $h\leq r-1$ and $i\in I_{h}$, we have

\[\langle{\rm row}_{i}(A),x_{h}+x_{h+1}\rangle\leq 0.\]

To complete the proof, we have to verify that the sum x:=x0+x1++xrx:=x_{0}+x_{1}+\dots+x_{r} is a feasible solution to the linear program. Fix any index i[m]i\in[m]. Let χ0(i)\chi_{0}(i) be the Boolean indicator of the expression “rowi(A),x01ε/2\langle{\rm row}_{i}(A),x_{0}\rangle\geq 1-\varepsilon/2”, for each 1hr11\leq h\leq r-1, let χh(i)\chi_{h}(i) be the indicator of “rowi(A),xh(3/4)h1ε/16\langle{\rm row}_{i}(A),x_{h}\rangle\geq(3/4)^{h-1}\varepsilon/16”, and set χr(i):=0\chi_{r}(i):=0. Note that, in view of the definition of the sets I0,I1,I_{0},I_{1},\dots, χh(i)=1\chi_{h}(i)=1 only if χh+1(i)=0\chi_{h+1}(i)=0. The set {0,1,,r}\{0,1,\dots,r\} can be partitioned into consecutive subsets S1<S2<<SpS_{1}<S_{2}<\dots<S_{p} in such a way that

  • The size of each set SS_{\ell} is either 11 or 22, and

  • Whenever χh(i)=1\chi_{h}(i)=1 for some hh, the index hh belongs to a 22–subset from the partition, and it is the smaller of the two indices in that subset.

We have

rowi(A),x=r=1phSrrowi(A),xh.\langle{\rm row}_{i}(A),x\rangle=\sum_{r=1}^{p}\sum_{h\in S_{r}}\langle{\rm row}_{i}(A),x_{h}\rangle.

Fix for a moment any rpr\leq p and consider the corresponding sum hSrrowi(A),xh\sum_{h\in S_{r}}\langle{\rm row}_{i}(A),x_{h}\rangle. First, assume that SrS_{r} is a singleton, and let ii be its element. By our construction of the partition, we have χh(i)=0\chi_{h}(i)=0.

  • If $h=0$ then

    $\langle{\rm row}_{i}(A),x_{h}\rangle\leq 1-\varepsilon/2$;
  • If $1\leq h\leq r-1$ then

    $\langle{\rm row}_{i}(A),x_{h}\rangle\leq(3/4)^{h-1}\varepsilon/16$;
  • If $h=r$ then

    $\langle{\rm row}_{i}(A),x_{h}\rangle\leq\varepsilon/16$.

Next, assume that the size of $S_\ell$ is two, and represent the subset as $S_\ell=\{h,h+1\}$. By the construction of the vectors $x_h$ and $x_{h+1}$, we then have

rowi(A),xh+xh+10.\langle{\rm row}_{i}(A),x_{h}+x_{h+1}\rangle\leq 0.

Combining the two possible cases and summing over $\ell$ (the geometric series $\sum_{h=1}^{\infty}(3/4)^{h-1}$ equals $4$), we get

rowi(A),x1ε/2+h=1(3/4)h1ε/16+ε/161,\langle{\rm row}_{i}(A),x\rangle\leq 1-\varepsilon/2+\sum_{h=1}^{\infty}(3/4)^{h-1}\varepsilon/16+\varepsilon/16\leq 1,

and the proof is finished.

3.3 Applications of Theorem 3.1

As a simple sufficient condition on the cost vectors implying the assumptions of Theorem 3.1, we consider a bound on the $\|\cdot\|_{\infty}$–norm which, in particular, yields the lower bound in the main Theorem 1.2:

Corollary 3.6 (Lower bound on zz^{*} under sup-norm delocalization of the cost vectors).

Let m=m(n)m=m(n) satisfy limnmn=\lim\limits_{n\to\infty}\frac{m}{n}=\infty, and let matrices A=A(n)A=A(n) be as in Theorem 3.1. Consider a sequence of non-random cost vectors c=c(n)c=c(n) satisfying limnlog(m/n)c=0\lim_{n\rightarrow\infty}\sqrt{\log(m/n)}\,\|c\|_{\infty}=0. Then for any constant ε>0\varepsilon>0 and all sufficiently large nn,

{2log(m/n)z1ε}1nω(1),{\mathbb{P}}\big{\{}\sqrt{2\log(m/n)}\,z^{*}\geq 1-\varepsilon\big{\}}\geq 1-n^{-\omega(1)},

implying that lim infn2log(m/n)z1\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}\,z^{*}\geq 1 almost surely.

Proof.

Fix any constant κ(0,1)\kappa\in(0,1), and let I=I(n)I=I(n) be any subset of [n][n] of size at most κ1log(m/n)\kappa^{-1}\log(m/n). In view of the assumptions on cc, we have

iIci2c2|I|κ1log(m/n)c20.\sum_{i\in I}c_{i}^{2}\leq\|c\|_{\infty}^{2}|I|\leq\kappa^{-1}\log(m/n)\|c\|_{\infty}^{2}\longrightarrow 0.

Since the choice of the subsets I(n)I(n) was arbitrary, the last relation implies that cc is (κ1log(m/n),1κ)(\kappa^{-1}\log(m/n),1-\kappa)–incompressible for all large nn. It remains to apply Theorem 3.1. ∎

The next corollary constitutes the lower bound in Corollary 1.4:

Corollary 3.7 (The mean width: a sharp lower bound).

Let m=m(n)m=m(n) and A=A(n)A=A(n) be as in Theorem 3.1, and assume additionally that m=exp(o(n))m=\exp(o(n)). Then the spherical mean width 𝒲(P){\mathcal{W}}(P) of the polyhedron P=P(n)={xn:Ax𝟏}P=P(n)=\{x\in\mathbb{R}^{n}:\;Ax\leq\mathbf{1}\} satisfies

lim infn2log(m/n)𝒲(P)2almost surely.\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}\,{\mathcal{W}}(P)\geq 2\quad\mbox{almost surely.}
Proof.

In this proof, we assume that for each nn, c=c(n)c=c(n) is a uniform random unit vector in n\mathbb{R}^{n}, and that c(n)c(n), nn\in\mathbb{N}, are mutually independent and independent from {A(n),n}\{A(n),\;n\in\mathbb{N}\}. Recall that the mean width of the polyhedron P=P(n)={xn:Ax𝟏}P=P(n)=\{x\in\mathbb{R}^{n}:\;Ax\leq{\bf 1}\} is defined as

𝒲(P)=2𝔼cmax{c,x:xP},{\mathcal{W}}(P)=2\,{\mathbb{E}}_{c}\,\max\{\langle c,x\rangle:\;x\in P\},

where 𝔼c{\mathbb{E}}_{c} denotes the expectation taken with respect to c=c(n)c=c(n). It will be most convenient for us to prove the statement by contradiction. Namely, assume that there is a small positive number δ>0\delta>0 such that

{lim infn2log(m/n)𝒲(P(n))2δ}δ.{\mathbb{P}}\big{\{}\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}{\mathcal{W}}(P(n))\leq 2-\delta\big{\}}\geq\delta.

Denote by \mathcal{F} the σ\sigma–field generated by the random matrices {A(n),n}\{A(n),\;n\in\mathbb{N}\}. Observe that the event

:={lim infn2log(m/n)𝒲(P(n))2δ}{\mathcal{E}}:=\big{\{}\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}{\mathcal{W}}(P(n))\leq 2-\delta\big{\}}

is \mathcal{F}–measurable. Condition for a moment on any realization of the polyhedra P(n)P(n), nn\in\mathbb{N}, from {\mathcal{E}}, and let (nk)k=1(n_{k})_{k=1}^{\infty} be an increasing sequence of integers such that 2log(m/nk)𝒲(P(nk))2δ/2\sqrt{2\log(m/n_{k})}{\mathcal{W}}(P(n_{k}))\leq 2-\delta/2 for every kk. The condition 𝒲(P(nk))2δ/22log(m/nk){\mathcal{W}}(P(n_{k}))\leq\frac{2-\delta/2}{\sqrt{2\log(m/n_{k})}} implies

1δ/42log(m/nk)\displaystyle\frac{1-\delta/4}{\sqrt{2\log(m/n_{k})}} 𝔼cmax{c(nk),x:xP(nk)}\displaystyle\geq{\mathbb{E}}_{c}\,\max\{\langle c(n_{k}),x\rangle:\;x\in P(n_{k})\}
1δ/82log(m/nk)c{max{c(nk),x:xP(nk)}1δ/82log(m/nk)},\displaystyle\geq\frac{1-\delta/8}{\sqrt{2\log(m/n_{k})}}\,{\mathbb{P}}_{c}\bigg{\{}\max\{\langle c(n_{k}),x\rangle:\;x\in P(n_{k})\}\geq\frac{1-\delta/8}{\sqrt{2\log(m/n_{k})}}\bigg{\}},

and hence

c{max{c(nk),x:xP(nk)}<1δ/82log(m/nk)}11δ/41δ/8,{\mathbb{P}}_{c}\bigg{\{}\max\{\langle c(n_{k}),x\rangle:\;x\in P(n_{k})\}<\frac{1-\delta/8}{\sqrt{2\log(m/n_{k})}}\bigg{\}}\geq 1-\frac{1-\delta/4}{1-\delta/8},

where c{\mathbb{P}}_{c} is the conditional probability given the realization of P(n)P(n), nn\in\mathbb{N}, from {\mathcal{E}}. By the Borel–Cantelli lemma, the last assertion yields

c{lim infn2log(m/n)max{c(n),x:xP(n)}1δ/8}=1.{\mathbb{P}}_{c}\bigg{\{}\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}\max\{\langle c(n),x\rangle:\;x\in P(n)\}\leq 1-\delta/8\bigg{\}}=1.

Removing the conditioning on {\mathcal{E}}, we arrive at the estimate

{lim infn2log(m/n)max{c(n),x:xP(n)}1δ/8}δ.{\mathbb{P}}\bigg{\{}\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}\max\{\langle c(n),x\rangle:\;x\in P(n)\}\leq 1-\delta/8\bigg{\}}\geq\delta. (10)

In the second part of the proof, we will show that the assertion (10) leads to a contradiction. Fix for a moment any $\kappa\in(0,1)$. We claim that $c(n)$ is $(\kappa^{-1}\log(m/n),1-\kappa)$–incompressible for every large enough $n$ with probability $1-n^{-\omega(1)}$. Indeed, the standard concentration inequality on the sphere (see, in particular, [50, Chapter 5]) implies that for every choice of a non-random $\lfloor\kappa^{-1}\log(m/n)\rfloor$–subset $I\subset[n]$,

{iIci2K1|I|n+t}2exp(K2nt2),t>0,{\mathbb{P}}\bigg{\{}\sqrt{\sum_{i\in I}c_{i}^{2}}\geq\sqrt{\frac{K_{1}|I|}{n}}+t\bigg{\}}\leq 2\exp(-K_{2}nt^{2}),\quad t>0,

for some universal constants K1,K2>0K_{1},K_{2}>0. Taking the union bound, we then get

\displaystyle{\mathbb{P}} {iIci2K1|I|n+tfor some κ1log(m/n)–subset I}\displaystyle\bigg{\{}\sqrt{\sum_{i\in I}c_{i}^{2}}\geq\sqrt{\frac{K_{1}|I|}{n}}+t\;\;\mbox{for some $\lfloor\kappa^{-1}\log(m/n)\rfloor$--subset $I$}\bigg{\}}
\leq 2{n\choose\lfloor\kappa^{-1}\log(m/n)\rfloor}\exp(-K_{2}nt^{2}),\quad t>0.

Our assumption m=exp(o(n))m=\exp(o(n)) implies that

K1κ1log(m/n)nκ/2for all large enough n.\sqrt{\frac{K_{1}\,\lfloor\kappa^{-1}\log(m/n)\rfloor}{n}}\leq\kappa/2\quad\mbox{for all large enough $n$.}

For all such nn, the last estimate with t:=κ/2t:=\kappa/2 yields

{iIci2κfor a κ1log(m/n)–subset I}2(nκ1log(m/n))exp(K2nκ2/4).{\mathbb{P}}\bigg{\{}\sqrt{\sum_{i\in I}c_{i}^{2}}\geq\kappa\;\;\mbox{for a $\lfloor\kappa^{-1}\log(m/n)\rfloor$--subset $I$}\bigg{\}}\leq 2{n\choose\lfloor\kappa^{-1}\log(m/n)\rfloor}\exp(-K_{2}n\kappa^{2}/4). (11)

It remains to observe that as long as m=exp(o(n))m=\exp(o(n)), we have

(nκ1log(m/n))=exp(o(n)),{n\choose\lfloor\kappa^{-1}\log(m/n)\rfloor}=\exp(o(n)),

so that the expression on the right hand side of (11) is bounded above by $2\exp(-\Omega(n))$. This verifies the claim.


With the claim above confirmed, it remains to apply Theorem 3.1. From the theorem, we obtain that

lim infn2log(m/n)max{c,x:xP(n)}1\liminf\limits_{n\to\infty}\sqrt{2\log(m/n)}\max\{\langle c,x\rangle:\;x\in P(n)\}\geq 1

with probability one. However, that contradicts (10), completing the proof. ∎

4 The upper bound on $z^{*}$ in the subgaussian setting

The goal of this section is to prove the following result:

Theorem 4.1 (Upper bound on zz^{*}).

Fix any $K\geq 1$. Let $n\to\infty$, $\frac{m}{n}\to\infty$. For each $n$ let $c=c(n)$ be a non-random unit vector in $\mathbb{R}^n$, and assume that $\lim\limits_{n\to\infty}\big(\log^{3/2}(m/n)\,\|c\|_{\infty}\big)=0$. For each $n$, let $A=A(n)$ be an $m\times n$ matrix with mutually independent centered $K$–subgaussian entries of unit variance. Then for every $\varepsilon>0$ and all large $n$ we have

{2log(m/n)z1+ε}1nω(1).{\mathbb{P}}\bigg{\{}\sqrt{2\log(m/n)}z^{*}\leq 1+\varepsilon\bigg{\}}\geq 1-n^{-\omega(1)}.

In particular, lim supn2log(m/n)z1\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}z^{*}\leq 1 almost surely.

Observe that the statement of the theorem is equivalent to the assertion that the intersection of the feasible polyhedron P={xn:Ax𝟏}P=\{x\in\mathbb{R}^{n}:\;Ax\leq{\bf 1}\} with the affine hyperplane

H:={w+1+ε2log(m/n)c:wc}H:=\bigg{\{}w+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c:\;w\in c^{\perp}\bigg{\}}

is empty with probability 1nω(1)1-n^{-\omega(1)}. To prove the result, we will combine the moderate deviations estimate from Proposition 2.6 with a covering argument.

4.1 Auxiliary results

Proposition 2.6 cannot be directly applied in our setting since the vectors in HH generally do not satisfy (even if normalized) the required \|\cdot\|_{\infty}–norm bound. To overcome this issue, we generalize the deviation bound to sums of two orthogonal vectors where one of the vectors satisfies a strong \|\cdot\|_{\infty}–norm estimate:

Proposition 4.2.

Fix any $K\geq 1$. Let $n\to\infty$, and for each $n$ let $c=c(n)$ be a non-random unit vector in $\mathbb{R}^n$, let $w=w(n)$ be any fixed vector in $c^{\perp}$, and let $\delta=\delta(n)\in(0,1)$ be a sequence of numbers such that $\delta n$ converges to infinity. Assume that $\lim\limits_{n\to\infty}\big((\delta n)^{3/2}\,\|c\|_{\infty}\big)=0$. For each $n$, let $\xi_1,\xi_2,\dots,\xi_n$ be mutually independent centered $K$–subgaussian variables of unit variance. Then for every $\varepsilon>0$ and all large $n$ we have

{i=1n(ci+wi)ξi(1ε)δn}exp(δn/2).{\mathbb{P}}\bigg{\{}\sum_{i=1}^{n}(c_{i}+w_{i})\xi_{i}\geq(1-\varepsilon)\sqrt{\delta n}\bigg{\}}\geq\exp\big{(}-\delta n/2\big{)}.
Proof.

Fix a small ε>0\varepsilon>0. In view of the assumptions on c(n)c(n), there is a sequence of positive numbers ν(n)\nu(n) converging to zero such that (δn)3/2cν(\delta n)^{3/2}\|c\|_{\infty}\leq\nu for all nn. We shall consider two scenarios:

  • $\|c+w\|_2\geq\nu^{-1/8}\sqrt{\delta n}=\omega(\sqrt{\delta n})$. Observe that ${\rm Var}\big(\sum_{i=1}^{n}(c_i+w_i)\xi_i\big)=\|c+w\|_2^2=\omega(\delta n)$, whereas the variable $\sum_{i=1}^{n}(c_i+w_i)\xi_i$ is centered and $K'\|c+w\|_2$–subgaussian, for some $K'$ depending only on $K$. Lemma 2.4 then implies that, assuming $n$ is sufficiently large,

    {i=1n(ci+wi)ξiδn}K~=ω(exp(δn/2)),{\mathbb{P}}\bigg{\{}\sum_{i=1}^{n}(c_{i}+w_{i})\xi_{i}\geq\sqrt{\delta n}\bigg{\}}\geq\tilde{K}=\omega\big{(}\exp(-\delta n/2)\big{)},

    for some K~>0\tilde{K}>0 depending only on KK.

  • $\|c+w\|_2<\nu^{-1/8}\sqrt{\delta n}$. Let $I(n)$ be a $\lfloor\nu^{-1/2}\delta^{2}n^{2}\rfloor$–subset of $[n]$ corresponding to the $\lfloor\nu^{-1/2}\delta^{2}n^{2}\rfloor$ largest (in absolute value) components of $c+w$, with ties resolved arbitrarily. Further, let $y'=y'(n)\in\mathbb{R}^{[n]\setminus I}$ be the restriction of $c+w$ to $[n]\setminus I$. In view of the condition $\|c+w\|_2<\nu^{-1/8}\sqrt{\delta n}$, we have

    yν1/8δn|I|(1+o(1))ν1/8(δn)1/2=o((δn)1/2).\|y^{\prime}\|_{\infty}\leq\frac{\nu^{-1/8}\sqrt{\delta n}}{\sqrt{|I|}}\leq(1+o(1))\nu^{1/8}(\delta n)^{-1/2}=o\big{(}(\delta n)^{-1/2}\big{)}.

    Further, in view of the orthogonality of $w$ and $c$,

    $\sum_{i\in I}w_{i}c_{i}=-\sum_{i\in[n]\setminus I}(y_{i}^{\prime}-c_{i})c_{i}$
    $\geq\sum_{i\in[n]\setminus I}c_{i}^{2}-\|y^{\prime}\|_{2}$
    $\geq 1-|I|\,\|c\|_{\infty}^{2}-\|y^{\prime}\|_{2}$
    $\geq 1-o(1)-\|y^{\prime}\|_{2},$

    whereas,

    iIwiciw2c|I|(1+o(1))ν1/8ν3/4=o(1).\sum_{i\in I}w_{i}c_{i}\leq\|w\|_{2}\,\|c\|_{\infty}\,\sqrt{|I|}\leq(1+o(1))\nu^{-1/8}\cdot\nu^{3/4}=o(1).

    We infer that $\|y^{\prime}\|_{2}\geq 1-o(1)$. Thus, setting $y=y(n):=y'/\|y'\|_2$, we obtain a unit vector with $\sqrt{\delta n}\,\|y\|_{\infty}=o(1)$. Applying Proposition 2.6 with $\delta(1-\varepsilon/2)$ in place of $\delta$ and $\varepsilon/2$ in place of $\varepsilon$, we get

    {i[n]Iyiξi(1ε)δn}\displaystyle{\mathbb{P}}\bigg{\{}\sum_{i\in[n]\setminus I}y_{i}^{\prime}\xi_{i}\geq(1-\varepsilon)\sqrt{\delta n}\bigg{\}} {i[n]Iyiξi(1ε/2)δ(1ε/2)n}\displaystyle\geq{\mathbb{P}}\bigg{\{}\sum_{i\in[n]\setminus I}y_{i}^{\prime}\xi_{i}\geq(1-\varepsilon/2)\sqrt{\delta(1-\varepsilon/2)n}\bigg{\}}
    exp(δ(1ε/2)n/2),\displaystyle\geq\exp\big{(}-\delta(1-\varepsilon/2)n/2\big{)},

    assuming nn is sufficiently large. On the other hand, again applying Lemma 2.4,

    {iI(ci+wi)ξi0}K~,{\mathbb{P}}\bigg{\{}\sum_{i\in I}(c_{i}+w_{i})\xi_{i}\geq 0\bigg{\}}\geq\tilde{K},

    for some $\tilde{K}>0$ depending only on $K$. Since the two sums involve disjoint sets of the variables $\xi_i$ and are therefore independent, combining the last two relations yields the desired estimate. ∎

To apply the last proposition in our setting of interest, we bound from below the number of constraints violated by a test vector:

Corollary 4.3.

Let KK, m=m(n)m=m(n), A=A(n)A=A(n), and c=c(n)c=c(n) be as in Theorem 4.1, and let w=w(n)w=w(n) be any sequence of vectors orthogonal to c=c(n)c=c(n). Then for every small constant ε>0\varepsilon>0 and all large nn we have

(A(w+1+ε2log(m/n)c))i1+ε/2for at least m(nm)1ε/8 indices im\Big{(}A\Big{(}w+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c\Big{)}\Big{)}_{i}\geq 1+\varepsilon/2\quad\mbox{for at least $m\big{(}\frac{n}{m}\big{)}^{1-\varepsilon/8}$ indices $i\leq m$}

with probability at least 1exp(m(nm)1ε/5)1-\exp\big{(}-m\big{(}\frac{n}{m}\big{)}^{1-\varepsilon/5}\big{)}.

Proof.

In view of Proposition 4.2, applied with δ:=2log(m/n)(1+ε/2)2(1+ε)2(1ε/4)2n\delta:=\frac{2\log(m/n)\,(1+\varepsilon/2)^{2}}{(1+\varepsilon)^{2}(1-\varepsilon/4)^{2}n}, for all large nn and every imi\leq m we have

{(A(w+1+ε2log(m/n)c))i1+ε/2}(nm)1ε/4.{\mathbb{P}}\Big{\{}\Big{(}A\Big{(}w+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c\Big{)}\Big{)}_{i}\geq 1+\varepsilon/2\Big{\}}\geq\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/4}.

Hence, the probability that the last condition holds for fewer than $m\big(\frac{n}{m}\big)^{1-\varepsilon/8}$ indices $i\leq m$ is bounded above by

(e(nm)ε/81)m(nm)1ε/8(1(nm)1ε/4)mm(nm)1ε/8exp(m(nm)1ε/5)\bigg{(}e\Big{(}\frac{n}{m}\Big{)}^{\varepsilon/8-1}\bigg{)}^{m(\frac{n}{m})^{1-\varepsilon/8}}\bigg{(}1-\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/4}\bigg{)}^{m-m\big{(}\frac{n}{m}\big{)}^{1-\varepsilon/8}}\leq\exp\bigg{(}-m\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/5}\bigg{)}

for all sufficiently large nn. ∎

Recall that the outer radius R(P)R(P) of a polyhedron PP is defined as R(P):=maxxPx2R(P):=\max\limits_{x\in P}\|x\|_{2}.

Proposition 4.4 (An upper bound on the outer radius).

Let K,n,m,A(n)K,n,m,A(n) be as in Theorem 4.1. Then with probability 1nω(1)1-n^{-\omega(1)}, the polyhedron {xn:Ax𝟏}\big{\{}x\in\mathbb{R}^{n}:\;Ax\leq{\bf 1}\big{\}} has the outer radius R(P)R(P) bounded above by a function of KK.

Proof.

Let κ=κ(K)>0\kappa=\kappa(K)>0 be a small parameter to be defined later, and let 𝒩{\mathcal{N}} be a Euclidean κ\kappa–net on Sn1S^{n-1} of size at most (2κ)n\big{(}\frac{2}{\kappa}\big{)}^{n} [50, Chapter 4]. Lemma 2.4 implies that for some K~>0\tilde{K}>0 depending only on KK and for every y𝒩y^{\prime}\in{\mathcal{N}} and imi\leq m,

{rowi(A),yK~}K~.{\mathbb{P}}\big{\{}\langle{\rm row}_{i}(A),y^{\prime}\rangle\geq\tilde{K}\big{\}}\geq\tilde{K}.

Define u>0u>0 as the largest number satisfying

(eu)u(1K~)1uexp(K~/2).\bigg{(}\frac{e}{u}\bigg{)}^{u}(1-\tilde{K})^{1-u}\leq\exp(-\tilde{K}/2).

Then, by the above, for every y𝒩y^{\prime}\in{\mathcal{N}},

{rowi(A),y<K~ for at least 1u fraction of indices i[m]}\displaystyle{\mathbb{P}}\big{\{}\langle{\rm row}_{i}(A),y^{\prime}\rangle<\tilde{K}\mbox{ for at least $1-u$ fraction of indices $i\in[m]$}\big{\}} (eu)um(1K~)mum\displaystyle\leq\bigg{(}\frac{e}{u}\bigg{)}^{um}(1-\tilde{K})^{m-um}
exp(K~m/2).\displaystyle\leq\exp(-\tilde{K}m/2).

Setting κ:=K~u8\kappa:=\frac{\tilde{K}\sqrt{u}}{8} and using the above bound on the size of 𝒩{\mathcal{N}}, we obtain that the event

{A2m}y𝒩{rowi(A),yK~ for at least um indices i[m]}\{\|A\|\leq 2\sqrt{m}\}\cap\bigcap\limits_{y^{\prime}\in{\mathcal{N}}}\big{\{}\langle{\rm row}_{i}(A),y^{\prime}\rangle\geq\tilde{K}\mbox{ for at least $um$ indices $i\in[m]$}\big{\}}

has probability $1-n^{-\omega(1)}$ (recall that $\|A\|\leq 2\sqrt{m}$ holds with probability $1-\exp(-\Omega(m))$; see [50, Chapter 4]). Condition on any realization of $A$ from the above event. Assume that there exists $y\in(2\tilde{K}^{-1}S^{n-1})\cap P$. Let $y'\in{\mathcal{N}}$ be a vector satisfying $\big\|\frac{\tilde{K}}{2}y-y'\big\|_{2}\leq\kappa$. In view of the conditioning, and since $\langle{\rm row}_{i}(A),\frac{\tilde{K}}{2}y\rangle\leq\tilde{K}/2$ for every $i$ (as $y\in P$), there are at least $um$ indices $i\in[m]$ such that $\langle{\rm row}_{i}(A),y'-\frac{\tilde{K}}{2}y\rangle\geq\tilde{K}/2$. Consequently,

$\big\|A\big(y'-\tfrac{\tilde{K}}{2}y\big)\big\|_{2}\geq\frac{\tilde{K}}{2}\sqrt{um}.$

Since $\|A\|\leq 2\sqrt{m}$, we get

$\big\|y'-\tfrac{\tilde{K}}{2}y\big\|_{2}\geq\frac{\tilde{K}\sqrt{u}}{4},$

contradicting our choice of κ\kappa. The contradiction shows that the outer radius of PP is at most 2/K~2/\tilde{K}, and the claim is verified. ∎

4.2 Proof of Theorem 4.1, and an upper bound on 𝒲(P){\mathcal{W}}(P)

Proof of Theorem 4.1.

In view of Proposition 4.4, with probability 1nω(1)1-n^{-\omega(1)} the outer radius of PP is bounded above by K~1\tilde{K}\geq 1, for some K~\tilde{K} depending only on KK.

Fix any small ε>0\varepsilon>0. By the above, to complete the proof of the theorem it is sufficient to show that with probability 1nω(1)1-n^{-\omega(1)}, every vector zz orthogonal to cc and having the Euclidean norm at most 2K~2\tilde{K}, satisfies

z+1+ε2log(m/n)cP.z+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c\notin P.

Let kk be the largest integer bounded above by

m(nm)1ε/8.m\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/8}.

Define δ:=exp((m/n)K)\delta:=\exp(-(m/n)^{K^{\prime}}), where K>0K^{\prime}>0 is a sufficiently small constant depending only on ε\varepsilon and KK (the value of KK^{\prime} can be extracted from the argument below). Let 𝒩{\mathcal{N}}^{\prime} be a Euclidean δ\delta–net in (2K~B2n)c(2\tilde{K}\,B_{2}^{n})\cap c^{\perp}. It is known that 𝒩{\mathcal{N}}^{\prime} can be chosen to have cardinality at most (4K~δ)n\big{(}\frac{4\tilde{K}}{\delta}\big{)}^{n} (see [50, Chapter 4]).

Fix for a moment any z𝒩z^{\prime}\in{\mathcal{N}}^{\prime}, and observe that, in view of Corollary 4.3 and the definition of kk, we have

(A(z+1+ε2log(m/n)c))i1+ε/2for at least k indices im\Big{(}A\Big{(}z^{\prime}+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c\Big{)}\Big{)}_{i}\geq 1+\varepsilon/2\quad\mbox{for at least $k$ indices $i\leq m$} (12)

with probability at least

1exp(m(nm)1ε/5).1-\exp\Big{(}-m\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/5}\Big{)}.

Therefore, assuming that KK^{\prime} is sufficiently small and taking the union bound over all elements of 𝒩{\mathcal{N}}^{\prime}, we obtain that with probability 1nω(1)1-n^{-\omega(1)}, (12) holds simultaneously for all z𝒩z^{\prime}\in{\mathcal{N}}^{\prime}.


Condition on any realization of AA such that the last event holds and, moreover, such that A2m\|A\|\leq 2\sqrt{m}. Assume for a moment that there is z(2K~B2n)cz\in(2\tilde{K}\,B_{2}^{n})\cap c^{\perp} such that

(A(z+1+ε2log(m/n)c))i1for all im.\Big{(}A\Big{(}z+\frac{1+\varepsilon}{\sqrt{2\log(m/n)}}\,c\Big{)}\Big{)}_{i}\leq 1\quad\mbox{for all $i\leq m$}.

In view of the above, it implies that there is a vector z𝒩z^{\prime}\in{\mathcal{N}}^{\prime} at distance at most δ\delta from zz such that

(A(zz))i12ε(A(z^{\prime}-z))_{i}\geq\frac{1}{2}\varepsilon

for at least kk indices imi\leq m. Hence, the spectral norm of AA can be estimated as

Aεk2δ>2m,\|A\|\geq\frac{\varepsilon\sqrt{k}}{2\delta}>2\sqrt{m},

contradicting the bound A2m\|A\|\leq 2\sqrt{m}. The contradiction implies the result. ∎

The next corollary constitutes an upper bound in Corollary 1.4. Since its proof is very similar to that of Corollary 3.7, we only sketch the argument.

Corollary 4.5.

Let K,n,m,A(n)K,n,m,A(n) be as in Theorem 4.1, and assume additionally that log3/2(m/n)=o(n/logn)\log^{3/2}(m/n)=o(\sqrt{n/\log n}). Then, lim supn2log(m/n)𝒲(P)2\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}{\mathcal{W}}(P)\leq 2 almost surely.

Proof.

In view of Proposition 4.4, there is K~>0\tilde{K}>0 depending only on KK such that the event

~:={lim supnR(P(n))K~}\tilde{\mathcal{E}}:=\big{\{}\limsup\limits_{n\to\infty}R(P(n))\leq\tilde{K}\big{\}}

has probability one, where, as before, R(P(n))R(P(n)) denotes the outer radius of the polyhedron P(n)P(n). Set

δ:={lim supn2log(m/n)𝒲(P(n))2+δ}~,δ>0.{\mathcal{E}}_{\delta}:=\big{\{}\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}{\mathcal{W}}(P(n))\geq 2+\delta\big{\}}\cap\tilde{\mathcal{E}},\quad\delta>0.

We will prove the corollary by contradiction. Assume that the assertion of the corollary does not hold. Then, in view of the above definitions, there is δ>0\delta>0 such that

(δ)δ.{\mathbb{P}}\big{(}{\mathcal{E}}_{\delta}\big{)}\geq\delta.

Condition for a moment on any realization of the polyhedra P(n)P(n), nn\in\mathbb{N}, from δ{\mathcal{E}}_{\delta}, and let (nk)k=1(n_{k})_{k=1}^{\infty} be an increasing sequence of integers such that 2log(m/nk)𝒲(P(nk))2+δ/2\sqrt{2\log(m/n_{k})}{\mathcal{W}}(P(n_{k}))\geq 2+\delta/2 and R(P(nk))2K~R(P(n_{k}))\leq 2\tilde{K} for every kk. Assume that c(n)c(n), n1n\geq 1, are mutually independent uniform random unit vectors which are also independent from the matrices {A(n),n}\{A(n),\;n\in\mathbb{N}\}. The conditions

𝒲(P(nk))2+δ/22log(m/nk)andR(P(nk))2K~{\mathcal{W}}(P(n_{k}))\geq\frac{2+\delta/2}{\sqrt{2\log(m/n_{k})}}\quad\mbox{and}\quad R(P(n_{k}))\leq 2\tilde{K}

imply that for some δ~>0\tilde{\delta}>0 depending only on δ\delta and K~\tilde{K},

c{max{c(nk),x:xP(nk)}>1+δ~2log(m/nk)}δ~,{\mathbb{P}}_{c}\bigg{\{}\max\{\langle c(n_{k}),x\rangle:\;x\in P(n_{k})\}>\frac{1+\tilde{\delta}}{\sqrt{2\log(m/n_{k})}}\bigg{\}}\geq\tilde{\delta},

where c{\mathbb{P}}_{c} is the conditional probability given the realization of P(n)P(n), nn\in\mathbb{N}, from δ{\mathcal{E}}_{\delta}. By the Borel–Cantelli lemma, the last assertion yields

c{lim supn2log(m/n)max{c(n),x:xP(n)}1+δ~}=1.{\mathbb{P}}_{c}\bigg{\{}\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}\max\{\langle c(n),x\rangle:\;x\in P(n)\}\geq 1+\tilde{\delta}\bigg{\}}=1.

Removing the conditioning on δ{\mathcal{E}}_{\delta}, we arrive at the estimate

{lim supn2log(m/n)max{c(n),x:xP(n)}1+δ~}δ~.{\mathbb{P}}\bigg{\{}\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}\max\{\langle c(n),x\rangle:\;x\in P(n)\}\geq 1+\tilde{\delta}\bigg{\}}\geq\tilde{\delta}. (13)

The standard concentration inequality on the sphere [50, Chapter 5] and the conditions on mm imply that log3/2(m/n)c=o(1)\log^{3/2}(m/n)\|c\|_{\infty}=o(1) with probability 1nω(1)1-n^{-\omega(1)}, and therefore, by Theorem 4.1,

{2log(m/n)max{c(n),x:xP(n)}1+δ~/2}1nω(1).{\mathbb{P}}\big{\{}\sqrt{2\log(m/n)}\max\{\langle c(n),x\rangle:\;x\in P(n)\}\leq 1+\tilde{\delta}/2\big{\}}\geq 1-n^{-\omega(1)}.

The latter contradicts (13), and the result follows. ∎

5 The Gaussian setting

In this section, we prove Theorem 1.3 and Corollary 1.5. Note that in view of rotational invariance of the Gaussian distribution, we can assume without loss of generality that the cost vectors in Theorem 1.3 are of the form c(n)=(1n,,1n)c(n)=(\frac{1}{\sqrt{n}},\dots,\frac{1}{\sqrt{n}}). Thus, in the regime log3/2m=o(n)\log^{3/2}m=o(\sqrt{n}), the statement is already covered by the more general results of Sections 3 and 4.

5.1 The outer radius of random Gaussian polytopes

In this subsection, we prove an upper bound on the outer radius of random Gaussian polytopes. The next result immediately implies the upper bounds on $z^{*}$ and ${\mathcal{W}}(P)$ in Theorem 1.3 and Corollary 1.5 in the entire range $m=\omega(n)$. We anticipate that the statement is known, although we are not aware of a reference, and for that reason we provide a proof.

Proposition 5.1 (Outer radius of the feasible region in the Gaussian setting).

Let m=m(n)m=m(n) satisfy limnmn=\lim\limits_{n\to\infty}\frac{m}{n}=\infty, and for every nn let AA be an m×nm\times n random matrix with i.i.d standard Gaussian entries. Then, denoting by R(P(n))R(P(n)) the outer radius of the feasible region, for any constant ε>0\varepsilon>0 we have

{2log(m/n)R(P(n))1+ε}1nω(1).{\mathbb{P}}\big{\{}\sqrt{2\log(m/n)}\,R(P(n))\leq 1+\varepsilon\big{\}}\geq 1-n^{-\omega(1)}.

In particular, lim supn2log(m/n)R(P(n))1\limsup\limits_{n\to\infty}\sqrt{2\log(m/n)}\,R(P(n))\leq 1 almost surely.


The following approximation of the standard Gaussian distribution is well known (see, for example, [19, Volume I, Chapter VII, Lemma 2]):

Lemma 5.2.

Let gg be a standard Gaussian variable. Then for every t>0t>0,

(1t1t3)exp(t2/2){gt}1texp(t2/2).\Big{(}\frac{1}{t}-\frac{1}{t^{3}}\Big{)}\exp(-t^{2}/2)\leq{\mathbb{P}}\{g\geq t\}\leq\frac{1}{t}\exp(-t^{2}/2).

Let GG be a standard Gaussian random vector in m\mathbb{R}^{m}, and G(1)G(2)G(m)G_{(1)}\leq G_{(2)}\leq\dots\leq G_{(m)} be the non-decreasing rearrangement of its components. Let 1km/21\leq k\leq m/2. Then, from the above lemma, for every t>0t>0 we have

{G(mk)t}(mk){gt}mk(emk)k(1(1t1t3)exp(t2/2))mk.{\mathbb{P}}\big{\{}G_{(m-k)}\leq t\big{\}}\leq{m\choose k}{\mathbb{P}}\{g\leq t\}^{m-k}\leq\Big{(}\frac{em}{k}\Big{)}^{k}\Big{(}1-\Big{(}\frac{1}{t}-\frac{1}{t^{3}}\Big{)}\exp(-t^{2}/2)\Big{)}^{m-k}. (14)

As a corollary, we obtain the following deviation estimate:

Corollary 5.3.

Fix any ε(0,1)\varepsilon\in(0,1). Let sequences of positive integers m(n)m(n), n1n\geq 1, and k(n)k(n), n1n\geq 1, satisfy m=ω(n)m=\omega(n), k=ω(1)k=\omega(1), and k=o(m)k=o(m). Further, for each nn, let G=G(n)G=G(n) be a standard Gaussian vector in m\mathbb{R}^{m}. Then

{G(mk)(1ε)2log(m/k)}=exp((mk)Ω(1)k).{\mathbb{P}}\big{\{}G_{(m-k)}\leq(1-\varepsilon)\sqrt{2\log(m/k)}\big{\}}=\exp\Big{(}-\Big{(}\frac{m}{k}\Big{)}^{\Omega(1)}k\Big{)}.
Proof.

Applying (14), we get

\displaystyle{\mathbb{P}} {G(mk)(1ε)2log(m/k)}\displaystyle\big{\{}G_{(m-k)}\leq(1-\varepsilon)\sqrt{2\log(m/k)}\big{\}}
(emk)k(1(1(1ε)(2log(m/k))1/21(1ε)3(2log(m/k))3/2)(km)(1ε)2)mk\displaystyle\leq\Big{(}\frac{em}{k}\Big{)}^{k}\bigg{(}1-\Big{(}\frac{1}{(1-\varepsilon)(2\log(m/k))^{1/2}}-\frac{1}{(1-\varepsilon)^{3}(2\log(m/k))^{3/2}}\Big{)}\Big{(}\frac{k}{m}\Big{)}^{(1-\varepsilon)^{2}}\bigg{)}^{m-k}
=exp((mk)Ω(1)k),\displaystyle=\exp\Big{(}-\Big{(}\frac{m}{k}\Big{)}^{\Omega(1)}k\Big{)},

implying the claim. ∎

Proof of Proposition 5.1.

Fix any ε>0\varepsilon>0. Observe that the statement is equivalent to showing that with probability 1nω(1)1-n^{-\omega(1)} the matrix AA has the property that for every ySn1y\in S^{n-1} there is imi\leq m with (Ay)i(1ε)2log(m/n)(Ay)_{i}\geq(1-\varepsilon)\sqrt{2\log(m/n)}.

The argument below is similar to the proof of Proposition 4.4. Let kk be the largest integer bounded above by

m(nm)1ε/8.m\Big{(}\frac{n}{m}\Big{)}^{1-\varepsilon/8}.

Define δ:=exp((m/n)K)\delta:=\exp(-(m/n)^{K}), where K>0K>0 is a sufficiently small constant depending only on ε\varepsilon. Let 𝒩{\mathcal{N}} be a Euclidean δ\delta–net in Sn1S^{n-1} of cardinality at most (2δ)n\big{(}\frac{2}{\delta}\big{)}^{n} (see [50, Chapter 4]).


Fix for a moment any y𝒩y^{\prime}\in{\mathcal{N}}, and observe that, in view of Corollary 5.3 and the definition of kk, we have

(Ay)i(1ε/2)2log(m/n)for at least k indices im(Ay^{\prime})_{i}\geq(1-\varepsilon/2)\sqrt{2\log(m/n)}\quad\mbox{for at least $k$ indices $i\leq m$} (15)

with probability at least 1exp((mk)Ω(1)k)1exp(k)1-\exp\big{(}-\big{(}\frac{m}{k}\big{)}^{\Omega(1)}k\big{)}\geq 1-\exp(-k). Therefore, assuming that KK is sufficiently small and taking the union bound over all elements of 𝒩{\mathcal{N}}, we obtain that with probability 1nω(1)1-n^{-\omega(1)}, (15) holds simultaneously for all y𝒩y^{\prime}\in{\mathcal{N}}.


Condition on any realization of AA such that the last event holds and, moreover, such that A2m\|A\|\leq 2\sqrt{m} (recall that the latter occurs with probability 1exp(Ω(m))1-\exp(-\Omega(m)); see [50, Chapter 4]). Assume for a moment that there is ySn1y\in S^{n-1} such that (Ay)i(1ε)2log(m/n)(Ay)_{i}\leq(1-\varepsilon)\sqrt{2\log(m/n)} for all imi\leq m. In view of the above, it implies that there is a vector y𝒩y^{\prime}\in{\mathcal{N}} at distance at most δ\delta from yy such that

(A(yy))i12ε2log(m/n)(A(y^{\prime}-y))_{i}\geq\frac{1}{2}\varepsilon\sqrt{2\log(m/n)}

for at least kk indices imi\leq m. Hence, the spectral norm of AA can be estimated as

Aε2klog(m/n)2δ>2m,\|A\|\geq\frac{\varepsilon\sqrt{2k\log(m/n)}}{2\delta}>2\sqrt{m},

contradicting the bound A2m\|A\|\leq 2\sqrt{m}. The contradiction shows that for every ySn1y\in S^{n-1} there is imi\leq m such that (Ay)i(1ε)2log(m/n)(Ay)_{i}\geq(1-\varepsilon)\sqrt{2\log(m/n)}. The result follows. ∎

5.2 Proof of Theorem 1.3 and Corollary 1.5

As we have noted, the upper bounds in Theorem 1.3 and Corollary 1.5 follow readily from Proposition 5.1, and so we only need to verify the lower bounds. In view of results of Section 3 which also apply in the Gaussian setting, we may assume that m=m(n)m=m(n) satisfies m=nω(1)m=n^{\omega(1)}.

Regarding the proof of Theorem 1.3, since AcAc is a standard Gaussian vector, it follows from the formula for the Gaussian density that

{Ac(1+ε)2logm}1mΩ(1)=1nω(1),{\mathbb{P}}\big{\{}\|Ac\|_{\infty}\leq(1+\varepsilon)\sqrt{2\log m}\big{\}}\geq 1-m^{-\Omega(1)}=1-n^{-\omega(1)},

whereas, by the assumption $m=n^{\omega(1)}$, we have $\sqrt{2\log m}=(1+o(1))\sqrt{2\log(m/n)}$. On the above event, the vector $x:=\big((1+\varepsilon)\sqrt{2\log m}\big)^{-1}c$ satisfies $Ax\leq{\bf 1}$, whence $z^{*}\geq\big((1+\varepsilon)\sqrt{2\log m}\big)^{-1}$. This implies the result.

The proof of the lower bound in Corollary 1.5 follows exactly the same scheme as the first part of the proof of Corollary 3.7.

6 Numerical experiments

Our results described in the previous sections naturally give rise to the following questions: (a) How close to each other are the asymptotic estimates of the optimal objective value and the empirical observations? (b) What is the asymptotic distribution and standard deviation of $z^{*}$? (c) How does the algorithm in the proof of Theorem 3.1 perform in practice? We discuss these questions in the following subsections, making conjectures and providing numerical evidence. We use Gurobi 10.0.1 for solving instances of the LP (1), with all Gurobi parameters kept at their default values.

6.1 Magnitude of the optimal objective value

Below, we investigate the quality of approximation of the optimal objective value $z^{*}$ by the asymptotic bound $ab:=(2\log(m/n))^{-\frac{1}{2}}$ given in Theorems 1.2 and 1.3. The empirical outcomes are obtained through multiple runs of the LP (1) under various choices of parameters. We consider two distributions of the entries of $A$: either $A$ is standard Gaussian or its entries are i.i.d Rademacher ($\pm 1$) random variables. In either case and for different choices of $m$, we sample the random LP (1) with a sample size of $50$, letting $c$ be a vector of i.i.d Rademacher variables rescaled to have unit Euclidean norm. The cost vector is generated once and remains the same for each of the $50$ instances of the LP within a sample. (A minimal script reproducing this experiment is sketched at the end of this subsection.)

$m$  $n$  $ab$  $\hat{\mu}$  relative gap (%)
1000 50 0.40853 0.50626 23.92
2000 50 0.36816 0.44119 19.83
6000 50 0.32317 0.37256 15.28
10000 50 0.30719 0.35176 14.50
20000 50 0.28888 0.32473 12.41
Table 1: Asymptotic versus empirically observed optimal objective value in the Gaussian setting (the sample size = 50)

As shown in Tables 1 and 2, for a fixed value of $n=50$, as the number of constraints $m$ increases, the relative gap between $ab$ and the sample mean $\hat{\mu}$ of the optimal objective values, defined as

(|abμ^|ab×100)%,\left(\frac{|ab-\hat{\mu}|}{ab}\times 100\right)\%,

decreases.

$m$  $n$  $ab$  $\hat{\mu}$  relative gap (%)
1000 50 0.40853 0.50369 23.29
2000 50 0.36816 0.43801 18.97
6000 50 0.32317 0.37200 15.11
10000 50 0.30719 0.35220 14.65
20000 50 0.28888 0.32856 13.73
Table 2: Asymptotic versus empirically observed optimal objective value in the Rademacher setting (the sample size = 50)
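
For reference, the experiment behind Table 1 can be reproduced along the following lines. This is a minimal sketch in Python, with SciPy's linprog (HiGHS) standing in for the Gurobi setup used above; the seed and the variable names are ours.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, reps = 1000, 50, 50

# Cost vector: i.i.d Rademacher entries rescaled to unit Euclidean norm,
# generated once and shared by all instances in the sample
c = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)

z_values = []
for _ in range(reps):
    A = rng.standard_normal((m, n))  # Gaussian entries; Rademacher: rng.choice([-1.0, 1.0], size=(m, n))
    # LP (1) is a maximization over free variables: pass -c to the minimizer
    res = linprog(-c, A_ub=A, b_ub=np.ones(m), bounds=(None, None))
    z_values.append(-res.fun)

ab = 1.0 / np.sqrt(2.0 * np.log(m / n))  # asymptotic bound (2 log(m/n))^(-1/2)
mu_hat = float(np.mean(z_values))
print(f"ab = {ab:.5f}, mu_hat = {mu_hat:.5f}, gap = {100 * abs(ab - mu_hat) / ab:.2f}%")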

6.2 Structural assumptions on the cost vector

In this subsection, we carry out a numerical study of the technical conditions on the cost vectors from Theorem 1.2. Roughly speaking, those conditions stipulate that the cost vector has significantly more than log3/2(m/n)\log^{3/2}(m/n) “essentially non-zero” components. Since Theorem 1.2 is an asymptotic result, it is not at all clear what practical assumptions on the cost vectors should be imposed to guarantee that the resulting optimal objective value is close to the asymptotic bound.

$k$  $\hat{\mu}(k)$  relative gap (%)
1 0.429907 15.08
2 0.454357 10.25
3 0.459387 9.25
4 0.479007 5.38
5 0.481839 4.82
6 0.486141 3.97
7 0.479635 5.25
8 0.493168 2.58
9 0.490829 3.04
10 0.498035 1.62
Table 3: The sample mean of the optimal objective value corresponding to $c(n,k)$ for a given value of the parameter $k$. The random matrix $A$ has dimensions $m=1000$ and $n=50$, with each entry equidistributed with the product of Bernoulli($1/2$) and $N(0,2)$ variables. The sample size = 50.

In the following experiment, we consider a random coefficient matrix $A$ with i.i.d subgaussian entries equidistributed with the product of Bernoulli($1/2$) and $N(0,2)$ random variables. Note that with this definition the entries have zero mean and unit variance. Further, we consider a family of cost vectors $c(n,k)$ parametrized by an integer $k$, so that the vector $c(n,k)$ has $k$ nonzero components equal to $1/\sqrt{k}$ each, while the remaining components are zeros. For a given choice of $k$, we sample the LP (1) with the above distribution of the entries and the cost vector $c(n,k)$. Our goal is to compare the resulting sample mean of the optimal objective value to the sample mean obtained in the previous subsection for the same matrix dimensions and for the strongly incompressible cost vector. We fix $m=1000$ and $n=50$, and let $\hat{\mu}(k)$ be the sample mean of the optimal objective value of the LP (1) with the cost vector $c(n,k)$. We denote the sample mean of the optimal objective value with the strongly incompressible cost vector by $\tilde{\mu}$. Our empirical results show that, as long as $k$ is small (so that $c(n,k)$ is very sparse), the value of the sample mean differs significantly from the one in Table 1. On the other hand, for $k$ sufficiently large the relative gap between the sample means, defined by

(|μ~μ^(k)|μ~×100)%\left(\frac{|\tilde{\mu}-\hat{\mu}(k)|}{\tilde{\mu}}\times 100\right)\%

becomes negligible. The results are presented in Table 3; a sketch of the corresponding sampling procedure is given below.
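
The entry distribution and the sparse cost vectors of this experiment are straightforward to generate; the following is a minimal sketch under the same conventions as the script of Subsection 6.1 (the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1000, 50, 4

# Each entry is the product of Bernoulli(1/2) and N(0,2) variables:
# zero mean and unit variance, since E[B^2] * E[g^2] = (1/2) * 2 = 1
A = rng.binomial(1, 0.5, size=(m, n)) * rng.normal(0.0, np.sqrt(2.0), size=(m, n))

# Sparse cost vector c(n,k): k nonzero components equal to 1/sqrt(k)
c = np.zeros(n)
c[:k] = 1.0 / np.sqrt(k)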

6.3 Limiting distribution of the optimal objective value

Recall that, given mutually independent standard Gaussian variables, their maximum converges (up to appropriate rescaling and translation) to the Gumbel distribution as the number of variables tends to infinity. If the vertices of the random polyhedron P={xn:Ax𝟏}P=\{x\in\mathbb{R}^{n}:\;Ax\leq{\bf 1}\} were behaving like independent Gaussian vectors (with a proper rescaling) then the limiting distribution of zz^{*} for any given fixed cost vector would be asymptotically Gumbel. Our computations suggest that in fact the limiting distribution is Gaussian.

Figure 3: The frequency histogram and the empirical cumulative distribution function of the optimal objective values for the Gaussian matrix $A$ with $m=1000$, $n=50$ (the sample size = 1000). The KS test results: KS statistic = 0.0232, $p$-value = 0.6453.

Figure 4: The frequency histogram and the empirical cumulative distribution function of the optimal objective values for the Rademacher matrix $A$ with $m=1000$, $n=50$ (the sample size = 1000). The KS test results: KS statistic = 0.0219, $p$-value = 0.7161.

In our numerical experiments, we consider Gaussian and Rademacher random coefficient matrices of dimensions $m=1000$ and $n=50$. The sample size in either case is taken to be 1000. As in the previous experiment, we take $c$ to be a random vector with i.i.d Rademacher components rescaled to have Euclidean norm one. We generate the cost vector once, so that it is the same for every LP from a sample. We employ the Kolmogorov–Smirnov (KS) test to compare the sample distribution of $z^{*}$ to a Gaussian with the same mean and variance. In either case, the obtained $p$-value exceeds the conventional significance level of 0.05, indicating a lack of substantial evidence to refute the null hypothesis that the data is normally distributed. Further, the frequency histograms and the empirical cumulative distribution functions of the optimal values shown in Figures 3 and 4 support this conjecture.
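
The test itself is a one-liner; the sketch below assumes that z_values is a sample of optimal objective values collected as in the script of Subsection 6.1, and compares it to a Gaussian with the matching sample mean and standard deviation:

import numpy as np
from scipy import stats

z_values = np.asarray(z_values)
ks_stat, p_value = stats.kstest(z_values, "norm", args=(z_values.mean(), z_values.std(ddof=1)))
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")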


The next conjecture summarizes the experimental findings:

Conjecture 6.1 (Limiting distribution of zz^{*}).

Let $m$, $n$, $A(n)$ be as in Theorem 1.2, and for each $n$ let $c=c(n)$ be a uniform random unit cost vector independent from $A(n)$. Then the sequence of normalized random variables

z𝔼zVarz\frac{z^{*}-{\mathbb{E}}\,z^{*}}{\sqrt{{\rm Var}\,z^{*}}}

converges in distribution to the standard Gaussian.

6.4 Standard deviation of the optimal objective value

As in the previous two numerical experiments, we consider two types of entry distributions: Gaussian and Rademacher. The cost vector is generated according to the same procedure as above. We sample the LP (1) with a sample size of $50$, and compute the sample standard deviation $\hat{\sigma}$ (the square root of the sample variance) of the optimal objective value. Based on our numerical studies, we speculate that the standard deviation of $z^{*}$ is roughly of order $1/\sqrt{m}$, at least in the setting where $m$ is very large compared to $n$. Indeed, upon examination of the last column in Tables 4 and 5, we observe a consistent phenomenon: the quantities $\hat{\sigma}\times\sqrt{m}$ are of order $1$ for various choices of $m$ and $n$ (a minimal check of this scaling is sketched after Table 5).

$m$  $n$  $ab$  $\hat{\sigma}$  $\hat{\sigma}\times\sqrt{m}$
1000 50 0.408539 0.02420 0.765
2000 50 0.368161 0.01728 0.773
4000 50 0.337791 0.01333 0.843
6000 50 0.323170 0.01142 0.885
8000 50 0.313877 0.01059 0.947
10000 50 0.307196 0.01027 1.027
1000 100 0.465991 0.03014 0.953
2000 100 0.408539 0.01592 0.712
4000 100 0.368161 0.01172 0.742
6000 100 0.349456 0.00990 0.767
1000 150 0.528257 0.03192 1.010
2000 150 0.441515 0.015066 0.674
4000 150 0.391745 0.01155 0.731
6000 150 0.368161 0.011850 0.918
Table 4: Sample standard deviation of the optimal objective value with Gaussian matrix $A$ (the sample size = 50)
$m$  $n$  $ab$  $\hat{\sigma}$  $\hat{\sigma}\times\sqrt{m}$
1000 50 0.408539 0.020420 0.645
2000 50 0.368161 0.017393 0.777
4000 50 0.337791 0.011962 0.756
6000 50 0.323170 0.010207 0.790
8000 50 0.313877 0.010333 0.924
10000 50 0.307196 0.008997 0.899
1000 100 0.465991 0.023097 0.730
2000 100 0.408539 0.014648 0.655
4000 100 0.368161 0.010436 0.660
6000 100 0.349456 0.011791 0.913
1000 150 0.528257 0.032600 1.030
2000 150 0.441515 0.015466 0.691
4000 150 0.391745 0.011030 0.697
6000 150 0.368161 0.009153 0.708
Table 5: The sample standard deviation of the optimal objective value with Rademacher matrix $A$ (the sample size = 50)
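
The rescaled quantity $\hat{\sigma}\times\sqrt{m}$ reported in the last column of Tables 4 and 5 is computed directly from a sample of optimal values; a minimal sketch, again assuming z_values was collected as in Subsection 6.1:

import numpy as np

sigma_hat = float(np.std(z_values, ddof=1))  # sample standard deviation of z*
print(f"sigma_hat = {sigma_hat:.5f}, sigma_hat * sqrt(m) = {sigma_hat * np.sqrt(m):.3f}")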

6.5 An algorithm for finding a near-optimal feasible solution

As previously discussed, the proof of Theorem 3.1 involves an iterative projection process to restore feasibility from an initial point. In that process, we discretize the continuous space spanned by the violated constraints at each iteration (i.e. use a covering argument) as a means to apply a probabilistic union bound. Recall that the goal of the algorithm described in the proof of Theorem 3.1 is to provide theoretical guarantees for $z^{*}$. An exact software implementation of that algorithm would be heavy in terms of both the amount of code and computations. On the other hand, it is of considerable interest to measure the performance of block-iterative projection methods in the context of finding near-optimal solutions to instances of the random LP (1). To approach this task, we consider a “light” version of the algorithm from the proof of Theorem 3.1, which does not involve any covering arguments (see Algorithm 1 below; a NumPy sketch of the procedure follows the pseudocode). The algorithm is a version of the classical Kaczmarz block-iterative projection method as presented in [18] and its randomized counterparts in [38] and [53]. In our context, the blocks correspond to the rows of the matrix $A$ that are violated by the initial point and its updates at each iteration. Table 6 shows the results of running the algorithm for various parameters $m$ and $n$. For each instance of the LP, $r$ is the number of iterations required to restore feasibility, $x_0=(2\log(m/n))^{-\frac{1}{2}}c$ is the initial point, and $x$ is the output of Algorithm 1. Note that the obtained perturbations of $x_0$ lie in the orthogonal complement of the cost vector $c$, and as a result $\|x_0\|_2=z(x)$. The numbers $|I_0|$ and $|I_1|$ are the numbers of constraints violated by $x_0$ at iteration 1 and by $x_0+x_1$ at iteration 2, respectively. Note that these numbers are rather small compared to the size of the LP instance. In our experiment, the algorithm did not converge for the $20000\times 50$ instance of the LP, which we attribute to the random nature of the problem. For all other given instances, the algorithm converged quickly owing to the small number of constraint violations.


Input: $A_{m\times n}$: standard Gaussian random matrix,
             $c$: random cost vector uniformly distributed on the sphere
Output: $x$: near-optimal feasible solution to LP (1)
$x\leftarrow(2\log(m/n))^{-\frac{1}{2}}c$;
$j\leftarrow 0$;
$\epsilon\leftarrow 0.1$;
while $x$ is not feasible do
      $j\leftarrow j+1$;
      Find the set $I_{j-1}$ of constraints that are violated or nearly violated by $x$, i.e. all indices $i$ with $\langle{\rm row}_i(A),x\rangle>1-\epsilon$;
      Compute the vector $b$ with $b_i=1-\epsilon-\langle{\rm row}_i(A),x\rangle$ for all $i\in I_{j-1}$;
      Project the constraints onto the space $c^{\perp}$ (i.e. define $v_i={\rm row}_i(A)-\langle{\rm row}_i(A),c\rangle c$ for all $i\in I_{j-1}$);
      Find $x_j$ as a linear combination of the $v_i$, $i\in I_{j-1}$, so that the constraints violated by $x$ are repaired (specifically, solve the linear system $M^{T}Mu=b$ for $u$ and set $x_j=Mu$, where $M$ is the matrix with columns $v_i$);
      $\epsilon\leftarrow 0.1\,\epsilon$;
      $x\leftarrow x+x_j$;
end while
return $x$
Algorithm 1: Finding a near-optimal feasible solution to LP (1); an instance of the block-iterative Kaczmarz projection method
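
The following is a minimal NumPy sketch of Algorithm 1. It assumes that the rows indexed by $I_{j-1}$ are in generic position, so that the normal equations below are solvable; the function name, the iteration cap, and the seed are ours.

import numpy as np

def restore_feasibility(A, c, eps=0.1, max_iter=50):
    # Light version of the feasibility-restoration procedure (Algorithm 1):
    # start at x0 = (2 log(m/n))^(-1/2) c and repeatedly repair the violated
    # (or nearly violated) constraints by a projection step inside c-perp.
    m, n = A.shape
    x = c / np.sqrt(2.0 * np.log(m / n))
    for _ in range(max_iter):
        slack = A @ x
        if np.all(slack <= 1.0):               # x is feasible: done
            return x
        I = np.flatnonzero(slack > 1.0 - eps)  # violated or nearly violated rows
        b = 1.0 - eps - slack[I]
        # Project the offending rows onto the orthogonal complement of c
        V = A[I] - np.outer(A[I] @ c, c)       # rows v_i = row_i(A) - <row_i(A), c> c
        M = V.T                                # matrix with columns v_i
        u = np.linalg.solve(M.T @ M, b)        # normal equations M^T M u = b
        x = x + M @ u                          # after the step, <row_i(A), x> = 1 - eps on I
        eps *= 0.1
    raise RuntimeError("feasibility was not restored within max_iter iterations")

rng = np.random.default_rng(0)
m, n = 1000, 50
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
c /= np.linalg.norm(c)                         # uniform random unit cost vector
x = restore_feasibility(A, c)
print("objective value z(x) =", c @ x)         # equals ||x0||_2: the updates lie in c-perp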
$m$  $n$  $r$  $\|x_0\|_2=z(x)$  $|I_0|$  $|I_1|$
1000 50 2 0.408539 11 1
1000 100 2 0.465991 28 4
2000 50 2 0.368161 6 1
2000 100 2 0.408539 23 1
4000 50 2 0.337791 13 2
4000 100 2 0.368161 30 1
6000 50 2 0.323170 22 21
6000 100 2 0.349456 31 8
8000 50 2 0.313877 22 6
8000 100 2 0.337791 21 5
10000 50 2 0.307196 19 1
10000 100 1 0.329505 34 0
20000 100 1 0.307196 29 0
40000 50 1 0.273493 12 0
40000 100 2 0.288881 34 3
60000 50 2 0.265558 16 2
60000 100 1 0.279576 36 0
80000 50 1 0.260329 21 0
80000 100 2 0.273493 36 1
100000 50 2 0.256479 28 5
100000 100 2 0.269040 35 1
Table 6: Numerical results for Algorithm 1 for various $m$ and $n$ ($r$ = the number of iterations needed to restore feasibility).

7 Conclusion

In this paper, we study the optimal objective value $z^{*}$ of a random linear program in which the entries of the $m\times n$ coefficient matrix $A$ have subgaussian distributions, the variables $x_1,\dots,x_n$ are free, and the right hand side of each constraint is $1$. We establish the sharp asymptotics $z^{*}=(1\pm o(1))(2\log(m/n))^{-\frac{1}{2}}$ in the regime $n\to\infty$, $\frac{m}{n}\to\infty$, under some additional assumptions on the cost vectors. These asymptotics provide quantitative information about the geometry of the random polyhedron defined by the feasible region of the random LP (1), specifically about its spherical mean width. The computational experiments support our theoretical guarantees and give insights into several open questions regarding the asymptotic behaviour of the random LP. The connection between the proof of the main result and convex feasibility problems allows us to provide a fast algorithm for obtaining a near-optimal solution in our randomized setting. The random LP (1) is a step towards studying models with inhomogeneous coefficient matrices and establishing stronger connections to practical applications.

Acknowledgements

We are grateful to Dylan Altschuler for bringing our attention to [15, Section 3.7].

Declarations

Funding. Konstantin Tikhomirov is partially supported by the NSF grant DMS-2331037.

Conflict of interest/Competing interests. The authors declare no conflict of interest in regard to this publication.

Code availability. The code for the numerical experiments is available at https://github.com/marzb93/RandomLinearProgram

References

  • [1] Shmuel Agmon. The relaxation method for linear inequalities. Canad. J. Math., 6:382–392, 1954.
  • [2] Ron Aharoni and Yair Censor. Block-iterative projection methods for parallel computation of solutions to convex feasibility problems. Linear Algebra and Its Applications, 120:165–175, 1989.
  • [3] David Alonso-Gutiérrez and Joscha Prochno. On the Gaussian behavior of marginals and the mean width of random polytopes. Proc. Amer. Math. Soc., 143(2):821–832, 2015.
  • [4] Madan Mohan Babbar. Distributions of solutions of a set of linear equations (with an application to linear programming). Journal of the American Statistical Association, 50(271):854–869, 1955.
  • [5] Zhong-Zhi Bai and Wen-Ting Wu. On greedy randomized Kaczmarz method for solving large sparse linear systems. SIAM J. Sci. Comput., 40(1):A592–A606, 2018.
  • [6] Charles Blair. Random linear programs with many variables and few constraints. Math. Programming, 34(1):62–71, 1986.
  • [7] K.-H. Borgwardt. The average number of pivot steps required by the simplex-method is polynomial. Z. Oper. Res. Ser. A-B, 26(5):A157–A177, 1982.
  • [8] Karl-Heinz Borgwardt. The simplex method, volume 1 of Algorithms and Combinatorics: Study and Research Texts. Springer-Verlag, Berlin, 1987. A probabilistic analysis.
  • [9] Karl Heinz Borgwardt. A sharp upper bound for the expected number of shadow vertices in LP-polyhedra under orthogonal projection on two-dimensional planes. Math. Oper. Res., 24(3):544–603, 1999.
  • [10] Karthekeyan Chandrasekaran and Santosh S Vempala. Integer feasibility of random polytopes: random integer programs. In Proceedings of the 5th conference on Innovations in theoretical computer science, pages 449–458, 2014.
  • [11] Sergei Chubanov. A polynomial projection algorithm for linear feasibility problems. Math. Program., 153(2, Ser. A):687–713, 2015.
  • [12] Daniel Dadush and Sophie Huiberts. A friendly smoothed analysis of the simplex method. In STOC’18—Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 390–403. ACM, New York, 2018.
  • [13] N. Dafnis, A. Giannopoulos, and A. Tsolomitis. Asymptotic shape of a random polytope in a convex body. J. Funct. Anal., 257(9):2820–2839, 2009.
  • [14] Jesús A. De Loera, Jamie Haddock, and Deanna Needell. A sampling Kaczmarz-Motzkin algorithm for linear feasibility. SIAM J. Sci. Comput., 39(5):S66–S87, 2017.
  • [15] Amir Dembo and Ofer Zeitouni. Large deviations techniques and applications, volume 38 of Applications of Mathematics (New York). Springer-Verlag, New York, second edition, 1998.
  • [16] Martin E Dyer, Alan M Frieze, and Colin JH McDiarmid. On linear programs with random costs. Mathematical Programming, 35:3–16, 1986.
  • [17] Yonina C. Eldar and Deanna Needell. Acceleration of randomized Kaczmarz method via the Johnson-Lindenstrauss lemma. Numer. Algorithms, 58(2):163–177, 2011.
  • [18] Tommy Elfving. Block-iterative methods for consistent and inconsistent linear equations. Numerische Mathematik, 35:1–12, 1980.
  • [19] William Feller. An introduction to probability theory and its applications, Volume 2, volume 81. John Wiley & Sons, 1991.
  • [20] Apostolos Giannopoulos and Marianna Hartzoulaki. Random spaces generated by vertices of the cube. Discrete Comput. Geom., 28(2):255–273, 2002.
  • [21] Apostolos Giannopoulos, Labrini Hioni, and Antonis Tsolomitis. Asymptotic shape of the convex hull of isotropic log-concave random vectors. Adv. in Appl. Math., 75:116–143, 2016.
  • [22] E. D. Gluskin. An octahedron is poorly approximated by random subspaces. Funktsional. Anal. i Prilozhen., 20(1):14–20, 96, 1986.
  • [23] E. D. Gluskin. Extremal properties of orthogonal parallelepipeds and their applications to the geometry of Banach spaces. Mat. Sb. (N.S.), 136(178)(1):85–96, 1988.
  • [24] Robert M. Gower, Denali Molitor, Jacob Moorman, and Deanna Needell. On adaptive sketch-and-project for solving linear systems. SIAM J. Matrix Anal. Appl., 42(2):954–989, 2021.
  • [25] Robert M. Gower and Peter Richtárik. Randomized iterative methods for linear systems. SIAM J. Matrix Anal. Appl., 36(4):1660–1690, 2015.
  • [26] Sophie Huiberts, Yin Tat Lee, and Xinzhi Zhang. Upper and lower bounds on the smoothed complexity of the simplex method. In STOC’23—Proceedings of the 55th Annual ACM Symposium on Theory of Computing, pages 1904–1917. ACM, New York, 2023.
  • [27] Yuling Jiao, Bangti Jin, and Xiliang Lu. Preasymptotic convergence of randomized Kaczmarz method. Inverse Problems, 33(12):125012, 21, 2017.
  • [28] S. Kaczmarz. Angenäherte Auflösung von Systemen linearer Gleichungen (English translation by Jason Stockmann). Bulletin International de l’Académie Polonaise des Sciences et des Lettres, 1937.
  • [29] Marcel Klatt, Axel Munk, and Yoav Zemel. Limit laws for empirical optimal solutions in random linear programs. Annals of Operations Research, 315(1):251–278, 2022.
  • [30] A. E. Litvak, A. Pajor, M. Rudelson, and N. Tomczak-Jaegermann. Smallest singular value of random matrices and geometry of random polytopes. Adv. Math., 195(2):491–523, 2005.
  • [31] Ji Liu and Stephen J. Wright. An accelerated randomized Kaczmarz algorithm. Math. Comp., 85(297):153–178, 2016.
  • [32] Shuyu Liu, Florentina Bunea, and Jonathan Niles-Weed. Asymptotic confidence sets for random linear programs. arXiv preprint arXiv:2302.12364, 2023.
  • [33] Yong Liu and Chuan-Qing Gu. On greedy randomized block Kaczmarz method for consistent linear systems. Linear Algebra Appl., 616:178–200, 2021.
  • [34] Anna Ma, Deanna Needell, and Aaditya Ramdas. Convergence properties of the randomized extended Gauss-Seidel and Kaczmarz methods. SIAM J. Matrix Anal. Appl., 36(4):1590–1604, 2015.
  • [35] Md Sarowar Morshed, Md Saiful Islam, and Md. Noor-E-Alam. Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem. J. Global Optim., 77(2):361–382, 2020.
  • [36] Md Sarowar Morshed, Md Saiful Islam, and Md Noor-E-Alam. Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration. Mathematical Programming, 194(1-2):719–779, 2022.
  • [37] Theodore Samuel Motzkin and Isaac Jacob Schoenberg. The relaxation method for linear inequalities. Canadian Journal of Mathematics, 6:393–404, 1954.
  • [38] Deanna Needell and Joel A. Tropp. Paved with good intentions: analysis of a randomized block Kaczmarz method. Linear Algebra Appl., 441:199–221, 2014.
  • [39] Yu-Qi Niu and Bing Zheng. A greedy block Kaczmarz algorithm for solving large-scale linear systems. Appl. Math. Lett., 104:106294, 8, 2020.
  • [40] Gilles Pisier. The volume of convex bodies and Banach space geometry, volume 94 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1989.
  • [41] András Prékopa. On the probability distribution of the optimum of a random linear program. SIAM Journal on Control, 4(1):211–222, 1966.
  • [42] Mark Rudelson and Roman Vershynin. The Littlewood-Offord problem and invertibility of random matrices. Adv. Math., 218(2):600–633, 2008.
  • [43] Steve Smale. On the average number of steps of the simplex method of linear programming. Math. Programming, 27(3):241–262, 1983.
  • [44] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time. J. ACM, 51(3):385–463, 2004.
  • [45] Thomas Strohmer and Roman Vershynin. A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl., 15(2):262–278, 2009.
  • [46] Michel Talagrand. Mean field models for spin glasses. Volume I, volume 54 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 2011. Basic examples.
  • [47] Michel Talagrand. Mean field models for spin glasses. Volume II, volume 55 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Heidelberg, 2011. Advanced replica-symmetry and low temperature.
  • [48] Jan Telgen. On relaxation methods for systems of linear inequalities. European J. Oper. Res., 9(2):184–189, 1982.
  • [49] Roman Vershynin. Beyond Hirsch conjecture: walks on random polytopes and smoothed complexity of the simplex method. SIAM J. Comput., 39(2):646–678, 2009.
  • [50] Roman Vershynin. High-dimensional probability, volume 47 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2018. An introduction with applications in data science, With a foreword by Sara van de Geer.
  • [51] Yanjun Zhang and Hanyu Li. Block sampling Kaczmarz–Motzkin methods for consistent linear systems. Calcolo, 58(3):39, 2021.
  • [52] Yanjun Zhang and Hanyu Li. Greedy Motzkin-Kaczmarz methods for solving linear systems. Numer. Linear Algebra Appl., 29(4):Paper No. e2429, 24, 2022.
  • [53] Yanjun Zhang and Hanyu Li. Randomized block subsampling Kaczmarz-Motzkin method. Linear Algebra Appl., 667:133–150, 2023.