
Empirical forms of the Petty projection inequality

Grigoris Paouris    Peter Pivovarov    Kateryna Tatarko
Abstract

The Petty projection inequality is a fundamental affine isoperimetric principle for convex sets. It has shaped several directions of research in convex geometry which forged new connections between projection bodies, centroid bodies, and mixed volume inequalities. We establish several different empirical forms of the Petty projection inequality by re-examining these key relationships from a stochastic perspective. In particular, we derive sharp extremal inequalities for several multiple-entry functionals of random convex sets, including mixed projection bodies and mixed volumes.

Dedicated to the memory of Clinton Myers Petty, 1923–2021.

2020 Mathematics Subject Classification. Primary 52A21. Secondary 52A22, 52A39. Keywords. Affine isoperimetric inequalities, projection bodies, random approximation, zonotopes.

1 Introduction

1.1 The Petty projection inequality and its reach

Affine isoperimetric inequalities concern functionals on classes of sets in which ellipsoids play an extremal role. Typically such inequalities involve convex bodies, taken modulo affine (or linear) transformations, and are strictly stronger than their Euclidean counterparts. The standard isoperimetric inequality can be derived from several different affine strengthenings. Such affine inequalities have come to form an integral part of convex geometry and have been extensively investigated within Brunn-Minkowski theory; see the expository survey [22] and books [9, 34] for foundational work on this subject.

A fundamental example is the Petty projection inequality. Recall that the projection body \Pi(K) of a convex body K in \mathbb{R}^{n} is defined as follows: given a direction \theta on the sphere S^{n-1}, the support function of \Pi(K) at \theta is the volume of the orthogonal projection of K onto \theta^{\perp} (see §2 for precise definitions). We write \Pi^{\circ}(K) for the polar of the projection body. Petty’s inequality states that among all convex bodies of the same volume, ellipsoids maximize the volume of the polar projection body. Formally, it can be stated as

|Π(K)||Π(K)||\Pi^{\circ}(K)|\leq|\Pi^{\circ}(K^{*})| (1)

where K^{*}=rB_{2}^{n} is the centered Euclidean ball with radius r chosen to satisfy |K^{*}|=|K|. The polar projection operator \Pi^{\circ} satisfies \Pi^{\circ}(TK)=T\Pi^{\circ}(K) for any volume-preserving affine transformation T, which explains the use of ‘affine’ in this context.

Projection bodies are an important class of convex bodies in geometry and functional analysis [2, 3, 14, 35]. The volume of Π(K)\Pi^{\circ}(K) is related to the surface area S(K)S(K) via

ωn1/n|Π(K)|1nS(K),\omega_{n}^{1/n}|\Pi^{\circ}(K)|^{-\frac{1}{n}}\leq S(K),

where \omega_{n}=|B_{2}^{n}|; the latter follows directly from Cauchy’s formula and Hölder’s inequality (see [34, Remark 10.9.1]). Thus Petty’s inequality implies the classical isoperimetric inequality for convex sets. Up to normalization, the surface area S(K) is one of the quermassintegrals of K, while the quantity |\Pi^{\circ}(K)|^{-1/n} is an affine quermassintegral of K. Alexandrov’s inequalities state that among convex bodies of a given volume, all quermassintegrals are minimized on balls (see [34, §7.4]). In a recent breakthrough [27], E. Milman and Yehudayoff proved that all affine quermassintegrals are minimized on ellipsoids, verifying a long-standing conjecture of Lutwak. This result establishes a family of affine inequalities that interpolate between the Petty projection inequality and the fundamental Blaschke-Santaló inequality for the volume of the polar body of K. The latter is equivalent to the affine isoperimetric inequality, see e.g., [22].
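For the reader's convenience, here is a sketch of how Petty's inequality yields the isoperimetric inequality, keeping track of the exact constant (the display above absorbs the factor \omega_{n-1}/(n\omega_{n})\leq 1); it uses only the polar coordinate formula for volume, Jensen's inequality for the convex function t\mapsto t^{-n}, and Cauchy's formula:

|\Pi^{\circ}(K)|=\frac{1}{n}\int_{S^{n-1}}h_{\Pi K}(u)^{-n}\,du,\qquad\Bigl(\frac{|\Pi^{\circ}(K)|}{\omega_{n}}\Bigr)^{-1/n}\leq\frac{1}{n\omega_{n}}\int_{S^{n-1}}h_{\Pi K}(u)\,du=\frac{1}{n\omega_{n}}\int_{S^{n-1}}|P_{u^{\perp}}K|\,du=\frac{\omega_{n-1}}{n\omega_{n}}S(K).

Since each step holds with equality when K is a centered ball, combining this chain with (1) gives S(K)\geq S(K^{*}), the isoperimetric inequality for convex sets.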

Petty originally built on work of Busemann concerning the expected volume of random simplices in convex bodies, and established what is known as the Busemann-Petty centroid inequality [30]. He connected the latter to projection bodies [31] by using an inequality about mixed volumes, known as Minkowski’s first inequality ([34, §7.2]), which asserts that

V_{1}(K,L)^{n}\geq|K|^{n-1}|L|.

This idea was further developed by Lutwak and plays an important role in kindred inequalities (see [22]).
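For orientation, taking L=B_{2}^{n} in Minkowski's first inequality, together with the standard identity nV_{1}(K,B_{2}^{n})=S(K), already recovers the classical isoperimetric inequality for convex bodies:

\Bigl(\frac{S(K)}{n}\Bigr)^{n}=V_{1}(K,B_{2}^{n})^{n}\geq|K|^{n-1}|B_{2}^{n}|=\omega_{n}|K|^{n-1},

with equality when K is a ball.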

Since Petty’s seminal work in 1972, his inequality has been proven by a number of different methods, e.g., [22, 33, 26, 27]. Moreover, several generalizations of the inequality have been established. In particular, Lutwak, Yang and Zhang introduced LpL_{p} and Orlicz versions of the projection body and proved the corresponding Petty inequalities [24, 25]. In [11, 1], a generalization to Minkowski valuations was obtained (see also [18] for a characterization of the projection body operator). Another generalization, established by Lutwak involves the notion of mixed projection bodies. Let K1,,Kn1K_{1},\cdots,K_{n-1} be convex sets in n\mathbb{R}^{n}. The support function of the mixed projection body Π(K1,,Kn1)\Pi(K_{1},\cdots,K_{n-1}) in a direction θ\theta is defined as the following mixed volume:

hΠ(K1,,Kn1)(θ)=nV(K1,,Kn1,[0,θ]),h_{\Pi(K_{1},\ldots,K_{n-1})}(\theta)=nV(K_{1},\ldots,K_{n-1},[0,\theta]), (2)

where [0,θ][0,\theta] is the line segment joining the origin and θ\theta. Lutwak established several inequalities for mixed projection bodies, one of which gives Petty’s projection inequality as a special case [19, Theorem 3.8]; namely,

|Π(K1,,Kn1)||Π(K1,,Kn1)|.\lvert\Pi^{\circ}(K_{1},\ldots,K_{n-1})\rvert\leq\lvert\Pi^{\circ}(K_{1}^{*},\ldots,K_{n-1}^{*})\rvert. (3)

Recent active investigation around the notion of the projection body with respect to other measures and generalizations appear in [16, 15].

In this note we establish empirical versions of the Petty projection inequality and its generalizations for mixed projection bodies. The study of empirical versions of affine isoperimetric inequalities for centroid bodies and their LpL_{p}-analogues was initiated by the first two authors in [28] and further developed in [6]. A number of inequalities in Brunn-Minkowski theory have been shown to have stronger empirical forms [29], but Petty’s projection inequality has eluded our previous efforts. Inspired by recent results of E. Milman and Yehudayoff [27], and also by the approach of Campi and Gronchi in [4], our work here is intended to fill this gap.

1.2 Empirical mixed projection body inequalities

Our main results concern randomly generated sets, obtained as linear images of a compact, convex set CmC\subseteq\mathbb{R}^{m} under an n×mn\times m random matrix 𝑿\bm{X}. Namely, we will consider sets of the form

\bm{X}C=\left\{c_{1}X_{1}+\ldots+c_{m}X_{m}:(c_{j})\in C\right\},

where X_{1},\ldots,X_{m} are independent random vectors distributed according to densities f_{1},\ldots,f_{m} of continuous probability distributions on \mathbb{R}^{n}. We will write {\bm{X}}^{\scriptscriptstyle\#} for the n\times m random matrix that has independent columns distributed according to f_{1}^{*},\ldots,f_{m}^{*}, the symmetric decreasing rearrangements of the f_{i} (see § 2.3).

More generally, it will be convenient to work with matrices 𝑿\bm{X} whose column vectors are grouped into blocks. Assume that {Xij}\{X_{ij}\} is a collection of independent random vectors such that XijX_{ij} is distributed according to fijf_{ij}, i=1,,ni=1,\ldots,n, j=1,,mij=1,\ldots,m_{i}, where mi1m_{i}\geq 1. For =1,,n\ell=1,\ldots,n, we write m()=m1++mm(\ell)=m_{1}+\cdots+m_{\ell}, and form 𝑿=[𝑿1𝑿]\bm{X}=[\bm{X}_{1}\ldots\bm{X}_{\ell}] with n×min\times m_{i} blocks 𝑿i=[Xi1Ximi]\bm{X}_{i}=[X_{i1}\ldots X_{im_{i}}]. We adopt a similar convention for 𝑿#{\bm{X}}^{\scriptscriptstyle\#}, which consists of n×min\times m_{i} blocks 𝑿i#=[Xi1Ximi]\bm{X}_{i}^{\scriptscriptstyle\#}=[X_{i1}^{*}\cdots X_{im_{i}}^{*}], where XijX_{ij}^{*} are independent and distributed according to fijf_{ij}^{*}. For ease of reference, we summarize this notation in Table 1.

n\times m matrix with \ell blocks | n\times m_{i} block, 1\leq i\leq\ell | columns with densities
\bm{X}=[\bm{X}_{1}\ldots\bm{X}_{\ell}] | \bm{X}_{i}=[X_{i1}\ldots X_{im_{i}}] | X_{ij}\sim f_{ij}
\bm{X}^{\scriptscriptstyle\#}=[\bm{X}_{1}^{\scriptscriptstyle\#}\ldots\bm{X}_{\ell}^{\scriptscriptstyle\#}] | \bm{X}_{i}^{\scriptscriptstyle\#}=[X_{i1}^{*}\ldots X^{*}_{im_{i}}] | X^{*}_{ij}\sim f^{*}_{ij}
Table 1: Random matrices with independent columns

With this notation, our first main result concerns mixed projection bodies of random sets generated by 𝑿\bm{X} and 𝑿#\bm{X}^{\scriptscriptstyle\#}.

Theorem 1.1

Let C1,,Cn1C_{1},\ldots,C_{n-1} be compact convex sets such that dim(Ci)=mi\mathop{\rm dim}(C_{i})=m_{i} for i=1,,n1i=1,\ldots,n-1 and let m=m1++mn1m=m_{1}+\ldots+m_{n-1}. Let 𝐗\bm{X} and 𝐗#\bm{X}^{\scriptscriptstyle\#} be n×mn\times m random matrices with =n1\ell=n-1 in Table 1. Then for any radial measure ν\nu with a decreasing density,

𝔼ν(Π(𝑿C1,,𝑿Cn1))𝔼ν(Π(𝑿#C1,,𝑿#Cn1)).\mathbb{E}\nu\left(\Pi^{\circ}(\bm{X}C_{1},\ldots,\bm{X}C_{n-1})\right)\leq\mathbb{E}\nu\left(\Pi^{\circ}(\bm{X}^{\scriptscriptstyle\#}C_{1},\ldots,\bm{X}^{\scriptscriptstyle\#}C_{n-1})\right).

A special case of central importance concerns the classical projection body operator. Taking \ell=1 and writing m=m(1)=m_{1} and C=C_{1}=\ldots=C_{n-1}, we have the following consequence.

Theorem 1.2

Let CC be a compact convex set in m\mathbb{R}^{m}. Let 𝐗\bm{X} and 𝐗#\bm{X}^{\scriptscriptstyle\#} be n×mn\times m random matrices with independent columns distributed according to ff and ff^{*}, respectively. Then for any radial measure ν\nu with a decreasing density,

𝔼ν(Π(𝑿C))𝔼ν(Π(𝑿#C)).\mathbb{E}\nu(\Pi^{\circ}(\bm{X}C))\leq\mathbb{E}\nu(\Pi^{\circ}(\bm{X}^{\scriptscriptstyle\#}C)).

Theorem 1.2 extends the Petty projection inequality (1) in various ways. Indeed, let KK be a convex body in n\mathbb{R}^{n} and let X1,,XmX_{1},\ldots,X_{m} be independent random vectors drawn uniformly from KK. We denote their convex hull by

[K]m=conv{X1,,Xm}.[K]_{m}=\mathop{\rm conv}\{X_{1},\ldots,X_{m}\}.

In matrix notation, we have [K]m=𝑿Sm[K]_{m}={\bm{X}}S_{m}, where 𝑿=[X1Xm]{\bm{X}}=[X_{1}\dots X_{m}] and SmS_{m} is the simplex Sm:=conv{e1,,em}S_{m}:=\mathop{\rm conv}\{e_{1},\dots,e_{m}\}. Thus if ν\nu is Lebesgue measure, the above theorem states that

𝔼|Π([K]m)|𝔼|Π([K]m)|.\mathbb{E}|\Pi^{\circ}([K]_{m})|\leq\mathbb{E}|\Pi^{\circ}([K^{*}]_{m})|. (4)

Note that Π([K]m)\Pi^{\circ}([K^{*}]_{m}) is not a ball and the above statement does not follow from Petty’s inequality. However, when mm\rightarrow\infty, we get that [K]mK[K]_{m}\rightarrow K, which implies Π([K]m)Π(K)\Pi^{\circ}([K]_{m})\rightarrow\Pi^{\circ}(K), hence (1) follows from (4). The inequality ν(Π(K))ν(Π(K))\nu\left(\Pi^{\circ}(K)\right)\leq\nu\left(\Pi^{\circ}(K^{*})\right) that we get from Theorem 1.2 as mm\to\infty can also be directly obtained by adapting the proof of Petty’s inequality in [27, Section 8.2].

More generally, let K and L be convex bodies in \mathbb{R}^{n} and assume that the columns of {\bm{X}}_{1} and {\bm{X}}_{2} are distributed according to \frac{1}{\lvert K\rvert}\mathds{1}_{K} and \frac{1}{\lvert L\rvert}\mathds{1}_{L}, respectively. For p_{1},p_{2}\geq 1, we define [K]_{m_{1}}^{p_{1}}=\bm{X}_{1}B_{p_{1}}^{m_{1}} and [L]_{m_{2}}^{p_{2}}=\bm{X}_{2}B_{p_{2}}^{m_{2}}; then we have

𝔼ν(Π([K]m1p1+p[L]m2p2))𝔼ν(Π([K]m1p1+p[L]m2p2)),\displaystyle\mathbb{E}\nu(\Pi^{\circ}([K]_{m_{1}}^{p_{1}}+_{p}[L]_{m_{2}}^{p_{2}}))\leq\mathbb{E}\nu(\Pi^{\circ}([K^{*}]_{m_{1}}^{p_{1}}+_{p}[L^{*}]_{m_{2}}^{p_{2}})), (5)

where +_{p} denotes L_{p}-addition of sets for p\geq 1 (see § 2); in fact, we can accommodate more general M-addition and Orlicz addition operations (see § 2). In a similar manner, Theorem 1.1 implies

𝔼ν(Π([K]m1p1,,[K]mn1pn1))𝔼ν(Π([K]m1p1,,[K]mn1pn1))\mathbb{E}\nu(\Pi^{\circ}([K]_{m_{1}}^{p_{1}},\ldots,[K]_{m_{n-1}}^{p_{n-1}}))\leq\mathbb{E}\nu(\Pi^{\circ}([K^{*}]_{m_{1}}^{p_{1}},\dots,[K^{*}]_{m_{n-1}}^{p_{n-1}})) (6)

where we used the same notation as above. When ν\nu is the Lebesgue measure, then (6) can be seen as a local version of (3) for natural families of random convex sets associated to K1,,Kn1K_{1},\ldots,K_{n-1}.

Further specializing to the case when p1==pn1=p_{1}=\ldots=p_{n-1}=\infty, we get a corollary for the mixed projection body of centroid bodies. Recall that the centroid body Z(L)Z(L) of a convex body LL in n\mathbb{R}^{n} is defined by its support function via

hZ(L)(u)=1|L|L|x,u|𝑑x.\displaystyle h_{Z(L)}(u)=\frac{1}{\lvert L\rvert}\int_{L}\lvert\langle x,u\rangle\rvert dx.

The Busemann-Petty centroid inequality mentioned above is a sharp extremal inequality for the volume of Z(K)Z(K), which heavily influenced affine isoperimetric principles [22] and the development of LpL_{p}-Brunn–Minkowski theory [24, 25].

A stochastic notion of centroid bodies was developed in [28], which defines a random variant Z_{m}(L) of Z(L) as the body with support function

hZm(L)(u)=1mi=1m|Xi,u|,\displaystyle h_{Z_{m}(L)}(u)=\frac{1}{m}\sum_{i=1}^{m}\lvert\langle X_{i},u\rangle\rvert,

where X1,,XmX_{1},\ldots,X_{m} are independent and identically distributed according to the normalized Lebesgue measure on LL. Our next result concerns mixed projection bodies of independent empirical centroid bodies Zm(K1),,Zm(Kn1)Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1}) whose support function is given by

hΠ(Zm(K1),,Zm(Kn1))(y)\displaystyle h_{\Pi(Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1}))}(y) =nV(Zm(K1),,Zm(Kn1),[0,y]).\displaystyle=nV(Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1}),[0,y]).
Corollary 1.3

Let K1,,Kn1K_{1},\dots,K_{n-1} be convex bodies in n\mathbb{R}^{n} and let Zm(K1),,Zm(Kn1)Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1}) be independent empirical centroid bodies. Then

𝔼ν(Π(Zm(K1),,Zm(Kn1)))𝔼ν(Π(Zm(K1),,Zm(Kn1))).\mathbb{E}\nu(\Pi^{\circ}(Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1})))\leq\mathbb{E}\nu(\Pi^{\circ}(Z_{m}(K_{1}^{*}),\ldots,Z_{m}(K_{n-1}^{*}))).

We note that when mm\to\infty,

V(Zm(K1),,Zm(Kn1),[0,y])c~nK1Kn1|det[x1,xn1,y]|𝑑x1𝑑xn1V(Z_{m}(K_{1}),\ldots,Z_{m}(K_{n-1}),[0,y])\to\tilde{c}_{n}\int_{K_{1}}\cdots\int_{K_{n-1}}\lvert\mathop{\rm det}[x_{1},\ldots x_{n-1},y]\rvert dx_{1}\ldots dx_{n-1} (7)

where we use [34, eq. (5.81)]. Haddad recently established a family of isoperimetric inequalities for a new class of convex bodies [12] that are defined using similar determinantal expressions as in (7) and their LpL_{p}-generalizations. Our work shows that such bodies arise naturally as limiting cases of mixed projection bodies of random sets in Theorem 1.1 when the CiC_{i}’s are chosen to be cubes.

1.3 Empirical mixed volume inequalities

We also present an alternate empirical version of Petty’s projection inequality. This approach is inspired by the proof of the inequality based on Busemann-Petty centroid inequality and Minkowski’s first inequality [22, 34]. We will use an empirical approximant of centroid bodies, defined as follows. For each convex body LL in n\mathbb{R}^{n}, we use the notation [L]m2=𝑿2Bm2[L]_{m_{2}}^{\infty}=\bm{X}_{2}B_{\infty}^{m_{2}}; this is nothing but the Minkowski sum of m2m_{2} random segments [X2j,X2j][-X_{2j},X_{2j}], where X21,,X2m2X_{21},\ldots,X_{2m_{2}} are independent random vectors sampled according to 1|L|𝟙L\frac{1}{\lvert L\rvert}\mathds{1}_{L}, i.e.,

[L]m2=j=1m2[X2j,X2j].[L]_{m_{2}}^{\infty}=\sum_{j=1}^{m_{2}}[-X_{2j},X_{2j}].
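On the level of support functions this identification is immediate: since the support function of a Minkowski sum is the sum of the support functions,

h_{[L]_{m_{2}}^{\infty}}(u)=\sum_{j=1}^{m_{2}}h_{[-X_{2j},X_{2j}]}(u)=\sum_{j=1}^{m_{2}}\lvert\langle X_{2j},u\rangle\rvert,\qquad u\in\mathbb{R}^{n},

which makes the identification with the empirical centroid body in the next sentence transparent.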

Note that 1m[L]m=Zm(L)\frac{1}{m}[L]_{m}^{\infty}=Z_{m}(L). Using this notation with L=Π(K)L=\Pi^{\circ}(K), we establish a sharp extremal inequality for the following quantity:

𝔼V1([K]m1,[Π(K)]m2)\displaystyle\mathbb{E}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})
=1|K|m11|Π(K)|m2Km1(Π(K))m2V1(conv{x11,,x1m1},j=1m2[x2j,x2j])𝑑x11𝑑x1m1𝑑x21𝑑x2m2;\displaystyle=\frac{1}{\lvert K\rvert^{m_{1}}}\frac{1}{\lvert\Pi^{\circ}(K)\rvert^{m_{2}}}\int\limits_{K^{m_{1}}}\int\limits_{(\Pi^{\circ}(K))^{m_{2}}}V_{1}\left(\mathop{\rm conv}\{x_{11},\ldots,x_{1m_{1}}\},\sum_{j=1}^{m_{2}}[-x_{2j},x_{2j}]\right)dx_{11}\ldots dx_{1m_{1}}dx_{21}\ldots dx_{2m_{2}};

here we implicitly assume that the m1m_{1} random vectors from KK and m2m_{2} random vectors from Π(K)\Pi^{\circ}(K) are independent. With these notational conventions, we have the following theorem.

Theorem 1.4

Let KK be a convex body in n\mathbb{R}^{n} and m1,m2nm_{1},m_{2}\geq n. Then

𝔼V1([K]m1,[Π(K)]m2)𝔼V1([K]m1,[(Π(K))]m2).\mathbb{E}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})\geq\mathbb{E}V_{1}([K^{*}]_{m_{1}},[\left(\Pi^{\circ}(K)\right)^{*}]_{m_{2}}^{\infty}).

As we will explain at the end of § 7, when m1,m2m_{1},m_{2}\rightarrow\infty, Theorem 1.4 also implies Petty’s inequality (1).

For the proof of Theorem 1.4 we first need to establish an empirical version of Minkowski’s first inequality which we believe is of independent interest. In fact, we establish a generalization of the latter, stated as follows.

Theorem 1.5

Let C1,,CnC_{1},\ldots,C_{n} be compact convex sets such that CimiC_{i}\subseteq\mathbb{R}^{m_{i}}, mi1m_{i}\geq 1, and set m=m1++mnm=m_{1}+\ldots+m_{n}. Let 𝐗\bm{X} and 𝐗#\bm{X}^{\scriptscriptstyle\#} be n×mn\times m random matrices with =n\ell=n in Table 1. Then

𝔼V(𝑿C1,,𝑿Cn)𝔼V(𝑿#C1,,𝑿#Cn).\mathbb{E}V({\bm{X}}C_{1},\ldots,{\bm{X}}C_{n})\geq\mathbb{E}V({\bm{X}}^{\scriptscriptstyle\#}C_{1},\ldots,{\bm{X}}^{\scriptscriptstyle\#}C_{n}).

Consequently, for any k=1,,n1k=1,\ldots,n-1,

𝔼V(𝑿C1,,𝑿Ck;B2n,nk)𝔼V(𝑿#C1,,𝑿#Ck;B2n,nk).\mathbb{E}V({\bm{X}}C_{1},\ldots,{\bm{X}}C_{k};B_{2}^{n},n-k)\geq\mathbb{E}V({\bm{X}}^{\scriptscriptstyle\#}C_{1},\ldots,{\bm{X}}^{\scriptscriptstyle\#}C_{k};B_{2}^{n},n-k).

Theorem 1.4 follows directly from the latter theorem since

𝔼\displaystyle\mathbb{E} V1([K]m1,[Π(K)]m2)=𝔼V1(𝑿1Sm1,𝑿2Bm2).\displaystyle V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})=\mathbb{E}V_{1}(\bm{X}_{1}S_{m_{1}},{\bm{X}}_{2}B^{m_{2}}_{\infty}).

In each of Theorems 1.1, 1.4 and 1.5, we have used a single matrix 𝑿{\bm{X}} with columns arranged in blocks 𝑿i{\bm{X}}_{i} according to Table 1, and multiple bodies C1,,CnC_{1},\ldots,C_{n}. When the CiC_{i} are all equal to a given compact convex set CC in m1\mathbb{R}^{m_{1}}, we have

V(𝑿C1,,𝑿Cn)=V(𝑿1C,,𝑿1C)=|𝑿1C|,V({\bm{X}}C_{1},\ldots,{\bm{X}}C_{n})=V({\bm{X}}_{1}C,\ldots,{\bm{X}}_{1}C)=\lvert{\bm{X}}_{1}C\rvert,

and only the first block 𝑿1{\bm{X}}_{1} is involved in the expression; in particular, the block matrices in the above mixed entry functional are dependent. When the CiC_{i}’s are compact convex sets placed in consecutive orthogonal subspaces mi\mathbb{R}^{m_{i}}, then the use of independent blocks 𝑿i{\bm{X}}_{i} allows for distinct entries in

V(𝑿C1,,𝑿Cn)=V(𝑿1C1,,𝑿nCn)V({\bm{X}}C_{1},\ldots,{\bm{X}}C_{n})=V({\bm{X}}_{1}C_{1},\ldots,{\bm{X}}_{n}C_{n})

and all blocks 𝑿i{\bm{X}}_{i} of 𝑿{\bm{X}} are used (and are independent). The block notation for 𝑿{\bm{X}} also accommodates scenarios between these two extremes, where some of the CiC_{i}’s are repeated while others are taken in orthogonal subspaces.

Acknowledgements. We would like to thank Mark Rudelson and Franz Schuster for helpful discussions. The second and third authors are also grateful to ICERM for excellent working conditions, where they participated in the program “Harmonic Analysis and Convexity”.

Funding. The first-named author was supported by NSF grants DMS 1800633, CCF 1900881, DMS 2405441, Simons Foundation Fellowship #823432 and Simons Foundation Collaboration grant #964286. The second-named author was supported by NSF grant DMS-2105468 and Simons Foundation grant #635531. The third-named author was supported in part by NSERC Grant no 2022-02961.

2 Preliminaries

2.1 Convex bodies and mixed volumes

We work in Euclidean space n\mathbb{R}^{n} and use |||\cdot| for the nn-dimensional Lebesgue measure. The unit Euclidean ball in n\mathbb{R}^{n} is B2nB^{n}_{2}, while the unit sphere is Sn1S^{n-1}. We use PHP_{H} to denote the orthogonal projection onto a subspace HH. We write u={xn:x,u=0}u^{\perp}=\{x\in\mathbb{R}^{n}:\ \langle x,u\rangle=0\} for the (n1)(n-1)-dimensional subspace of n\mathbb{R}^{n} that is orthogonal to uSn1u\in S^{n-1}.

A convex body KnK\subset\mathbb{R}^{n} is a compact, convex set with non-empty interior. The set of all compact, convex sets is denoted by 𝒦n\mathcal{K}^{n}. We say that KK is origin-symmetric if K=KK=-K. We also say that KK is 1-unconditional if KK is symmetric with respect to reflections in the standard coordinate hyperplanes.

The support function of K𝒦nK\in\mathcal{K}^{n} is defined by

hK(x)=maxyKx,y,xn.h_{K}(x)=\max\limits_{y\in K}\left<x,y\right>,\quad x\in\mathbb{R}^{n}.

The polar body KK^{\circ} of an origin-symmetric convex body KK in n\mathbb{R}^{n} is defined as K={xn:hK(x)1}K^{\circ}=\{x\in\mathbb{R}^{n}:\ h_{K}(x)\leq 1\}. The gauge function (or Minkowski functional) of an origin-symmetric convex body KK is defined as xK=inf{t0:xtK}||x||_{K}=\inf\{t\geq 0:\ x\in tK\}. If KK contains the origin in its interior, then xK=hK(x)||x||_{K}=h_{K^{\circ}}(x).
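A standard example to keep in mind: for K=B_{p}^{n} with 1\leq p\leq\infty and 1/p+1/q=1, Hölder's inequality gives

h_{B_{p}^{n}}(x)=\max_{\lVert y\rVert_{p}\leq 1}\langle x,y\rangle=\lVert x\rVert_{q},\qquad\text{so that}\qquad(B_{p}^{n})^{\circ}=\{x\in\mathbb{R}^{n}:\lVert x\rVert_{q}\leq 1\}=B_{q}^{n},

where \lVert\cdot\rVert_{p} denotes the \ell_{p}-norm; accordingly, \lVert x\rVert_{B_{p}^{n}}=h_{(B_{p}^{n})^{\circ}}(x)=\lVert x\rVert_{p}.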

The Minkowski combination of K1,,Km𝒦nK_{1},\dots,K_{m}\in\mathcal{K}^{n} is defined as

λ1K1++λmKm={λ1x1++λmxm:xiKi}\lambda_{1}K_{1}+\dots+\lambda_{m}K_{m}=\{\lambda_{1}x_{1}+\dots+\lambda_{m}x_{m}:\ x_{i}\in K_{i}\}

where \lambda_{1},\dots,\lambda_{m}\geq 0. Minkowski's theorem on the volume of Minkowski combinations states that

|λ1K1++λmKm|=i1,,in=1mλi1λinV(Ki1,,Kin).|\lambda_{1}K_{1}+\dots+\lambda_{m}K_{m}|=\sum\limits_{i_{1},\dots,i_{n}=1}^{m}\lambda_{i_{1}}\cdots\lambda_{i_{n}}V(K_{i_{1}},\dots,K_{i_{n}}).

The coefficient V(K_{i_{1}},\dots,K_{i_{n}}) is the mixed volume of K_{i_{1}},\dots,K_{i_{n}}; when the last body K_{i_{n}} appears \ell times, we write V(K_{i_{1}},\ldots,K_{i_{n-\ell}};K_{i_{n}},\ell), where 1\leq\ell\leq n. For K,L\in\mathcal{K}^{n} and 0\leq i\leq n, we write V_{i}(K,L) to denote the mixed volume of K repeated n-i times and L repeated i times.
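For example, in the plane (n=2) with two bodies, this is the familiar expansion

|\lambda_{1}K_{1}+\lambda_{2}K_{2}|=\lambda_{1}^{2}|K_{1}|+2\lambda_{1}\lambda_{2}V(K_{1},K_{2})+\lambda_{2}^{2}|K_{2}|,

using that V(K,\dots,K)=|K| and that mixed volumes are symmetric in their arguments.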

If K_{1},\dots,K_{n-1} are convex bodies in \mathbb{R}^{n} and u\in S^{n-1}, then we write v(P_{u^{\perp}}K_{1},\dots,P_{u^{\perp}}K_{n-1}) for the (n-1)-dimensional mixed volume of P_{u^{\perp}}K_{1},\dots,P_{u^{\perp}}K_{n-1} in u^{\perp}. It is known (see e.g. [34, p. 302]) that for u\in S^{n-1},

v(PuK1,,PuKn1)=nV(K1,,Kn1,[0,u])v(P_{u^{\perp}}K_{1},\dots,P_{u^{\perp}}K_{n-1})=nV(K_{1},\dots,K_{n-1},[0,u]) (8)

where [0,u][0,u] denotes the line segment connecting the origin and uu.

The projection body ΠK\Pi K of a convex body KK is defined as the origin-symmetric convex body such that hΠK(u)=|PuK|h_{\Pi K}(u)=|P_{u^{\perp}}K| for all uSn1u\in S^{n-1}. It follows from (8) that hΠK(u)=nV1(K,[0,u])h_{\Pi K}(u)=nV_{1}(K,[0,u]) for uSn1u\in S^{n-1}. We will denote the polar projection body (ΠK)(\Pi K)^{\circ} by ΠK\Pi^{\circ}K.
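As a quick illustration (a standard computation), for the unit-volume cube Q=[-\tfrac{1}{2},\tfrac{1}{2}]^{n} one has |P_{u^{\perp}}Q|=\sum_{i=1}^{n}|u_{i}| for u\in S^{n-1}, so that

h_{\Pi Q}(u)=\sum_{i=1}^{n}|u_{i}|=h_{[-1,1]^{n}}(u),\qquad\text{hence}\qquad\Pi Q=[-1,1]^{n}\quad\text{and}\quad\Pi^{\circ}Q=B_{1}^{n},

where B_{1}^{n} is the unit cross-polytope.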

More generally, the mixed projection body Π(K1,,Kn1)\Pi(K_{1},\dots,K_{n-1}) of the convex bodies K1,,Kn1K_{1},\dots,K_{n-1} is defined by

hΠ(K1,,Kn1)(u)=v(PuK1,,PuKn1)=nV(K1,,Kn1,[0,u])h_{\Pi(K_{1},\dots,K_{n-1})}(u)=v(P_{u^{\perp}}K_{1},\dots,P_{u^{\perp}}K_{n-1})=nV(K_{1},\dots,K_{n-1},[0,u])

for all uSn1,u\in S^{n-1}, where we used (8) in the last identity. The polar of the mixed projection body Π(K1,,Kn1)\Pi^{\circ}(K_{1},\dots,K_{n-1}) is defined as (Π(K1,,Kn1))(\Pi(K_{1},\dots,K_{n-1}))^{\circ} so that

θΠ(K1,,Kn1)=nV(K1,,Kn1,[0,θ]).||\theta||_{\Pi^{\circ}(K_{1},\dots,K_{n-1})}=nV(K_{1},\dots,K_{n-1},[0,\theta]).

For background on mixed projection bodies, see [21] and [34].

2.2 LpL_{p} and MM-addition operations

We recall the notion of LpL_{p}-addition of convex bodies from LpL_{p}-Brunn–Minkowski theory, e.g. [8, 20, 23]. For K,L𝒦nK,L\in\mathcal{K}^{n} containing the origin and p1p\geq 1, we will write K+pLK+_{p}L for their LpL_{p}-sum, i.e.,

hK+pLp(u)=hKp(u)+hLp(u),un.h^{p}_{K+_{p}L}(u)=h^{p}_{K}(u)+h^{p}_{L}(u),\quad u\in\mathbb{R}^{n}. (9)

A general framework for addition of sets, called MM-addition, was developed by Gardner, Hug and Weil [10]. Let MM be an arbitrary subset of m\mathbb{R}^{m} and define the MM-combination M(K1,,Km)\oplus_{M}(K_{1},\dots,K_{m}) of arbitrary sets K1,,KmK_{1},\dots,K_{m} in n\mathbb{R}^{n} by

M(K1,,Km)={i=1maixi:xiKi,(a1,,am)M}.\oplus_{M}(K_{1},\dots,K_{m})=\left\{\sum_{i=1}^{m}a_{i}x_{i}:x_{i}\in K_{i},\,(a_{1},\dots,a_{m})\in M\right\}.

If M=\{(1,1)\} and K_{1} and K_{2} are convex sets, then K_{1}\oplus_{M}K_{2}=K_{1}+K_{2}, i.e., \oplus_{M} is the usual Minkowski addition. If M=B_{q}^{2}, q\geq 1 with 1/p+1/q=1, and K_{1} and K_{2} are origin-symmetric convex bodies, then K_{1}\oplus_{M}K_{2}=K_{1}+_{p}K_{2}, i.e., \oplus_{M} corresponds to L_{p}-addition as in (9). More generally, let \psi:[0,\infty)^{2}\to[0,\infty) be convex, increasing in each argument, with \psi(0,0)=0 and \psi(1,0)=\psi(0,1)=1. Let K and L be origin-symmetric convex bodies and let M=B_{\psi}^{\circ}, where B_{\psi}=\{(t_{1},t_{2})\in[-1,1]^{2}:\psi(|t_{1}|,|t_{2}|)\leq 1\}. Then we define K+_{\psi}L to be K\oplus_{M}L. In this way, M-addition encompasses previous notions of Orlicz addition, e.g. [25].
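To see that the choice M=B_{q}^{2} indeed recovers (9), note that for origin-symmetric K and L, the definition of \oplus_{M} together with Hölder's inequality (duality of the \ell_{p} and \ell_{q} norms on \mathbb{R}^{2}) gives

h_{K\oplus_{M}L}(u)=\sup_{(a,b)\in B_{q}^{2}}\bigl(|a|\,h_{K}(u)+|b|\,h_{L}(u)\bigr)=\bigl(h_{K}(u)^{p}+h_{L}(u)^{p}\bigr)^{1/p}=h_{K+_{p}L}(u),\qquad u\in\mathbb{R}^{n}.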

It was shown in [10, Section 6] that when MM is 1-unconditional and K1,,KmK_{1},\ldots,K_{m} are origin-symmetric and convex, then M(K1,,Km)\oplus_{M}(K_{1},\dots,K_{m}) is origin-symmetric and convex. For our purposes, for such MM and C1,,CmC_{1},\ldots,C_{m}, we have

M(𝐗1C1,,𝐗nCn)=[𝐗1𝐗n]M(C1,,Cn);\oplus_{M}({\bf X}_{1}C_{1},\ldots,{\bf X}_{n}C_{n})=[{\bf X}_{1}\cdots{\bf X}_{n}]\oplus_{M}(C_{1},\ldots,C_{n}); (10)

see [29, Sections 4 and 5] for further background, details and references.

2.3 Symmetrization of sets and functions

Let K𝒦nK\in\mathcal{K}^{n} and uSn1u\in S^{n-1}. We define fK:PuKf_{K}:P_{u^{\perp}}K\rightarrow\mathbb{R} by

fK(x)=sup{λ:x+λuK}f_{K}(x)=\sup\{\lambda:x+\lambda u\in K\}

and gK:PuKg_{K}:P_{u^{\perp}}K\rightarrow\mathbb{R} by

gK(x)=inf{λ:x+λuK}.g_{K}(x)=\inf\{\lambda:x+\lambda u\in K\}.

Notice that fK-f_{K} and gKg_{K} are convex functions.

The Steiner symmetral of a non-empty Borel set AnA\subseteq\mathbb{R}^{n} of finite measure with respect to uu^{\perp}, denoted here by SuAS_{u^{\perp}}A, is constructed as follows: for each line ll orthogonal to uu^{\perp} such that lAl\cap A is non-empty and measurable, the set lSuAl\cap S_{u^{\perp}}A is a closed segment with midpoint on uu^{\perp} and length equal to the one-dimensional measure of lAl\cap A. In particular, if KK is a convex body

SuK={x+λu:xPuK,fK(x)gK(x)2λfK(x)gK(x)2}.S_{u^{\perp}}K=\left\{x+\lambda u:x\in P_{u^{\perp}}K,-\dfrac{f_{K}(x)-g_{K}(x)}{2}\leq\lambda\leq\dfrac{f_{K}(x)-g_{K}(x)}{2}\right\}.

This shows that SuKS_{u^{\perp}}K is convex, since the function fKgKf_{K}-g_{K} is concave. Moreover, SuKS_{u^{\perp}}K is symmetric with respect to uu^{\perp}, it is closed, and by Fubini’s theorem it has the same volume as KK.

For a Borel set AnA\subset\mathbb{R}^{n} with finite volume, the symmetric rearrangement AA^{*} of AA is the open Euclidean ball centered at the origin whose volume is equal to the volume of AA. The symmetric decreasing rearrangement of 𝟙A\mathds{1}_{A} is defined as 𝟙A=𝟙A\mathds{1}_{A}^{*}=\mathds{1}_{A^{*}}. It will be convenient to use the following bracket notation for indicator functions:

𝟙A(x)=xA.\mathds{1}_{A}(x)=\left\llbracket x\in A\right\rrbracket. (11)

Let f:n+f:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+} be an integrable function. Its layer-cake representation is given by

f(x)=0x{f>t}dt.f(x)=\int_{0}^{\infty}\left\llbracket x\in\{f>t\}\right\rrbracket dt. (12)

The symmetric decreasing rearrangement ff^{*} of ff is defined by

f(x)=0x{f>t}dt.f^{*}(x)=\int\limits_{0}^{\infty}\left\llbracket x\in\{f>t\}^{*}\right\rrbracket dt.

The function ff^{*} is radially-symmetric, radially decreasing and equimeasurable with ff, i.e. {f>t}\{f>t\} and {f>t}\{f^{*}>t\} have the same volume for each t>0.t>0. For integrable functions ff, the Steiner symmetral f(|u)f^{*}(\cdot|u) of ff with respect to uu^{\perp} is defined as follows:

f^{*}(x|u)=\int_{0}^{\infty}\left\llbracket x\in S_{u^{\perp}}\{f>t\}\right\rrbracket dt.

In other words, we obtain f(|u)f^{*}(\cdot|u) by rearranging ff along every line parallel to uu. For more background on rearrangements, see [17].

3 Convexity and rearrangement inequalities

3.1 Shadow systems

Rogers–Shephard [32] and Shephard [36] systematized the use of Steiner symmetrization as a means of proving geometric inequalities with their introduction of linear parameter systems and shadow systems, respectively. A linear parameter system is a family of sets

Kt=conv{xi+αitu:iI},K_{t}=\mathop{\rm conv}\{x_{i}+\alpha_{i}tu:i\in I\}, (13)

where {αi}iI\{\alpha_{i}\}_{i\in I} and {xi}iI\{x_{i}\}_{i\in I} are bounded sets, and II is an index set. For a unit vector unu\in\mathbb{R}^{n} and a convex body 𝒞\mathcal{C} in n+1=n\mathbb{R}^{n+1}=\mathbb{R}^{n}\oplus\mathbb{R}, a shadow system is a family of sets of the form

Kt=Pt𝒞,K_{t}=P_{t}\mathcal{C},

where Pt:n+1nP_{t}:\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n} is the projection parallel to en+1tue_{n+1}-tu. Setting

𝒞=conv{x+α(x)en+1:xKen+1}\mathcal{C}=\mathop{\rm conv}\{x+\alpha(x)e_{n+1}:x\in K\subseteq e_{n+1}^{\perp}\}

where α\alpha is a bounded function on KK, gives rise to the shadow system for t[0,1]t\in[0,1],

K_{t}=\mathop{\rm conv}\{x+t\alpha(x)u:x\in K\}.

The choice \alpha(x)=-g_{K}(P_{u^{\perp}}x)-f_{K}(P_{u^{\perp}}x) has the property that K_{0}=K, while K_{1/2} is the Steiner symmetral of K about u^{\perp}, and K_{1} is the reflection of K about u^{\perp}. For background on linear parameter systems and shadow systems, we refer to [34, 4].
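To verify these three claims, write a point of K as x=y+\lambda u with y=P_{u^{\perp}}x and g_{K}(y)\leq\lambda\leq f_{K}(y); since \alpha depends only on y, the point x moves along

x+t\alpha(x)u=y+\bigl(\lambda-t(f_{K}(y)+g_{K}(y))\bigr)u.

At t=0 this is x; at t=\tfrac{1}{2} the coefficient of u ranges over [-\tfrac{f_{K}(y)-g_{K}(y)}{2},\tfrac{f_{K}(y)-g_{K}(y)}{2}] as \lambda varies, which is precisely the fiber of S_{u^{\perp}}K over y; and at t=1 it ranges over [-f_{K}(y),-g_{K}(y)], giving the reflection of K about u^{\perp}.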

We will make essential use of the following fundamental theorem of Shephard [36].

Theorem 3.1

Let Kt1,,KtnK_{t}^{1},\ldots,K_{t}^{n} be shadow systems in common direction uu. Then

[0,1]tV(Kt1,,Ktn)[0,1]\ni t\mapsto V(K_{t}^{1},\ldots,K_{t}^{n})

is convex.

3.2 Analytic tools

A non-negative, non-identically zero function ff is called log-concave if logf\log f is concave on {f>0}\{f>0\}. We note that if ff is a convex function, then the function f(x)1\left\llbracket f(x)\leq 1\right\rrbracket is log\log-concave. Also, ff is quasi-concave if for all tt the set {x:f(x)>t}\{x:f(x)>t\} is convex, and ff is quasi-convex if for all tt the set {x:f(x)t}\{x:f(x)\leq t\} is convex.

The Prékopa–Leindler inequality states that for 0<λ<10<\lambda<1 and functions f,g,h:n+f,g,h:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+} such that for any x,ynx,y\in\mathbb{R}^{n}

h(λx+(1λ)y)f(x)λg(y)1λ,h(\lambda x+(1-\lambda)y)\geq f(x)^{\lambda}g(y)^{1-\lambda},

the following inequality holds

nh(nf)λ(ng)1λ.\int_{\mathbb{R}^{n}}h\geq\left(\int_{\mathbb{R}^{n}}f\right)^{\lambda}\left(\int_{\mathbb{R}^{n}}g\right)^{1-\lambda}.

We will use the following consequence of the Prékopa–Leindler inequality: if f:n×m+f:\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}_{+} is log\log-concave, then

g(x)=\int_{\mathbb{R}^{m}}f(x,y)\,dy

is a log\log-concave function on n\mathbb{R}^{n}; see [13].
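The derivation of this consequence is short, and we sketch it for completeness. Fix x_{1},x_{2}\in\mathbb{R}^{n} and \lambda\in(0,1); log-concavity of f gives

f(\lambda x_{1}+(1-\lambda)x_{2},\lambda y_{1}+(1-\lambda)y_{2})\geq f(x_{1},y_{1})^{\lambda}f(x_{2},y_{2})^{1-\lambda},\qquad y_{1},y_{2}\in\mathbb{R}^{m},

so applying the Prékopa–Leindler inequality to the three functions y\mapsto f(\lambda x_{1}+(1-\lambda)x_{2},y), y\mapsto f(x_{1},y) and y\mapsto f(x_{2},y) yields g(\lambda x_{1}+(1-\lambda)x_{2})\geq g(x_{1})^{\lambda}g(x_{2})^{1-\lambda}, i.e., g is \log-concave.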

We also make use of Christ’s form [5] of the Rogers–Brascamp–Lieb–Luttinger inequality; see the survey [29] for the related inequalities, their applications and further references.

Theorem 3.2

Let f_{1},\dots,f_{m}:\mathbb{R}^{n}\rightarrow\mathbb{R}_{+} be integrable functions and let F:(\mathbb{R}^{n})^{m}\rightarrow\mathbb{R}_{+}. Suppose that F satisfies the following condition: for any u\in S^{n-1} and for any \omega=(\omega_{1},\dots,\omega_{m})\in(u^{\perp})^{m}, the function F_{u,\omega}:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+} defined by F_{u,\omega}(t_{1},\dots,t_{m})=F(\omega_{1}+t_{1}u,\dots,\omega_{m}+t_{m}u) is even and quasi-concave. Then

nnF(x1,,xm)i=1mfi(xi)dx1dxmnnF(x1,,xm)i=1mfi(xi)dx1dxm.\displaystyle\int\limits_{\mathbb{R}^{n}}\cdots\int\limits_{\mathbb{R}^{n}}F(x_{1},\dots,x_{m})\prod_{i=1}^{m}f_{i}(x_{i})dx_{1}\dots dx_{m}\leq\int\limits_{\mathbb{R}^{n}}\cdots\int\limits_{\mathbb{R}^{n}}F(x_{1},\dots,x_{m})\prod_{i=1}^{m}f^{*}_{i}(x_{i})dx_{1}\dots dx_{m}.

When each Fu,ωF_{u,\omega} is even and quasi-convex, then the inequality in Theorem 3.2 is reversed.

For subsequent reference, we note that the theorem is proved by iterated Steiner symmetrization and the key step involves Steiner symmetrization as follows:

nnF(x1,,xm)i=1mfi(xi)dx1dxm=\displaystyle\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}F(x_{1},\dots,x_{m})\prod_{i=1}^{m}f_{i}(x_{i})dx_{1}\dots dx_{m}=
=(u)mmFu,ω(t1,,tm)i=1mfi(ωi+tiu)dt1dtmdω1dωm\displaystyle=\int_{(u^{\perp})^{m}}\int_{\mathbb{R}^{m}}F_{u,\omega}(t_{1},\dots,t_{m})\prod_{i=1}^{m}f_{i}(\omega_{i}+t_{i}u)dt_{1}\dots dt_{m}d\omega_{1}\dots d\omega_{m}
(u)mmFu,ω(t1,,tm)i=1mfi(ωi+tiu|u)dt1dtmdω1dωm\displaystyle\leq\int_{(u^{\perp})^{m}}\int_{\mathbb{R}^{m}}F_{u,\omega}(t_{1},\dots,t_{m})\prod_{i=1}^{m}f^{*}_{i}(\omega_{i}+t_{i}u|u)dt_{1}\dots dt_{m}d\omega_{1}\dots d\omega_{m}
=nnF(x1,,xm)i=1mfi(xi|u)dx1dxm,\displaystyle=\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}F(x_{1},\dots,x_{m})\prod_{i=1}^{m}f^{*}_{i}(x_{i}|u)dx_{1}\dots dx_{m},

where fi(|u)f_{i}^{*}(\cdot|u) is the Steiner symmetral of fif_{i} in direction uu.

4 Minimizing the mixed volume of random convex sets

The proof of Theorem 1.5 relies on Theorem 3.1 about the convexity of mixed volumes of shadow systems along a common direction. Here we show how this interfaces with the use of random linear operators.

Proof of Theorem 1.5. We start by associating a shadow system to linear images of convex sets. To fix the notation, let CC be a compact convex set in m\mathbb{R}^{m} and let x1,,xmnx_{1},\ldots,x_{m}\in\mathbb{R}^{n}. We will attach a shadow system in direction uSn1u\in S^{n-1} to the set

[x1xm]C={i=1mcixi:(ci)C}.[x_{1}\ldots x_{m}]C=\left\{\sum_{i=1}^{m}c_{i}x_{i}:(c_{i})\in C\right\}.

Decompose xix_{i} as xi=ωi+tiux_{i}=\omega_{i}+t_{i}u, where ωiu\omega_{i}\in u^{\perp} for i=1,,mi=1,\ldots,m. Let 𝝎=(ω1,,ωm){\bm{\omega}}=(\omega_{1},\ldots,\omega_{m}). For 𝐭=(t1,,tm)m{\bf t}=(t_{1},\ldots,t_{m})\in\mathbb{R}^{m}, we form the n×mn\times m matrix

T_{{\bm{\omega}}}({\bf t})=[\omega_{1}+t_{1}u\dots\omega_{m}+t_{m}u].

Fix 𝐭,𝐭m{\bf t},{\bf t}^{\prime}\in\mathbb{R}^{m} and c=(cj)Cc=(c_{j})\in C. Then for each λ[0,1]\lambda\in[0,1],

T𝝎(λ𝐭+(1λ)𝐭)c\displaystyle T_{{\bm{\omega}}}(\lambda{\bf t}+(1-\lambda){\bf t}^{\prime})c =\displaystyle= j=1mcj(ωj+(λtj+(1λ)tj)u)\displaystyle\sum_{j=1}^{m}c_{j}(\omega_{j}+(\lambda t_{j}+(1-\lambda)t_{j}^{\prime})u)
=\sum_{j=1}^{m}c_{j}(\omega_{j}+t_{j}^{\prime}u)+\lambda\Bigl(\sum_{j=1}^{m}c_{j}(t_{j}-t_{j}^{\prime})\Bigr)u.

For cCc\in C, we write xc=j=1mcj(ωj+tju)x_{c}={\sum_{j=1}^{m}c_{j}(\omega_{j}+t_{j}^{\prime}u)} and αc=j=1mcj(tjtj)\alpha_{c}=\sum_{j=1}^{m}c_{j}(t_{j}-t_{j}^{\prime}) so that

T𝝎(λ𝐭+(1λ)𝐭)C={xc+λαcu:cC}.T_{{\bm{\omega}}}(\lambda{\bf t}+(1-\lambda){\bf t}^{\prime})C=\left\{x_{c}+\lambda\alpha_{c}u:c\in C\right\}. (14)

As a linear image of the convex set CC, the latter is convex and hence takes the form of a linear parameter system in (13), which is indexed by CC and generated by the bounded sets {xc}cC\{x_{c}\}_{c\in C} and {αc}cC\{\alpha_{c}\}_{c\in C}.

Similarly, assume we have nn compact convex sets C1,,CnC_{1},\ldots,C_{n} and xij=ωij+tijux_{ij}=\omega_{ij}+t_{ij}u, where ωiju\omega_{ij}\in u^{\perp}. For i=1,,ni=1,\ldots,n, let 𝝎i=(ωi1,,ωim){\bm{\omega}}_{i}=(\omega_{i1},\ldots,\omega_{im}). For 𝐭¯=(𝐭1,,𝐭n)m1××mn\underline{{\bf t}}=({\bf t}_{1},\ldots,{\bf t}_{n})\in\mathbb{R}^{m_{1}}\times\dots\times\mathbb{R}^{m_{n}}, we write 𝐭i=(ti1,,timi)mi{\bf t}_{i}=(t_{i1},\ldots,t_{im_{i}})\in\mathbb{R}^{m_{i}} and set

T_{{\bm{\omega}}_{i}}^{i}({\bf t}_{i})=[\omega_{i1}+t_{i1}u\dots\omega_{im_{i}}+t_{im_{i}}u].

We first consider the case when C_{1},\ldots,C_{n} lie in mutually orthogonal subspaces; then \bm{X}C_{i}=\bm{X}_{i}C_{i} and the quantity under consideration is

𝔼\displaystyle\mathbb{E} V(𝑿1C1,,𝑿nCn)\displaystyle V({\bm{X}}_{1}C_{1},\ldots,{\bm{X}}_{n}C_{n})
=\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}V([x_{11}\cdots x_{1m_{1}}]C_{1},\dots,[x_{n1}\cdots x_{nm_{n}}]C_{n})\prod_{i=1}^{n}\prod_{j=1}^{m_{i}}f_{ij}(x_{ij})\,dx_{11}\cdots dx_{nm_{n}}. (15)

As we will apply Theorem 3.2, it is sufficient to show that

m1××mn(𝐭1,,𝐭n)V(T𝝎11(𝐭1)C1,,T𝝎nn(𝐭n)Cn)\mathbb{R}^{m_{1}}\times\cdots\times\mathbb{R}^{m_{n}}\ni({\bf t}_{1},\ldots,{\bf t}_{n})\mapsto V(T_{{\bm{\omega}}_{1}}^{1}({\bf t}_{1})C_{1},\ldots,T_{{\bm{\omega}}_{n}}^{n}({\bf t}_{n})C_{n})

is convex. We need only show that the function is convex on any line joining given points 𝐭¯=(𝐭1,,𝐭n)\underline{{\bf t}}=({\bf t}_{1},\ldots,{\bf t}_{n}) and 𝐭¯=(𝐭1,,𝐭n)\underline{{\bf t}}^{\prime}=({\bf t}_{1}^{\prime},\ldots,{\bf t}_{n}^{\prime}) in m1××mn\mathbb{R}^{m_{1}}\times\cdots\times\mathbb{R}^{m_{n}}, i.e., we need only to establish convexity of

[0,1]λV(T𝝎11(λ𝐭1+(1λ)𝐭1)C1,,T𝝎nn(λ𝐭n+(1λ)𝐭n)Cn).[0,1]\ni\lambda\mapsto V(T_{{\bm{\omega}}_{1}}^{1}(\lambda{\bf t}_{1}+(1-\lambda){\bf t}_{1}^{\prime})C_{1},\ldots,T_{{\bm{\omega}}_{n}}^{n}(\lambda{\bf t}_{n}+(1-\lambda){\bf t}_{n}^{\prime})C_{n}).

By the discussion at the beginning of this section, each argument in the mixed volume is a shadow system in the common direction uu. Therefore, we can apply Theorem 3.1 to obtain the required convexity in λ\lambda.

When the C_{i} are not in mutually orthogonal subspaces, the sets \bm{X}C_{1},\ldots,\bm{X}C_{n} share some common columns. The proof then applies verbatim, but on a smaller product space. For example, in the case when all C_{i} are identical, the mixed volume reduces to the volume and one works with

\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\lvert[x_{11}\cdots x_{1m_{1}}]C_{1}\rvert\prod_{j=1}^{m_{1}}f_{1j}(x_{1j})\,dx_{11}\ldots dx_{1m_{1}};

here the product space involves only the first m1m_{1} random vectors. As Theorem 3.1 concerns any nn shadow systems (whether or not some are identical), it remains applicable in this case.

\scriptstyle\blacksquare

5 An empirical Petty projection inequality for measures

While Theorem 1.2 follows directly from Theorem 1.1, we will first prove the former for simplicity of exposition.

Proof of Theorem 1.2. We will first assume that \nu is a rotationally invariant, log-concave measure on \mathbb{R}^{n} with density \rho(x), i.e. d\nu(x)=\rho(x)dx. For u\in S^{n-1} and y\in u^{\perp}, let \rho_{y}(s)=\rho(y+su) be the restriction of \rho to the line that passes through y parallel to u. We note that, since \nu is a rotationally invariant, log-concave measure, \rho_{y}(s) is even and log-concave for any fixed y\in u^{\perp}.

Fix uSn1u\in S^{n-1} and ω1,,ωmu\omega_{1},\ldots,\omega_{m}\in u^{\perp}. As in §4, we write T𝝎(𝐭)=[ω1+t1uωm+tmu]T_{\bm{\omega}}({\bf t})=[\omega_{1}+t_{1}u\dots\omega_{m}+t_{m}u]. Using the notation for indicator functions (11), we have

\nu(\Pi^{\circ}(T_{\bm{\omega}}({\bf t})C)) = \int\limits_{u^{\perp}}\int\limits_{\mathbb{R}}\left\llbracket y+su\in\Pi^{\circ}(T_{\bm{\omega}}({\bf t})C)\right\rrbracket\rho_{y}(s)\,ds\,dy (16)
= \int\limits_{u^{\perp}}\int\limits_{\mathbb{R}}\left\llbracket nV_{1}(T_{\bm{\omega}}({\bf t})C,[0,y+su])\leq 1\right\rrbracket\rho_{y}(s)\,ds\,dy. (17)

Fix y\in u^{\perp}, set h({\bf t},s)=nV_{1}(T_{\bm{\omega}}({\bf t})C,\,[0,y+su]), and define

Gy(𝐭)=h(𝐭,s)1ρy(s)ds.G_{y}({\bf t})=\int\limits_{\mathbb{R}}\left\llbracket h({\bf t},s)\leq 1\right\rrbracket\,\rho_{y}(s)ds.

Note that hh is jointly convex in 𝐭{\bf t} and ss. To see this, it is sufficient to show that, given any two points (𝐭,s)({\bf t},s) and (𝐭,s)({\bf t}^{\prime},s^{\prime}) in m×\mathbb{R}^{m}\times\mathbb{R}, the restriction of hh to the line segment [0,1]λλ(𝐭,s)+(1λ)(𝐭,s)[0,1]\ni\lambda\mapsto\lambda({\bf t},s)+(1-\lambda)({\bf t}^{\prime},s^{\prime}) is convex. Set

f(λ)=h(λ𝐭+(1λ)𝐭,λs+(1λ)s),f(\lambda)=h(\lambda{\bf t}+(1-\lambda){\bf t}^{\prime},\lambda s+(1-\lambda)s^{\prime}),

and observe that

f(\lambda)=nV_{1}(T_{\bm{\omega}}(\lambda({\bf t}-{\bf t}^{\prime})+{\bf t}^{\prime})C,\,[0,y+(\lambda(s-s^{\prime})+s^{\prime})u]).

Each of the arguments is a shadow system in the common direction u, as observed above. Thus the convexity of f(\lambda), and hence that of h({\bf t},s), follows from Theorem 3.1.

Using the joint convexity of h in {\bf t} and s, we have that \left\llbracket h({\bf t},s)\leq 1\right\rrbracket is \log-concave. As G_{y} is a marginal of a \log-concave function on \mathbb{R}^{m}\times\mathbb{R}, it is also \log-concave by the Prékopa–Leindler inequality (see §3.2). In particular, G_{y} is quasi-concave.

Next, we note that hh is an even function. Indeed, the sets [ω1+t1u,,ωm+tmu]C\left[\omega_{1}+t_{1}u,\dots,\omega_{m}+t_{m}u\right]C and [ω1t1u,,ωmtmu]C\left[\omega_{1}-t_{1}u,\dots,\omega_{m}-t_{m}u\right]C are reflections of one another with respect to uu^{\perp}, hence

h(-{\bf t},-s) = nV_{1}(T_{\bm{\omega}}(-{\bf t})C,\,[0,y-su])
= nV_{1}(R_{u}(\left[\omega_{1}+t_{1}u,\dots,\omega_{m}+t_{m}u\right]C),\,R_{u}[0,y+su])
= nV_{1}(\left[\omega_{1}+t_{1}u,\dots,\omega_{m}+t_{m}u\right]C,\,[0,y+su])
= h({\bf t},s),

where RuR_{u} denotes the reflection with respect to uu^{\perp}. The rotational invariance of the density ρ\rho implies that ρy\rho_{y} is even in ss. Thus, by a change of variables, we have that GyG_{y} is even

Gy(𝐭)=h(𝐭,s)1ρy(s)ds=h(𝐭,s)1ρy(s)ds=h(𝐭,s)1ρy(s)ds=Gy(𝐭).\displaystyle G_{y}(-{\bf t})=\int\limits_{\mathbb{R}}\left\llbracket h(-{\bf t},s)\leq 1\right\rrbracket\rho_{y}(s)ds=\int\limits_{\mathbb{R}}\left\llbracket h({\bf t},-s)\leq 1\right\rrbracket\rho_{y}(s)ds=\int\limits_{\mathbb{R}}\left\llbracket h({\bf t},s)\leq 1\right\rrbracket\rho_{y}(-s)ds=G_{y}({\bf t}).

Thus for every uSn1u\in S^{n-1} and for every 𝝎\bm{\omega}, the function Gy(𝐭)G_{y}({\bf t}) is quasi-concave for every yuy\in u^{\perp}. We write

\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}C))\right) = \int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\nu(\Pi^{\circ}([x_{1}\cdots x_{m}]C))\prod_{i=1}^{m}f(x_{i})\,dx_{1}\cdots dx_{m}
= \int_{(u^{\perp})^{m}}\int_{\mathbb{R}^{m}}\nu(\Pi^{\circ}(T_{\bm{\omega}}({\bf t})C))\prod_{i=1}^{m}f(\omega_{i}+t_{i}u)\,d{\bf t}\,d{\bm{\omega}}
= \int_{(u^{\perp})^{m}}\int_{\mathbb{R}^{m}}\int_{u^{\perp}}G_{y}({\bf t})\,dy\prod_{i=1}^{m}f(\omega_{i}+t_{i}u)\,d{\bf t}\,d{\bm{\omega}}
= \int_{(u^{\perp})^{m}}\int_{u^{\perp}}\int_{\mathbb{R}^{m}}G_{y}({\bf t})\prod_{i=1}^{m}f(\omega_{i}+t_{i}u)\,d{\bf t}\,dy\,d{\bm{\omega}}.

Applying the key step in the proof of Theorem 3.2 from §3.2, we obtain:

𝔼(ν(Π(𝑿C)))𝔼(ν(Π(𝑿uC))),\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}C))\right)\leq\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}^{u}C))\right),

where 𝑿u{\bm{X}}^{u} has independent columns distributed according to the Steiner symmetrals f(|u)f^{*}(\cdot|u). By iterated symmetrizations, we obtain

𝔼(ν(Π(𝑿C)))𝔼(ν(Π(𝑿#C))).\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}C))\right)\leq\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}^{\scriptscriptstyle\#}C))\right).

As the proof shows, Theorem 3.2 does not require densities to be identical. Hence, Theorem 1.2 holds under this less restrictive assumption.

Up to this point, we have dealt with a log-concave measure \nu. The above applies, in particular, to the case of the Lebesgue measure restricted to a centered Euclidean ball. Next, we consider a radial measure \nu such that

d\nu(x)=\rho(\lvert x\rvert)dx\qquad\text{with}\qquad\rho:[0,\infty)\to[0,\infty)\qquad\text{decreasing}.

Using Fubini’s theorem, we have

\mathbb{E}\nu(\Pi^{\circ}({\bm{X}}C))=\mathbb{E}\int\limits_{0}^{\infty}|\Pi^{\circ}({\bm{X}}C)\cap\{x:\rho(\lvert x\rvert)\geq t\}|dt=\int\limits_{0}^{\rho(0)}\mathbb{E}|\Pi^{\circ}({\bm{X}}C)\cap R(t)B^{n}_{2}|dt

where R(t) is the radius of the Euclidean ball \{x:\rho(\lvert x\rvert)\geq t\}; this implies the result for any radial measure \nu with a decreasing density \rho. \scriptstyle\blacksquare

Corollary 5.1

Let KK and LL be convex bodies in n\mathbb{R}^{n}. Let M2M\subseteq\mathbb{R}^{2} be 11-unconditional and compact. Then for p1,p21p_{1},p_{2}\geq 1

𝔼ν(Π([K]m1p1M[L]m2p2))𝔼ν(Π([K]m1p1M[L]m2p2)).\mathbb{E}\nu(\Pi^{\circ}([K]_{m_{1}}^{p_{1}}\oplus_{M}[L]_{m_{2}}^{p_{2}}))\leq\mathbb{E}\nu(\Pi^{\circ}([K^{*}]_{m_{1}}^{p_{1}}\oplus_{M}[L^{*}]_{m_{2}}^{p_{2}})).

If M=Bq2M=B_{q}^{2}, with 1/p+1/q=11/p+1/q=1, then M\oplus_{M} coincides with +p+_{p}-addition as defined in §2.2. Thus the corollary immediately implies (5).

Proof. Let the columns of 𝑿1{\bm{X}}_{1} and 𝑿2{\bm{X}}_{2} be independent and distributed according to 1|K|𝟙K\frac{1}{\lvert K\rvert}\mathds{1}_{K} and 1|L|𝟙L\frac{1}{\lvert L\rvert}\mathds{1}_{L}, respectively. As in the introduction, [K]m1p1=𝑿1Bp1m1[K]_{m_{1}}^{p_{1}}={\bm{X}}_{1}B_{p_{1}}^{m_{1}} and [L]m2p2=𝑿2Bp2m2[L]_{m_{2}}^{p_{2}}={\bm{X}}_{2}B_{p_{2}}^{m_{2}}. Assuming Bp1m1span{e1,,em1}B_{p_{1}}^{m_{1}}\subseteq\mathop{\rm span}\{e_{1},\ldots,e_{m_{1}}\} and writing Bp2m2^\widehat{B_{p_{2}}^{m_{2}}} for the copy of Bp2m2B_{p_{2}}^{m_{2}} lying in span{em1+1,,em1+m2}\mathop{\rm span}\{e_{m_{1}+1},\ldots,e_{m_{1}+m_{2}}\}, we have

[K]m1p1M[L]m2p2\displaystyle[K]_{m_{1}}^{p_{1}}\oplus_{M}[L]_{m_{2}}^{p_{2}} =𝑿1Bp1m1M𝑿2Bp2m2\displaystyle={\bm{X}}_{1}B_{p_{1}}^{m_{1}}\oplus_{M}{\bm{X}}_{2}B_{p_{2}}^{m_{2}}
=[𝑿1𝑿2](Bp1m1MBp2m2^)\displaystyle=[{\bm{X}}_{1}{\bm{X}}_{2}](B_{p_{1}}^{m_{1}}\oplus_{M}\widehat{B_{p_{2}}^{m_{2}}})
=𝑿(Bp1m1MBp2m2^).\displaystyle={\bm{X}}(B_{p_{1}}^{m_{1}}\oplus_{M}\widehat{B_{p_{2}}^{m_{2}}}).

In this way the MM-addition operation coincides with the image of the convex body C=Bp1m1MBp2m2^C=B_{p_{1}}^{m_{1}}\oplus_{M}\widehat{B_{p_{2}}^{m_{2}}} under 𝑿{\bm{X}}. Similarly, we have

[K]m1p1M[L]m2p2=𝑿#(Bp1m1MBp2m2^).\displaystyle[K^{*}]_{m_{1}}^{p_{1}}\oplus_{M}[L^{*}]_{m_{2}}^{p_{2}}={\bm{X}}^{\#}(B_{p_{1}}^{m_{1}}\oplus_{M}\widehat{B_{p_{2}}^{m_{2}}}).

Thus Theorem 1.2 applies directly. \scriptstyle\blacksquare

Remark 5.2

The latter corollary is a special case of the following inequality

𝔼ν(Π(M(𝑿1C1,,𝑿nCn)))𝔼ν(Π(M(𝑿1#C1,,𝑿n#Cn))),\mathbb{E}\nu(\Pi^{\circ}(\oplus_{M}(\bm{X}_{1}C_{1},\ldots,\bm{X}_{n}C_{n})))\leq\mathbb{E}\nu(\Pi^{\circ}(\oplus_{M}(\bm{X}^{\#}_{1}C_{1},\ldots,\bm{X}^{\#}_{n}C_{n}))),

which follows directly from Theorem 1.2 and (10) whenever M(C1,,Cn)\oplus_{M}(C_{1},\ldots,C_{n}) is compact and convex.

6 Mixed projection bodies

In this section, we prove Theorem 1.1. As the argument is similar to the proof given in § 5, we simply outline the additional steps.

Proof of Theorem 1.1. Let \nu be a log-concave, rotationally invariant measure on \mathbb{R}^{n} with a density \rho(x) as in § 5. As in § 4, we start by assuming that C_{1},\ldots,C_{n-1} lie in mutually orthogonal subspaces.

Fix u\in S^{n-1} and \bm{\omega}_{i}=(\omega_{i1},\dots,\omega_{im_{i}}) such that \omega_{ij}\in u^{\perp} for i=1,\dots,n-1. Using the notation of § 4, we write T^{i}_{\bm{\omega}_{i}}({{\bf t}_{i}})=[\omega_{i1}+t_{i1}u\cdots\omega_{im_{i}}+t_{im_{i}}u]. Thus, we have

ν(Π(T𝝎11(𝐭1)C1,,T𝝎n1n1(𝐭n1)Cn1))\displaystyle\nu(\Pi^{\circ}(T^{1}_{\bm{\omega}_{1}}({\bf t}_{1})C_{1},\dots,T^{n-1}_{\bm{\omega}_{n-1}}({\bf t}_{n-1})C_{n-1})) (18)
=\displaystyle= uy+suΠ(T𝝎11(𝐭1)C1,,T𝝎n1n1(𝐭n1)Cn1)ρy(s)dsdy\displaystyle\int\limits_{u^{\perp}}\int\limits_{\mathbb{R}}\left\llbracket y+su\in\Pi^{\circ}(T^{1}_{\bm{\omega}_{1}}({\bf t}_{1})C_{1},\dots,T^{n-1}_{\bm{\omega}_{n-1}}({\bf t}_{n-1})C_{n-1})\right\rrbracket\rho_{y}(s)dsdy
=\displaystyle= unV(T𝝎11(𝐭1)C1,,T𝝎n1n1(𝐭n1)Cn1,[0,y+su])1ρy(s)dsdy.\displaystyle\int\limits_{u^{\perp}}\int\limits_{\mathbb{R}}\left\llbracket nV(T^{1}_{\bm{\omega}_{1}}({\bf t}_{1})C_{1},\dots,T^{n-1}_{\bm{\omega}_{n-1}}({\bf t}_{n-1})C_{n-1},[0,y+su])\leq 1\right\rrbracket\rho_{y}(s)dsdy.

For fixed yuy\in u^{\perp}, we define

h(𝐭¯,s)=V(T𝝎11(𝐭1)C1,,T𝝎n1n1(𝐭n1)Cn1,[0,y+su])h(\underline{{\bf t}},s)=V(T^{1}_{\bm{\omega}_{1}}({\bf t}_{1})C_{1},\dots,T^{n-1}_{\bm{\omega}_{n-1}}({\bf t}_{n-1})C_{n-1},[0,y+su])

and

Gy(𝐭¯)=h(𝐭¯,s)1ρy(s)ds.G_{y}(\underline{{\bf t}})=\int\limits_{\mathbb{R}}\left\llbracket h(\underline{{\bf t}},s)\leq 1\right\rrbracket\,\rho_{y}(s)ds.

Using the same reasoning as in Section 5, h(𝐭¯,s)h(\underline{{\bf t}},s) is jointly convex in 𝐭¯\underline{{\bf t}} and ss, and an even function. In particular, joint convexity implies that h(𝐭¯,s)1\left\llbracket h(\underline{{\bf t}},s)\leq 1\right\rrbracket is log\log-concave. Also, we have that GyG_{y} is even, i.e., Gy(𝐭¯)=Gy(𝐭¯).G_{y}(-\underline{{\bf t}})=G_{y}(\underline{{\bf t}}). Therefore, for every uSn1u\in S^{n-1} and for every 𝝎i\bm{\omega}_{i}, the function Gy(𝐭¯)G_{y}(\underline{{\bf t}}) is quasi-concave for every yuy\in u^{\perp}.

Repeating the same argument on (n)m1××(n)mn1(\mathbb{R}^{n})^{m_{1}}\times\dots\times(\mathbb{R}^{n})^{m_{n-1}} as in §5, we get

𝔼(ν(Π(𝑿1C1,,𝑿n1Cn1)))𝔼(ν(Π(𝑿1uC1,,𝑿n1uCn1))),\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}_{1}C_{1},\dots,{\bm{X}}_{n-1}C_{n-1}))\right)\leq\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}^{u}_{1}C_{1},\dots,{\bm{X}}^{u}_{n-1}C_{n-1}))\right),

and after iterating the symmetrization in suitable directions, we arrive at the following:

𝔼(ν(Π(𝑿1C1,,𝑿n1Cn1)))𝔼(ν(Π(𝑿1#C1,,𝑿n1#Cn1))).\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}_{1}C_{1},\dots,{\bm{X}}_{n-1}C_{n-1}))\right)\leq\mathbb{E}\left(\nu(\Pi^{\circ}({\bm{X}}^{\scriptscriptstyle\#}_{1}C_{1},\dots,{\bm{X}}^{\scriptscriptstyle\#}_{n-1}C_{n-1}))\right).

Once again, when C_{1},\ldots,C_{n-1} are not in mutually orthogonal subspaces, the argument applies verbatim on a smaller product space.

Finally, as above, the proof shows that densities need not be identical. \scriptstyle\blacksquare

7 Laws of large numbers and convergence

In this section, we detail how one can obtain deterministic inequalities from our main stochastic inequalities. As in the introduction, the centroid body Z(L) of a convex body L in \mathbb{R}^{n} is defined by

hZ(L)(u)=1|L|L|x,u|𝑑x.\displaystyle h_{Z(L)}(u)=\frac{1}{\lvert L\rvert}\int_{L}\lvert\langle x,u\rangle\rvert dx.

Similarly, the empirical centroid body Z_{1,N}(L) of L (this is the body Z_{m}(L) from § 1.2, with m=N) is given by

hZ1,N(L)(u)=1Ni=1N|Xi,u|,\displaystyle h_{Z_{1,N}(L)}(u)=\frac{1}{N}\sum_{i=1}^{N}\lvert\langle X_{i},u\rangle\rvert,

where X1,,XNX_{1},\ldots,X_{N} are independent and identically distributed according to the normalized Lebesgue measure on LL. Note that by the strong law of large numbers (as in e.g., [28]), we have convergence a.s. in the Hausdorff metric:

Z1,N(L)Z(L).\displaystyle Z_{1,N}(L)\rightarrow Z(L). (19)
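The mechanism behind (19) is worth recalling in one line: for each fixed u\in S^{n-1}, the strong law of large numbers gives, almost surely,

h_{Z_{1,N}(L)}(u)=\frac{1}{N}\sum_{i=1}^{N}\lvert\langle X_{i},u\rangle\rvert\;\longrightarrow\;\mathbb{E}\lvert\langle X_{1},u\rangle\rvert=\frac{1}{\lvert L\rvert}\int_{L}\lvert\langle x,u\rangle\rvert\,dx=h_{Z(L)}(u),

and since support functions are convex, almost sure pointwise convergence on a countable dense subset of S^{n-1} upgrades to uniform convergence on S^{n-1}, which is equivalent to convergence in the Hausdorff metric.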
Proposition 7.1

Let K be a convex body in \mathbb{R}^{n} and m_{1},m_{2}\geq n. Then, as m_{1},m_{2}\rightarrow\infty,

1m2𝔼V1([K]m1,[Π(K)]m2)V1(K,Z(Π(K))).\displaystyle\frac{1}{m_{2}}\mathbb{E}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})\rightarrow V_{1}(K,Z(\Pi^{\circ}(K))). (20)

Proof. We have convergence a.s. in the Hausdorff metric

[K]m1=conv{X11,,X1m1}K;\displaystyle[K]_{m_{1}}=\mathop{\rm conv}\{X_{11},\ldots,X_{1m_{1}}\}\rightarrow K;

see, e.g., [7] for a stronger quantitative result on the rate of convergence. Similarly, by (19)

1m2[L]m2=1m2j=1m2[X2j,X2j]Z(L).\displaystyle\frac{1}{m_{2}}[L]_{m_{2}}^{\infty}=\frac{1}{m_{2}}\sum_{j=1}^{m_{2}}[-X_{2j},X_{2j}]\rightarrow Z(L).

It follows that we have convergence a.s., as m1,m2m_{1},m_{2}\rightarrow\infty,

1m2V1([K]m1,[Π(K)]m2)V1(K,Z(Π(K))).\displaystyle\frac{1}{m_{2}}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})\rightarrow V_{1}(K,Z(\Pi^{\circ}(K))).

Note that

\frac{1}{m_{2}}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty}) \leq \frac{1}{m_{2}}V_{1}(K,[\Pi^{\circ}(K)]_{m_{2}}^{\infty})
=\frac{1}{n}\frac{1}{m_{2}}\int_{S^{n-1}}h_{[\Pi^{\circ}(K)]_{m_{2}}^{\infty}}(u)\,d\sigma_{K}(u)
=\frac{1}{n}\frac{1}{m_{2}}\sum_{j=1}^{m_{2}}\int_{S^{n-1}}\lvert\langle u,X_{2j}\rangle\rvert\,d\sigma_{K}(u)
\leq\frac{1}{n}R(\Pi^{\circ}(K))\,S(K)

where R(\Pi^{\circ}(K)) denotes the circumradius of \Pi^{\circ}(K) and S(K) is the surface area of K. By the bounded convergence theorem, we have, as m_{1},m_{2}\rightarrow\infty,

1m2𝔼V1([K]m1,[Π(K)]m2)V1(K,Z(Π(K))).\displaystyle\frac{1}{m_{2}}\mathbb{E}V_{1}([K]_{m_{1}},[\Pi^{\circ}(K)]_{m_{2}}^{\infty})\rightarrow V_{1}(K,Z(\Pi^{\circ}(K))).

\scriptstyle\blacksquare

Lastly, we show that Theorem 1.4 implies Petty’s projection inequality (1). Applying Theorem 1.4, dividing by m_{2}, letting m_{1},m_{2}\rightarrow\infty and using Proposition 7.1, we get

V1(K,Z(Π(K)))V1(K,Z((Π(K)))).V_{1}(K,Z(\Pi^{\circ}(K)))\geq V_{1}(K^{*},Z((\Pi^{\circ}(K))^{*})). (21)

Now we appeal to the fact that

V1(K,Z(Π(K)))=cn,\displaystyle V_{1}(K,Z(\Pi^{\circ}(K)))=c_{n},

where cnc_{n} is a constant independent of KK; see, e.g., [22], and compute the right-hand side of (21). For this, we first observe that (ΠK)=|Π(K)|1n|Π(K)|1nΠ(K)(\Pi^{\circ}K)^{*}=\frac{|\Pi^{\circ}(K)|^{\frac{1}{n}}}{|\Pi^{\circ}(K^{*})|^{\frac{1}{n}}}\Pi^{\circ}(K^{*}) (see [9, Corollary 9.1.4]). Then

V1(K,Z((Π(K))))\displaystyle V_{1}(K^{*},Z((\Pi^{\circ}(K))^{*})) =|Π(K)|1n|Π(K)|1nV1(K,Z(Π(K)))=cn|Π(K)|1n|Π(K)|1n\displaystyle=\frac{|\Pi^{\circ}(K)|^{\frac{1}{n}}}{|\Pi^{\circ}(K^{*})|^{\frac{1}{n}}}V_{1}(K^{*},Z(\Pi^{\circ}(K^{*})))=c_{n}\frac{|\Pi^{\circ}(K)|^{\frac{1}{n}}}{|\Pi^{\circ}(K^{*})|^{\frac{1}{n}}}

where we used homogeneity of mixed volumes and Z(ϕK)=ϕZ(K)Z(\phi K)=\phi Z(K) for ϕGLn\phi\in GL_{n}. Thus,

cn\displaystyle c_{n} |Π(K)|1n|Π(K)|1ncn\displaystyle\geq\frac{|\Pi^{\circ}(K)|^{\frac{1}{n}}}{|\Pi^{\circ}(K^{*})|^{\frac{1}{n}}}c_{n}

which implies (1).
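For the reader's convenience, we sketch the computation behind the identity V_{1}(K,Z(\Pi^{\circ}(K)))=c_{n}. It uses Cauchy's projection formula in the form \int_{S^{n-1}}\lvert\langle x,u\rangle\rvert\,d\sigma_{K}(u)=2h_{\Pi K}(x)=2\lVert x\rVert_{\Pi^{\circ}(K)} and the identity \int_{A}\lVert x\rVert_{A}\,dx=\tfrac{n}{n+1}\lvert A\rvert, valid for any convex body A containing the origin (integration in polar coordinates); it suggests the explicit value c_{n}=\tfrac{2}{n+1}, although only the independence of K is used above:

V_{1}(K,Z(\Pi^{\circ}(K)))=\frac{1}{n}\int_{S^{n-1}}h_{Z(\Pi^{\circ}(K))}(u)\,d\sigma_{K}(u)=\frac{1}{n\lvert\Pi^{\circ}(K)\rvert}\int_{\Pi^{\circ}(K)}\int_{S^{n-1}}\lvert\langle x,u\rangle\rvert\,d\sigma_{K}(u)\,dx=\frac{2}{n\lvert\Pi^{\circ}(K)\rvert}\int_{\Pi^{\circ}(K)}\lVert x\rVert_{\Pi^{\circ}(K)}\,dx=\frac{2}{n+1}.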

Remark 7.2

The above derivation closely resembles the proof of the Petty projection inequality via Minkowski's first inequality and the Busemann–Petty centroid inequality, which can be found in [22]. However, the stochastic inequality in Theorem 1.4 in effect combines the use of these two inequalities into one step.

References

  • [1] Astrid Berg and Franz E. Schuster. Lutwak-Petty projection inequalities for Minkowski valuations and their duals. J. Math. Anal. Appl., 490(1):124190, 24, 2020.
  • [2] Ethan D. Bolker. A class of convex bodies. Trans. Amer. Math. Soc., 145:323–345, 1969.
  • [3] J. Bourgain and J. Lindenstrauss. Projection bodies. In Geometric aspects of functional analysis (1986/87), volume 1317 of Lecture Notes in Math., pages 250–270. Springer, Berlin, 1988.
  • [4] Stefano Campi and Paolo Gronchi. Volume inequalities for sets associated with convex bodies. In Integral geometry and convexity, pages 1–15. World Sci. Publ., Hackensack, NJ, 2006.
  • [5] Michael Christ. Estimates for the kk-plane transform. Indiana Univ. Math. J., 33(6):891–910, 1984.
  • [6] Dario Cordero-Erausquin, Matthieu Fradelizi, Grigoris Paouris, and Peter Pivovarov. Volume of the polar of random sets and shadow systems. Math. Ann., 362(3-4):1305–1325, 2015.
  • [7] Lutz Dümbgen and Günther Walther. Rates of convergence for random approximations of convex sets. Adv. in Appl. Probab., 28(2):384–393, 1996.
  • [8] Wm. J. Firey. pp-means of convex bodies. Math. Scand., 10:17–24, 1962.
  • [9] Richard J. Gardner. Geometric tomography, volume 58 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, New York, second edition, 2006.
  • [10] Richard J. Gardner, Daniel Hug, and Wolfgang Weil. Operations between sets in geometry. J. Eur. Math. Soc. (JEMS), 15(6):2297–2352, 2013.
  • [11] Christoph Haberl and Franz E. Schuster. Affine vs. Euclidean isoperimetric inequalities. Adv. Math., 356:106811, 26, 2019.
  • [12] J. Haddad. A convex body associated to the Busemann random simplex inequality and the Petty conjecture. J. Funct. Anal., 281(7):Paper No. 109118, 28, 2021.
  • [13] Marek Kanter. Unimodality and dominance for symmetric random vectors. Trans. Amer. Math. Soc., 229:65–85, 1977.
  • [14] Alexander Koldobsky. Fourier analysis in convex geometry, volume 116 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2005.
  • [15] Dylan Langharst, Eli Putterman, Michael Roysdon, and Deping Ye. On higher-order extensions of the weighted projection body operator.
  • [16] Dylan Langharst, Michael Roysdon, and Artem Zvavitch. General measure extensions of projection bodies. Proc. Lond. Math. Soc. (3), 125(5):1083–1129, 2022.
  • [17] Elliott H. Lieb and Michael Loss. Analysis, volume 14 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, second edition, 2001.
  • [18] Monika Ludwig. Projection bodies and valuations. Adv. Math., 172(2):158–168, 2002.
  • [19] Erwin Lutwak. Mixed projection inequalities. Trans. Amer. Math. Soc., 287(1):91–105, 1985.
  • [20] Erwin Lutwak. The Brunn-Minkowski-Firey theory. I. Mixed volumes and the Minkowski problem. J. Differential Geom., 38(1):131–150, 1993.
  • [21] Erwin Lutwak. Inequalities for mixed projection bodies. Trans. Amer. Math. Soc., 339(2):901–916, 1993.
  • [22] Erwin Lutwak. Selected affine isoperimetric inequalities. In Handbook of convex geometry, Vol. A, B, pages 151–176. North-Holland, Amsterdam, 1993.
  • [23] Erwin Lutwak. The Brunn-Minkowski-Firey theory. II. Affine and geominimal surface areas. Adv. Math., 118(2):244–294, 1996.
  • [24] Erwin Lutwak, Deane Yang, and Gaoyong Zhang. LpL_{p} affine isoperimetric inequalities. J. Differential Geom., 56(1):111–132, 2000.
  • [25] Erwin Lutwak, Deane Yang, and Gaoyong Zhang. Orlicz projection bodies. Adv. Math., 223(1):220–242, 2010.
  • [26] E. Makai, Jr. and H. Martini. The cross-section body, plane sections of convex bodies and approximation of convex bodies. I. Geom. Dedicata, 63(3):267–296, 1996.
  • [27] Emanuel Milman and Amir Yehudayoff. Sharp isoperimetric inequalities for affine quermassintegrals. J. Amer. Math. Soc., 36(4):1061–1101, 2023.
  • [28] Grigoris Paouris and Peter Pivovarov. A probabilistic take on isoperimetric-type inequalities. Adv. Math., 230(3):1402–1422, 2012.
  • [29] Grigoris Paouris and Peter Pivovarov. Randomized isoperimetric inequalities. In Convexity and concentration, volume 161 of IMA Vol. Math. Appl., pages 391–425. Springer, New York, 2017.
  • [30] C. M. Petty. Centroid surfaces. Pacific J. Math., 11:1535–1547, 1961.
  • [31] C. M. Petty. Isoperimetric problems. In Proceedings of the Conference on Convexity and Combinatorial Geometry (Univ. Oklahoma, Norman, Okla., 1971), pages 26–41. University of Oklahoma, Department of Mathematics, Norman, OK, 1971.
  • [32] C. A. Rogers and G. C. Shephard. Some extremal problems for convex bodies. Mathematika, 5:93–102, 1958.
  • [33] Michael Schmuckenschläger. Petty’s projection inequality and Santalo’s affine isoperimetric inequality. Geom. Dedicata, 57(3):285–295, 1995.
  • [34] Rolf Schneider. Convex bodies: the Brunn-Minkowski theory, volume 151 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, expanded edition, 2014.
  • [35] Rolf Schneider and Wolfgang Weil. Zonoids and related topics. In Convexity and its applications, pages 296–317. Birkhäuser, Basel, 1983.
  • [36] G. C. Shephard. Shadow systems of convex sets. Israel J. Math., 2:229–236, 1964.


G. Paouris, Department of Mathematics, Texas A&M University, College Station, Texas, 77843-3368, USA

Department of Mathematics, Princeton University, Fine Hall, 304 Washington Road, Princeton, NJ, 08540, USA

Email address: [email protected]

P. Pivovarov, Mathematics Department, University of Missouri, Columbia, Missouri, 65211, USA

Email address: [email protected]

K. Tatarko, Department of Pure Mathematics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada

Email address: [email protected]