
Adversarial Meta-Learning of Gamma-Minimax Estimators
That Leverage Prior Knowledge

Hongxiang Qiu, Department of Epidemiology and Biostatistics, Michigan State University; Alex Luedtke, Department of Statistics, University of Washington
Abstract

Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set $\Gamma$ of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity is defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We also introduce a neural network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.

1 Introduction

A variety of principles can be used to guide the search for a suitable statistical estimator. Asymptotic efficiency (Pfanzagl, 1990), minimaxity (Wald, 1945) and Bayes optimality (Berger, 1985) are popular examples of such principles. Defining the performance criteria underlying these principles requires specifying a model space, that is, a collection of possible data-generating mechanisms known to contain the true, underlying distribution.

It is often desirable to incorporate prior information about the true data-generating mechanism into a statistical procedure. This might be done differently in different statistical paradigms. For frequentist methods, such as those based on the asymptotic efficiency or minimax principle, the primary way to incorporate this information is via the definition of the model space. In the Bayesian paradigm, such information may be represented by further specifying a prior distribution (or prior for short) over the model space and aiming for an estimator that minimizes the induced Bayes risk. However, in many cases, there may be several priors that are compatible with the available information, especially in the context of rich model spaces.

The Gamma-minimax paradigm, proposed by Robbins (1951), provides a principled means to overcome the challenge of specifying a single prior distribution. Under this paradigm, a statistician first specifies a set $\Gamma$ of all priors that are consistent with the available prior information and subsequently seeks an estimator that minimizes the worst-case Bayes risk over this set of priors. The Gamma-minimax paradigm may be viewed as a robust version of the Bayesian paradigm that is less sensitive to misspecification of a prior distribution (Vidakovic, 2000). When it is infeasible to specify a prior due to the complexity of the model space, the Gamma-minimax paradigm may also be viewed as a feasible substitute for the Bayesian paradigm. The Gamma-minimax paradigm is closely related to the Bayes and minimax paradigms: when the set of priors consists of a single prior, a Gamma-minimax estimator is Bayes with respect to that prior; when the set $\Gamma$ of priors is the entire set of possible prior distributions, a Gamma-minimax estimator is also minimax.

Gamma-minimax estimators have been studied for a variety of problems, and some explicit forms have been obtained. For example, Olman and Shmundak (1985) studied Gamma-minimax estimation of the mean of a normal distribution over the set of symmetric and unimodal priors on an interval and obtained an explicit form when this interval is sufficiently small. Eichenauer-Herrmann (1990) generalized this result to more general parametric models, and Eichenauer-Herrmann et al. (1994) obtained a further generalization in which the requirement of symmetry on the priors is dropped. Chen et al. (1988) studied Gamma-minimax estimation for multinomial distributions and the set of priors with bounded mean. Chen et al. (1991) studied Gamma-minimax estimation for one-parameter exponential families and the set of priors that place certain bounds on the first two moments. These results do not deal with general model spaces, such as semiparametric or nonparametric models, or with general forms of the set of priors that do not directly impose bounds on prior moments of the parameters of interest. One reason for this lack of generality might be that, in the existing literature, Gamma-minimaxity is defined only for parametric models. However, an issue with parametric models is that they often fail to contain the true data-generating mechanism, in which case the output of the aforementioned statistical procedures may no longer be interpretable. Another possible reason is that it is typically intractable to analytically derive Gamma-minimax estimators, even for parametric models.

To overcome this lack of analytical tractability, meta-learning algorithms to compute a minimax or Gamma-minimax estimator have been proposed. Still, most of these works focus on parametric models. For example, Nelson (1966) and Kempthorne (1987) each proposed an algorithm to compute a minimax estimator. Bryan et al. (2007) and Schafer and Stark (2009) proposed algorithms to compute an approximate confidence region of optimal expected size in the minimax sense. Noubiap and Seidel (2001) proposed an iterative algorithm to compute a Gamma-minimax decision for sets of priors constrained by generalized moment conditions. More recent works have explored computing estimators under more general models. For example, Luedtke et al. (2020a) introduced an approach, termed Adversarial Monte Carlo meta-learning (AMC), for constructing minimax estimators. In the special case of prediction problems with mean-squared error, Luedtke et al. (2020b) studied the invariance properties of the decision problem and their implications for AMC.

In this paper, we make the following contributions:

  1.

    We propose iterative adversarial meta-learning algorithms for constructing Gamma-minimax estimators for a general model space and class of estimators. We further provide convergence guarantees for these algorithms.

To the best of our knowledge, this is the first algorithm to compute Gamma-minimax estimators under general models, including infinite-dimensional models. We also show that, for certain problems, there is a unique Gamma-minimax estimator and our computed estimator converges to this estimator as the number of iterations increases to infinity.

Like the approach proposed in Noubiap and Seidel (2001), we consider sets of priors characterized by (in)equality constraints on prior generalized moments, and our proposed iterative algorithm involves solving a discretized Gamma-minimax optimization problem in each intermediate step. However, we explicitly describe algorithms to solve these minimax problems, which facilitates the use of our approach by practitioners. When the space of estimators can be parameterized by a Euclidean space and gradients are available, we propose to use a gradient-based algorithm or a stochastic variant thereof. When gradients are unavailable, we propose to instead use fictitious play (Brown, 1951; Robinson, 1951) to compute a stochastic estimator, which is a mixture of deterministic estimators belonging to some specified collection. We also provide a convergence result that is applicable even when this collection has infinite cardinality. This is in contrast to the results in Robinson (1951), which require that each player have only finitely many possible deterministic strategies.

  2.

    We propose a Markov chain Monte Carlo (MCMC) method to construct the approximating grids defining the discretized Gamma-minimax problems used in our iterative scheme.

Like the approach proposed in Noubiap and Seidel (2001), our proposed iterative algorithm relies on increasingly fine finite grids over the model space. However, since we allow the model space to be high- or even infinite-dimensional, randomly adding points to the grid may lead to unacceptably slow convergence. To overcome this challenge, we propose to use MCMC to efficiently construct such grids.

Our theoretical results allow for many different choices of classes of estimators. Our final contribution concerns the introduction of one such class:

  3.

We introduce a new neural network architecture that can be used to parameterize statistical estimators and argue that it represents an appealing class to optimize over.

For this final point, we build on existing works in adversarial learning (e.g., Goodfellow et al., 2014; Luedtke et al., 2020a, b) and extreme learning machines (Huang et al., 2006b). Thanks to the universal approximation properties of neural networks (e.g., Hornik, 1991; Csáji, 2001) and extreme learning machines (Huang et al., 2006a), we also show that both of these parameterizations can achieve good performance for sufficiently large networks. Furthermore, inspired by pre-training (e.g., Erhan et al., 2010) and transfer learning (e.g., Torrey and Shavlik, 2009), we recommend leveraging knowledge of existing estimators as inputs to the network in settings where this is possible. Under such choices of the space of estimators, we can expect to obtain a useful estimator even if the associated nonconvex-concave minimax problems prove to be difficult.

This paper is organized as follows. In Section 2, we introduce the framework of Gamma-minimax estimation and the regularity conditions that we assume throughout the paper. In Section 3, we describe our proposed iterative adversarial meta-learning algorithms. In Section 4, we discuss considerations when choosing hyperparameters in the algorithms. In Section 5, we demonstrate our method in three simulation studies. We conclude with a discussion in Section 6. Proof sketches of key results are provided in the main text, and complete proofs can be found in the appendix. A table summarizing frequently used symbols is also provided in the appendix (Table 7). The code for our simulations is available on GitHub (Qiu, 2022).

2 Problem setup

Let $\mathcal{M}$ be a separable Hausdorff space of data-generating mechanisms that contains the truth $P_0$ and is equipped with a metric $\rho$. Under a data-generating mechanism $P\in\mathcal{M}$, let $\mathbf{X}^*\in\mathcal{X}^*$ denote the random data being generated, where $\mathcal{X}^*$ is the space of values that the random data takes. Let $\mathcal{C}$ denote a known coarsening mechanism such that the observed data $\mathbf{X}=\mathcal{C}(\mathbf{X}^*)\in\mathcal{X}$, where $\mathcal{X}$ is the space of observed data. In some cases, the coarsening mechanism will be the identity map, whereas in other settings, such as those in which missing, censored or truncated data is present, the coarsening mechanism will be nontrivial (e.g., Birmingham et al., 2003; Gill et al., 1997; Heitjan and Rubin, 1991; Heitjan, 1993, 1994). Let $\mathcal{D}$ denote the space of estimators (or decision functions) equipped with a metric $\varrho$. In practice, for computational feasibility, we will mainly consider an estimator space $\mathcal{D}$ that contains estimators parameterized by a Euclidean space, such as linear estimators or neural networks, and that approximates a more general space $\mathcal{D}_0$, for example, the space of all estimators satisfying certain smoothness conditions. We discuss considerations concerning the choice of $\mathcal{D}$ in Section 4.2 and note that our proposed methods may be applied to broader estimator classes. We treat $\mathcal{D}$ as fixed throughout this paper. Let $R:\mathcal{D}\times\mathcal{M}\rightarrow\mathbb{R}$ denote a risk function that measures the performance of an estimator under a data-generating mechanism, with smaller risks being preferable. We suppose throughout that $\mathcal{M}$ and $\mathcal{D}$ are equipped with the topologies induced by $\rho$ and $\varrho$, respectively.

We now present three examples in which we formulate statistical decision problems in the above form. The first example is a general example of point estimation. We use this example to illustrate how the Gamma-minimax estimation framework naturally fits into many statistical problems. The other two examples are more concrete and we will study them in the simulations and data analyses.

Example 1 (Point estimation).

Suppose that $\mathcal{M}$ is a statistical model, which may be parametric, semiparametric, or nonparametric (Bickel et al., 1993). The data $\mathbf{X}^*$ consist of $n$ independently and identically distributed (iid) random variables $O_i$, $i=1,\ldots,n$, following the true distribution $P_0\in\mathcal{M}$. We set $\mathcal{C}$ to be the identity function so that $\mathbf{X}=\mathbf{X}^*$. We wish to estimate an aspect $\Psi(P_0)\in\mathbb{R}$ of $P_0$. Then, we can consider $\mathcal{D}$ to be a set of $\mathcal{X}\rightarrow\mathbb{R}$ functions and the mean-squared error risk $R(d,P)=\mathbb{E}_P[\{d(\mathbf{X})-\Psi(P)\}^2]$; a minimal Monte Carlo sketch of this risk follows the list below. Some specific examples of estimands include:

  i) Mean: $\Psi(P)=\mathbb{E}_P[O_i]$;

  ii) Cumulative distribution function at a point $o$: $\Psi(P)=\mathbb{P}_P(O_i\leq o)$;

  iii) Covariance: with $O_i=(X_i,Y_i)\in\mathbb{R}^2$, $\Psi(P)=\mathbb{E}_P[X_iY_i]-\mathbb{E}_P[X_i]\,\mathbb{E}_P[Y_i]$.
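As a concrete illustration of the mean-squared error risk above, the following minimal sketch, ours rather than the paper's, approximates $R(d,P)$ for a candidate mean estimator by Monte Carlo; the choice $P=\mathrm{Beta}(2,5)$, the sample size, and the number of Monte Carlo runs are placeholder assumptions.

import numpy as np

def mse_risk(estimator, sample_P, psi_P, n=50, n_mc=10_000, rng=None):
    # Monte Carlo approximation of R(d, P) = E_P[{d(X) - Psi(P)}^2].
    rng = rng or np.random.default_rng(0)
    draws = np.array([estimator(sample_P(n, rng)) for _ in range(n_mc)])
    return float(np.mean((draws - psi_P) ** 2))

# Example: risk of the sample mean when P = Beta(2, 5), so Psi(P) = 2/7.
risk = mse_risk(
    estimator=np.mean,
    sample_P=lambda n, rng: rng.beta(2.0, 5.0, size=n),
    psi_P=2.0 / 7.0,
)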

Example 2 (Predicting the expected number of novel categories to be observed in a new sample).

Suppose that $\mathcal{M}$ consists of multinomial distributions with an unknown number of categories. Let an iid random sample of size $n$ be generated from the true multinomial distribution, so that $\mathbf{X}^*$ is a multiset containing the number $X_k$ of observations in each category $k$. Suppose that only categories with nonzero occurrences are observed, so that $\mathbf{X}$ is a left-truncated version of $\mathbf{X}^*$. In other words, $\mathbf{X}$ is the multiset $\mathcal{C}(\mathbf{X}^*)=\{X_k:X_k>0\}$. Then, we may wish to predict the number of new categories that would be observed if a new sample of size $m$ were collected. This problem has been extensively studied in the literature, with applications in microbiome data, species taxonomic surveys, and assessment of vocabulary size, among other areas (e.g., Shen et al., 2003; Bunge et al., 2014; Orlitsky et al., 2016). This prediction problem can be formulated in our framework. For each $P\in\mathcal{M}$, let $p_k$ ($k=1,\ldots,K_P$) be the probability of category $k$, and let $\Psi(P)(\mathbf{X}^*)$ be $\sum_{k=1}^{K_P}I(X_k=0)\{1-(1-p_k)^m\}$, the expected number of newly observed categories given the current full data $\mathbf{X}^*$. We consider $\mathcal{D}$ to be a set of $\mathcal{X}\rightarrow\mathbb{R}$ functions and set the risk to be the mean-squared error, that is, $R(d,P)=\mathbb{E}_P[\{d(\mathbf{X})-\Psi(P)(\mathbf{X}^*)\}^2]$. This prediction problem is known to be intrinsically difficult when the future sample size $m$ is greater than the observed sample size $n$, and we might expect prior information to substantially improve prediction.

Example 3 (Entropy estimation).

Consider the same data-generating mechanism and observed data as in Example 2. We may wish to estimate the Shannon entropy (Shannon, 1948) $\Psi(P)=-\sum_{k=1}^{K_P}p_k\log p_k$, a measure of diversity. We consider $\mathcal{D}$ to be a set of $\mathcal{X}\rightarrow\mathbb{R}$ functions and set the risk to be the mean-squared error, that is, $R(d,P)=\mathbb{E}_P[\{d(\mathbf{X})-\Psi(P)\}^2]$. Jiao et al. (2015) proposed a rate-minimax estimator. Thus, in contrast to Example 2, this is an example of a statistical problem with a satisfactory solution. For such problems, we might not expect prior information to substantially improve estimation.

We now define Gamma-minimaxity within our decision-theoretic framework. We assume that $\mathcal{M}$ is equipped with the Borel $\sigma$-field $\mathcal{B}$ and let $\Pi$ denote the set of all probability distributions on the measurable space $(\mathcal{M},\mathcal{B})$. We also assume that, for any $d\in\mathcal{D}$ and any $\pi\in\Pi$, $P\mapsto R(d,P)$ is $\pi$-integrable. The Bayes risk corresponding to an estimator $d$ and a prior $\pi$ is defined as $r:(d,\pi)\mapsto\int R(d,P)\,\pi(\mathrm{d}P)$. Let $\Gamma\subseteq\Pi$ be the set of all priors $\pi$ that are consistent with the available prior information. An estimator is called a $\Gamma$-minimax estimator if it is in the set

$$\operatorname*{argmin}_{d\in\mathcal{D}}\,\sup_{\pi\in\Gamma}r(d,\pi). \tag{1}$$

Throughout the rest of this paper, we assume that this solution set, as well as the solution sets of the other minimax problems we consider, exists, and that $\sup_{\pi\in\Gamma}r(d,\pi)$ is finite for any $d\in\mathcal{D}$.

In this paper, we consider the case in which $\Gamma$ is characterized by finitely many generalized moment conditions, that is,

$$\Gamma=\left\{\pi\in\Pi:\Phi_k\in L^1(\pi),\ \int\Phi_k(P)\,\pi(\mathrm{d}P)\leq c_k,\ k=1,\ldots,K\right\}$$

where each $\Phi_k:\mathcal{M}\rightarrow\mathbb{R}$ is a prespecified function that extracts an aspect of a data-generating mechanism and $c_k\in\mathbb{R}$ is a prespecified constant. The validity of our proposed template for finding a $\Gamma$-minimax estimator in Section 3.1 does not require $\Gamma$ to take this form, but our proposed algorithms in Sections 3.2 and 3.3 are computationally feasible for such constraints because these linear constraints lead to linear programs, which can be solved efficiently (e.g., Jiang et al., 2020). In principle, more general constraints can be handled by using suitable minimax problem solvers. Such constraints were considered in Noubiap and Seidel (2001) and can represent a variety of forms of prior information. For example, with $\Phi_k=\pm\Psi^\kappa$ for some $\kappa\geq 1$, $\Gamma$ imposes bounds on prior moments of $\Psi(P)$; with $\Phi_k(P)=\pm\mathbbm{1}(\Psi(P)\in I)$ for some known interval $I$, $\Gamma$ imposes bounds on the prior probability of $\Psi(P)$ lying in $I$. Similar prior information on aspects of $P_0$ other than $\Psi(P_0)$ can also be represented. In addition, since an equality can be equivalently expressed by two inequalities, $\Gamma$ may also impose equality constraints on prior generalized moments. Such information is commonly used to choose prior distributions in Bayesian settings (Sarma and Kay, 2020). Since we do not require specifying a parametric model or an entire prior distribution for any finite-dimensional summary of $P_0$, specifying a set $\Gamma$ of prior distributions in the above form is no more difficult, and often easier, than specifying a single prior distribution, as would be required in a Bayesian approach.
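To make the role of these linear constraints concrete: when a prior is supported on finitely many distributions $P_1,\ldots,P_T$, as in the discretized problems of Section 3, the maximization of the Bayes risk over such priors is a linear program in the probability masses. The following minimal sketch, written by us for illustration and not part of the original paper, solves this program with SciPy; the arrays of risks $R(d,P_t)$, constraint values $\Phi_k(P_t)$, and bounds $c_k$ are assumed to be precomputed.

import numpy as np
from scipy.optimize import linprog

def least_favorable_prior(risks, phis, cs):
    # Maximize sum_t w_t R(d, P_t) over prior masses w subject to
    # sum_t w_t Phi_k(P_t) <= c_k (k = 1, ..., K), w >= 0, sum_t w_t = 1.
    T = len(risks)
    res = linprog(
        c=-np.asarray(risks),               # linprog minimizes, so negate
        A_ub=np.asarray(phis), b_ub=np.asarray(cs),
        A_eq=np.ones((1, T)), b_eq=[1.0],
        bounds=[(0.0, None)] * T,
    )
    return res.x, -res.fun                  # prior masses, maximal Bayes risk

Later sketches in Section 3 reuse this helper as the maximization oracle over priors.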

3 Proposed meta-learning algorithms to compute a $\Gamma$-minimax estimator

Since both the model space $\mathcal{M}$ and the estimator space $\mathcal{D}$ may be infinite, it is computationally infeasible to directly solve the minimax problem (1) defining a $\Gamma$-minimax estimator. Similarly to Noubiap and Seidel (2001), our general strategy is to discretize $\mathcal{M}$ and thus consider prior distributions with discrete supports. Once the supports of the prior distributions are discrete, the optimization over prior distributions involves only finitely many parameters, namely the probability masses at the support points, and thus becomes computationally feasible. We will show that, when the grid is sufficiently fine, a solution to the discretized minimax problem is close to a solution to the original minimax problem.

Our proposed algorithm consists of two main steps. The first step is to discretize the model space $\mathcal{M}$ and consider an approximating grid $\mathcal{M}_\ell$ instead of the original complicated model space $\mathcal{M}$. This discretization is illustrated in Fig. 1. We will describe $\mathcal{M}_\ell$ in more detail in Section 3.1. In the second step, we consider a set $\Gamma_\ell$ of priors with support contained in $\mathcal{M}_\ell$ and compute a $\Gamma_\ell$-minimax estimator. We will describe two classes of algorithms to solve this discretized minimax problem in Sections 3.2 and 3.3, respectively.

Figure 1: Illustration of a grid $\mathcal{M}_\ell=\{P_{(1)},P_{(2)},P_{(3)},\ldots,P_{(T)}\}\subseteq\mathcal{M}$ approximating the entire model space $\mathcal{M}$. Examples of densities of distributions $P_{(t)}$ ($t=1,\ldots,T$) in the grid are displayed. A prior distribution with support in $\mathcal{M}_\ell$ is parameterized by the probability mass at each distribution $P_{(t)}$. An example of a prior distribution is displayed as black bars with heights proportional to the probability masses.

3.1 Grid-based approximation of $\Gamma$-minimax estimators

We first define the discretization of the model space $\mathcal{M}$ that we will use. Let $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$ be an increasing sequence of finite subsets of $\mathcal{M}$ such that $\bigcup_{\ell=1}^\infty\mathcal{M}_\ell$ is dense in $\mathcal{M}$. That is, $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$ is an increasingly fine grid over $\mathcal{M}$. Since $\mathcal{M}$ is separable, such a sequence $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$ necessarily exists. Define

$$\Gamma_\ell:=\{\pi\in\Gamma:\pi\text{ has support in }\mathcal{M}_\ell\}\qquad\text{and}\qquad r_{\sup}(d,\Gamma'):=\sup_{\pi\in\Gamma'}r(d,\pi)$$

for any $d\in\mathcal{D}$ and $\Gamma'\subseteq\Pi$.

Algorithm 1 describes how the grids $\mathcal{M}_\ell$ are used to compute an approximately $\Gamma$-minimax estimator in our proposed algorithms. We will show that the approximation error decays to zero as $\ell$ grows to infinity. Here and in the rest of the algorithms in this paper, for any real-valued function $f$, when we assign $\operatorname*{argmin}_x f(x)$ or $\operatorname*{argmax}_x f(x)$ to a variable, we arbitrarily pick a minimizer or maximizer if there are multiple optimizers. In practice, the user may stop the iteration at some $\ell$ and use a $\Gamma_\ell$-minimax estimator $d^*_\ell$ as the output estimator. We discuss the stopping criterion in more detail at the end of this section.

Algorithm 1 Iteratively approximate a $\Gamma$-minimax estimator over an increasingly fine grid.
1: for $\ell=1,2,\ldots$ do
2:     Construct a grid $\mathcal{M}_\ell\subseteq\mathcal{M}$ such that $\mathcal{M}_{\ell-1}\subsetneq\mathcal{M}_\ell$
3:     $d^*_\ell\leftarrow\operatorname*{argmin}_{d\in\mathcal{D}}\sup_{\pi\in\Gamma_\ell}r(d,\pi)$

We note that the minimax problem in Line 3 of Algorithm 1 is nontrivial to solve, and therefore we propose two algorithms that can solve this minimax problem in Sections 3.2 and 3.3.

We assume that the following conditions hold throughout the rest of the paper.

Condition 1.

There exists a limit point $d^*\in\mathcal{D}$ of the sequence $\{d^*_\ell\}_{\ell=1}^\infty$.

Condition 1 holds if the sequence $\{d^*_\ell\}_{\ell=1}^\infty$ eventually falls in a compact set. For example, if $\mathcal{D}$ is a space of neural networks and we take $\varrho$ to be the Euclidean norm in the coefficient space, then we expect Condition 1 to hold if all coefficients are restricted to fall in a bounded set, which is a common restriction in theoretical analyses of neural networks (see, e.g., Goel et al., 2016; Zhang et al., 2016; Eckle and Schmidt-Hieber, 2019) and often leads to desirable generalization bounds (see, e.g., Bartlett, 1997; Bartlett et al., 2017; Neyshabur et al., 2017). Our theoretical results hold for any limit point $d^*$ in Condition 1, even if there is more than one.

Condition 2.

The mapping $d\mapsto R(d,P)$ is continuous at $d^*$ for all $P\in\mathcal{M}$.

Condition 2 also often holds. For example, when estimators are parameterized using neural networks, they are continuous functions of the coefficients for common activation functions such as the sigmoid or the rectified linear unit (ReLU) (Glorot et al., 2011), and therefore $d\mapsto R(d,P)$ is continuous everywhere.

We next present a sufficient condition to ensure that $d^*$ is $\Gamma$-minimax, so that $d^*_\ell$ is approximately $\Gamma$-minimax for sufficiently large $\ell$.

Condition 3.

We assume that there exists an increasing sequence $\{\Omega_\ell\}_{\ell=1}^\infty$ of subsets of $\mathcal{M}$ such that

  1. $\bigcup_{\ell=1}^\infty\Omega_\ell=\mathcal{M}$;

  2. for all $\ell=1,2,\ldots$ and all $d\in\mathcal{D}$, define $\tilde\Gamma_\ell:=\{\pi\in\Gamma:\pi\text{ has support in }\Omega_\ell\}$ and $\tilde\Gamma_{i\mid\ell}:=\{\pi\in\Gamma:\pi\text{ has support in }\mathcal{M}_i\cap\Omega_\ell\}$. For any $\pi\in\tilde\Gamma_\ell$ with a finite support, there exists a sequence $\pi_i\in\tilde\Gamma_{i\mid\ell}$ such that $r(d,\pi_i)\rightarrow r(d,\pi)$ as $i\rightarrow\infty$.

We note that, in contrast to $\mathcal{M}_\ell$, $\Omega_\ell$ may be an infinite set. We may expect Condition 3 to hold in many cases, especially when $P\mapsto R(d,P)$ is continuous at each $d\in\mathcal{D}$ and the grid $\mathcal{M}_\ell$ contains a variety of distributions that are consistent with the prior information represented by $\Gamma$. We illustrate this point with two counterexamples in Appendix A. We will check the plausibility of Condition 3 for Example 2 in our simulation and data analysis in Section 5.1 for exemplar prior information; an almost identical argument shows the plausibility of Condition 3 for Example 3.

We now present the theorem on the $\Gamma$-minimaxity of $d^*$.

Theorem 1 (Validity of grid-based approximation).

Under Conditions 1–3, $d^*$ is $\Gamma$-minimax and

$$r_{\sup}(d^*_\ell,\Gamma_\ell)\nearrow\min_{d\in\mathcal{D}}r_{\sup}(d,\Gamma)\quad\text{as}\quad\ell\rightarrow\infty.$$

To prove Theorem 1, we utilize a result in Pinelis (2016) to establish that $r_{\sup}(d,\Gamma)$ can be approximated arbitrarily well by a discrete prior in $\Gamma$ for any $d\in\mathcal{D}$. This is a key ingredient in the proof of Lemma 1, which states that, for any $d\in\mathcal{D}$, $r_{\sup}(d,\tilde\Gamma_\ell)$ converges to $r_{\sup}(d,\Gamma)$. Then, we show that the sequence $\{r_{\sup}(d^*_\ell,\Gamma_\ell)\}_{\ell=1}^\infty$ is nondecreasing and upper bounded by $\inf_{d\in\mathcal{D}}r_{\sup}(d,\Gamma)$, which is less than or equal to the $\Gamma$-maximal Bayes risk $r_{\sup}(d^*,\Gamma)$ of the limit point $d^*$ of $\{d^*_\ell\}_{\ell=1}^\infty$ in Condition 1. Therefore, $r_{\sup}(d^*_\ell,\Gamma_\ell)$ converges to a limit. We finally use a contradiction argument to prove that this limit is greater than or equal to $r_{\sup}(d^*,\Gamma)$, which implies Theorem 1.

We have the following corollary on the uniqueness of the $\Gamma$-minimax estimator and the convergence of $\{d^*_\ell\}_{\ell=1}^\infty$ for certain problems.

Corollary 1 (Convergence of $\Gamma_\ell$-minimax estimators).

Suppose that $\mathcal{D}$ is a convex subset of a vector space, that $d\mapsto R(d,P)$ is strictly convex for each $P\in\mathcal{M}$, and that $r_{\sup}(d,\Gamma)$ is attainable for each $d\in\mathcal{D}$ in the sense that, for all $d\in\mathcal{D}$, there exists a $\pi\in\Gamma$ such that $r(d,\pi)=r_{\sup}(d,\Gamma)$. Under Conditions 1–3, $d^*$ is the unique $\Gamma$-minimax estimator and

$$d^*_\ell\rightarrow d^*\quad\text{as}\quad\ell\rightarrow\infty.$$

We prove Corollary 1 by establishing that $d\mapsto r_{\sup}(d,\Gamma)$ is strictly convex.

In practice, the user also needs to specify a stopping criterion for Algorithm 1. Noubiap and Seidel (2001) recommended computing or approximating $r_{\sup}(d^*_\ell,\Gamma)$ and stopping once $r_{\sup}(d^*_\ell,\Gamma)$ is sufficiently close to $r_{\sup}(d^*_\ell,\Gamma_\ell)$. However, the procedure to approximate $r_{\sup}(d^*_\ell,\Gamma)$ in that work relies on the compactness of $\mathcal{M}$, a condition we do not wish to assume because it may restrict the applicability of the method. Therefore, we propose to use the following alternative criterion: stop if $r_{\sup}(d^*_\ell,\Gamma_{\ell+1})-r_{\sup}(d^*_\ell,\Gamma_\ell)\leq\epsilon$ for a prespecified tolerance level $\epsilon>0$. This criterion was proposed but not recommended in Noubiap and Seidel (2001) because it does not guarantee that $r_{\sup}(d^*_\ell,\Gamma_\ell)$ is close to $r_{\sup}(d^*,\Gamma)$. For example, if $\mathcal{M}_{\ell+1}\setminus\mathcal{M}_\ell$ is small, it is even possible that $r_{\sup}(d^*_\ell,\Gamma_{\ell+1})-r_{\sup}(d^*_\ell,\Gamma_\ell)=0$ while $d^*_\ell$ is far from being $\Gamma$-minimax. In contrast, we recommend this criterion for our proposed methods because we allow more flexibility in model specification, that is, $\mathcal{M}$ need not be compact. We discuss this issue in more detail in Section 4.1.

We finally remark that $r_{\sup}(d,\Gamma_\ell)$ may be difficult to evaluate exactly. Since the risk is often an expectation, we recommend approximating $r_{\sup}(d,\Gamma_\ell)$ for any given $d$ via Monte Carlo as follows: first, estimate the risks $R(d,P)$ for all $P\in\mathcal{M}_\ell$ with a large number of Monte Carlo runs; second, estimate the corresponding least favorable prior $\pi_{d,\ell}\in\operatorname*{argmax}_{\pi\in\Gamma_\ell}r(d,\pi)$ using the estimated risks; third, estimate the risks $R(d,P)$ ($P\in\mathcal{M}_\ell$) again with independent Monte Carlo runs; and, finally, calculate $r(d,\pi_{d,\ell})$ with the re-estimated risks and the estimated least favorable prior. Using two independent estimates of the risk removes the positive bias that would otherwise arise from using the same data to estimate both the risks and the least favorable prior.
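The following minimal sketch, ours rather than the paper's, implements this split evaluation. It reuses the least_favorable_prior helper sketched in Section 2, and risk_mc, a Monte Carlo risk approximator such as the one sketched after Example 1, is an assumed ingredient.

import numpy as np

def estimate_max_bayes_risk(d, grid, phis, cs, risk_mc):
    # Batch 1 of Monte Carlo risk estimates: used only to pick the prior.
    risks_a = np.array([risk_mc(d, P) for P in grid])
    w, _ = least_favorable_prior(risks_a, phis, cs)
    # Batch 2, independent of batch 1: used only to evaluate the Bayes risk.
    risks_b = np.array([risk_mc(d, P) for P in grid])
    return float(w @ risks_b)

The same routine can drive the stopping criterion above by evaluating it on consecutive grids $\mathcal{M}_\ell$ and $\mathcal{M}_{\ell+1}$.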

3.2 Computation of an estimator on a grid via stochastic gradient descent with max-oracle

In this section, we present methods to compute a $\Gamma_\ell$-minimax estimator, which corresponds to Line 3 in Algorithm 1. Gradient descent with max-oracle (GDmax) and its stochastic variant (SGDmax), which were presented in Lin et al. (2020), can be used to solve general minimax problems in Euclidean spaces. We focus on SGDmax in the main text and present GDmax in Appendix B. To apply these algorithms to find a $\Gamma_\ell$-minimax estimator, we need to assume that $\mathcal{D}$ can be parameterized by a subset of a Euclidean space, that is, that for any $d\in\mathcal{D}$, there exists a real vector-valued coefficient $\beta\in\mathbb{R}^D$ such that $d$ may be written as $d(\beta)$. For example, $\mathcal{D}$ may be a neural network class. Further discussion of the parameterization of $\mathcal{D}$ can be found in Section 4.2. In this section, in a slight abuse of notation, we define $R(\beta,P):=R(d(\beta),P)$, $r(\beta,\pi):=r(d(\beta),\pi)$ and $r_{\sup}(\beta,\Gamma_\ell):=r_{\sup}(d(\beta),\Gamma_\ell)$ for a coefficient $\beta\in\mathbb{R}^D$, a data-generating mechanism $P\in\mathcal{M}$ and a prior $\pi\in\Gamma$. We assume that $\beta\mapsto R(\beta,P)$ is differentiable for all $P\in\mathcal{M}$, and hence so is $\beta\mapsto r(\beta,\pi)$ for all $\pi\in\Gamma$.

It is often the case that $R(\beta,P)$ is expressed as an expectation. In this case, $R(\beta,P)$ may instead be approximated using Monte Carlo techniques. With $\xi$ being an exogenous source of randomness with law $\Xi$, let $\hat R(\beta,P,\xi)$ be an unbiased approximation of $R(\beta,P)$ with $\mathbb{E}[\|\nabla_\beta\{\hat R(\beta,P,\xi)-R(\beta,P)\}\|^2]\leq\sigma^2<\infty$, where $\|\cdot\|$ denotes the $\ell_2$-norm in Euclidean spaces. Let $\hat r(\beta,\pi,\xi):=\int\hat R(\beta,P,\xi)\,\pi(\mathrm{d}P)$ for $\pi\in\Gamma_\ell$. In this case, SGDmax (Algorithm 2) may be used to find a (locally) $\Gamma_\ell$-minimax estimator. Note that Algorithm 2 represents a generalization of the nested minimax AMC strategy in Luedtke et al. (2020a) to $\Gamma_\ell$-minimax problems.

Algorithm 2 Stochastic gradient descent with max-oracle (SGDmax) to compute a $\Gamma_\ell$-minimax estimator
1: Initialize $\beta_{(0)}\in\mathbb{R}^D$. Set learning rate $\eta>0$, max-oracle accuracy $\zeta>0$ and batch size $J$.
2: for $t=1,2,\ldots$ do
3:     Stochastic maximization: use a stochastic procedure to find $\pi_{(t)}\in\Gamma_\ell$ such that $\mathbb{E}[r(\beta_{(t-1)},\pi_{(t)})]\geq\max_{\pi\in\Gamma_\ell}r(\beta_{(t-1)},\pi)-\zeta$, where the expectation is over the randomness in the stochastic maximization (e.g., variants of stochastic gradient ascent).
4:     Generate iid copies $\xi_1,\ldots,\xi_J$ of $\xi$.
5:     Stochastic gradient descent: $\beta_{(t)}\leftarrow\beta_{(t-1)}-\frac{\eta}{J}\sum_{j=1}^J\nabla_\beta\hat r(\beta,\pi_{(t)},\xi_j)\big|_{\beta=\beta_{(t-1)}}$.
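As a concrete illustration, the sketch below, which is ours and not code from the paper, runs the loop of Algorithm 2 for a PyTorch estimator under mean-squared error as in Examples 1–3. It replaces the stochastic max-oracle of Line 3 with the linear-program oracle applied to Monte Carlo risk estimates (in the spirit of the Algorithm 6 variant discussed below), and it assumes a user-supplied simulate_batch(P, J) returning simulated inputs and targets under $P$, plus the least_favorable_prior helper from Section 2.

import torch

def sgdmax(net, grid, phis, cs, simulate_batch, steps=1000, lr=1e-3):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for t in range(steps):
        # Max-oracle: least favorable prior for the current coefficients,
        # computed from Monte Carlo risk estimates on the grid.
        with torch.no_grad():
            risks = [((net(X) - y) ** 2).mean().item()
                     for X, y in (simulate_batch(P, 64) for P in grid)]
        w, _ = least_favorable_prior(risks, phis, cs)
        # Stochastic gradient step on r(beta, pi_(t)): fresh simulated data
        # gives an unbiased gradient of the prior-weighted risk.
        loss = 0.0
        for P, w_P in zip(grid, w):
            if w_P > 0:
                X, y = simulate_batch(P, 8)
                loss = loss + w_P * ((net(X) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()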

We next present two conditions needed for the validity of Algorithm 2.

Condition 4.

For each $\ell=1,2,\ldots$ and all $\beta\in\mathbb{R}^D$, $\beta\mapsto R(\beta,P)$ is Lipschitz continuous with a universal Lipschitz constant $L_1$ independent of $P\in\mathcal{M}_\ell$.

Note that Condition 4 differs from Condition 2 in that the former relies on the parameterization of $\mathcal{D}$ in a Euclidean space equipped with the Euclidean norm, while the latter may rely on a different metric on $\mathcal{D}$, such as an $L^2$-distance.

Condition 5.

For each $\ell=1,2,\ldots$ and all $\beta\in\mathbb{R}^D$, $\nabla_\beta R(\beta,P)$ is bounded and $\beta\mapsto\nabla_\beta R(\beta,P)$ is Lipschitz continuous with a universal Lipschitz constant $L_2$ independent of $P\in\mathcal{M}_\ell$.

Under these conditions, using the results in Lin et al. (2020), we can show that SGDmax yields an approximation to a local minimum of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$ when the algorithm's hyperparameters are suitably chosen. Before we formally present the theorem, we introduce some definitions related to the local optimality of a potentially nondifferentiable and nonconvex function. A real-valued function $f$ is called $q$-weakly convex ($q>0$) if $x\mapsto f(x)+(q/2)\|x\|^2$ is convex. The Moreau envelope of a real-valued function $f$ with parameter $q>0$ is $f_q:x\mapsto\min_{x'}f(x')+\|x'-x\|^2/(2q)$. A point $x$ is an $\epsilon$-stationary point ($\epsilon\geq 0$) of a $q$-weakly convex function $f$ if $\|\nabla f_{1/(2q)}(x)\|\leq\epsilon$. Similarly, a random point $x$ is an $\epsilon$-stationary point ($\epsilon\geq 0$) of a $q$-weakly convex function $f$ in expectation if $\mathbb{E}[\|\nabla f_{1/(2q)}(x)\|]\leq\epsilon$. If $x$ is an $\epsilon$-stationary point in expectation, we may conclude that it is an $\epsilon$-stationary point with high probability by Markov's inequality. Lemma 3.8 in Lin et al. (2020) shows that, for small $\epsilon$, an $\epsilon$-stationary point of $f$ is close to a point $x'$ at which $f$ has at least one small subgradient, so that $f(x')$ is close to a local minimum. In other words, if an algorithm outputs an estimator $\hat d=d(\hat\beta)$ such that $\hat\beta$ is an $\epsilon$-stationary point of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$, then we know that $r_{\sup}(\hat\beta,\Gamma_\ell)$ is close to a local minimum of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$.

We next present the validity result for Algorithm 2.

Theorem 2 (Validity of SGDmax (Algorithm 2)).

Suppose that Conditions 1–2 and 4–5 hold. Let $\epsilon>0$ be fixed and define $\Delta:=(r_{\sup})_{1/(2L_1)}(\beta_{(0)})-\min_{\beta\in\mathbb{R}^D}(r_{\sup})_{1/(2L_1)}(\beta)$, where we recall that $(r_{\sup})_{1/(2L_1)}$ is the Moreau envelope of $r_{\sup}$ with parameter $1/(2L_1)$. In Algorithm 2, with $\eta=\epsilon^2/[L_1(L_2^2+\sigma^2)]$, $\zeta=\epsilon^2/(24L_1)$ and $J=1$, $\beta_{(t)}$ is an $\epsilon$-stationary point of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$ in expectation for $t=O(L_1(L_2^2+\sigma^2)\Delta/\epsilon^4)$, and is thus close to a local minimum of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$ with high probability.

The assumption that the batch size $J=1$ is purely for convenience, since increasing $J$ corresponds to decreasing the variance $\sigma^2$. To run Algorithm 2 in practice, the user only needs to specify the tuning parameters in Line 1; the other constants in Theorem 2 need not be known. In general, a small learning rate $\eta$, a stringent accuracy $\zeta$, and a large batch size $J$ make Algorithm 2 likely to eventually reach an approximation of a local minimum of $\beta\mapsto r_{\sup}(\beta,\Gamma_\ell)$, but the computation time might increase. As with most numerical optimization algorithms, fine-tuning is needed to balance the convergence guarantee against computation time, but a conservative choice of tuning parameters would typically result in convergence at the cost of computation time.

We note that Line 3 in Algorithm 2 may be inconvenient to implement because linear program solvers often do not use stochastic optimization. Therefore, we propose a convenient variant (Algorithm 6 in Appendix B) in which the stochastic maximization step (Line 3 in Algorithm 2) is replaced by solving a linear program whose objective is approximated via Monte Carlo. This variant enjoys similar validity under similar conditions. We also note that the two uniform Lipschitz continuity conditions (4 and 5) rely heavily on the finiteness of $\mathcal{M}_\ell$ and on the compactness of a set containing the coefficients. Nevertheless, the latter compactness restriction is common in theoretical analyses of neural networks (see, e.g., Goel et al., 2016; Zhang et al., 2016; Eckle and Schmidt-Hieber, 2019). Moreover, these two conditions are sufficient conditions for the validity of the gradient-based methods, namely SGDmax, our variant of SGDmax, and GDmax; guarantees similar to these validity results might still hold when the two conditions are violated.

We finally remark that other algorithms similar to SGDmax can be applied, for example, (stochastic) gradient descent ascent with projection (Lin et al., 2020), (stochastic) mirror descent ascent, or accelerated (stochastic) mirror descent ascent (Huang et al., 2021). It is of future research interest to develop gradient-based methods that solve minimax problems with convergence guarantees under weaker conditions.

3.3 Computation of an estimator on a grid via fictitious play

The algorithms in Section 3.2 may be convenient in many cases, but their requirements, such as the parameterization of the space $\mathcal{D}$ of estimators by a Euclidean space, differentiability of the risk function $R$ with respect to the coefficients $\beta$, and uniform Lipschitz continuity, may be restrictive for certain problems. In this section, we propose an alternative algorithm, fictitious play, that avoids these requirements, and we present its convergence results.

Brown (1951) introduced fictitious play as a means to find the value of a zero-sum game, that is, the optimal mixed strategies for both players and their expected gains. Robinson (1951) then proved that, when both players have finitely many pure strategies, fictitious play can be used to iteratively solve a two-player zero-sum game for a saddle point consisting of a pair of mixed strategies. Our problem of finding a $\Gamma$-minimax estimator may also be viewed as a two-player zero-sum game in which one player chooses a prior from $\Gamma$ and the other player chooses an estimator from $\mathcal{D}$. If we assume that, for the $\Gamma$-minimax problem at hand, the pair of both players' optimal strategies is a saddle point, which holds in many minimax problems (e.g., v. Neumann, 1928; Fan, 1953; Sion, 1958), then fictitious play may also be used to find a $\Gamma$-minimax estimator. Since $\Gamma$ may be too rich to allow for a feasible implementation of fictitious play, we propose to use this algorithm to find a $\Gamma_\ell$-minimax estimator.

In the fictitious play algorithm of Robinson (1951), the two players take turns playing the best pure strategy against the mixture of the opponent's historical pure strategies, and the final output is a pair of mixtures of the two players' historical pure strategies. Since this algorithm aims to find minimax mixed strategies, we consider stochastic estimators. That is, consider the Borel $\sigma$-field $\mathcal{F}$ over $\mathcal{D}$ and let $\varPi$ denote the set of all probability distributions on the measurable space $(\mathcal{D},\mathcal{F})$. We define $\overline{\mathcal{D}}$ to be the space of stochastic estimators, each element of which takes the following form: first draw an estimator from $\mathcal{D}$ according to a distribution $\varpi\in\varPi$ with an exogenous random mechanism, and then use that estimator to obtain an estimate based on the data. Note that we may write any $\overline{d}\in\overline{\mathcal{D}}$ as $\overline{d}(\varpi)$ for some $\varpi\in\varPi$. We consider estimators in $\overline{\mathcal{D}}$ throughout this section, with the definition of $\Gamma$-minimaxity extended in the natural way, so that $\overline{d}^*=\overline{d}(\varpi^*)\in\overline{\mathcal{D}}$ is $\Gamma$-minimax if $r_{\sup}(\overline{d}^*,\Gamma)=\min_{\overline{d}\in\overline{\mathcal{D}}}r_{\sup}(\overline{d},\Gamma)$; we similarly extend all other definitions from Section 2. We assume that there exist $\varpi^*_\ell\in\varPi$ and $\pi^*_\ell\in\Gamma_\ell$ ($\ell=1,2,\ldots$) such that

$$r(\overline{d}(\varpi^*_\ell),\pi^*_\ell)=\sup_{\pi\in\Gamma_\ell}\inf_{\overline{d}\in\overline{\mathcal{D}}}r(\overline{d},\pi)=\inf_{\overline{d}\in\overline{\mathcal{D}}}\sup_{\pi\in\Gamma_\ell}r(\overline{d},\pi). \tag{2}$$

In other words, $(\overline{d}(\varpi^*_\ell),\pi^*_\ell)$ is a saddle point of $r$ in $\overline{\mathcal{D}}\times\Gamma_\ell$. Under this condition and the further conditions that $\mathcal{D}$ is convex and $d\mapsto R(d,P)$ is convex for all $P\in\mathcal{M}$, it is possible to use a $\Gamma$-minimax estimator over the richer class $\overline{\mathcal{D}}$ of stochastic estimators to derive a $\Gamma$-minimax estimator over the original class $\mathcal{D}$. Indeed, for any $\overline{d}(\varpi)\in\overline{\mathcal{D}}$ and $P\in\mathcal{M}$, by Jensen's inequality, $R(\overline{d}(\varpi),P)=\int R(d,P)\,\varpi(\mathrm{d}d)\geq R(\underline{\overline{d}}(\varpi),P)$, where $\underline{\overline{d}}(\varpi):=\int d\,\varpi(\mathrm{d}d)\in\mathcal{D}$ is the average of the stochastic estimator $\overline{d}(\varpi)$; that is, the risk of $\underline{\overline{d}}(\varpi)$ is never greater than that of $\overline{d}(\varpi)$. Therefore, we may use the fictitious play algorithm to compute $\overline{d}(\varpi^*_\ell)$ for each $\ell$ and further apply Algorithm 1 to compute $\overline{d}(\varpi^*)$. After that, we may take $\underline{\overline{d}}(\varpi^*)$ as the final output deterministic estimator.

Algorithm 3 presents the fictitious play algorithm for finding a $\Gamma_\ell$-minimax estimator in $\overline{\mathcal{D}}$. Note that $\Gamma_\ell$ is convex, and hence $\pi_{(t)}$ always lies in $\Gamma_\ell$ throughout the iterations. In practice, we may initialize $\varpi_{(0)}$ as a point mass at an initial estimator in $\mathcal{D}$. In addition, similarly to Robinson (1951), we may replace Line 5 with $d^\dagger_{(t)}\leftarrow\operatorname*{argmin}_{d\in\mathcal{D}}r(d,\pi_{(t)})$, that is, minimize the Bayes risk with the most recently updated prior rather than with the previous one.

Algorithm 3 Fictitious play to compute a $\Gamma_\ell$-minimax stochastic estimator
1: Initialize $\varpi_{(0)}\in\varPi$ and $\pi_{(0)}\in\Gamma_\ell$.
2: for $t=1,2,\ldots$ do
3:     $\pi^\dagger_{(t)}\leftarrow\operatorname*{argmax}_{\pi\in\Gamma_\ell}r(\overline{d}(\varpi_{(t-1)}),\pi)$
4:     $\pi_{(t)}\leftarrow\frac{t-1}{t}\pi_{(t-1)}+\frac{1}{t}\pi^\dagger_{(t)}$
5:     $d^\dagger_{(t)}\leftarrow\operatorname*{argmin}_{d\in\mathcal{D}}r(d,\pi_{(t-1)})$
6:     $\varpi_{(t)}\leftarrow\frac{t-1}{t}\varpi_{(t-1)}+\frac{1}{t}\delta(d^\dagger_{(t)})$, where $\delta(d)$ denotes a point mass at $d\in\mathcal{D}$.
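The sketch below, ours and not the paper's, illustrates one way to run Algorithm 3 on a finite grid. The helper least_favorable_prior from Section 2 serves as the best prior response (Line 3), while bayes_argmin, an assumed user-supplied routine returning a (near-)minimizer of the Bayes risk for a given prior (Line 5), could itself be a gradient-based fit. Because the running average $\varpi_{(t)}$ is uniform over the best responses $d^\dagger_{(1)},\ldots,d^\dagger_{(t)}$, the mixture is tracked simply as a list of estimators.

import numpy as np

def fictitious_play(grid, phis, cs, risk_fn, bayes_argmin, d_init, steps=100):
    # varpi_(0) is a point mass at d_init; pi_(0) is uniform on the grid.
    risk_rows = [[risk_fn(d_init, P) for P in grid]]
    pi = np.full(len(grid), 1.0 / len(grid))
    best_responses = []
    for t in range(1, steps + 1):
        # Line 3: best prior response to the mixture varpi_(t-1).
        mix_risks = np.mean(risk_rows, axis=0)
        pi_dag, _ = least_favorable_prior(mix_risks, phis, cs)
        # Line 5: best estimator response to pi_(t-1).
        d_dag = bayes_argmin(pi)
        best_responses.append(d_dag)
        # Lines 4 and 6: running averages of the priors and the point masses.
        pi = (t - 1) / t * pi + pi_dag / t
        row = [risk_fn(d_dag, P) for P in grid]
        if t == 1:
            risk_rows = [row]       # varpi_(1) is a point mass at d_dag
        else:
            risk_rows.append(row)
    return best_responses, pi       # uniform mixture over best_responses

A deterministic final estimator can then be obtained by averaging the returned estimators, as discussed above.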

We next present a convergence result for this algorithm.

Theorem 3 (Validity of fictitious play (Algorithm 3)).

Assume that there exists a compact subset $\bar{\mathcal{D}}$ of $\mathcal{D}$ that contains all $d^\dagger_{(t)}$ ($t=1,2,\ldots$). Under Conditions 1–2, it holds that

$$r(d^\dagger_{(t)},\pi_{(t-1)})\leq r(\overline{d}(\varpi^*_\ell),\pi^*_\ell)\leq r(\overline{d}(\varpi_{(t-1)}),\pi^\dagger_{(t)})$$

for all $t$ and

$$\lim_{t\rightarrow\infty}\left[r(\overline{d}(\varpi_{(t-1)}),\pi^\dagger_{(t)})-r(d^\dagger_{(t)},\pi_{(t-1)})\right]=0.$$

Consequently, the $\Gamma_\ell$-maximal risk of $\overline{d}(\varpi_{(t)})$ converges to the $\Gamma_\ell$-minimax risk, that is,

$$r_{\sup}(\overline{d}(\varpi_{(t-1)}),\Gamma_\ell)\rightarrow r_{\sup}(\overline{d}(\varpi^*_\ell),\Gamma_\ell)\quad\text{as}\quad t\rightarrow\infty.$$

Robinson (1951) proved a similar result for two-player zero-sum games in which each player has finitely many pure strategies. In contrast, in our problem, each player may have infinitely many pure strategies. A natural attempt to prove Theorem 3 would be to consider finite covers of $\bar{\mathcal{D}}$ and $\Gamma_\ell$, that is, $\bar{\mathcal{D}}=\bigcup_{i=1}^I\mathcal{D}_i$ and $\Gamma_\ell=\bigcup_{j=1}^J\Pi_j$, such that the range of $r(d,\pi)$ within each $\mathcal{D}_i$ and $\Pi_j$ is small (say less than $\epsilon$), bin pure strategies into these subsets, and then apply the argument in Robinson (1951) to these bins. The collections of $\mathcal{D}_i$ and $\Pi_j$ may be viewed as finitely many pure strategies approximating $\bar{\mathcal{D}}$ and $\Gamma_\ell$, respectively, up to accuracy $\epsilon$. Unfortunately, we found that this approach fails. The problem arises because Robinson (1951) inducted on $I$ and $J$, and, after each induction step, the corresponding upper bound becomes twice as large. Unlike the case with finitely many pure strategies considered in Brown (1951) and Robinson (1951), as the desired approximation accuracy $\epsilon$ approaches zero, the numbers of approximating pure strategies, $I$ and $J$, may diverge to infinity, and so does the number of induction steps. Therefore, the resulting final upper bound is of order $2^{I+J}\epsilon$ and generally does not converge to zero as $\epsilon$ tends to zero. To overcome this challenge, we instead control the increase in the relevant upper bound after each induction step more carefully, so that the final upper bound converges to zero as $\epsilon$ decreases to zero, despite the fact that $I$ and $J$ may diverge to infinity.

We remark that, because Line 5 of Algorithm 3 typically involves another layer of iteration in addition to that over $t$, this algorithm will often be more computationally intensive than SGDmax. Nevertheless, Algorithm 3 provides an approach to constructing $\Gamma_\ell$-minimax estimators in cases where the other algorithms cannot be applied, for example, in settings where the risk is not differentiable in the parameters indexing the estimator or the uniform Lipschitz conditions fail. In our numerical experiments, we have implemented this algorithm in the context of mean estimation (Appendix C).

4 Considerations in implementation

4.1 Considerations when constructing the grid over the model space

By Theorem 1, $r_{\sup}(d^*_\ell,\Gamma_\ell)\nearrow\min_{d\in\mathcal{D}}r_{\sup}(d,\Gamma)$ whenever Conditions 1–3 hold and the increasing sequence $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$ is such that $\bigcup_{\ell=1}^\infty\mathcal{M}_\ell$ is dense in $\mathcal{M}$. Though this guarantee holds for all such sequences $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$, in practice, judiciously choosing this sequence of grids of distributions can lead to faster convergence. In particular, it is desirable that the least favorable prior in $\Gamma_\ell$ put mass on some of the distributions in $\mathcal{M}_\ell\backslash\mathcal{M}_{\ell-1}$ since, if this is not the case, then $d^*_\ell$ will be the same as $d^*_{\ell-1}$. While we may try to arrange for this to occur by adding many new points when enlarging $\mathcal{M}_{\ell-1}$ to $\mathcal{M}_\ell$, these points are unlikely to actually modify the least favorable prior unless they are carefully chosen.

To better address this issue, we propose to add grid points using a Markov chain Monte Carlo (MCMC) method. Our intuition is that, given an estimator $d$, the maximal Bayes risk is likely to increase significantly if we add distributions that (i) have a high risk for $d$, and (ii) are consistent with prior information, so that there exists some prior under which these distributions lie in a high-probability region. We propose to use the MCMC algorithm to bias the selection of distributions in favor of those with the above characteristics. Let $\tau:\mathcal{M}\rightarrow[0,\infty)$ denote a function such that $\tau(P)>\tau(P')$ if $P$ is more consistent with prior information than $P'$. For example, given a prior mean $\mu$ of some real-valued summary $\Psi(P)$ of $P$ and an interval $I$ that contains $\Psi(P)$ with prior probability at least 95%, we may choose $\tau:P\mapsto\phi(\Psi(P))$, where $\phi$ is the density of a normal distribution that has mean $\mu$ and places 95% of its probability mass in $I$. We call $\tau$ a pseudo-prior. Then, with the current estimator being $d$, we wish to select distributions $P$ for which $R(d,P)\tau(P)$ is large. We may use the Metropolis-Hastings-Green algorithm (Metropolis et al., 1953; Hastings, 1970; Green, 1995) to draw samples from a density proportional to $P\mapsto R(d,P)\tau(P)$. We then let $\mathcal{M}_\ell$ be equal to the union of $\mathcal{M}_{\ell-1}$ and the set containing all unique distributions in this sample.
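As a small illustration of the normal pseudo-prior described above, the following sketch (ours; Psi, mu, and the interval are placeholders, and the interval is assumed symmetric about mu) constructs $\tau$ by matching the 95% mass requirement.

from scipy.stats import norm

def make_pseudo_prior(Psi, mu, interval):
    # Normal density phi with mean mu placing 95% of its mass in `interval`
    # (assumed symmetric about mu), applied to the summary Psi(P).
    half_width = (interval[1] - interval[0]) / 2.0
    sigma = half_width / norm.ppf(0.975)    # half-width = 1.96 * sigma
    return lambda P: norm.pdf(Psi(P), loc=mu, scale=sigma)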

Details of the proposed scheme are provided in Algorithm 4. To use this algorithm, we rely on it being possible to define a sequence of parametric models $\{\tilde\Omega_\ell\}_{\ell=1}^\infty$ such that $\tilde{\mathcal{M}}:=\cup_{\ell=1}^\infty\tilde\Omega_\ell$ is dense in $\mathcal{M}$; this is possible in many interesting examples (see, e.g., Chen, 2007). When combined with the separability of $\mathcal{M}$, this condition enables the definition of an increasing sequence of grids of distributions $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$ such that, for each $\ell$, $\mathcal{M}_\ell\subseteq\tilde{\mathcal{M}}$.

Algorithm 4 MCMC algorithm to construct $\mathcal{M}_\ell$
1: Input: previous grid $\mathcal{M}_{\ell-1}$, current estimator $d^*_{\ell-1}$ and number $T$ of iterations. We define $\mathcal{M}_0:=\emptyset$. An initial estimator $d^*_0$ must be available if $\ell=1$.
2: Initialize $P_{(0)}\in\tilde{\mathcal{M}}$.
3: for $t=1,2,\ldots,T$ do
4:     Propose a distribution $P'\in\tilde{\mathcal{M}}$ from $P_{(t-1)}$
5:     Calculate the MCMC acceptance probability $p_{\mathrm{accept}}$ of $P'$ for the target density $P\mapsto R(d^*_{\ell-1},P)\tau(P)$
6:     With probability $p_{\mathrm{accept}}$, accept $P'$ and set $P_{(t)}\leftarrow P'$
7:     if $P'$ is not accepted then
8:         $P_{(t)}\leftarrow P_{(t-1)}$
9: $\mathcal{M}_\ell\leftarrow$ unique elements of the multiset $\mathcal{M}_{\ell-1}\bigcup\{P_{(1)},P_{(2)},\ldots,P_{(T)}\}$
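For concreteness, a minimal random-walk Metropolis-Hastings sketch of Algorithm 4 follows. It is ours rather than the paper's; it works on the parameter $\theta$ of a parametric family $\{P_\theta\}\subseteq\tilde{\mathcal{M}}$ and assumes a symmetric proposal, a strictly positive target value at the starting point, and a user-supplied target(theta) returning a Monte Carlo approximation of $R(d^*_{\ell-1},P_\theta)\tau(P_\theta)$.

import numpy as np

def mcmc_grid(prev_grid, target, propose, theta0, T=500, rng=None):
    # Random-walk Metropolis targeting a density proportional to target(theta).
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    f = target(theta)                      # assumed > 0 at the start
    draws = []
    for _ in range(T):
        theta_prop = propose(theta, rng)
        f_prop = target(theta_prop)
        if rng.uniform() < min(1.0, f_prop / f):   # symmetric-proposal ratio
            theta, f = theta_prop, f_prop
        draws.append(tuple(theta))
    return prev_grid | set(draws)          # new grid: union of unique points

Here prev_grid is a set of parameter tuples indexing the distributions in $\mathcal{M}_{\ell-1}$.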

The following theorem on distributional convergence follows from the corresponding result for the Metropolis-Hastings-Green algorithm (see Sections 3.2 and 3.3 of Green, 1995).

Theorem 4 (Validity of MCMC algorithm (Algorithm 4)).

Suppose that $P\mapsto R(d^*_{\ell-1},P)\tau(P)$ is bounded and integrable with respect to some measure $\mu$ on $\tilde{\mathcal{M}}$, and let $\mathscr{L}$ denote the probability law on $\tilde{\mathcal{M}}$ whose density function with respect to $\mu$ is proportional to this function. Suppose that the MCMC is constructed such that the Markov chain is irreducible and aperiodic. Then, $P_{(t)}$ converges weakly to $\mathscr{L}$ as $t\rightarrow\infty$.

Therefore, if $\mathscr{L}$ corresponds to a continuous distribution with nonzero density over the parameter space of $\tilde{\mathcal{M}}$, then Theorem 4 implies that $\bigcup_{\ell=1}^\infty\mathcal{M}_\ell$ is dense in $\mathcal{M}$, as required by Algorithm 1.

Implementing Algorithm 4 relies on the user making several decisions. These decisions include the choice of the pseudo-prior $\tau$ and the technique used to approximate the risk $R(d,P)$ to a reasonable accuracy. Fortunately, regardless of the decisions made, Theorem 1 suggests that $r_{\sup}(d^*_\ell,\Gamma_\ell)\nearrow\min_{d\in\mathcal{D}}r_{\sup}(d,\Gamma)$ for a wide range of sequences $\{\mathcal{M}_\ell\}_{\ell=1}^\infty$. Indeed, all that the theorem requires of this sequence is that the grid $\mathcal{M}_\ell$ become arbitrarily fine as $\ell$ increases. Though the particular decisions made are not important when $\ell$ is large, we still comment briefly on the decisions we made in our experiments. First, we have found it effective to approximate $R(d,P)$ via a large number of Monte Carlo draws. Second, in a variety of settings, we have also identified, via numerical experiments, candidate pseudo-priors that balance high risk against consistency with prior information (see Sections 5.1 and 5.2 for details).

4.2 Considerations when choosing the space of estimators

It is desirable to consider a rich space $\mathcal{D}_0$ of estimators in order to obtain an estimator with low maximal Bayes risk, and thus good general performance. However, to make numerically constructing these estimators computationally feasible, we usually have to consider a restricted space $\mathcal{D}$ of estimators. This approximation is justified because, if estimators in $\mathcal{D}$ can approximate the $\Gamma$-minimax estimator in $\mathcal{D}_0$ well, then we expect the resulting excess maximal Bayes risk to be small.

Feedforward neural networks (or neural networks for short) are natural options for the space of estimators because of their universal approximation property (e.g., Hornik, 1991; Csáji, 2001; Hanin and Sellke, 2017; Kidger and Lyons, 2020). However, training commonly used neural networks can be computationally intensive. Moreover, a space of neural networks is typically nonconvex, and hence it may be difficult to find a global minimizer of the maximal Bayes risk even if the risk is convex in the estimator. Therefore, the learned estimator might not perform well.

To help overcome this challenge, we advocate for utilizing available statistical knowledge when designing the space of estimators. We call estimators that take this form statistical knowledge networks. In particular, if a simple estimator is already available, we propose to use neural networks with such an estimator as a node connected to the output node. An example of such an architecture is presented in Fig. 2. In this sample architecture, each node is an activation function such as the sigmoid or the rectified linear unit (ReLU) (Glorot et al.,, 2011) function applied to an affine transformation of the vector containing the ancestors of the node. The only exception is the output node, which is again an affine transformation of its ancestors but uses the identity activation function. When training the neural network, we may initialize the affine transformation in the output layer to only give weight to the simple estimator. Under this approach, the space of estimators is a set of perturbations of an existing simple estimator. Although we may still face the challenge of nonconvexity and local optimality, we can at least expect to improve the initial simple estimator.

Refer to caption
Figure 2: Example of neural network estimator architecture utilizing an existing estimator. The arrows from the input nodes to the existing estimator are omitted from this graph.
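To illustrate, the following is a minimal PyTorch sketch of such a statistical knowledge network; the class name, layer sizes, and the simple_estimator interface are our own illustrative choices, not the paper's implementation.

```python
# A minimal sketch of a statistical knowledge network in the spirit of Fig. 2;
# simple_estimator is any callable mapping a batch of datasets to estimates.
import torch
import torch.nn as nn

class StatisticalKnowledgeNetwork(nn.Module):
    def __init__(self, n_inputs, simple_estimator, width=50):
        super().__init__()
        self.simple_estimator = simple_estimator
        self.hidden = nn.Sequential(
            nn.Linear(n_inputs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        # The output layer sees the hidden features plus the estimator node.
        self.out = nn.Linear(width + 1, 1)
        # Initialize so the network returns the simple estimator exactly:
        # zero weight on hidden features, unit weight on the estimator node.
        with torch.no_grad():
            self.out.weight.zero_()
            self.out.weight[0, -1] = 1.0
            self.out.bias.zero_()

    def forward(self, x):                            # x: (batch, n_inputs)
        e = self.simple_estimator(x).unsqueeze(-1)   # (batch, 1)
        h = self.hidden(x)                           # (batch, width)
        return self.out(torch.cat([h, e], dim=-1)).squeeze(-1)
```

For example, with simple_estimator = lambda x: x.mean(dim=-1), the network returns exactly the sample mean at initialization, and training then perturbs this initial estimator.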

In the simulation we describe in Appendix C, we compared the empirical performance of several spaces of estimators. This simulation concerns the simple problem of estimating the mean of a true distribution whose support has known bounds (Example 1), and the existing simple estimator we use in the statistical knowledge network is the sample mean. Fig. 3 presents the trajectory of estimated Bayes risks. As shown in subfigures (b)–(d), using the statistical knowledge network, the estimator is almost Γ\Gamma-minimax after a few iterations; on the other hand, it took about 1000 iterations for the feedforward neural network to reach an approximately Γ\Gamma-minimax estimator. Therefore, in this simple problem where the true Γ\Gamma-minimax estimator is a shifted and scaled sample mean, statistical knowledge substantially reduced the number of iterations required to obtain an approximately Γ\Gamma-minimax estimator. For more complicated problems, we expect statistical knowledge to further help improve the performance of the computed estimator.

Refer to caption
Figure 3: Estimated Bayes risks of the estimator over iterations when computing a Γ1\Gamma_{1}-minimax estimator. The lines are the current Bayes risks (y-axis) over iterations (x-axis) (unbiased estimates with 50 Monte Carlo runs for Algorithm 6; exact values for Algorithm 3). The solid lines are the Bayes risks after an update in the estimator to decrease the Bayes risk. The dashed lines are the Bayes risks after an update in the prior to increase the Bayes risk. The two horizontal lines are the Bayes risk of the sample mean (dashed) and dd^{*} (solid), respectively, for π\pi^{*}. For ease of visualization, in subfigures (a) and (b), the Bayes risks are plotted every 50 iterations; in subfigures (c) and (d), the Bayes risks are plotted every 200 iterations; subfigure (d) contains the part in subfigure (c) after 500 iterations.

We note that we might overcome the challenge of nonconvexity and local optimality by using an extreme learning machine (ELM) (Huang et al., 2006b, ) to parameterize the estimator. ELMs are neural networks for which the weights in hidden nodes are randomly generated and held fixed, and only the weights in the output layer are trained. Thus, the space of ELMs with a fixed architecture and fixed hidden layer weights is convex. Like traditional neural networks, ELMs have the universal approximation property (Huang et al., 2006a, ). In addition, Corollary 1 may be applied to an ELM so that the Γ\Gamma_{\ell}-minimax estimator may converge to the Γ\Gamma-minimax estimator. As with traditional neural networks, we may incorporate knowledge of existing statistical estimators into an ELM.
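A minimal sketch of such a parameterization, under the same caveats as above (the class name and sizes are ours):

```python
# A minimal sketch of an ELM: random, frozen hidden weights and a trainable
# output layer, giving a convex parameterization of the estimator space.
import torch
import torch.nn as nn

class ELMEstimator(nn.Module):
    def __init__(self, n_inputs, width=200):
        super().__init__()
        self.hidden = nn.Linear(n_inputs, width)
        for p in self.hidden.parameters():
            p.requires_grad_(False)        # hidden weights stay at random init
        self.out = nn.Linear(width, 1)     # only these weights are trained

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x))).squeeze(-1)
```

Since the output is affine in the trainable output-layer weights, the Bayes risk is convex in those weights whenever it is convex in the estimator itself, which is what makes this space attractive here.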

We finally remark that, besides the computational cost of constructing (i.e., learning) a Γ\Gamma-minimax estimator, another important factor to consider when choosing 𝒟{\mathcal{D}} is the computational cost of evaluating the learned estimator at the observed dataset. This is another reason for our choosing neural networks or ELMs as the space of estimators. Indeed, existing software packages (e.g., Paszke et al.,, 2019) make it easy to leverage graphics processing units to efficiently evaluate the output of neural networks for any given input. Therefore, if the existing estimator being used is not too difficult to compute, then estimators parameterized using architectures similar to that displayed in Figure 2 can be evaluated efficiently in practice. This efficiency may be especially important in settings where the estimator will be applied to many datasets, so that the cost of learning the estimator is amortized and the main computational expense is evaluating the learned estimator.

5 Simulations and data analyses

We illustrate our methods in Examples 1–3. A toy example of Example 1 is presented in Appendix C. We focus on the more complex Examples 2 and 3 in this section.

5.1 Prediction of the expected number of new categories

We apply our proposed method to Example 2. In the simulation, we set the true population to be an infinite population with the same categories and same proportions as the sample studied in Miller and Wiegert, (1989), which consists of 1088 observations in 188 categories. This setting is the same as the simulation setting in Shen et al., (2003). We set the sample size to be n=100n=100 and the size of the new sample to be m=200m=200. In this setting, the expected number of new categories in the new sample, unconditional on the observed sample, namely Φ(P0):=𝔼P0[Ψ(P0)(𝐗)]\Phi(P_{0}):={\mathbb{E}}_{P_{0}}[\Psi(P_{0})(\mathbf{X}^{*})], can be analytically computed and equals 48.0248.02. We note that this quantity can also be computed via simulation, as sketched below: (i) sample nn and mm individuals with replacement from the dataset in Miller and Wiegert, (1989), (ii) count the number of new categories in the second sample, and (iii) repeat steps (i) and (ii) many times and compute the average.
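Below is a minimal sketch of that simulation, assuming labels holds the 1088 category labels from the Miller and Wiegert, (1989) sample; the function name and defaults are illustrative.

```python
# A minimal sketch of steps (i)-(iii); `labels` is assumed to be the array of
# 1088 category labels from the Miller and Wiegert (1989) sample.
import numpy as np

def expected_new_categories(labels, n=100, m=200, reps=20000, seed=0):
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(reps):
        first = rng.choice(labels, size=n, replace=True)    # observed sample
        second = rng.choice(labels, size=m, replace=True)   # new sample
        counts.append(len(np.setdiff1d(second, first)))     # new categories
    return float(np.mean(counts))  # should be close to the analytic 48.02
```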

It is well known that this prediction problem is difficult when m>nm>n, and we run this simulation to investigate the potential gain from leveraging prior information by computing a Gamma-minimax estimator for such difficult or even ill-posed problems. We consider three sets of prior information:

  1. strongly informative: prior mean of Φ(P)\Phi(P) in [45,50][45,50], 95%\geq 95\% prior probability that Φ(P)\Phi(P) lies in [40,55][40,55];

  2. weakly informative: prior mean of Φ(P)\Phi(P) in [40,55][40,55], 95%\geq 95\% prior probability that Φ(P)\Phi(P) lies in [30,65][30,65]; and

  3. almost noninformative: prior mean of Φ(P)\Phi(P) in [35,60][35,60], 95%\geq 95\% prior probability that Φ(P)\Phi(P) lies in [20,75][20,75].

We note that a traditional Bayesian approach would require specifying a prior on {\mathcal{M}}, including the total number of categories and the proportion of each category, which may be difficult in practice.

We check the plausibility of Condition 3 in this context. We take the strongly informative prior information as an example. Take Ω\Omega_{\ell} to be the collection of multinomial distributions with at most \ell categories. It is obvious that =1Ω=\bigcup_{\ell=1}^{\infty}\Omega_{\ell}={\mathcal{M}}. Let d𝒟d\in{\mathcal{D}} be fixed and πΓ~\pi\in\tilde{\Gamma}_{\ell} be a fixed prior with finite support, that is, π=j=1Jqjδ(Qj)\pi=\sum_{j=1}^{J}q_{j}\delta(Q_{j}) where δ()\delta(\cdot) denotes the point mass distribution, QjΩQ_{j}\in\Omega_{\ell}, qj>0q_{j}>0 and j=1Jqj=1\sum_{j=1}^{J}q_{j}=1. Let ϵ>0\epsilon>0 be an arbitrarily small number such that j=1JqjΦ(Qj)50ϵ\sum_{j=1}^{J}q_{j}\Phi(Q_{j})\leq 50-\epsilon and j=1JqjΦ(Qj)45+ϵ\sum_{j=1}^{J}q_{j}\Phi(Q_{j})\geq 45+\epsilon. Since =1\bigcup_{\ell=1}^{\infty}{\mathcal{M}}_{\ell} is dense in {\mathcal{M}} and Φ\Phi is continuous, there exists a sufficiently large ii such that, for every distribution QjQ_{j}, there exists PjiΩP_{j}\in{\mathcal{M}}_{i}\cap\Omega_{\ell} satisfying the following:

  • |Φ(Pj)Φ(Qj)|ϵ|\Phi(P_{j})-\Phi(Q_{j})|\leq\epsilon;

  • if Φ(Qj)[40,55]\Phi(Q_{j})\in[40,55], then Φ(Pj)[40,55]\Phi(P_{j})\in[40,55];

  • |R(d,Pj)R(d,Qj)|ϵ|R(d,P_{j})-R(d,Q_{j})|\leq\epsilon.

Take πi\pi_{i} to be j=1Jqjδ(Pj)\sum_{j=1}^{J}q_{j}\delta(P_{j}). Then it is easy to verify that |j=1JqjΦ(Pj)j=1JqjΦ(Qj)|ϵ|\sum_{j=1}^{J}q_{j}\Phi(P_{j})-\sum_{j=1}^{J}q_{j}\Phi(Q_{j})|\leq\epsilon and thus j=1JqjΦ(Pj)[45,50]\sum_{j=1}^{J}q_{j}\Phi(P_{j})\in[45,50]; moreover, Φ(Qj)[40,55]\Phi(Q_{j})\in[40,55] implies that Φ(Pj)[40,55]\Phi(P_{j})\in[40,55] and therefore j=1Jqj𝟙(Φ(Pj)[40,55])j=1Jqj𝟙(Φ(Qj)[40,55])95%\sum_{j=1}^{J}q_{j}{\mathbbm{1}}(\Phi(P_{j})\in[40,55])\geq\sum_{j=1}^{J}q_{j}{\mathbbm{1}}(\Phi(Q_{j})\in[40,55])\geq 95\%. Thus, πiΓ~i\pi_{i}\in\tilde{\Gamma}_{i\mid\ell}. Moreover, |r(d,π)r(d,πi)|ϵ|r(d,\pi)-r(d,\pi_{i})|\leq\epsilon. Therefore, r(d,πi)r(d,π)r(d,\pi_{i})\rightarrow r(d,\pi) as ii\rightarrow\infty and Condition 3 holds.

We design the architecture of the neural network estimator as in Fig. 4. We choose two existing estimators (referred to as the OSW and SCL estimators, respectively) proposed by Orlitsky et al., (2016) and Shen et al., (2003) as human knowledge inputs to the architecture. We use the ReLU activation function. There are 50 hidden nodes in the first hidden layer. We initialize the neural network that we train to output the average of these two existing estimators.

Refer to caption
Figure 4: Architecture of the neural network estimator of the expected number of new categories. XkX_{k}: number of categories with kk observations; OSW: the estimator proposed in Orlitsky et al., (2016); SCL: the estimator proposed in Shen et al., (2003). The arrows from data (X1,,Xn)(X_{1},\ldots,X_{n}) to the OSW and SCL estimators are omitted from this graph.

We use Algorithm 4 to construct {\mathcal{M}}_{\ell}. There are 2000 grid points in 1{\mathcal{M}}_{1}, and we add 1000 grid points each time we enlarge the grid. When generating 1{\mathcal{M}}_{1}, we chose the starting point to be a distribution P(0)P_{(0)} with 146 categories and Φ(P(0))=49.9\Phi(P_{(0)})=49.9. The choice of this starting point P(0)P_{(0)} was quite arbitrary. We first generated a sample from P0P_{0} and treated it as data from a pilot study. We then came up with a distribution P(0)P_{(0)} such that five random samples generated from P(0)P_{(0)} all appear qualitatively similar to the pilot data. In practice, this starting point can be chosen based on prior knowledge. Our chosen grid sizes for Algorithm 4 were also somewhat arbitrary. For 1{\mathcal{M}}_{1}, the generated distributions P(t)P_{(t)} appear similar for all tt, and thus the initial grid size 2000 and the increment size 1000 appeared sufficient. Smaller grid sizes would simply lead to more iterations in Algorithm 1, which effectively increases the grid size. We selected the log pseudo-prior to be a weighted sum of two log-density functions: (i) that of a normal distribution on Φ(P)\Phi(P) whose mean is the midpoint of the interval constraint on the prior mean of Φ(P)\Phi(P) and whose central 95% probability interval equals the interval required to hold at least 95% prior probability, and (ii) that of a negative-binomial distribution on the total number of categories with success probability 0.9950.995 and 22 failures until the Bernoulli trials are stopped, so that the mode and the variance are approximately 200200 and 8×1048\times 10^{4}, respectively. These log-densities are given weights 30 and 10, respectively. We selected the weights based on the empirical observation that distributions with only a few categories tend to have high risks, but these distributions are relatively inconsistent with prior information and may well be given almost negligible probability weight in a computed least favorable prior, thus contributing little to computing a Γ\Gamma-minimax estimator. We chose the aforementioned weights so that Algorithm 4 can explore a fairly large range of distributions and does not generate too many distributions with too few categories.
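For concreteness, the following is a minimal sketch of this log pseudo-prior for the strongly informative setting, assuming the value of Φ(P)\Phi(P) and the number of categories K are available for a candidate grid point; the function name is ours and the parameters follow the description above.

```python
# A minimal sketch of the log pseudo-prior in the strongly informative setting.
from scipy.stats import norm, nbinom

def log_pseudo_prior(phi_P, K):
    # Normal on Phi(P): mean 47.5 (midpoint of [45, 50]) and sd chosen so the
    # central 95% interval is [40, 55], i.e., sd = 7.5 / 1.96.
    log_density_phi = norm(loc=47.5, scale=7.5 / 1.96).logpdf(phi_P)
    # Negative binomial on K: 2 "failures" with success probability 0.995.
    # SciPy's nbinom counts the complementary events, so this is nbinom(2, 0.005),
    # whose mode is about 200 and whose variance is about 8e4, as in the text.
    log_density_K = nbinom(2, 0.005).logpmf(K)
    return 30.0 * log_density_phi + 10.0 * log_density_K
```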

We use Algorithm 6 with learning rate η=0.005\eta=0.005 and batch size J=30J=30 to compute Γ\Gamma_{\ell}-minimax estimators. The number of iterations is 4,000 for Γ1\Gamma_{1} and 200 for Γ\Gamma_{\ell} (>1\ell>1). The stopping criterion in Algorithm 1 is that the estimated maximal Bayes risk, computed with 2000 Monte Carlo runs, increases neither by more than 2% in relative terms nor by more than 0.0001 in absolute terms. We chose the aforementioned tuning parameters based on the prior belief that at least one of the OSW and SCL estimators should perform reasonably well, but the performance of SGDmax (Algorithm 6) and Algorithm 4 might be sensitive to tuning parameters. Thus, the network we used is neither deep nor wide. We chose a moderately small learning rate and a large number of iterations for SGDmax. Our chosen learning rate and chosen number of iterations led to a trajectory of estimated Bayes risks that approximately reached a plateau with small fluctuations, suggesting that the obtained estimator is approximately Γ1\Gamma_{1}-minimax (see Fig. 5). In practice, such trajectory plots may help tune the learning rate and the number of iterations.

We also run additional simulations to investigate the sensitivity of our methods to tuning parameter selections. We present these simulations in Appendix D. The results suggest that our methods may be insensitive to tuning parameter selections.

We examine the performance of the OSW estimator, the SCL estimator and our trained Γ\Gamma-minimax estimator by comparing their risks under the specified data-generating mechanism, computed with 20000 Monte Carlo runs. We also compare their Bayes risks, again computed with 20000 Monte Carlo runs, under the prior output by Algorithm 6 for the last and finest grid in the computation. We present the results in Table 1. In this simulation experiment, our Γ\Gamma-minimax estimator substantially reduces the risk compared to the two existing estimators. The Γ\Gamma-minimax estimator also has the lowest Bayes risk in all cases. Therefore, incorporating fairly informative prior knowledge into the estimator may lead to a significant improvement in predicting the number of new categories. We expect similarly substantial improvement for difficult or even ill-posed statistical problems by incorporating prior knowledge.

Table 1: Risks and Bayes risks of estimators. R(d,P0)R(d,P_{0}): risk of the estimator under the true data-generating mechanism P0P_{0}. r(d,π^)r(d,\hat{\pi}^{*}): Bayes risk under prior π^\hat{\pi}^{*}, the computed prior from Algorithm 6 in the last and finest grid in the computation.
Strength of prior Estimator R(d,P0)R(d,P_{0}) r(d,π^)r(d,\hat{\pi}^{*})
strong OSW 265 303
SCL 146 159
Γ\Gamma-minimax 18 35
weak OSW 265 328
SCL 146 184
Γ\Gamma-minimax 17 61
almost none OSW 265 293
SCL 146 124
Γ\Gamma-minimax 24 81

Fig. 5 presents unbiased estimates of the Bayes risk over iterations when computing a Γ1\Gamma_{1}-minimax estimator. The Bayes risks appear to have a decreasing trend and to approach a limiting value. Over iterations, the Bayes risks decrease by a considerable amount. The limiting value of the Bayes risks appears to be slightly higher than the risk of the computed Γ\Gamma-minimax estimator under P0P_{0}. This might indicate that P0P_{0} is not an extreme distribution that yields a high risk.

Refer to caption
Figure 5: Estimated Bayes risks of the estimator over iterations when computing a Γ1\Gamma_{1}-minimax estimator. The lines are unbiased estimates of the current Bayes risks (y-axis) with 30 Monte Carlo runs over iterations (x-axis). The two dashed horizontal lines are the risks of the OSW (upper) and the SCL (lower) estimators, respectively, under P0P_{0} in the simulation. The solid horizontal line is the risk of the computed Γ\Gamma-minimax estimator under P0P_{0}. For clarity of visualization, the estimated Bayes risks are plotted every 50 iterations.

We also apply the above methods to analyze the dataset studied in Miller and Wiegert, (1989), which is used as the true population in the simulation. Based on this sample consisting of n=1088n=1088 observations in 188 categories, we use various methods to predict the number of new categories that would be observed if another m=2000m=2000 observations were to be collected. We train Gamma-minimax estimators using exactly the same tuning parameters as those in the above simulation, except that the starting point in Algorithm 4 has more categories. The predictions of all methods are presented in Table 2. The Γ\Gamma-minimax estimator outputs a prediction more similar to that of the SCL estimator. This similarity appears different from our observation in the simulation, but can be explained by the fact that having more observations (n=1088n=1088 vs n=100n=100; m=2000m=2000 vs m=200m=200) decreases the variance of the number of newly observed categories and thus lowers discrepancies between predictions from these methods. Since the SCL estimator outperforms the OSW estimator in the above simulation where this dataset is the true population, we expect the SCL estimator to achieve reasonably good performance in this application. Moreover, given that the Γ\Gamma-minimax estimators outperform the SCL estimator in the above simulation, we expect that 57 or 58 represents an improved prediction of the number of new categories as compared to the SCL prediction of 51 in the case where there is limited prior information available.

Table 2: Predicted number of new categories (rounded to the nearest integer) in a new sample with size 2000 based on the sample with size 1088 studied in Miller and Wiegert, (1989). The strength of prior information in Γ\Gamma-minimax estimators is shown in brackets.
Estimator Predicted # new categories
OSW         72
SCL         51
Γ\Gamma-minimax (strong)         57
Γ\Gamma-minimax (weak)         57
Γ\Gamma-minimax (almost none)         58

The time needed to compute an approximate Γ\Gamma-minimax estimator was about five to seven hours on an AWS EC2 instance (Amazon,, 2019) with at least 4 vCPUs and at least 8 GiB of memory, depending on the number of times the grid was enlarged. As shown in Fig. 5, far fewer iterations are needed for SGDmax to output a good approximation of a Γ1\Gamma_{1}-minimax estimator, which is itself quite close to Γ\Gamma-minimax. Therefore, with suitably less conservative tuning parameters or more adaptive minimax problem solvers, the computation time might decrease drastically. Moreover, the time needed to evaluate the computed Γ\Gamma-minimax estimator at any sample is negligible.

5.2 Estimation of the entropy

We also apply our method to estimate the entropy of a multinomial distribution (Example 3). The data-generating mechanism is the same as that described in Example 2, and the estimand of interest is Shannon entropy (Shannon,, 1948), that is, Ψ(P0)=k=1Kpklogpk\Psi(P_{0})=-\sum_{k=1}^{K}p_{k}\log p_{k}. In the simulation, we choose the same true population and the same sample size n=100n=100 as in Section 5.1. The true entropy Ψ(P0)\Psi(P_{0}) is 4.574.57. As a reference, the entropy of the uniform distribution with the same number of categories—which corresponds to the maximum entropy of multinomial distributions with the same total number of categories—is 5.245.24.
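As a quick sanity check of these values, the entropy and its uniform upper bound can be computed as follows, assuming p holds the 188 category proportions of the true population; this is only a verification aid, not part of the estimation procedure.

```python
# A quick check of the entropy values quoted above.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # conventionally, 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)))

# The uniform distribution attains the maximum over 188 categories:
print(np.log(188))                      # approximately 5.24
```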

Jiao et al., (2015) developed a minimax rate optimal estimator of the Shannon entropy, and we run this simulation to investigate the potential gain of computing a Gamma-minimax estimator in well-posed problems with satisfactory solutions. As in Section 5.1, we consider three sets of prior information:

  1. Strongly informative: Prior mean of Ψ(P)\Psi(P) in [4.3,4.7][4.3,4.7], 95%\geq 95\% probability that Ψ(P)\Psi(P) lies in [4,5][4,5];

  2. Weakly informative: Prior mean of Ψ(P)\Psi(P) in [4,5][4,5], 95%\geq 95\% probability that Ψ(P)\Psi(P) lies in [3.5,5.5][3.5,5.5];

  3. Almost noninformative: Prior mean of Ψ(P)\Psi(P) in [3.7,5.3][3.7,5.3], 95%\geq 95\% probability that Ψ(P)\Psi(P) lies in [3,6][3,6].

The architecture of our neural network estimator is almost identical to that in Section 5.1 except that the existing estimator being used is the one proposed in Jiao et al., (2015) (referred to as the JVHW estimator), and we initialize the network to return the JVHW estimator. We use Algorithm 4 to construct {\mathcal{M}}_{\ell} and Algorithm 6 to compute a Γ\Gamma_{\ell}-minimax estimator. The tuning parameters in the algorithms are identical to those used in Section 5.1 except that, in Algorithm 6, (i) the learning rate is η=0.001\eta=0.001, and (ii) the number of iterations is 6,000 for Γ1\Gamma_{1}. We change these tuning parameters because the JVHW estimator is already minimax in terms of its convergence rate (Jiao et al.,, 2015), and we may need to update the estimator in a more cautious manner in Algorithm 6 to obtain any possible improvement. The trajectories of the estimated Bayes risks (Fig. 6) all appear to approximately reach a plateau, suggesting that the obtained estimator is approximately Γ1\Gamma_{1}-minimax and that our choice of a smaller learning rate and a larger number of iterations is reasonable. Because of the additional complexity of the JVHW estimator, we ran our simulations on an AWS EC2 instance (Amazon,, 2019) with 4 vCPUs and 32 GiB of memory. The computation time was ten to seventeen hours, depending on the number of times the grid was enlarged. The longer computation time than that described in Section 5.1 is primarily due to more iterations in SGDmax and the additional complexity of the JVHW estimator.

We compare the risks of the JVHW estimator and our trained Γ\Gamma-minimax estimator under the specified data-generating mechanism, computed with 20000 Monte Carlo runs. We also compare their Bayes risks, again computed with 20000 Monte Carlo runs, under the prior output by Algorithm 6 for the last and finest grid in the computation. The results are summarized in Table 3. In this simulation experiment, our Γ\Gamma-minimax estimator reduces the risk by a fair percentage compared with the JVHW estimator and achieves a lower Bayes risk under the computed prior. According to these simulation results, we conclude that incorporating informative prior knowledge into the estimator may result in some improvement in estimating entropy. Thus, for well-posed statistical problems with satisfactory solutions, we expect mild improvement at best, and little deterioration, from using a Gamma-minimax estimator.

Table 3: Risks and Bayes risks of estimators. R(d,P0)R(d,P_{0}): risk of the estimator under the true data-generating mechanism P0P_{0}. r(d,π^)r(d,\hat{\pi}^{*}): Bayes risk under prior π^\hat{\pi}^{*}, the computed prior from Algorithm 6 in the last and finest grid in the computation.
Strength of prior Estimator R(d,P0)R(d,P_{0}) r(d,π^)r(d,\hat{\pi}^{*})
strong JVHW 0.041 0.035
Γ\Gamma-minimax 0.036 0.021
weak JVHW 0.041 0.028
Γ\Gamma-minimax 0.018 0.024
almost none JVHW 0.041 0.031
Γ\Gamma-minimax 0.025 0.016

Fig. 6 presents unbiased estimates of the Bayes risk over iterations when computing a Γ1\Gamma_{1}-minimax estimator. With strongly informative prior information, the Bayes risks appear to fluctuate without an increasing or decreasing trend at the beginning and to decrease slowly after several thousand iterations. With weakly informative or almost no prior information, the Bayes risks also decrease slowly. A reason may be that the JVHW estimator is already minimax rate optimal (Jiao et al.,, 2015). The computed Γ\Gamma-minimax estimators also appear to be somewhat similar to the JVHW estimator: in the output layer of the three settings with different prior information, the coefficients for the JVHW estimator are 0.970.97, 0.900.90 and 0.890.89, respectively; the coefficients for the previous hidden layer are 0.170.17, 0.170.17 and 0.200.20, respectively; the intercepts are 0.060.06, 0.300.30 and 0.300.30, respectively.

Refer to caption
Figure 6: Estimated Bayes risks of the estimator over iterations when computing a Γ1\Gamma_{1}-minimax estimator. The lines are unbiased estimates of the current Bayes risks (y-axis) with 30 Monte Carlo runs over iterations (x-axis). The horizontal lines are the risks of the JVHW (dashed) and the computed Γ\Gamma-minimax (solid) estimators, respectively, under P0P_{0} in the simulation. For clarity of visualization, the estimated Bayes risks are plotted every 100 iterations.

We further use the above methods to estimate entropy based on the dataset used as the true population in the simulation. The tuning parameters of the Γ\Gamma-minimax estimators are exactly the same as those in the above simulation except that the starting point in Algorithm 4 has more categories. The estimates are presented in Table 4. All methods produce almost identical estimates. Because the sample size is more than ten times the sample size in the simulation and the JVHW estimator is minimax rate optimal (Jiao et al.,, 2015), we expect the JVHW estimator to have little room for improvement, which explains why the three Γ\Gamma-minimax estimators perform similarly to the JVHW estimator. In other words, Gamma-minimax estimators appear to maintain, if not improve, the performance of the original JVHW estimator.

Table 4: Estimated entropy based on the sample with size 1088 studied in Miller and Wiegert, (1989). The strength of prior information in Γ\Gamma-minimax estimators is shown in brackets.
Estimator Estimated entropy
JVHW            4.709
Γ\Gamma-minimax (strong)            4.709
Γ\Gamma-minimax (weak)            4.708
Γ\Gamma-minimax (almost none)            4.703

6 Discussion

We propose adversarial meta-learning algorithms to compute a Gamma-minimax estimator with theoretical guarantees under fairly general settings. These algorithms still leave room for improvement. As we discussed in Section 3.1, the stopping criterion we employ does not necessarily indicate that the maximal Bayes risk is close to the true minimax Bayes risk. In future work, it would be interesting to derive a better criterion that necessarily does indicate this near optimality. Our algorithms also require the user to choose increasingly fine approximating grids to the model space. Although we propose a heuristic algorithm for this procedure that performed well in our experiments, at this point, we have not provided optimality guarantees for this scheme. It may also be possible to improve our proposed algorithms to solve intermediate minimax problems in Section 3.1 by utilizing recent and ongoing advances from the machine learning literature that can be used to improve the training of generative adversarial networks.

We do not explicitly consider uncertainty quantification such as confidence intervals or credible intervals under a Gamma-minimax framework. Uncertainty quantification is important in practice since it provides more information than a point estimator and can be used for decision-making. In theory, our method may be directly applied if such a problem can be formulated into a Gamma-minimax problem. However, such a formulation remains unclear. The most challenging part is to identify a suitable risk function that correctly balances the level of uncertainty and the size of the output interval/region. Though the risk function used in Schafer and Stark, (2009) appears to provide one possible starting point, it is not clear how to extend this approach to nonparametric settings.

It is possible to allow the space of estimators 𝒟{\mathcal{D}} to grow as the grid {\mathcal{M}}_{\ell} is enlarged. For example, we may specify an increasing sequence of estimator spaces {𝒟}=1\{{\mathcal{D}}_{\ell}\}_{\ell=1}^{\infty} whose limit is dense in a general space 𝒟0{\mathcal{D}}_{0}; then, in Line 3 of Algorithm 1, we compute a Γ\Gamma_{\ell}-minimax estimator in 𝒟{\mathcal{D}}_{\ell}, that is, we replace 𝒟{\mathcal{D}} with 𝒟{\mathcal{D}}_{\ell}. This sequence of estimators might converge to a Γ\Gamma-minimax estimator in 𝒟0{\mathcal{D}}_{0}. One possible choice of 𝒟{\mathcal{D}}_{\ell} (>1\ell>1) in this approach is a space of statistical knowledge networks with the given estimator being the computed Γ1\Gamma_{\ell-1}-minimax estimator in 𝒟1{\mathcal{D}}_{\ell-1}. It is of future interest to investigate the properties of such an approach.

In conclusion, we propose adversarial meta-learning algorithms to compute a Gamma-minimax estimator under general models that can incorporate prior information in the form of generalized moment conditions. They can be useful when a parametric model is undesirable, semi-parametric efficiency theory does not apply, or we wish to utilize prior information to improve estimation.

Acknowledgements

Generous support was provided by Amazon through an AWS Machine Learning Research Award, the NIH under award number DP2-LM013340, and the NSF under award number DMS-2210216. The content is solely the responsibility of the authors and does not necessarily represent the official views of Amazon, the NIH, or the NSF.

Appendix A Two counterexamples of Condition 3

We provide two counterexamples to Condition 3 to illustrate that this condition fails only in pathological cases.

In the first counterexample, PR(d,P)P\mapsto R(d,P) is discontinuous: we set R(d,P)R(d,P^{*}) to be zero for a fixed PP^{*}\in{\mathcal{M}} and R(d,P)R(d,P) to be one for all other PP\in{\mathcal{M}}. If we choose the grid {\mathcal{M}}_{\ell} to be dense in {\mathcal{M}} but to never contain PP^{*}, then Condition 3 does not hold since rsup(d,Γ~)=1r_{\sup}(d,\tilde{\Gamma}_{\ell})=1 for sufficiently large \ell such that PΩP^{*}\in\Omega_{\ell} but rsup(d,Γ~i)=0r_{\sup}(d,\tilde{\Gamma}_{i\mid\ell})=0 for all ii and \ell. This issue can be resolved by choosing a continuous risk function.

In the second counterexample, {\mathcal{M}}_{\ell} does not contain distributions that are consistent with prior information. Suppose that Γ={πΠ:Φ(P)π(dP)=0}\Gamma=\{\pi\in\Pi:\int\Phi(P)\pi({\mathrm{d}}P)=0\} where Φ(P):=𝔼P[X2]\Phi(P):={\mathbb{E}}_{P}[X^{2}]. In other words, it is known that the true data-generating mechanism P0P_{0} must be the point mass at zero, and thus Γ\Gamma contains only the prior that is a point mass at P0P_{0}. If Φ(P)0\Phi(P)\neq 0 for every Pi=1iP\in\cup_{i=1}^{\infty}{\mathcal{M}}_{i}, then, even if =1\bigcup_{\ell=1}^{\infty}{\mathcal{M}}_{\ell} is dense in {\mathcal{M}}, Γ~i=\tilde{\Gamma}_{i\mid\ell}=\emptyset and thus Condition 3 does not hold. This issue can be resolved by rewriting the problem such that these hard constraints are incorporated into the specification of {\mathcal{M}} rather than Γ\Gamma.

Appendix B Additional gradient-based algorithms

If we can evaluate R(β,P)R(\beta,P) exactly for all β\beta\in\mathcal{H} and PP\in{\mathcal{M}}_{\ell}, then the GDmax algorithm (Algorithm 5) may be used. Note that Line 3 can be formulated as a linear program, which can always be solved in polynomial time with an interior point method (e.g., Jiang et al.,, 2020) and can often be solved efficiently with a simplex method (Spielman and Teng,, 2004); a concrete sketch of this linear program follows the listing below.

Algorithm 5 Gradient descent with max-oracle (GDmax) to compute a Γ\Gamma_{\ell}-minimax estimator
1:Initialize β(0)D\beta_{(0)}\in\mathbb{R}^{D}. Set learning rate η>0\eta>0 and max-oracle accuracy ζ>0\zeta>0.
2:for t=1,2,t=1,2,\ldots do
3:     Maximization: find π(t)Γ\pi_{(t)}\in\Gamma_{\ell} such that r(β(t1),π(t))maxπΓr(β(t1),π)ζr(\beta_{(t-1)},\pi_{(t)})\geq\max_{\pi\in\Gamma_{\ell}}r(\beta_{(t-1)},\pi)-\zeta
4:     Gradient descent: β(t)β(t1)ηβr(β,π(t))|β=β(t1)\beta_{(t)}\leftarrow\beta_{(t-1)}-\eta\nabla_{\beta}r(\beta,\pi_{(t)})|_{\beta=\beta_{(t-1)}}
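To make the maximization step concrete, the following is a minimal sketch of the Line 3 linear program using SciPy; the encoding of the generalized moment constraints as A_ub and b_ub, and all variable names, are our own illustrative choices.

```python
# A minimal sketch of the maximization over priors on the grid as a linear
# program; risks[l] approximates R(beta, P_l) for each grid point P_l.
import numpy as np
from scipy.optimize import linprog

def max_bayes_risk_prior(risks, A_ub=None, b_ub=None):
    """Find a prior on the grid maximizing the Bayes risk subject to
    generalized moment constraints and the simplex constraint."""
    m = len(risks)
    res = linprog(
        c=-np.asarray(risks),              # maximize <risks, pi>
        A_ub=A_ub, b_ub=b_ub,              # moment constraints, if any
        A_eq=np.ones((1, m)), b_eq=[1.0],  # weights sum to one
        bounds=[(0, None)] * m,            # nonnegative weights
        method="highs",
    )
    return res.x  # the (near) least favorable prior on the grid
```

For instance, the constraint that the prior mean of Φ(P)\Phi(P) lies in [45,50][45,50] contributes the row of values Φ(P1),,Φ(PΛ)\Phi(P_{1}),\ldots,\Phi(P_{\Lambda}) and its negation to A_ub with right-hand sides 50 and -45, and the 95% prior probability constraint contributes the negated indicator row with right-hand side -0.95.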

We have the following result on the validity of GDmax.

Theorem 5 (Validity of GDmax (Algorithm 5)).

Under conditions in Theorem 2, in Algorithm 5, with η=ϵ2/(L1L22)\eta=\epsilon^{2}/(L_{1}L_{2}^{2}) and ζ=ϵ2/(24L1)\zeta=\epsilon^{2}/(24L_{1}), β(t)\beta_{(t)} is an ϵ\epsilon-stationary point of βrsup(β,Γ)\beta\mapsto r_{\sup}(\beta,\Gamma_{\ell}) for t=O(L1L2Δ/ϵ4)t=O(L_{1}L_{2}\Delta/\epsilon^{4}), and is thus close to a local minimum of βrsup(β,Γ)\beta\mapsto r_{\sup}(\beta,\Gamma_{\ell}).

Therefore, we propose a variant (Algorithm 6) by replacing this line with Lines 3–4 so that ordinary linear program solvers can be directly applied. The following theorem justifies this variant.

Algorithm 6 Convenient variant of SGDmax (Algorithm 2) to compute a Γ\Gamma_{\ell}-minimax estimator
1:Initialize β(0)D\beta_{(0)}\in\mathbb{R}^{D}. Set learning rate η>0\eta>0 and batch sizes JJ, JJ^{\prime}.
2:for t=1,2,t=1,2,\ldots do
3:     Generate iid copies ξ1,,ξJ\xi_{1},\ldots,\xi_{J^{\prime}} of ξ\xi.
4:     Stochastic maximization: π(t)argmaxπΓ1Jj=1Jr^(β(t1),π,ξj)\pi_{(t)}\leftarrow\operatorname*{argmax}_{\pi\in\Gamma_{\ell}}\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\hat{r}(\beta_{(t-1)},\pi,\xi_{j}).
5:     Generate iid copies ξJ+1,,ξJ+J\xi_{J^{\prime}+1},\ldots,\xi_{J^{\prime}+J} of ξ\xi.
6:     Stochastic gradient descent: β(t)β(t1)ηJj=J+1J+Jβr^(β,π(t),ξj)|β=β(t1)\beta_{(t)}\leftarrow\beta_{(t-1)}-\frac{\eta}{J}\sum_{j=J^{\prime}+1}^{J^{\prime}+J}\nabla_{\beta}\hat{r}(\beta,\pi_{(t)},\xi_{j})|_{\beta=\beta_{(t-1)}}.
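The following is a minimal PyTorch sketch of one iteration of this variant, assuming a hypothetical risk_hat(beta_net, P, xi) that returns a differentiable unbiased estimate of the risk and a solve_lp helper like the linear program sketched above; the batch handling is simplified relative to the algorithm statement.

```python
# A minimal sketch of one iteration of the SGDmax variant; risk_hat, solve_lp,
# and draw_xi are hypothetical helpers.
import numpy as np
import torch

def sgdmax_step(beta_net, grid, risk_hat, solve_lp, draw_xi, eta, J, J_prime):
    # Stochastic maximization: batch-estimate the risk on the grid, solve the LP.
    with torch.no_grad():
        risks = [float(np.mean([risk_hat(beta_net, P, draw_xi()).item()
                                for _ in range(J_prime)])) for P in grid]
    pi = solve_lp(risks)  # (near) least favorable prior on the grid
    # Stochastic gradient descent on the estimated Bayes risk under pi.
    beta_net.zero_grad()
    batch = [draw_xi() for _ in range(J)]
    loss = sum(w * torch.stack([risk_hat(beta_net, P, xi) for xi in batch]).mean()
               for w, P in zip(pi, grid) if w > 1e-12)
    loss.backward()
    with torch.no_grad():
        for p in beta_net.parameters():
            if p.grad is not None:
                p -= eta * p.grad
    return pi
```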

The validity of this variant of SGDmax is given in Theorem 6 below.

Theorem 6 (Validity of convenient variant of SGDmax (Algorithm 6)).

Suppose that {ξr^(β,π,ξ):βD,πΓ}\{\xi\mapsto\hat{r}(\beta,\pi,\xi):\beta\in\mathbb{R}^{D},\pi\in\Gamma_{\ell}\} is a Ξ\Xi-Glivenko-Cantelli class (van der Vaart and Wellner,, 2000). Then, for any ζ>0\zeta>0, there exists a sufficiently large JJ^{\prime} such that

𝔼[r(β(t1),π(t))]maxπΓr(β(t1),π)ζ{\mathbb{E}}[r(\beta_{(t-1)},\pi_{(t)})]\geq\max_{\pi\in\Gamma_{\ell}}r(\beta_{(t-1)},\pi)-\zeta

for all tt, where the expectation is taken over π(t)\pi_{(t)} and β(t1)\beta_{(t-1)} is fixed. Therefore, with the chosen parameters in Theorem 2, we may choose a sufficiently large JJ^{\prime} so that β(t)\beta_{(t)} is an ϵ\epsilon-stationary point of βrsup(β,Γ)\beta\mapsto r_{\sup}(\beta,\Gamma_{\ell}) in expectation for t=O(L1(L22+σ2)Δ/ϵ4)t=O(L_{1}(L_{2}^{2}+\sigma^{2})\Delta/\epsilon^{4}) and is thus close to a local minimum of βrsup(β,Γ)\beta\mapsto r_{\sup}(\beta,\Gamma_{\ell}) with high probability.

We prove Theorem 6 by showing that maxπΓr(β(t1),π)𝔼[r(β(t1),π(t))]\max_{\pi\in\Gamma_{\ell}}r(\beta_{(t-1)},\pi)-{\mathbb{E}}[r(\beta_{(t-1)},\pi_{(t)})] converges to 0 as JJ^{\prime}\rightarrow\infty. The proof is essentially an application of empirical process theory to the study of an M-estimator.

Appendix C Additional simulation: mean estimation

In this appendix, we illustrate our proposed methods via simulation in a special case of Example 1, namely for estimating the mean of a distribution. We assume that {\mathcal{M}} consists of all probability distributions defined on the Borel σ\sigma-algebra on [0,1][0,1] and we observe 𝐗=(X1,X2,,Xn)\mathbf{X}=(X_{1},X_{2},\ldots,X_{n}), where X1,,XniidP0X_{1},\ldots,X_{n}\overset{\mathrm{iid}}{\sim}P_{0}\in{\mathcal{M}}. Here we take n=10n=10. The estimand is Ψ(P0)=xP0(dx)\Psi(P_{0})=\int x\,P_{0}({\mathrm{d}}x). We use the mean squared error risk introduced in Example 1. Suppose that we represent the prior information by Γ={πΠ:Ψ(P)π(dP)=0.3}\Gamma=\{\pi\in\Pi:\int\Psi(P)\,\pi({\mathrm{d}}P)=0.3\}, which corresponds to the set of prior distributions in Π\Pi that satisfy an equality constraint on the prior mean of Ψ(P)\Psi(P).

We apply our method to three spaces of estimators separately. The first space, 𝒟linear{\mathcal{D}}_{\mathrm{linear}}, is the set of affine transformations of the sample mean, that is, 𝒟linear={d:d(𝐗)=β0+β1i=1nXi/n,β0,β1}{\mathcal{D}}_{\mathrm{linear}}=\{d:d(\mathbf{X})=\beta_{0}+\beta_{1}\sum_{i=1}^{n}X_{i}/n,\beta_{0},\beta_{1}\in\mathbb{R}\}. As shown in Proposition 1 in Appendix E.5, there is an estimator dd^{*} in 𝒟linear{\mathcal{D}}_{\mathrm{linear}} that is Γ\Gamma-minimax in the space of all estimators that are square-integrable with respect to all PP\in{\mathcal{M}}, so we consider this simple space to better compare our computed estimator with that theoretical Γ\Gamma-minimax estimator. When computing a Γ\Gamma-minimax estimator in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}, we initialize the estimator to be the sample mean, that is, we let β0=0\beta_{0}=0 and β1=1\beta_{1}=1.

The second space, 𝒟skn{\mathcal{D}}_{\mathrm{skn}} (statistical knowledge network), is a set of neural networks designed based on statistical knowledge that includes the sample mean as an input. We consider this space to illustrate our proposal in Section 4.2. More precisely, we use the architecture in Fig. 7, which is similar to the deep set architecture (Zaheer et al.,, 2017; Maron et al.,, 2019) and is a permutation invariant neural network. We use such an architecture to account for the fact that the sample is iid. In this architecture, the sample mean node is used as an augmenting node to an ordinary deep set network and is combined with the output of that ordinary network in the fourth hidden layer to obtain the final output. Note that 𝒟skn𝒟linear{\mathcal{D}}_{\mathrm{skn}}\supset{\mathcal{D}}_{\mathrm{linear}}. When computing a Γ\Gamma-minimax estimator for this class, we also initialize the network to be exactly the sample mean, which is a reasonable choice given that the sample mean is known to be a sensible estimator. In this simulation experiment, we choose the dimensionality of nodes in each hidden layer in Fig. 7 as follows: each node in the first, second, third and fourth hidden layer represents a vector in 10\mathbb{R}^{10}, 5\mathbb{R}^{5}, 10\mathbb{R}^{10} and \mathbb{R}, respectively. We do not use larger architectures because usually the sample mean is already a good estimator, and we expect to obtain a useful estimator as a small perturbation of this estimator. We also use the ReLU as the activation function. We did not use ELMs in this and the following simulations because we found that neural networks perform well.

Refer to caption
Figure 7: Architecture of the permutation invariant neural network estimator of the mean in 𝒟skn{\mathcal{D}}_{\mathrm{skn}}. XiX_{i}: observation ii in the sample; \sum: the node that sums up all ancestor nodes. In the first two hidden layers, all input nodes are transformed by the same function. The arrows from the input nodes to the sample mean estimator are omitted from this graph. Each node in the hidden layers represents a vector.
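A minimal PyTorch sketch of this permutation invariant architecture follows; the class name is ours and the layer dimensions (10, 5, 10 and 1) follow the text.

```python
# A minimal sketch of the Fig. 7 architecture: a deep set network augmented
# with a sample-mean node that feeds into the output layer.
import torch
import torch.nn as nn

class DeepSetMeanEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        # phi acts on each observation separately (1st and 2nd hidden layers).
        self.phi = nn.Sequential(nn.Linear(1, 10), nn.ReLU(),
                                 nn.Linear(10, 5), nn.ReLU())
        # rho acts on the pooled representation (3rd and 4th hidden layers).
        self.rho = nn.Sequential(nn.Linear(5, 10), nn.ReLU(),
                                 nn.Linear(10, 1), nn.ReLU())
        self.out = nn.Linear(2, 1)  # combines rho output with the sample mean

    def forward(self, x):            # x: (batch, n) of observations in [0, 1]
        pooled = self.phi(x.unsqueeze(-1)).sum(dim=1)     # permutation invariant
        h = self.rho(pooled)                              # (batch, 1)
        mean = x.mean(dim=1, keepdim=True)                # sample-mean node
        return self.out(torch.cat([h, mean], dim=-1)).squeeze(-1)
```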

The third space, 𝒟nn{\mathcal{D}}_{\mathrm{nn}}, is a set of neural networks that do not utilize knowledge of the sample mean. We consider this space to illustrate our method without utilizing existing estimators. These estimators are also deep set networks with a similar architecture to 𝒟skn{\mathcal{D}}_{\mathrm{skn}} in Fig. 7. The main difference is that the explicit sample mean node and the fourth hidden layer are removed. When computing a Γ\Gamma-minimax estimator in 𝒟nn{\mathcal{D}}_{\mathrm{nn}}, we randomly initialize the network, unlike for 𝒟linear{\mathcal{D}}_{\mathrm{linear}} and 𝒟skn{\mathcal{D}}_{\mathrm{skn}}, in order not to input statistical knowledge. Because the ReLU activation function is used, 𝒟nn𝒟linear{\mathcal{D}}_{\mathrm{nn}}\supset{\mathcal{D}}_{\mathrm{linear}}, and so we do not expect optimizing over 𝒟nn{\mathcal{D}}_{\mathrm{nn}} to lead to a Γ\Gamma-minimax estimator with worse performance than those in 𝒟linear{\mathcal{D}}_{\mathrm{linear}} and 𝒟skn{\mathcal{D}}_{\mathrm{skn}}.

To construct the grid {\mathcal{M}}_{\ell} for this problem, we use a simpler method than Algorithm 4. As indicated by Lemma 6 in Appendix E.5, for estimators in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}, Bernoulli distributions tend to have high risks since all of their probability mass lies on the boundary of [0,1][0,1]; in addition, a prior π\pi^{*} for which dd^{*} is Bayes is a Beta prior over Bernoulli distributions. Therefore, we randomly generate 2000 Bernoulli distributions as grid points in 1{\mathcal{M}}_{1}. We also include two degenerate distributions in this grid, namely the distribution that places all of its mass at 0 and that which places all of its mass at 11. When constructing {\mathcal{M}}_{\ell} from 1{\mathcal{M}}_{\ell-1}, we still add in more complicated distributions to make the grid dense in the limit: we first randomly generate 500 discrete distributions whose support points are those already present in 1{\mathcal{M}}_{\ell-1}; then we randomly generate 10 new support points in [0,1][0,1] and 1000 distributions whose support points are the union of the new support points and the existing support points in 1{\mathcal{M}}_{\ell-1}. A sketch of this construction follows.
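In the sketch below, each distribution is stored as a (support, weights) pair; the use of Dirichlet draws for the random weights is our own illustrative choice, as the text only specifies that the distributions are randomly generated.

```python
# A minimal sketch of the grid construction for the mean-estimation example.
import numpy as np

def initial_grid(rng, num_points=2000):
    grid = [((0.0, 1.0), (1.0 - p, p)) for p in rng.uniform(size=num_points)]
    grid.append(((0.0,), (1.0,)))  # point mass at 0
    grid.append(((1.0,), (1.0,)))  # point mass at 1
    return grid

def enlarge(rng, grid_prev):
    # Support points already present in the previous grid, plus 10 new ones.
    support_prev = np.unique(np.concatenate([s for s, _ in grid_prev]))
    support_new = np.concatenate([support_prev, rng.uniform(size=10)])
    new = []
    for support, count in [(support_prev, 500), (support_new, 1000)]:
        for _ in range(count):
            w = rng.dirichlet(np.ones(len(support)))  # random weights
            new.append((tuple(support), tuple(w)))
    return grid_prev + new
```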

When computing the Γ\Gamma-minimax estimator, for each grid {\mathcal{M}}_{\ell}, we compute the Γ\Gamma_{\ell}-minimax estimator for all three estimator spaces with Algorithm 6. We set the learning rate η=0.005\eta=0.005, the batch size J=50J=50 and the number of iterations to be 200 for Γ\Gamma_{\ell} (>1\ell>1). The number of iterations for Γ1\Gamma_{1} is larger because, in our experiments, we saw that a Γ1\Gamma_{1}-minimax estimator is already close to a Γ\Gamma-minimax estimator, and using a large number of iterations in this step can improve the initial estimator substantially. For 𝒟linear{\mathcal{D}}_{\mathrm{linear}} and 𝒟skn{\mathcal{D}}_{\mathrm{skn}}, the number of iterations for Γ1\Gamma_{1} is 2000; the corresponding number for 𝒟nn{\mathcal{D}}_{\mathrm{nn}} is 6000 to account for the lack of human knowledge input. We also use Algorithm 3 with 10000 iterations to compute a Γ\Gamma_{\ell}-minimax estimator for 𝒟linear{\mathcal{D}}_{\mathrm{linear}} for illustration. In this setup, as described in Section 3.3, we take the average of the computed Γ\Gamma-minimax stochastic estimator as the final output estimator in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}. We do not apply Algorithm 3 to 𝒟skn{\mathcal{D}}_{\mathrm{skn}} or 𝒟nn{\mathcal{D}}_{\mathrm{nn}} because it is computationally intractable for these estimator spaces.

We set the stopping criterion in Algorithm 1 as follows. When Algorithm 6 is used to compute Γ\Gamma_{\ell}-minimax estimators, we estimate both rsup(d1,Γ)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell}) and rsup(d1,Γ1)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell-1}) with 2000 Monte Carlo runs as described in Section 3.1; when Algorithm 3 is used, rsup(d1,Γ)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell}) and rsup(d1,Γ1)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell-1}) are computed exactly because R(d,P)R(d,P) has a closed-form expression for all d𝒟lineard\in{\mathcal{D}}_{\mathrm{linear}} and PP\in{\mathcal{M}}_{\ell}. We set the tolerance ϵ\epsilon to be equal to 0.00010.0001 so that we stop Algorithm 1 if rsup(d1,Γ)rsup(d1,Γ1)ϵr_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell})-r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell-1})\leq\epsilon.

After computation, we report the Bayes risk of the computed and theoretical Γ\Gamma-minimax estimators under π\pi^{*}, the prior such that r(d,π)=infd𝒟rsup(d,Γ)r(d^{*},\pi^{*})=\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma). For the estimators in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}, we further report their coefficients. We also report two coefficients of the computed estimator in 𝒟skn{\mathcal{D}}_{\mathrm{skn}} as follows. Since 𝒟linear𝒟skn{\mathcal{D}}_{\mathrm{linear}}\subseteq{\mathcal{D}}_{\mathrm{skn}} and we initialize the estimator to be the sample mean for 𝒟skn{\mathcal{D}}_{\mathrm{skn}}, we would expect that the bias β0\beta_{0} and the weight of the sample mean β1\beta_{1} in the output layer for the computed Γ\Gamma-minimax estimator in 𝒟skn{\mathcal{D}}_{\mathrm{skn}} may correspond to those in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}. Therefore, we also report these two coefficients β0\beta_{0} and β1\beta_{1} for 𝒟skn{\mathcal{D}}_{\mathrm{skn}}. This may not be the case for 𝒟nn{\mathcal{D}}_{\mathrm{nn}} because the sample mean is not explicit in its parameterization and all coefficients are randomly initialized, so we do not report any coefficients for 𝒟nn{\mathcal{D}}_{\mathrm{nn}}.

Table 5 presents the computation results. By Theorem 7 in Appendix E.5, these computed estimators are all approximately Γ\Gamma-minimax since their Bayes risks for π\pi^{*} are all close to that of a theoretical Γ\Gamma-minimax estimator. The coefficients β0\beta_{0} and β1\beta_{1} of the computed estimators in 𝒟linear{\mathcal{D}}_{\mathrm{linear}} and 𝒟skn{\mathcal{D}}_{\mathrm{skn}} are also close to those of the theoretically derived estimator. For the computed estimator in 𝒟skn{\mathcal{D}}_{\mathrm{skn}}, the weight of the other ancestor node in the output layer (i.e., the node in the 4th hidden layer in Fig. 7) is 0.0000.000. Therefore, our computed Γ\Gamma-minimax estimator in 𝒟skn{\mathcal{D}}_{\mathrm{skn}} is also close to the theoretically derived Γ\Gamma-minimax estimator.

Table 5: Coefficients and Bayes risks of estimators of the mean. Unrestricted space: the space of all estimators that are square-integrable with respect to all PP\in{\mathcal{M}}.
Estimator space Method to obtain dd^{*} β0\beta_{0} β1\beta_{1} r(d,π)r(d,\pi^{*})
Unrestricted space Theoretical derivation 0.0720.072 0.7600.760 0.0120.012
𝒟linear{\mathcal{D}}_{\mathrm{linear}} Algorithms 1 & 6 0.0720.072 0.7630.763 0.0120.012
𝒟skn{\mathcal{D}}_{\mathrm{skn}} Algorithms 1 & 6 0.0710.071 0.7670.767 0.0120.012
𝒟nn{\mathcal{D}}_{\mathrm{nn}} Algorithms 1 & 6 0.0120.012
𝒟linear{\mathcal{D}}_{\mathrm{linear}} Algorithms 1 & 3 0.0720.072 0.7600.760 0.0120.012

In our experiments, Algorithm 1 converged after computing a Γ1\Gamma_{1}-minimax estimator except when using Algorithm 6 for 𝒟linear{\mathcal{D}}_{\mathrm{linear}}. Even in this exceptional case, the computed Γ1\Gamma_{1}-minimax estimator is still approximately Γ\Gamma-minimax. We attribute the failure of the algorithm to stop at that point to Monte Carlo error in the computation of rsup(d1,Γ)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell}) and rsup(d1,Γ1)r_{\sup}(d^{*}_{\ell-1},\Gamma_{\ell-1}).

Fig. 3 presents the Bayes risks (or its unbiased estimates) over iterations when computing a Γ1\Gamma_{1}-minimax estimator. In all cases using Algorithm 6, the Bayes risks appear to decrease and converge. When using Algorithm 3, the upper and lower bounds both converge to the same limit. The limiting values of the Bayes risks in all cases are close to r(d,π)r(d^{*},\pi^{*}) because Γ1\Gamma_{1} can approximate π\pi^{*} well.

Appendix D Sensitivity analysis for tuning parameter selection

For the simulation in Section 5.1 with strongly informative prior information, we conduct three simulations to investigate the sensitivity of our proposed method to the selection of tuning parameters. In each simulation below, we vary one set of tuning parameters and rerun the algorithm to obtain an estimator. In the first simulation, we vary the starting point of Algorithm 4 used to construct the first grid 1{\mathcal{M}}_{1}. The new starting point is a distribution with 173 categories and Φ(P(0))=61\Phi(P_{(0)})=61, so this starting point is qualitatively different from the one chosen in the original simulation. In the second simulation, we vary the grid sizes: there are 500 grid points in 1{\mathcal{M}}_{1} and we add 500 grid points each time we enlarge the grid. In the third simulation, we choose a wider and deeper statistical knowledge network (see Fig. 8): compared to the original simulation, we add one more hidden layer and increase the number of hidden nodes in the first two hidden layers to 100. As shown in Table 6, the results of these sensitivity simulations appear similar to those in Section 5.1, within the variation due to randomness in MCMC (Algorithm 4) and SGDmax (Algorithm 6).

Refer to caption
Figure 8: Architecture of the deeper and wider neural network estimator of the expected number of new categories.
Table 6: Table similar to Table 1 for sensitivity analysis with strongly informative prior information.
Varied tuning parameter R(d,P0)R(d,P_{0}) r(d,π^)r(d,\hat{\pi}^{*})
Initial distribution in MCMC 19 44
Grid size 15 34
Statistical knowledge network structure 17 38

Appendix E Proofs

E.1 Proof of Theorem 1 and Corollary 1

Lemma 1.

If {Ω}=1\{\Omega_{\ell}\}_{\ell=1}^{\infty} is an increasing sequence of subsets of {\mathcal{M}} such that =1Ω=\bigcup_{\ell=1}^{\infty}\Omega_{\ell}={\mathcal{M}}, then, for any d𝒟d\in{\mathcal{D}}, rsup(d,Γ~)rsup(d,Γ)r_{\sup}(d,\tilde{\Gamma}_{\ell})\nearrow r_{\sup}(d,\Gamma) (\ell\rightarrow\infty).

Proof of Lemma 1.

Since Γ~Γ~+1Γ\tilde{\Gamma}_{\ell}\subseteq\tilde{\Gamma}_{\ell+1}\subseteq\Gamma, it holds that rsup(d,Γ~)rsup(d,Γ~+1)rsup(d,Γ)r_{\sup}(d,\tilde{\Gamma}_{\ell})\leq r_{\sup}(d,\tilde{\Gamma}_{\ell+1})\leq r_{\sup}(d,\Gamma), and so we only need to lower bound rsup(d,Γ~)r_{\sup}(d,\tilde{\Gamma}_{\ell}). Fix ϵ>0\epsilon>0. By Corollary 5 of Pinelis, (2016), rsup(d,Γ)r_{\sup}(d,\Gamma) can be approximated by r(d,ν)r(d,\nu) arbitrarily well for priors νΓ\nu\in\Gamma with a finite support; that is, there exists νΓ\nu\in\Gamma with finite support such that r(d,ν)rsup(d,Γ)ϵr(d,\nu)\geq r_{\sup}(d,\Gamma)-\epsilon. For sufficiently large \ell, Ω\Omega_{\ell} contains all support points of ν\nu and hence rsup(d,Γ~)r(d,ν)rsup(d,Γ)ϵr_{\sup}(d,\tilde{\Gamma}_{\ell})\geq r(d,\nu)\geq r_{\sup}(d,\Gamma)-\epsilon. The desired result follows. ∎

Lemma 2.

Under Condition 2, for any ΓΓ\Gamma^{\prime}\subseteq\Gamma and ϵ>0\epsilon>0, there exists δ>0\delta>0 such that rsup(d,Γ)rsup(d,Γ)ϵr_{\sup}(d^{*},\Gamma^{\prime})-r_{\sup}(d,\Gamma^{\prime})\leq\epsilon for all d𝒟d\in{\mathcal{D}} such that ϱ(d,d)δ\varrho(d,d^{*})\leq\delta.

Proof of Lemma 2.

By Corollary 5 of Pinelis, (2016), there exists νΓ\nu\in\Gamma^{\prime} with a finite support such that rsup(d,Γ)r(d,ν)+ϵ/2r_{\sup}(d^{*},\Gamma^{\prime})\leq r(d^{*},\nu)+\epsilon/2. By Condition 2 and the fact that ν\nu has a finite support, there exists δ>0\delta>0 such that, for any d𝒟d\in{\mathcal{D}} such that ϱ(d,d)δ\varrho(d,d^{*})\leq\delta, |r(d,ν)r(d,ν)|ϵ/2|r(d,\nu)-r(d^{*},\nu)|\leq\epsilon/2. Since νΓ\nu\in\Gamma^{\prime}, we have that rsup(d,Γ)r(d,ν)r_{\sup}(d,\Gamma^{\prime})\geq r(d,\nu) and thus rsup(d,Γ)rsup(d,Γ)r(d,ν)+ϵ/2r(d,ν)ϵr_{\sup}(d^{*},\Gamma^{\prime})-r_{\sup}(d,\Gamma^{\prime})\leq r(d^{*},\nu)+\epsilon/2-r(d,\nu)\leq\epsilon for any d𝒟d\in{\mathcal{D}} such that ϱ(d,d)δ\varrho(d,d^{*})\leq\delta. ∎

Lemma 3.

Under Condition 3, it holds that limirsup(d,Γ~i|)=rsup(d,Γ~)\lim_{i\rightarrow\infty}r_{\sup}(d,\tilde{\Gamma}_{i|\ell})=r_{\sup}(d,\tilde{\Gamma}_{\ell}).

Proof of Lemma 3.

Let d𝒟d\in{\mathcal{D}}, \ell and ϵ>0\epsilon>0 be fixed. By Corollary 5 of Pinelis, (2016), rsup(d,Γ~)r(d,π)+ϵ/2r_{\sup}(d,\tilde{\Gamma}_{\ell})\leq r(d,\pi)+\epsilon/2 for some πΓ~\pi\in\tilde{\Gamma}_{\ell} with a finite support. Under Condition 3, there exists a sequence πiΓ~i\pi_{i}\in\tilde{\Gamma}_{i\mid\ell} such that, for all sufficiently large ii, r(d,πi)r(d,π)ϵ/2r(d,\pi_{i})\geq r(d,\pi)-\epsilon/2. For such ii, rsup(d,Γ~)r(d,πi)+ϵr_{\sup}(d,\tilde{\Gamma}_{\ell})\leq r(d,\pi_{i})+\epsilon. Since rsup(d,Γ~)rsup(d,Γ~i)r(d,πi)r_{\sup}(d,\tilde{\Gamma}_{\ell})\geq r_{\sup}(d,\tilde{\Gamma}_{i\mid\ell})\geq r(d,\pi_{i}), we have that r(d,πi)rsup(d,Γ~i)rsup(d,Γ~)r(d,πi)+ϵr(d,\pi_{i})\leq r_{\sup}(d,\tilde{\Gamma}_{i\mid\ell})\leq r_{\sup}(d,\tilde{\Gamma}_{\ell})\leq r(d,\pi_{i})+\epsilon for all sufficiently large ii, and thus we have proved Lemma 3. ∎

Proof of Theorem 1.

Let ϵ>0\epsilon>0. There exists d𝒟d^{\prime}\in{\mathcal{D}} such that

rsup(d,Γ)infd𝒟rsup(d,Γ)+ϵ.r_{\sup}(d^{\prime},\Gamma)\leq\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma)+\epsilon.

Moreover, there exists πΓ\pi_{\ell}\in\Gamma_{\ell} such that

rsup(d,Γ)r(d,π)+ϵ.r_{\sup}(d^{\prime},\Gamma_{\ell})\leq r(d^{\prime},\pi_{\ell})+\epsilon.

Using the fact that dd^{*}_{\ell} is Γ\Gamma_{\ell}-minimax and the definition of rsupr_{\sup}, it holds that

rsup(d,Γ)\displaystyle r_{\sup}(d^{*}_{\ell},\Gamma_{\ell}) rsup(d,Γ)r(d,π)+ϵ\displaystyle\leq r_{\sup}(d^{\prime},\Gamma_{\ell})\leq r(d^{\prime},\pi_{\ell})+\epsilon
rsup(d,Γ)+ϵinfd𝒟rsup(d,Γ)+2ϵ.\displaystyle\leq r_{\sup}(d^{\prime},\Gamma)+\epsilon\leq\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma)+2\epsilon.

Since this inequality holds for any ϵ>0\epsilon>0, we have that rsup(d,Γ)infd𝒟rsup(d,Γ)r_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\leq\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma). An almost identical argument shows that the sequence {rsup(d,Γ)}=1\{r_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\}_{\ell=1}^{\infty} is nondecreasing. Therefore, this sequence converges to some limit infd𝒟rsup(d,Γ)rsup(d,Γ)\mathcal{R}\leq\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma)\leq r_{\sup}(d^{*},\Gamma).

We next prove that rsup(d,Γ)r_{\sup}(d^{*},\Gamma)\leq\mathcal{R}. Let ϵ>0\epsilon>0. Without loss of generality, we may assume that Ω{\mathcal{M}}_{\ell}\subseteq\Omega_{\ell} for all =1,2,\ell=1,2,\ldots in Condition 3. (Otherwise, we may instead consider the sequence {Ω~}~=1\{\Omega_{\tilde{\ell}}\}_{\tilde{\ell}=1}^{\infty} where Ω~=:ΩΩ\Omega_{\tilde{\ell}}=\bigcap_{\ell^{\prime}:\Omega_{\ell^{\prime}}\supseteq{\mathcal{M}}_{\ell}}\Omega_{\ell^{\prime}}. Note that Condition 3 also holds for {Ω~}~=1\{\Omega_{\tilde{\ell}}\}_{\tilde{\ell}=1}^{\infty}.) By Lemma 1, there exists 0\ell_{0} such that rsup(d,Γ~0)rsup(d,Γ)ϵ/3r_{\sup}(d^{*},\tilde{\Gamma}_{\ell_{0}})\geq r_{\sup}(d^{*},\Gamma)-\epsilon/3. By Condition 3, there exists i1i_{1} such that rsup(d,Γ~i1|0)rsup(d,Γ~0)ϵ/3r_{\sup}(d^{*},\tilde{\Gamma}_{i_{1}|\ell_{0}})\geq r_{\sup}(d^{*},\tilde{\Gamma}_{\ell_{0}})-\epsilon/3. Without loss of generality, suppose that ddd^{*}_{\ell}\rightarrow d^{*} (otherwise, take a convergent subsequence to this limit point). This then implies that there exists i2>i1i_{2}>i_{1} such that ϱ(di2,d)\varrho(d^{*}_{i_{2}},d^{*}) is sufficiently small, such that, by Lemma 2, rsup(di2,Γ~i1|0)rsup(d,Γ~i1|0)ϵ/3r_{\sup}(d^{*}_{i_{2}},\tilde{\Gamma}_{i_{1}|\ell_{0}})\geq r_{\sup}(d^{*},\tilde{\Gamma}_{i_{1}|\ell_{0}})-\epsilon/3. Moreover, since Γ~i1|0Γ~i1Γ~i2\tilde{\Gamma}_{i_{1}|\ell_{0}}\subseteq\tilde{\Gamma}_{i_{1}}\subseteq\tilde{\Gamma}_{i_{2}}, it holds that rsup(di2,Γ~i2)rsup(di2,Γ~i1|0)r_{\sup}(d^{*}_{i_{2}},\tilde{\Gamma}_{i_{2}})\geq r_{\sup}(d^{*}_{i_{2}},\tilde{\Gamma}_{i_{1}|\ell_{0}}). Therefore, rsup(di2,Γ~i2)rsup(d,Γ)ϵr_{\sup}(d^{*}_{i_{2}},\tilde{\Gamma}_{i_{2}})\geq r_{\sup}(d^{*},\Gamma)-\epsilon. Since the sequence {rsup(d,Γ)}=1\{r_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\}_{\ell=1}^{\infty} is nondecreasing, it holds that rsup(d,Γ)rsup(d,Γ)ϵr_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\geq r_{\sup}(d^{*},\Gamma)-\epsilon for all i2\ell\geq i_{2}. Since ϵ\epsilon is arbitrary, we have that lim infrsup(d,Γ)rsup(d,Γ)\liminf_{\ell\rightarrow\infty}r_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\geq r_{\sup}(d^{*},\Gamma), and hence rsup(d,Γ)\mathcal{R}\geq r_{\sup}(d^{*},\Gamma).

Combining the results from the preceding two paragraphs, =infd𝒟rsup(d,Γ)=rsup(d,Γ)\mathcal{R}=\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma)=r_{\sup}(d^{*},\Gamma). Consequently, dd^{*} is Γ\Gamma-minimax. Moreover, as {rsup(d,Γ)}=1\{r_{\sup}(d^{*}_{\ell},\Gamma_{\ell})\}_{\ell=1}^{\infty} increases to \mathcal{R}, this sequence also increases to rsup(d,Γ)r_{\sup}(d^{*},\Gamma). This concludes the proof. ∎

Proof of Corollary 1.

We first establish the strict convexity of dr(d,π)d\mapsto r(d,\pi) for any πΓ\pi\in\Gamma. We then establish the strict convexity of drsup(d,Γ)d\mapsto r_{\sup}(d,\Gamma). We then establish that there is a unique minimizer of drsup(d,Γ)d\mapsto r_{\sup}(d,\Gamma) and show that the desired result follows from Theorem 1.

Let d1,d2𝒟d_{1},d_{2}\in{\mathcal{D}} and c(0,1)c\in(0,1) be arbitrary, then by the convexity of 𝒟{\mathcal{D}} and the strict convexity of dR(d,P)d\mapsto R(d,P) for each PP\in{\mathcal{M}},

r(cd1+(1c)d2,π)\displaystyle r(cd_{1}+(1-c)d_{2},\pi) =R(cd1+(1c)d2,P)π(dP)\displaystyle=\int R(cd_{1}+(1-c)d_{2},P)\pi({\mathrm{d}}P)
<{cR(d1,P)+(1c)R(d2,P)}π(dP)\displaystyle<\int\{cR(d_{1},P)+(1-c)R(d_{2},P)\}\pi({\mathrm{d}}P)
=cr(d1,π)+(1c)r(d2,π).\displaystyle=cr(d_{1},\pi)+(1-c)r(d_{2},\pi).

Therefore, dr(d,π)d\mapsto r(d,\pi) is strictly convex for any πΓ\pi\in\Gamma.

Let d1,d2𝒟d_{1},d_{2}\in{\mathcal{D}} be distinct and c(0,1)c\in(0,1) be arbitrary. Since rsup(d,Γ)r_{\sup}(d,\Gamma) is attainable for any d𝒟d\in{\mathcal{D}}, there exists π~Γ\tilde{\pi}\in\Gamma such that

rsup(cd1+(1c)d2,Γ)\displaystyle r_{\sup}(cd_{1}+(1-c)d_{2},\Gamma) =r(cd1+(1c)d2,π~)\displaystyle=r(cd_{1}+(1-c)d_{2},\tilde{\pi})
<cr(d1,π~)+(1c)r(d2,π~)\displaystyle<cr(d_{1},\tilde{\pi})+(1-c)r(d_{2},\tilde{\pi})
crsup(d1,Γ)+(1c)rsup(d2,Γ).\displaystyle\leq cr_{\sup}(d_{1},\Gamma)+(1-c)r_{\sup}(d_{2},\Gamma).

Thus, drsup(d,Γ)d\mapsto r_{\sup}(d,\Gamma) is strictly convex.

As d\mapsto r_{\sup}(d,\Gamma) is strictly convex and {\mathcal{D}} is convex, this function has at most one minimizer on {\mathcal{D}}. By Theorem 1, the sequence \{d^{*}_{\ell}\}_{\ell=1}^{\infty} has at least one limit point, and every limit point is a minimizer of d\mapsto r_{\sup}(d,\Gamma). Since the minimizer is unique, all limit points coincide, so the sequence converges to the unique \Gamma-minimax estimator. ∎

E.2 Proof of Theorems 2 & 5

We prove Theorems 2 and 5 by checking that Assumptions 3.1 and 3.6 in Lin et al., (2020) are satisfied and then applying Theorems E.3 and E.4 in Lin et al., (2020), respectively. Since Assumption 3.1 is satisfied by our construction of \hat{R}, we focus on Assumption 3.6 for the rest of this section.

Let {\mathcal{M}}_{\ell}=\{P_{1},P_{2},\ldots,P_{\Lambda}\}\subseteq{\mathcal{M}}. For any \pi\in\Gamma_{\ell}, let \pi_{\lambda} denote the probability weight that \pi places on P_{\lambda} (\lambda=1,\ldots,\Lambda). For the rest of this section, we also use \pi to denote the vector (\pi_{1},\ldots,\pi_{\Lambda}). We also use \lesssim to denote inequality up to a positive multiplicative constant that may depend on \ell but not on the other quantities involved. Then, straightforward calculations imply that \nabla_{\beta}r(\beta,\pi)=\sum_{\lambda=1}^{\Lambda}\pi_{\lambda}\nabla_{\beta}R(\beta,P_{\lambda}) and \nabla_{\pi}r(\beta,\pi)=(R(\beta,P_{1}),\ldots,R(\beta,P_{\Lambda}))^{\top}.
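
To make these formulas concrete, the following minimal PyTorch sketch (ours, not code from the paper; the quadratic `risk` stand-in for R(\beta,P_{\lambda}) and all names are hypothetical) computes r(\beta,\pi), recovers \nabla_{\beta}r(\beta,\pi) by automatic differentiation, and assembles \nabla_{\pi}r(\beta,\pi) from the risk vector:

```python
import torch

# Minimal sketch (ours): Bayes risk over a finite M_ell = {P_1, ..., P_Lambda}.
# The quadratic `risk` below is a hypothetical stand-in for R(beta, P_lambda).
Lambda, D = 5, 3
targets = torch.randn(Lambda, D)

def risk(beta, lam):
    return ((beta - targets[lam]) ** 2).sum()

beta = torch.randn(D, requires_grad=True)
pi = torch.full((Lambda,), 1.0 / Lambda)     # prior weights pi_1, ..., pi_Lambda

# r(beta, pi) = sum_lambda pi_lambda * R(beta, P_lambda)
r = sum(pi[lam] * risk(beta, lam) for lam in range(Lambda))

r.backward()
grad_beta = beta.grad                        # sum_lambda pi_lambda * grad_beta R(beta, P_lambda)
grad_pi = torch.stack([risk(beta.detach(), lam) for lam in range(Lambda)])
                                             # (R(beta, P_1), ..., R(beta, P_Lambda))
```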

For each =1,2,\ell=1,2,\ldots, for any β1,β2\beta^{1},\beta^{2}\in\mathcal{H} and π1,π2Γ\pi^{1},\pi^{2}\in\Gamma_{\ell}, by Conditions 4 and 5,

\displaystyle\left\|\left.\nabla_{\beta}r(\beta,\pi)\right|_{\beta=\beta^{1},\pi=\pi^{1}}-\left.\nabla_{\beta}r(\beta,\pi)\right|_{\beta=\beta^{2},\pi=\pi^{2}}\right\|
\displaystyle=\left\|\sum_{\lambda=1}^{\Lambda}\left\{\pi^{1}_{\lambda}\left.\nabla_{\beta}R(\beta,P_{\lambda})\right|_{\beta=\beta^{1}}-\pi^{2}_{\lambda}\left.\nabla_{\beta}R(\beta,P_{\lambda})\right|_{\beta=\beta^{2}}\right\}\right\|
\displaystyle\leq\sum_{\lambda=1}^{\Lambda}\pi^{1}_{\lambda}\left\|\left.\nabla_{\beta}R(\beta,P_{\lambda})\right|_{\beta=\beta^{1}}-\left.\nabla_{\beta}R(\beta,P_{\lambda})\right|_{\beta=\beta^{2}}\right\|+\left\|\sum_{\lambda=1}^{\Lambda}(\pi^{1}_{\lambda}-\pi^{2}_{\lambda})\left.\nabla_{\beta}R(\beta,P_{\lambda})\right|_{\beta=\beta^{2}}\right\|
\displaystyle\lesssim\|\beta^{1}-\beta^{2}\|+\|\pi^{1}-\pi^{2}\|
\displaystyle\lesssim\|(\beta^{1},\pi^{1})-(\beta^{2},\pi^{2})\|,

and similarly for πr(β,π)\nabla_{\pi}r(\beta,\pi),

πr(β,π)|β=β1,π=π1πr(β,π)|β=β2,π=π2\displaystyle\left\|\left.\nabla_{\pi}r(\beta,\pi)\right|_{\beta=\beta^{1},\pi=\pi^{1}}-\left.\nabla_{\pi}r(\beta,\pi)\right|_{\beta=\beta^{2},\pi=\pi^{2}}\right\|
=(R(β1,P1)R(β2,P1),R(β1,P2)R(β2,P2),,R(β1,PΛ)R(β2,PΛ))\displaystyle=\left\|\left(R(\beta^{1},P_{1})-R(\beta^{2},P_{1}),R(\beta^{1},P_{2})-R(\beta^{2},P_{2}),\ldots,R(\beta^{1},P_{\Lambda})-R(\beta^{2},P_{\Lambda})\right)^{\top}\right\|
β1β2(β1,π1)(β2,π2).\displaystyle\lesssim\|\beta^{1}-\beta^{2}\|\leq\|(\beta^{1},\pi^{1})-(\beta^{2},\pi^{2})\|.

This implies that for each \ell, the gradient of r(β,π)r(\beta,\pi) (β\beta\in\mathcal{H}, πΓ\pi\in\Gamma_{\ell}) is Lipschitz continuous.

For each =1,2,\ell=1,2,\ldots, for any β1,β2\beta^{1},\beta^{2}\in\mathcal{H} and πΓ\pi\in\Gamma_{\ell}, Condition 4 implies that

|r(β1,π)r(β2,π)|\displaystyle\left|r(\beta^{1},\pi)-r(\beta^{2},\pi)\right| =|λ=1Λπλ[R(β1,Pλ)R(β2,Pλ)]|\displaystyle=\left|\sum_{\lambda=1}^{\Lambda}\pi_{\lambda}\left[R(\beta^{1},P_{\lambda})-R(\beta^{2},P_{\lambda})\right]\right|
λ=1Λπλ|R(β1,Pλ)R(β2,Pλ)|β1β2.\displaystyle\leq\sum_{\lambda=1}^{\Lambda}\pi_{\lambda}\left|R(\beta^{1},P_{\lambda})-R(\beta^{2},P_{\lambda})\right|\lesssim\|\beta^{1}-\beta^{2}\|.

Therefore, βr(β,π)\beta\mapsto r(\beta,\pi) is Lipschitz continuous with a universal Lipschitz constant independent of πΓ\pi\in\Gamma_{\ell}.

Finally, it is straightforward to check that (i) \pi\mapsto r(\beta,\pi) is linear and hence concave for any \beta\in\mathcal{H}, and (ii) \Gamma_{\ell} is parameterized by a convex subset of a simplex in a Euclidean space, which is a convex and bounded set. These results show that Assumption 3.6 in Lin et al., (2020) is satisfied for Algorithms 2 and 5.
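
To make the setting of Assumption 3.6 concrete, the sketch below (ours, not the paper's Algorithm 2 or 5; the quadratic toy risk and the step sizes are hypothetical and untuned) runs a deterministic gradient descent ascent loop of the kind analyzed by Lin et al., (2020), with \pi projected back onto the probability simplex after each ascent step:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {pi : pi >= 0, sum(pi) = 1} (standard sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def gda(grad_beta, grad_pi, beta, pi, eta_beta=1e-2, eta_pi=1e-1, steps=2000):
    # Gradient descent in beta, projected gradient ascent in pi.
    for _ in range(steps):
        beta = beta - eta_beta * grad_beta(beta, pi)
        pi = project_simplex(pi + eta_pi * grad_pi(beta, pi))
    return beta, pi

# Toy problem: r(beta, pi) = sum_lambda pi_lambda * (beta - c_lambda)^2.
c = np.array([-1.0, 0.0, 2.0])
g_beta = lambda b, p: np.sum(p * 2.0 * (b - c))   # grad_beta r(beta, pi)
g_pi = lambda b, p: (b - c) ** 2                  # (R(beta, P_1), ..., R(beta, P_Lambda))
beta_hat, pi_hat = gda(g_beta, g_pi, beta=0.5, pi=np.ones(3) / 3.0)
print(beta_hat, pi_hat)
```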

E.3 Proof of Theorem 6

Proof of Theorem 6.

Let \pi_{(t),0} denote a maximizer of \pi\mapsto r(\beta_{(t-1)},\pi). Since \pi_{(t)} maximizes the Monte Carlo objective \pi\mapsto\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\hat{r}(\beta_{(t-1)},\pi,\xi_{j}), the difference \frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\{\hat{r}(\beta_{(t-1)},\pi_{(t)},\xi_{j})-\hat{r}(\beta_{(t-1)},\pi_{(t),0},\xi_{j})\} is nonnegative; combining this with the fact that r(\beta,\pi)={\mathbb{E}}[\hat{r}(\beta,\pi,\xi)], it holds that

0\displaystyle 0 r(β(t1),π(t),0)r(β(t1),π(t))\displaystyle\leq r(\beta_{(t-1)},\pi_{(t),0})-r(\beta_{(t-1)},\pi_{(t)})
1Jj=1Jr^(β(t1),π(t),ξj)1Jj=1Jr^(β(t1),π(t),0,ξj)\displaystyle\leq\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\hat{r}(\beta_{(t-1)},\pi_{(t)},\xi_{j})-\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\hat{r}(\beta_{(t-1)},\pi_{(t),0},\xi_{j})
+r(β(t1),π(t),0)r(β(t1),π(t))\displaystyle\quad+r(\beta_{(t-1)},\pi_{(t),0})-r(\beta_{(t-1)},\pi_{(t)})
=1Jj=1J{[r^(β(t1),π(t),ξj)r^(β(t1),π(t),0,ξj)]\displaystyle=\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\Bigg{\{}\left[\hat{r}(\beta_{(t-1)},\pi_{(t)},\xi_{j})-\hat{r}(\beta_{(t-1)},\pi_{(t),0},\xi_{j})\right]
𝔼[r^(β(t1),π(t),ξ)r^(β(t1),π(t),0,ξ)]}\displaystyle\quad-{\mathbb{E}}\left[\hat{r}(\beta_{(t-1)},\pi_{(t)},\xi)-\hat{r}(\beta_{(t-1)},\pi_{(t),0},\xi)\right]\Bigg{\}}
\displaystyle\leq\sup_{\beta\in\mathbb{R}^{D},\pi_{1},\pi_{2}\in\Gamma_{\ell}}\Bigg{|}\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\Bigg{\{}\left[\hat{r}(\beta,\pi_{1},\xi_{j})-\hat{r}(\beta,\pi_{2},\xi_{j})\right]
𝔼[r^(β,π1,ξ)r^(β,π2,ξ)]}|.\displaystyle\quad-{\mathbb{E}}\left[\hat{r}(\beta,\pi_{1},\xi)-\hat{r}(\beta,\pi_{2},\xi)\right]\Bigg{\}}\Bigg{|}.

Note that the right hand side does not depend on tt. Therefore,

0\displaystyle 0 supt{r(β(t1),π(t),0)𝔼[r(β(t1),π(t))]}\displaystyle\leq\sup_{t}\left\{r(\beta_{(t-1)},\pi_{(t),0})-{\mathbb{E}}[r(\beta_{(t-1)},\pi_{(t)})]\right\}
\displaystyle\leq{\mathbb{E}}^{*}\sup_{\beta\in\mathbb{R}^{D},\pi_{1},\pi_{2}\in\Gamma_{\ell}}\Bigg{|}\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\Bigg{\{}\left[\hat{r}(\beta,\pi_{1},\xi_{j})-\hat{r}(\beta,\pi_{2},\xi_{j})\right]
𝔼[r^(β,π1,ξ)r^(β,π2,ξ)]}|,\displaystyle\quad-{\mathbb{E}}\left[\hat{r}(\beta,\pi_{1},\xi)-\hat{r}(\beta,\pi_{2},\xi)\right]\Bigg{\}}\Bigg{|},

where {\mathbb{E}}^{*} stands for outer expectation. We may apply Corollary 9.27 in Kosorok, (2008) to \mathcal{F}:=\{\xi\mapsto\hat{r}(\beta,\pi,\xi):\beta\in\mathbb{R}^{D},\pi\in\Gamma_{\ell}\} to show that \mathcal{F}-\mathcal{F}:=\{f_{1}-f_{2}:f_{1},f_{2}\in\mathcal{F}\}, which contains \{\xi\mapsto\hat{r}(\beta,\pi_{1},\xi)-\hat{r}(\beta,\pi_{2},\xi):\beta\in\mathbb{R}^{D},\pi_{1},\pi_{2}\in\Gamma_{\ell}\}, is a \Xi-Glivenko-Cantelli class. Therefore,

\displaystyle\Bigg{\{}\sup_{\beta\in\mathbb{R}^{D},\pi_{1},\pi_{2}\in\Gamma_{\ell}}\Bigg{|}\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\Bigg{\{}\left[\hat{r}(\beta,\pi_{1},\xi_{j})-\hat{r}(\beta,\pi_{2},\xi_{j})\right]
𝔼[r^(β,π1,ξ)r^(β,π2,ξ)]}|}\displaystyle\quad-{\mathbb{E}}\left[\hat{r}(\beta,\pi_{1},\xi)-\hat{r}(\beta,\pi_{2},\xi)\right]\Bigg{\}}\Bigg{|}\Bigg{\}}^{*}
{supf|1Jj=1J{f(ξj)𝔼[f(ξ)]}|}a.s.0,\displaystyle\leq\left\{\sup_{f\in\mathcal{F}-\mathcal{F}}\left|\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\left\{f(\xi_{j})-{\mathbb{E}}[f(\xi)]\right\}\right|\right\}^{*}\overset{a.s.}{\rightarrow}0,

as JJ^{\prime}\rightarrow\infty. Here, XX^{*} stands for the minimal measurable majorant with respect to Ξ\Xi of a (possibly non-measurable) mapping XX (van der Vaart and Wellner,, 2000).

By Problem 1 of Section 2.4 in van der Vaart and Wellner, (2000), there exists a random variable FF such that Fsupf|f(ξ)𝔼[f(ξ)]|F\geq\sup_{f\in\mathcal{F}-\mathcal{F}}|f(\xi)-{\mathbb{E}}[f(\xi^{\prime})]| Ξ\Xi-almost surely and 𝔼[F]<{\mathbb{E}}[F]<\infty. Then,

supf|1Jj=1J{f(ξj)𝔼[f(ξj)]}|F\displaystyle\sup_{f\in\mathcal{F}-\mathcal{F}}\left|\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\left\{f(\xi_{j})-{\mathbb{E}}[f(\xi_{j})]\right\}\right|\leq F

\Xi-almost surely. By the dominated convergence theorem,

\displaystyle{\mathbb{E}}^{*}\sup_{\beta\in\mathbb{R}^{D},\pi_{1},\pi_{2}\in\Gamma_{\ell}}\Bigg{|}\frac{1}{J^{\prime}}\sum_{j=1}^{J^{\prime}}\Bigg{\{}\left[\hat{r}(\beta,\pi_{1},\xi_{j})-\hat{r}(\beta,\pi_{2},\xi_{j})\right]
𝔼[r^(β,π1,ξj)r^(β,π2,ξj)]}|0\displaystyle\quad-{\mathbb{E}}\left[\hat{r}(\beta,\pi_{1},\xi_{j})-\hat{r}(\beta,\pi_{2},\xi_{j})\right]\Bigg{\}}\Bigg{|}\rightarrow 0

as J^{\prime}\rightarrow\infty, and hence so does \sup_{t}\left\{r(\beta_{(t-1)},\pi_{(t),0})-{\mathbb{E}}[r(\beta_{(t-1)},\pi_{(t)})]\right\}. Thus, for any \zeta>0, there exists J^{\prime} sufficiently large that {\mathbb{E}}[r(\beta_{(t-1)},\pi_{(t)})]\geq r(\beta_{(t-1)},\pi_{(t),0})-\zeta for all t. ∎
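
The mechanism behind this proof can be illustrated numerically. In the toy sketch below (ours; the linear stand-in for \hat{r}, the Gaussian noise, and the restriction of the candidate priors to the simplex vertices are all simplifying assumptions), the true suboptimality of the Monte Carlo maximizer vanishes as J^{\prime} grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in: r_hat(pi, xi) = pi . (m + xi) with E[xi] = 0, so r(pi) = pi . m.
m = np.array([0.2, 0.5, 0.3])
vertices = np.eye(3)                  # candidate priors: vertices of the simplex

def suboptimality(J):
    xi = rng.normal(size=(J, 3))
    r_hat_bar = m + xi.mean(axis=0)   # averaged noisy Bayes risks
    pi_hat = vertices[np.argmax(vertices @ r_hat_bar)]  # Monte Carlo maximizer
    return m.max() - pi_hat @ m       # true suboptimality r(pi_0) - r(pi_hat)

for J in (10, 100, 10000):
    print(J, np.mean([suboptimality(J) for _ in range(500)]))  # decreases to 0
```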

E.4 Proof of Theorem 3

Our proof of Theorem 3 builds on that of Robinson, (1951). Major modifications are needed to allow for more general definitions that accommodate potentially infinite spaces of pure strategies, and to control more carefully a bound on r(\overline{d}(\varpi_{(t-1)}),\pi^{\dagger}_{(t)})-r(d^{\dagger}_{(t)},\pi_{(t-1)}) towards the end of the proof.

In this appendix, we slightly abuse notation and use {\mathcal{D}} to denote the compact set \bar{{\mathcal{D}}} that contains all d^{\dagger}_{(t)} (t=1,2,\ldots). We first introduce the notion of cumulative Bayes risk functions. Under Algorithm 3, we let U_{0}:{\mathcal{D}}\rightarrow\mathbb{R} and V_{0}:\Gamma_{\ell}\rightarrow\mathbb{R} be any two continuous functions such that

mind𝒟U0(d)=maxπΓV0(π)\min_{d\in{\mathcal{D}}}U_{0}(d)=\max_{\pi\in\Gamma_{\ell}}V_{0}(\pi) (3)

and recursively define

Ut+1(d):=Ut(d)+r(d,π(t)),Vt+1(π):=Vt(π)+r(d(t),π)U_{t+1}(d):=U_{t}(d)+r(d,\pi^{\dagger}_{(t)}),\quad V_{t+1}(\pi):=V_{t}(\pi)+r(d^{\dagger}_{(t)},\pi) (4)

for d\in{\mathcal{D}} and \pi\in\Gamma_{\ell}. Here, we let \pi^{\dagger}_{(t)}\in\operatorname*{argmax}_{\pi\in\Gamma_{\ell}}V_{t-1}(\pi) and d^{\dagger}_{(t)}\in\operatorname*{argmin}_{d\in{\mathcal{D}}}U_{t-1}(d). Note that the choices of \pi_{(t)}^{\dagger} and d_{(t)}^{\dagger} in Algorithm 3 correspond to setting U_{0}\equiv 0 and V_{0}\equiv 0, in which case U_{t}(d)=t\cdot r(d,\pi_{(t)}) and V_{t}(\pi)=t\cdot r(\overline{d}(\varpi_{(t)}),\pi). In general,

Ut(d)=U0(d)+tr(d,π(t)),Vt(π)=V0(π)+tr(d¯(ϖ(t)),π)U_{t}(d)=U_{0}(d)+t\cdot r(d,\pi_{(t)}),\quad V_{t}(\pi)=V_{0}(\pi)+t\cdot r(\overline{d}(\varpi_{(t)}),\pi) (5)

for some π(t)Γ\pi_{(t)}\in\Gamma and d¯(ϖ(t))𝒟¯\overline{d}(\varpi_{(t)})\in\overline{{\mathcal{D}}}. Later in this section, we will also make use of UtU_{t} and VtV_{t} with other initializations U0U_{0} and V0V_{0}.
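
With U_{0}\equiv 0 and V_{0}\equiv 0, these recursions are exactly fictitious play in the zero-sum game with payoff r. The following toy sketch (ours, on a 2\times 2 matrix game rather than the estimation problem itself) shows \min U_{t}/t and \max V_{t}/t bracketing the game value, with the gap vanishing as Theorem 3 asserts:

```python
import numpy as np

# Fictitious play on a toy 2x2 zero-sum game with payoff matrix R[d, pi]
# (illustration only; the game value is 1/2 with equilibrium (1/2, 1/2)).
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
U = np.zeros(R.shape[0])   # U_t(d): cumulative Bayes risk of each estimator d
V = np.zeros(R.shape[1])   # V_t(pi): cumulative Bayes risk against each prior pi

T = 100000
for _ in range(T):
    d = np.argmin(U)       # d_(t): best response to the average of past priors
    pi = np.argmax(V)      # pi_(t): best response to the average of past estimators
    U += R[:, pi]          # U_{t+1}(d) = U_t(d) + r(d, pi_(t))
    V += R[d, :]           # V_{t+1}(pi) = V_t(pi) + r(d_(t), pi)

print(U.min() / T, V.max() / T)   # min U_t/t <= 1/2 <= max V_t/t, gap -> 0
```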

To make notation concise, we define \min_{d\in{\mathcal{D}}^{\prime}}U_{t}:=\min_{d\in{\mathcal{D}}^{\prime}}U_{t}(d) for any {\mathcal{D}}^{\prime}\subseteq{\mathcal{D}}, and define \max_{{\mathcal{D}}^{\prime}}U_{t}, \min_{\Pi^{\prime}}V_{t} and \max_{\Pi^{\prime}}V_{t} (\Pi^{\prime}\subseteq\Gamma_{\ell}) similarly. We also drop the subscript denoting the set when that set is the whole space under consideration, that is, {\mathcal{D}} or \Gamma_{\ell}. Note that for any t_{1},t_{2}=1,2,\ldots, under the setting of Algorithm 3 and (2), it holds that

minUt1/t1=mind¯𝒟¯r(d¯,π(t1))\displaystyle\quad\min U_{t_{1}}/t_{1}=\min_{\overline{d}\in\overline{{\mathcal{D}}}}r(\overline{d},\pi_{(t_{1})})
maxπΓmind¯𝒟¯r(d¯,π)=r(d¯(ϖ),π)=mind¯𝒟¯maxπΓr(d¯,π)\displaystyle\leq\max_{\pi\in\Gamma_{\ell}}\min_{\overline{d}\in\overline{{\mathcal{D}}}}r(\overline{d},\pi)=r(\overline{d}(\varpi_{\ell}^{*}),\pi^{*}_{\ell})=\min_{\overline{d}\in\overline{{\mathcal{D}}}}\max_{\pi\in\Gamma_{\ell}}r(\overline{d},\pi)
maxπΓr(d¯(ϖ(t2)),π)=maxVt2/t2\displaystyle\leq\max_{\pi\in\Gamma_{\ell}}r(\overline{d}(\varpi_{(t_{2})}),\pi)=\max V_{t_{2}}/t_{2}

Therefore, to prove the first result in Theorem 3, it suffices to show that lim supt(maxVtminUt)/t0\limsup_{t\rightarrow\infty}(\max V_{t}-\min U_{t})/t\leq 0.

We next introduce additional definitions related to iterations. We say that πΓ\pi\in\Gamma_{\ell} is eligible in the interval [t1,t2][t_{1},t_{2}] if there exists t[t1,t2]t\in[t_{1},t_{2}] such that Vt(π)=maxVtV_{t}(\pi)=\max V_{t}; we say that d𝒟d\in{\mathcal{D}} is eligible in the interval [t1,t2][t_{1},t_{2}] if there exists t[t1,t2]t\in[t_{1},t_{2}] such that Ut(d)=minUtU_{t}(d)=\min U_{t}. We also define eligibility for sets. We say that ΠΓ\Pi^{\prime}\subseteq\Gamma_{\ell} is eligible in the interval [t1,t2][t_{1},t_{2}] if there exists πΠ\pi\in\Pi^{\prime} that is eligible in that interval; we say that 𝒟𝒟{\mathcal{D}}^{\prime}\subseteq{\mathcal{D}} is eligible in the interval [t1,t2][t_{1},t_{2}] if there exists d𝒟d\in{\mathcal{D}}^{\prime} that is eligible in the interval [t1,t2][t_{1},t_{2}]. In addition, for any 𝒟𝒟{\mathcal{D}}^{\prime}\subseteq{\mathcal{D}}, we define maximum variation MVt(𝒟):=supd𝒟Ut(d)infd𝒟Ut(d){\mathrm{MV}}_{t}({\mathcal{D}}^{\prime}):=\sup_{d\in{\mathcal{D}}^{\prime}}U_{t}(d)-\inf_{d\in{\mathcal{D}}^{\prime}}U_{t}(d) and MVt(Π){\mathrm{MV}}_{t}(\Pi^{\prime}) similarly for any ΠΓ\Pi^{\prime}\subset\Gamma_{\ell}. By Condition 2, there exists B(0,)B\in(0,\infty) such that R[B,B]R\in[-B,B]. Note that by Condition 1 and Lemma 2, given an arbitrary desired approximation accuracy ϵ>0\epsilon>0, 𝒟{\mathcal{D}} can be covered by finitely many compact subsets with the maximum variation of each subset bounded by ϵt\epsilon t for all tt; by Condition 2, since Γ\Gamma_{\ell} is parameterized by a compact subset of a simplex in a Euclidean space, Γ\Gamma_{\ell} can also be covered by finitely many compact subsets with the maximum variation of each subset bounded by ϵt\epsilon t for all tt. These covers can be viewed as discrete finite approximations to 𝒟{\mathcal{D}} and Γ\Gamma_{\ell}, respectively.

All of the above definitions are associated with the space of estimators 𝒟{\mathcal{D}} and the set of priors Γ\Gamma_{\ell}. We call {(Ut,Vt)}t\{(U_{t},V_{t})\}_{t} a pair of cumulative Bayes risk functions constructed from the pair (𝒟,Γ)({\mathcal{D}},\Gamma_{\ell}) of the space of estimators and the set of priors, and will consider pairs of cumulative Bayes risk functions constructed from other pairs (𝒟,Π)({\mathcal{D}}^{\prime},\Pi^{\prime}) of the space of estimators and the set of priors in the subsequent proof. We can define the above quantities similarly for such cases.

The following lemma gives an upper bound on the maximum variation of Us+tU_{s+t} and Vs+tV_{s+t} over the corresponding entire space from which they are constructed after tt iterations from ss when essentially all parts of these spaces are eligible in [s,s+t][s,s+t].

Lemma 4.

Suppose that {(Ut,Vt)}t\{(U_{t},V_{t})\}_{t} is a pair of cumulative Bayes risk functions constructed from (𝒟,Π)({\mathcal{D}}^{\prime},\Pi^{\prime}). Suppose that 𝒟=i=1I𝒟i{\mathcal{D}}^{\prime}=\bigcup_{i=1}^{I}{\mathcal{D}}_{i} and Π=j=1JΠj\Pi^{\prime}=\bigcup_{j=1}^{J}\Pi_{j} where

supi,tMVt(𝒟i)/tA,supj,tMVt(Πj)/tA\sup_{i,t}{\mathrm{MV}}_{t}({\mathcal{D}}_{i})/t\leq A,\quad\sup_{j,t}{\mathrm{MV}}_{t}(\Pi_{j})/t\leq A

for A<A<\infty. If all 𝒟i{\mathcal{D}}_{i} and Πj\Pi_{j} are eligible in [s,s+t][s,s+t], then max𝒟Us+tmin𝒟Us+t(2B+A)t\max_{{\mathcal{D}}^{\prime}}U_{s+t}-\min_{{\mathcal{D}}^{\prime}}U_{s+t}\leq(2B+A)t and maxΠVs+tminΠVs+t(2B+A)t\max_{\Pi^{\prime}}V_{s+t}-\min_{\Pi^{\prime}}V_{s+t}\leq(2B+A)t.

Proof of Lemma 4.

Without loss of generality, assume that \tilde{d}\in(\operatorname*{argmax}_{d\in{\mathcal{D}}^{\prime}}U_{s+t})\bigcap{\mathcal{D}}_{1}. Since {\mathcal{D}}_{1} is eligible in [s,s+t], there exists \tilde{t}\in[s,s+t] such that (\operatorname*{argmin}_{d\in{\mathcal{D}}^{\prime}}U_{\tilde{t}})\bigcap{\mathcal{D}}_{1}\neq\emptyset. By the recursive definition of the sequence \{U_{t}\}_{t} in (4), the bound on the risk, and the assumption that \sup_{i,t}{\mathrm{MV}}_{t}({\mathcal{D}}_{i})/t\leq A, we have that \max_{{\mathcal{D}}^{\prime}}U_{s+t}=U_{s+t}(\tilde{d})\leq U_{\tilde{t}}(\tilde{d})+B(s+t-\tilde{t})\leq\min_{{\mathcal{D}}^{\prime}}U_{\tilde{t}}+At+B(s+t-\tilde{t})\leq\min_{{\mathcal{D}}^{\prime}}U_{\tilde{t}}+(A+B)t. Letting \tilde{d}^{\prime}\in\operatorname*{argmin}_{d\in{\mathcal{D}}^{\prime}}U_{s+t}, by the bound on the risk, we can derive that \min_{{\mathcal{D}}^{\prime}}U_{s+t}=U_{s+t}(\tilde{d}^{\prime})\geq U_{\tilde{t}}(\tilde{d}^{\prime})-B(s+t-\tilde{t})\geq\min_{{\mathcal{D}}^{\prime}}U_{\tilde{t}}-Bt. Combining these two inequalities, we have that \max_{{\mathcal{D}}^{\prime}}U_{s+t}-\min_{{\mathcal{D}}^{\prime}}U_{s+t}\leq(2B+A)t. An identical argument applied to the sequence \{V_{t}\}_{t} shows that \max_{\Pi^{\prime}}V_{s+t}-\min_{\Pi^{\prime}}V_{s+t}\leq(2B+A)t. ∎

The next lemma builds on the previous lemma and provides an upper bound on maxVs+tminUs+t\max V_{s+t}-\min U_{s+t} under the same conditions.

Lemma 5.

Under the same setup and conditions as in Lemma 4, maxΠVs+tmin𝒟Us+t(4B+2A)t\max_{\Pi^{\prime}}V_{s+t}-\min_{{\mathcal{D}}^{\prime}}U_{s+t}\leq(4B+2A)t.

Proof of Lemma 5.

Summing the two inequalities in Lemma 4 and rearranging the terms, we have that maxΠVs+tmin𝒟Us+t(4B+2A)t+minΠVs+tmax𝒟Us+t\max_{\Pi^{\prime}}V_{s+t}-\min_{{\mathcal{D}}^{\prime}}U_{s+t}\leq(4B+2A)t+\min_{\Pi^{\prime}}V_{s+t}-\max_{{\mathcal{D}}^{\prime}}U_{s+t}. It therefore suffices to show that minΠVs+tmax𝒟Us+t\min_{\Pi^{\prime}}V_{s+t}\leq\max_{{\mathcal{D}}^{\prime}}U_{s+t}.

Let \tau:=s+t. There exist \pi^{\prime}\in\Pi^{\prime} and a stochastic strategy \overline{d}^{\prime}\in{\mathcal{D}}^{\prime} such that U_{\tau}(d)=U_{0}(d)+\tau\cdot r(d,\pi^{\prime}) and V_{\tau}(\pi)=V_{0}(\pi)+\tau\cdot r(\overline{d}^{\prime},\pi) for all d\in{\mathcal{D}}^{\prime} and all \pi\in\Pi^{\prime}. Therefore, for this choice of \pi^{\prime} and \overline{d}^{\prime}, using (3), \min_{\Pi^{\prime}}V_{\tau}\leq V_{\tau}(\pi^{\prime})=V_{0}(\pi^{\prime})+\tau\cdot r(\overline{d}^{\prime},\pi^{\prime})\leq\max_{\Pi^{\prime}}V_{0}+\tau\cdot r(\overline{d}^{\prime},\pi^{\prime})=\min_{{\mathcal{D}}^{\prime}}U_{0}+\tau\cdot r(\overline{d}^{\prime},\pi^{\prime})\leq U_{0}(\overline{d}^{\prime})+\tau\cdot r(\overline{d}^{\prime},\pi^{\prime})=U_{\tau}(\overline{d}^{\prime})\leq\max_{{\mathcal{D}}^{\prime}}U_{\tau}. ∎

Proof of Theorem 3.

It suffices to show that lim supt(maxVtminUt)/t0\limsup_{t\rightarrow\infty}(\max V_{t}-\min U_{t})/t\leq 0 by letting U00U_{0}\equiv 0 and V00V_{0}\equiv 0, which corresponds to Algorithm 3. Let ϵ>0\epsilon>0. Note that rr is Lipschitz continuous by Lemma 2 and the fact that r(d,π)r(d,\pi) is an average of bounded risks with weights π\pi. Furthermore, 𝒟{\mathcal{D}} and Γ\Gamma_{\ell} are both compact. In addition, U0U_{0} and V0V_{0} are both continuous. Therefore, there exist covers 𝒟=i=1I𝒟i{\mathcal{D}}=\bigcup_{i=1}^{I}{\mathcal{D}}_{i} and Γ=j=1JΠj\Gamma_{\ell}=\bigcup_{j=1}^{J}\Pi_{j} such that (i) 𝒟i{\mathcal{D}}_{i} and Πj\Pi_{j} are all compact, and (ii) supi,tMVt(𝒟i)/tϵ\sup_{i,t}{\mathrm{MV}}_{t}({\mathcal{D}}_{i})/t\leq\epsilon, supj,tMVt(Πj)/tϵ\sup_{j,t}{\mathrm{MV}}_{t}(\Pi_{j})/t\leq\epsilon. (Note that II and JJ may depend on ϵ\epsilon.) For index sets {1,2,,I}\mathcal{I}\subseteq\{1,2,\ldots,I\} and 𝒥{1,2,,J}\mathcal{J}\subseteq\{1,2,\ldots,J\}, define 𝒟:=i𝒟i{\mathcal{D}}_{\mathcal{I}}:=\bigcup_{i\in\mathcal{I}}{\mathcal{D}}_{i} and Π𝒥:=j𝒥Πj\Pi_{\mathcal{J}}:=\bigcup_{j\in\mathcal{J}}\Pi_{j}. We show that maxVtminUtCϵt\max V_{t}-\min U_{t}\leq C\epsilon t for an absolute constant CC and all sufficiently large tt via induction on the sizes of \mathcal{I} and 𝒥\mathcal{J}.

Let {(Ut,Vt)}t\{(U_{t},V_{t})\}_{t} be a pair of cumulative Bayes risk functions constructed from (𝒟,Π𝒥)({\mathcal{D}}_{\mathcal{I}},\Pi_{\mathcal{J}}) where ||=|𝒥|=1|\mathcal{I}|=|\mathcal{J}|=1. By (5) and the fact that MVt(𝒟)ϵt{\mathrm{MV}}_{t}({\mathcal{D}}_{\mathcal{I}})\leq\epsilon t and MVt(Π𝒥)ϵt{\mathrm{MV}}_{t}(\Pi_{\mathcal{J}})\leq\epsilon t, we have that

min𝒟Ut\displaystyle\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t} =mind𝒟[U0(d)+tr(d,π(t))]𝔼dϖ(t)[U0(d)]+tr(d¯(ϖ(t)),π(t))ϵt\displaystyle=\min_{d\in{\mathcal{D}}_{\mathcal{I}}}[U_{0}(d)+t\cdot r(d,\pi_{(t)})]\geq{\mathbb{E}}_{d\sim\varpi_{(t)}}[U_{0}(d)]+t\cdot r(\overline{d}(\varpi_{(t)}),\pi_{(t)})-\epsilon t
mind𝒟U0(d)+tr(d¯(ϖ(t)),π(t))ϵt\displaystyle\geq\min_{d\in{\mathcal{D}}_{\mathcal{I}}}U_{0}(d)+t\cdot r(\overline{d}(\varpi_{(t)}),\pi_{(t)})-\epsilon t
=maxπΠ𝒥V0(π)+tr(d¯(ϖ(t)),π(t))ϵt\displaystyle=\max_{\pi\in\Pi_{\mathcal{J}}}V_{0}(\pi)+t\cdot r(\overline{d}(\varpi_{(t)}),\pi_{(t)})-\epsilon t
V0(π(t))+tr(d¯(ϖ(t)),π(t))ϵt\displaystyle\geq V_{0}(\pi_{(t)})+t\cdot r(\overline{d}(\varpi_{(t)}),\pi_{(t)})-\epsilon t
maxπΠ𝒥[V0(π)+tr(d¯(ϖ(t)),π)]2ϵt=maxΠ𝒥Vt2ϵt.\displaystyle\geq\max_{\pi\in\Pi_{\mathcal{J}}}[V_{0}(\pi)+t\cdot r(\overline{d}(\varpi_{(t)}),\pi)]-2\epsilon t=\max_{\Pi_{\mathcal{J}}}V_{t}-2\epsilon t.

Therefore, maxΠ𝒥Vtmin𝒟Ut2ϵt\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq 2\epsilon t.

Let ϵ>0\epsilon^{\prime}>0 be arbitrary. Suppose that there exists t0t_{0} such that, for any \mathcal{I}^{\prime}\subseteq\mathcal{I} and 𝒥𝒥\mathcal{J}^{\prime}\subseteq\mathcal{J} such that \mathcal{I}^{\prime}\neq\mathcal{I} or 𝒥𝒥\mathcal{J}^{\prime}\neq\mathcal{J}, for any pair of cumulative Bayes risk functions {(Ut,Vt)}t\{(U_{t},V_{t})\}_{t} constructed from (𝒟,Π𝒥)({\mathcal{D}}_{\mathcal{I}^{\prime}},\Pi_{\mathcal{J}^{\prime}}), it holds that maxΠ𝒥Vtmin𝒟Utϵt\max_{\Pi_{\mathcal{J}^{\prime}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}^{\prime}}}U_{t}\leq\epsilon^{\prime}t for all tt0t\geq t_{0}. We next obtain a slightly greater bound on maxΠ𝒥Vtmin𝒟Ut\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t} for all sufficiently large tt.

We first prove that if, for a given pair of cumulative Bayes risk functions {(Ut,Vt)}t\{(U_{t},V_{t})\}_{t} constructed from (𝒟,Π𝒥)({\mathcal{D}}_{\mathcal{I}},\Pi_{\mathcal{J}}), there exists ii^{\prime}\in\mathcal{I} or j𝒥j^{\prime}\in\mathcal{J} such that 𝒟i{\mathcal{D}}_{i^{\prime}} or Πj\Pi_{j^{\prime}} is not eligible in an interval [s,s+t0][s,s+t_{0}], then

maxΠ𝒥Vs+t0min𝒟Us+t0maxΠ𝒥Vsmin𝒟Us+ϵt0.\max_{\Pi_{\mathcal{J}}}V_{s+t_{0}}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s+t_{0}}\leq\max_{\Pi_{\mathcal{J}}}V_{s}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s}+\epsilon^{\prime}t_{0}. (6)

Suppose that {\mathcal{D}}_{i^{\prime}} is not eligible in [s,s+t_{0}]. Then define U_{t}^{\prime}:=U_{s+t} and V_{t}^{\prime}:=V_{s+t}-\max_{\Pi_{\mathcal{J}}}V_{s}+\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s} for all t\geq 0. It is straightforward to check that \{(U_{t}^{\prime},V_{t}^{\prime})\}_{t=0}^{t_{0}} satisfies the recursive definition of a pair of cumulative Bayes risk functions constructed from ({\mathcal{D}}_{\mathcal{I}\setminus\{i^{\prime}\}},\Pi_{\mathcal{J}}). By the induction hypothesis, \max_{\Pi_{\mathcal{J}}}V_{t_{0}}^{\prime}-\min_{{\mathcal{D}}_{\mathcal{I}\setminus\{i^{\prime}\}}}U_{t_{0}}^{\prime}\leq\epsilon^{\prime}t_{0}. Therefore, \max_{\Pi_{\mathcal{J}}}V_{s+t_{0}}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s+t_{0}}=\max_{\Pi_{\mathcal{J}}}V_{t_{0}}^{\prime}-\min_{{\mathcal{D}}_{\mathcal{I}\setminus\{i^{\prime}\}}}U_{t_{0}}^{\prime}+\max_{\Pi_{\mathcal{J}}}V_{s}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s}\leq\max_{\Pi_{\mathcal{J}}}V_{s}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{s}+\epsilon^{\prime}t_{0}. A similar argument applies if \Pi_{j^{\prime}} is not eligible in [s,s+t_{0}].

Now we obtain a bound on maxΠ𝒥Vtmin𝒟Ut\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}. Let t>t0t>t_{0}, 𝒬:=t/t01\mathscr{Q}:=\lfloor t/t_{0}\rfloor\geq 1 and :=t/t0𝒬[0,1)\mathscr{R}:=t/t_{0}-\mathscr{Q}\in[0,1). There are two cases.

Case 1: There exists s_{0}\leq\mathscr{Q} such that {\mathcal{D}}_{i} and \Pi_{j} are eligible in [(s_{0}-1+\mathscr{R})t_{0},(s_{0}+\mathscr{R})t_{0}] for all i\in\mathcal{I} and j\in\mathcal{J}. Take s_{0} to be the largest such integer. Then, repeatedly applying (6) to the intervals [(s_{0}+\mathscr{R})t_{0},(s_{0}+1+\mathscr{R})t_{0}],[(s_{0}+1+\mathscr{R})t_{0},(s_{0}+2+\mathscr{R})t_{0}],\ldots,[(\mathscr{Q}-1+\mathscr{R})t_{0},(\mathscr{Q}+\mathscr{R})t_{0}]=[t-t_{0},t], we derive that

maxΠ𝒥Vtmin𝒟UtmaxΠ𝒥V(s0+)t0min𝒟U(s0+)t0+ϵ(𝒬s0)t0.\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq\max_{\Pi_{\mathcal{J}}}V_{(s_{0}+\mathscr{R})t_{0}}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{(s_{0}+\mathscr{R})t_{0}}+\epsilon^{\prime}(\mathscr{Q}-s_{0})t_{0}.

By Lemma 5 applied with A=\epsilon, \max_{\Pi_{\mathcal{J}}}V_{(s_{0}+\mathscr{R})t_{0}}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{(s_{0}+\mathscr{R})t_{0}}\leq(4B+2\epsilon)t_{0}. Therefore,

\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq(4B+2\epsilon)t_{0}+\epsilon^{\prime}(\mathscr{Q}-s_{0})t_{0}\leq(4B+2\epsilon)t_{0}+\epsilon^{\prime}t.

Case 2: There is no integer s_{0} satisfying the condition in Case 1. Then, repeatedly applying (6) to the intervals [\mathscr{R}t_{0},(1+\mathscr{R})t_{0}],[(1+\mathscr{R})t_{0},(2+\mathscr{R})t_{0}],\ldots,[(\mathscr{Q}-1+\mathscr{R})t_{0},(\mathscr{Q}+\mathscr{R})t_{0}]=[t-t_{0},t], we derive that

maxΠ𝒥Vtmin𝒟UtmaxΠ𝒥Vt0min𝒟Ut0+ϵ𝒬t0.\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq\max_{\Pi_{\mathcal{J}}}V_{\mathscr{R}t_{0}}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{\mathscr{R}t_{0}}+\epsilon^{\prime}\mathscr{Q}t_{0}.

By the bound on the risk, maxΠ𝒥Vt0Bt0\max_{\Pi_{\mathcal{J}}}V_{\mathscr{R}t_{0}}\leq B\mathscr{R}t_{0} and min𝒟Ut0Bt0\min_{{\mathcal{D}}_{\mathcal{I}}}U_{\mathscr{R}t_{0}}\geq-B\mathscr{R}t_{0}. Hence,

\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq 2B\mathscr{R}t_{0}+\epsilon^{\prime}\mathscr{Q}t_{0}\leq(4B+2\epsilon)t_{0}+\epsilon^{\prime}t.

Thus, in both cases, it holds that \max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq(4B+2\epsilon)t_{0}+\epsilon^{\prime}t for t>t_{0}. Let C>0 be any constant (which may depend on \epsilon, the bound on {\mathrm{MV}}_{t}/t that quantifies the approximation accuracy of the covers). The following holds for any sufficiently large t:

\max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t}\leq(4B+2\epsilon)t_{0}+\epsilon^{\prime}t\leq(1+C)\epsilon^{\prime}t. (7)

In other words, we have shown that, after increasing the size of either index set by 1, for all sufficiently large t we obtain a bound on \max_{\Pi_{\mathcal{J}}}V_{t}-\min_{{\mathcal{D}}_{\mathcal{I}}}U_{t} that grows by a multiplicative factor of at most (1+C) relative to the original bound.

It takes finitely many, say N, steps to induct from the initial case, where the sizes of both index sets are one, to the case of interest with index sets \{1,\ldots,I\} and \{1,\ldots,J\}. (Note that N may also depend on \epsilon through its dependence on I and J.) Taking C=1/N in (7), we derive that, for all sufficiently large t,

maxVtminUt=maxΠ{1,,J}Vtmin𝒟{1,,I}Ut(1+1/N)N2ϵt2eϵt\max V_{t}-\min U_{t}=\max_{\Pi_{\{1,\ldots,J\}}}V_{t}-\min_{{\mathcal{D}}_{\{1,\ldots,I\}}}U_{t}\leq(1+1/N)^{N}\cdot 2\epsilon t\leq 2e\epsilon t

where e is the base of the natural logarithm. Since \epsilon is arbitrary, we conclude that \limsup_{t\rightarrow\infty}(\max V_{t}-\min U_{t})/t\leq 0. ∎

E.5 Derivation of Γ\Gamma-minimax estimator of the mean in Section C

In this section, we show that, for the problem of estimating the mean in Section C, one Γ\Gamma-minimax estimator lies in 𝒟linear{\mathcal{D}}_{\mathrm{linear}}. This is formally presented below.

Proposition 1.

Let {\mathcal{M}} consist of all probability distributions defined on the Borel σ\sigma-algebra on [0,1][0,1]. Let X1,,XniidP0X_{1},\ldots,X_{n}\overset{\mathrm{iid}}{\sim}P_{0}\in{\mathcal{M}} and 𝐗=(X1,X2,,Xn)\mathbf{X}=(X_{1},X_{2},\ldots,X_{n}) be the observed data. Let Ψ:PxP(dx)\Psi:P\mapsto\int xP({\mathrm{d}}x) denote the mean parameter and Γ={πΠ:Ψ(P)π(dP)=μ}\Gamma=\{\pi\in\Pi:\int\Psi(P)\pi({\mathrm{d}}P)=\mu\} be the set of priors that represent prior information. Let 𝒟{\mathcal{D}} denote the space of estimators that are square-integrable with respect to all PP\in{\mathcal{M}}. Consider the risk in Example 1, R:(d,P)𝔼P[(d(𝐗)Ψ(P))2]R:(d,P)\mapsto{\mathbb{E}}_{P}[(d(\mathbf{X})-\Psi(P))^{2}]. Define X¯=i=1nXi/n\bar{X}=\sum_{i=1}^{n}X_{i}/n and d0:𝐗(μ+nX¯)/(1+n)d_{0}:\mathbf{X}\mapsto(\mu+\sqrt{n}\bar{X})/(1+\sqrt{n}). Then d0𝒟lineard_{0}\in{\mathcal{D}}_{\mathrm{linear}} is Γ\Gamma-minimax over 𝒟{\mathcal{D}}.

We first present a theorem giving a sufficient condition for \Gamma-minimaxity.

Theorem 7.

Suppose that d0𝒟d_{0}\in{\mathcal{D}} is a Bayes estimator for π0Γ\pi_{0}\in\Gamma and r(d0,π0)=rsup(d0,Γ)r(d_{0},\pi_{0})=r_{\sup}(d_{0},\Gamma). Then d0d_{0} is a Γ\Gamma-minimax estimator in 𝒟{\mathcal{D}}.

Proof of Theorem 7.

Clearly rsup(d0,Γ)infd𝒟rsup(d,Γ)r_{\sup}(d_{0},\Gamma)\geq\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma). Fix d𝒟d^{\prime}\in{\mathcal{D}}. Then, rsup(d,Γ)r(d,π0)r(d0,π0)=rsup(d0,Γ)r_{\sup}(d^{\prime},\Gamma)\geq r(d^{\prime},\pi_{0})\geq r(d_{0},\pi_{0})=r_{\sup}(d_{0},\Gamma). Since dd^{\prime} is arbitrary, this shows that infd𝒟rsup(d,Γ)rsup(d0,Γ)\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma)\geq r_{\sup}(d_{0},\Gamma). Thus, rsup(d0,Γ)=infd𝒟rsup(d,Γ)r_{\sup}(d_{0},\Gamma)=\inf_{d\in{\mathcal{D}}}r_{\sup}(d,\Gamma) and d0d_{0} is Γ\Gamma-minimax. ∎

We now present a lemma that is used to prove Proposition 1.

Lemma 6.

Let a<b and suppose that {\mathcal{M}} denotes the model space that consists of all probability distributions defined on the Borel \sigma-algebra on [a,b]\subseteq\mathbb{R} with mean \mu\in[a,b]. Let X denote a generic random variable generated from some P\in{\mathcal{M}}. Then \max_{P\in{\mathcal{M}}}{\mathrm{Var}}_{P}(X)={\mathrm{Var}}_{P^{*}}(X)=(b-\mu)(\mu-a), where P^{*} is defined by P^{*}(X=a)=(b-\mu)/(b-a) and P^{*}(X=b)=(\mu-a)/(b-a).

Proof of Lemma 6.

Without loss of generality, we may assume that a=-1 and b=1; the general case follows by applying the affine transformation x\mapsto(2x-a-b)/(b-a), which maps [a,b] onto [-1,1] and scales the variance by 4/(b-a)^{2}. Note that for any P\in{\mathcal{M}}, it holds that {\mathrm{Var}}_{P}(X)={\mathbb{E}}_{P}[X^{2}]-{\mathbb{E}}_{P}[X]^{2}={\mathbb{E}}_{P}[X^{2}]-\mu^{2}\leq 1-\mu^{2}, where the equality is attained if P(X\in\{-1,1\})=1. Therefore, the maximum variance is achieved at the distribution with the specified mean \mu and support contained in \{-1,1\}, that is, at the distribution P^{*} defined in the lemma statement. Straightforward calculations show that {\mathrm{Var}}_{P^{*}}(X)=(b-\mu)(\mu-a). ∎
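
A quick numeric check of Lemma 6 (ours, illustration only; the Beta comparison distribution is an arbitrary choice):

```python
import numpy as np

# Two-point law on {a, b} with mean mu attains the variance (b - mu)(mu - a).
a, b, mu = 0.0, 1.0, 0.3
p_b = (mu - a) / (b - a)                        # P*(X = b)
var_two_point = p_b * (b - mu) ** 2 + (1 - p_b) * (a - mu) ** 2
print(var_two_point, (b - mu) * (mu - a))       # both print 0.21

# Any other distribution on [a, b] with mean mu has smaller variance, e.g.:
x = np.random.default_rng(0).beta(3, 7, 100000)  # mean 0.3, supported on [0, 1]
print(x.var())                                   # noticeably below 0.21
```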

Proof of Proposition 1.

Let {\mathcal{M}}^{\prime}:=\{\text{Bernoulli}(\theta):\theta\in(0,1)\}\subseteq{\mathcal{M}} and let \pi_{0} be a prior distribution over {\mathcal{M}}^{\prime} such that the prior distribution on the success probability \theta is \mathrm{Beta}(\mu\sqrt{n},(1-\mu)\sqrt{n}); note that \pi_{0}\in\Gamma since the prior mean of \theta is \mu. By Theorem 1.1 in Chapter 4 of Lehmann and Casella, (1998), a Bayes estimator for \pi_{0} minimizes the posterior risk; for our squared-error risk, the minimizer over {\mathcal{D}} is the posterior mean, which equals d_{0}. That is, d_{0} is a Bayes estimator in {\mathcal{D}} for \pi_{0}.
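
For concreteness, this posterior mean is available in closed form: the Beta prior is conjugate to the Bernoulli likelihood, so the posterior of \theta given \mathbf{X} is \mathrm{Beta}(\mu\sqrt{n}+n\bar{X},(1-\mu)\sqrt{n}+n-n\bar{X}), and therefore

d_{0}(\mathbf{X})={\mathbb{E}}[\theta\mid\mathbf{X}]=\frac{\mu\sqrt{n}+n\bar{X}}{\sqrt{n}+n}=\frac{\mu+\sqrt{n}\bar{X}}{1+\sqrt{n}}.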

We next show that r(d0,π0)=supπΓr(d0,π)r(d_{0},\pi_{0})=\sup_{\pi\in\Gamma}r(d_{0},\pi). Let πΓ\pi\in\Gamma be arbitrary. Since 𝔼P[X¯]=Ψ(P){\mathbb{E}}_{P}[\bar{X}]=\Psi(P) and VarP(X¯)=VarP(X1)/n{\mathrm{Var}}_{P}(\bar{X})={\mathrm{Var}}_{P}(X_{1})/n, we can derive that

r(d0,π)\displaystyle r(d_{0},\pi) =𝔼P[{μ+nX¯1+nΨ(P)}2]π(dP)\displaystyle=\int{\mathbb{E}}_{P}\left[\left\{\frac{\mu+\sqrt{n}\bar{X}}{1+\sqrt{n}}-\Psi(P)\right\}^{2}\right]\pi({\mathrm{d}}P)
=𝔼P[{n1+n(X¯Ψ(P))+μΨ(P)1+n}2]π(dP)\displaystyle=\int{\mathbb{E}}_{P}\left[\left\{\frac{\sqrt{n}}{1+\sqrt{n}}\left(\bar{X}-\Psi(P)\right)+\frac{\mu-\Psi(P)}{1+\sqrt{n}}\right\}^{2}\right]\pi({\mathrm{d}}P)
={1(1+n)2VarP(X1)+(μΨ(P))2(1+n)2}π(dP)\displaystyle=\int\left\{\frac{1}{(1+\sqrt{n})^{2}}{\mathrm{Var}}_{P}(X_{1})+\frac{(\mu-\Psi(P))^{2}}{(1+\sqrt{n})^{2}}\right\}\pi({\mathrm{d}}P)
Applying Lemma 6 (with a=0 and b=1) to {\mathrm{Var}}_{P}(X_{1}) for each P, the display continues as
{1(1+n)2Ψ(P)(1Ψ(P))+(μΨ(P))2(1+n)2}π(dP)\displaystyle\leq\int\left\{\frac{1}{(1+\sqrt{n})^{2}}\Psi(P)(1-\Psi(P))+\frac{(\mu-\Psi(P))^{2}}{(1+\sqrt{n})^{2}}\right\}\pi({\mathrm{d}}P)
=1(1+n)2{μ2+(12μ)Ψ(P)}π(dP)=μ(1μ)(1+n)2.\displaystyle=\int\frac{1}{(1+\sqrt{n})^{2}}\left\{\mu^{2}+(1-2\mu)\Psi(P)\right\}\pi({\mathrm{d}}P)=\frac{\mu(1-\mu)}{(1+\sqrt{n})^{2}}.

Here, the last equality uses the constraint \int\Psi(P)\pi({\mathrm{d}}P)=\mu that defines \Gamma. This upper bound is attained by any \pi\in\Gamma with support contained in {\mathcal{M}}^{\prime}, for example \pi_{0}, since {\mathrm{Var}}_{P}(X_{1})=\Psi(P)(1-\Psi(P)) for every Bernoulli distribution P. Therefore, r_{\sup}(d_{0},\Gamma)=r(d_{0},\pi_{0}). By Theorem 7, d_{0} is \Gamma-minimax over {\mathcal{D}}. ∎
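
As a sanity check on Proposition 1, the following short simulation (ours, not from the paper) verifies that the Bayes risk of d_{0} under a Bernoulli-supported prior with mean constraint \mu matches \mu(1-\mu)/(1+\sqrt{n})^{2}, and that the sample mean incurs a larger Bayes risk under the same prior:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, reps = 25, 0.3, 200000

# Prior on Bernoulli(theta) with E[theta] = mu (the Beta prior from the proof above).
theta = rng.beta(mu * np.sqrt(n), (1 - mu) * np.sqrt(n), size=reps)
xbar = rng.binomial(n, theta) / n

d0 = (mu + np.sqrt(n) * xbar) / (1 + np.sqrt(n))
print(np.mean((d0 - theta) ** 2))              # Bayes risk of d0 under this prior
print(mu * (1 - mu) / (1 + np.sqrt(n)) ** 2)   # r_sup(d0, Gamma) from Proposition 1
print(np.mean((xbar - theta) ** 2))            # Bayes risk of the sample mean (larger)
```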

Table 7: Summary of frequently used symbols
Symbol Description
P0P_{0} True data-generating mechanism
{\mathcal{M}} Space of data-generating mechanisms containing P0P_{0}
𝐗\mathbf{X}^{*} and 𝐗=𝒞(𝐗)\mathbf{X}=\mathcal{C}(\mathbf{X}^{*}) Full generated data and coarsened data
𝒟{\mathcal{D}} Space of candidate estimators or decision functions (e.g., neural networks)
RR Risk function
rr Bayes risk function r:(d,π)R(d,P)π(dP)r:(d,\pi)\mapsto\int R(d,P)\pi({\mathrm{d}}P)
Γ(Π)\Gamma(\subseteq\Pi) Set of prior distributions consistent with prior knowledge
Ψ\Psi Functional defining the estimand Ψ(P0)\Psi(P_{0}) in Examples 1–3
{\mathcal{M}}_{\ell} An increasing sequence of finite subsets of {\mathcal{M}}
Γ\Gamma_{\ell} Set of priors in Γ\Gamma with support in {\mathcal{M}}_{\ell}
rsupr_{\sup} Worst-case Bayes risk function rsup:(d,Γ)supπΓr(d,π)r_{\sup}:(d,\Gamma^{\prime})\mapsto\sup_{\pi\in\Gamma^{\prime}}r(d,\pi)
dd_{\ell}^{*} Γ\Gamma_{\ell}-minimax estimator in 𝒟{\mathcal{D}}
dd^{*} A limit point of sequence {d}=1\{d_{\ell}^{*}\}_{\ell=1}^{\infty}, which is Γ\Gamma-minimax in 𝒟{\mathcal{D}} by Theorem 1
\beta(\in\mathbb{R}^{D}) Coefficient of a finite-dimensional estimator (e.g., neural network)
ξΞ\xi\sim\Xi Exogenous randomness
R^(β,P,ξ)\hat{R}(\beta,P,\xi) Unbiased approximation of R(β,P)R(\beta,P)
d¯(ϖ)\overline{d}(\varpi) Stochastic estimator following distribution ϖ\varpi over 𝒟{\mathcal{D}}
𝒟¯\overline{{\mathcal{D}}} Space of stochastic estimators d¯(ϖ)\overline{d}(\varpi)
d¯=d¯(ϖ)\overline{d}^{*}=\overline{d}(\varpi^{*}) Γ\Gamma-minimax estimator in 𝒟¯\overline{{\mathcal{D}}}

References

  • Amazon, (2019) Amazon (2019). Amazon EC2 Instance Types - Amazon Web Services.
  • Bartlett, (1997) Bartlett, P. L. (1997). For valid generalization, the size of the weights is more important than the size of the network. Advances in Neural Information Processing Systems, pages 134–140.
  • Bartlett et al., (2017) Bartlett, P. L., Foster, D. J., and Telgarsky, M. (2017). Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems, 2017-December:6241–6250.
  • Berger, (1985) Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. Springer New York, New York, NY.
  • Bickel et al., (1993) Bickel, P. J., Klaassen, C. A., Bickel, P. J., Ritov, Y., Klaassen, J., Wellner, J. A., and Ritov, Y. (1993). Efficient and adaptive estimation for semiparametric models, volume 4. Johns Hopkins University Press Baltimore.
  • Birmingham et al., (2003) Birmingham, J., Rotnitzky, A., and Fitzmaurice, G. M. (2003). Pattern-mixture and selection models for analysing longitudinal data with monotone missing patterns. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 65(1):275–297.
  • Brown, (1951) Brown, G. W. (1951). Iterative solution of games by fictitious play. Activity analysis of production and allocation, 13(1):374–376.
  • Bryan et al., (2007) Bryan, B., McMahan, H. B., Schafer, C. M., and Schneider, J. (2007). Efficiently computing minimax expected-size confidence regions. ACM International Conference Proceeding Series, 227:97–104.
  • Bunge et al., (2014) Bunge, J., Willis, A., and Walsh, F. (2014). Estimating the Number of Species in Microbial Diversity Studies. Annual Review of Statistics and Its Application, 1(1):427–445.
  • Chen et al., (1991) Chen, L., Eichenauer-Herrmann, J., Hofmann, H., and Kindler, J. (1991). Gamma-minimax estimators in the exponential family. Polska Akademia Nauk, Instytut Matematyczny.
  • Chen et al., (1988) Chen, L., Eichenauer-Herrmann, J., and Lehn, J. (1988). Gamma-minimax estimators for the parameters of a multinomial distribution. Applicationes Mathematicae, 20(4):561–564.
  • Chen, (2007) Chen, X. (2007). Chapter 76 Large Sample Sieve Estimation of Semi-Nonparametric Models. Handbook of Econometrics, 6(SUPPL. PART B):5549–5632.
  • Csáji, (2001) Csáji, B. (2001). Approximation with artificial neural networks. Technical report.
  • Eckle and Schmidt-Hieber, (2019) Eckle, K. and Schmidt-Hieber, J. (2019). A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Networks, 110:232–242.
  • Eichenauer-Herrmann, (1990) Eichenauer-Herrmann, J. (1990). A gamma-minimax result for the class of symmetric and unimodal priors. Statistical Papers, 31(1):301–304.
  • Eichenauer-Herrmann et al., (1994) Eichenauer-Herrmann, J., Ickstadt, K., and Weiß, E. (1994). Gamma-minimax results for the class of unimodal priors. Statistical Papers, 35(1):43–56.
  • Erhan et al., (2010) Erhan, D., Courville, A., Bengio, Y., and Vincent, P. (2010). Why does unsupervised pre-training help deep learning? Technical report.
  • Fan, (1953) Fan, K. (1953). Minimax theorems. Proceedings of the National Academy of Sciences of the United States of America, 39(1):42.
  • Gill et al., (1997) Gill, R. D., van der Laan, M. J., and Robins, J. M. (1997). Coarsening at Random: Characterizations, Conjectures, Counter-Examples. pages 255–294. Springer, New York, NY.
  • Glorot et al., (2011) Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep sparse rectifier neural networks. Technical report.
  • Goel et al., (2016) Goel, S., Kanade, V., Klivans, A., and Thaler, J. (2016). Reliably Learning the ReLU in Polynomial Time.
  • Goodfellow et al., (2014) Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. Technical Report January.
  • Green, (1995) Green, P. J. (1995). Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination. Biometrika, 82(4):711.
  • Hanin and Sellke, (2017) Hanin, B. and Sellke, M. (2017). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278v2.
  • Hastings, (1970) Hastings, W. K. (1970). Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57(1):97.
  • Heitjan, (1993) Heitjan, D. F. (1993). Ignorability and Coarse Data: Some Biomedical Examples. Biometrics, 49(4):1099.
  • Heitjan, (1994) Heitjan, D. F. (1994). Ignorability in general incomplete-data models. Biometrika, 81(4):701–708.
  • Heitjan and Rubin, (1991) Heitjan, D. F. and Rubin, D. B. (1991). Ignorability and Coarse Data. Technical Report 4.
  • Hornik, (1991) Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257.
  • Huang et al., (2021) Huang, F., Wu, X., and Huang, H. (2021). Efficient mirror descent ascent methods for nonsmooth minimax problems. Advances in Neural Information Processing Systems, 34.
  • Huang et al., (2006a) Huang, G. B., Chen, L., and Siew, C. K. (2006a). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4):879–892.
  • Huang et al., (2006b) Huang, G. B., Zhu, Q. Y., and Siew, C. K. (2006b). Extreme learning machine: Theory and applications. Neurocomputing, 70(1-3):489–501.
  • Jiang et al., (2020) Jiang, S., Song, Z., Weinstein, O., and Zhang, H. (2020). Faster Dynamic Matrix Inverse for Faster LPs. arXiv preprint arXiv:2004.07470v1.
  • Jiao et al., (2015) Jiao, J., Venkat, K., Han, Y., and Weissman, T. (2015). Minimax Estimation of Functionals of Discrete Distributions. IEEE Transactions on Information Theory, 61(5):2835–2885.
  • Kempthorne, (1987) Kempthorne, P. J. (1987). Numerical Specification of Discrete Least Favorable Prior Distributions. SIAM Journal on Scientific and Statistical Computing, 8(2):171–184.
  • Kidger and Lyons, (2020) Kidger, P. and Lyons, T. (2020). Universal Approximation with Deep Narrow Networks. Technical report.
  • Kosorok, (2008) Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference, volume 77 of Springer Series in Statistics. Springer New York.
  • Lehmann and Casella, (1998) Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation. Springer.
  • Lin et al., (2020) Lin, T., Jin, C., and Jordan, M. I. (2020). On gradient descent ascent for nonconvex-concave minimax problems. 37th International Conference on Machine Learning, ICML 2020, PartF168147-8:6039–6049.
  • Luedtke et al., (2020a) Luedtke, A., Carone, M., Simon, N., and Sofrygin, O. (2020a). Learning to learn from data: Using deep adversarial learning to construct optimal statistical procedures. Science Advances, 6(9):eaaw2140.
  • Luedtke et al., (2020b) Luedtke, A., Chung, I., and Sofrygin, O. (2020b). Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures. arXiv preprint arXiv:2002.11275v1.
  • Maron et al., (2019) Maron, H., Fetaya, E., Segol, N., and Lipman, Y. (2019). On the Universality of Invariant Networks.
  • Metropolis et al., (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092.
  • Miller and Wiegert, (1989) Miller, R. I. and Wiegert, R. G. (1989). Documenting completeness, species-area relations, and the species-abundance distribution of a regional flora. Ecology, 70(1):16–22.
  • Nelson, (1966) Nelson, W. (1966). Minimax Solution of Statistical Decision Problems by Iteration. The Annals of Mathematical Statistics, 37(6):1643–1657.
  • Neyshabur et al., (2017) Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. (2017). Exploring generalization in deep learning. Advances in Neural Information Processing Systems, 2017-December:5948–5957.
  • Noubiap and Seidel, (2001) Noubiap, R. F. and Seidel, W. (2001). An algorithm for calculating Γ\Gamma-minimax decision rules under generalized moment conditions. Annals of Statistics, 29(4):1094–1116.
  • Olman and Shmundak, (1985) Olman, V. and Shmundak, A. (1985). Minimax Bayes estimation of mean of normal law for the class of unimodal a priori distributions. Proc. Acad. Sci. Estonian Physics Math, 34:148–153.
  • Orlitsky et al., (2016) Orlitsky, A., Suresh, A. T., and Wu, Y. (2016). Optimal prediction of the number of unseen species. Proceedings of the National Academy of Sciences of the United States of America, 113(47):13283–13288.
  • Paszke et al., (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019). Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
  • Pfanzagl, (1990) Pfanzagl, J. (1990). Estimation in semiparametric models. pages 17–22. Springer, New York, NY.
  • Pinelis, (2016) Pinelis, I. (2016). On the extreme points of moments sets. Mathematical Methods of Operations Research, 83(3):325–349.
  • Qiu, (2022) Qiu, H. (2022). QIU-Hongxiang-David/Gamma-minimax-learning: Simulation code for “Constructing Gamma-minimax estimators to leverage vague prior information”. https://github.com/QIU-Hongxiang-David/Gamma-minimax-learning/. [Online; accessed 2022-03-14].
  • Robbins, (1951) Robbins, H. (1951). Asymptotically Subminimax Solutions of Compound Statistical Decision Problems. In Proceedings of the second Berkeley symposium on mathematical statistics and probability, pages 131–149.
  • Robinson, (1951) Robinson, J. (1951). An Iterative Method of Solving a Game. The Annals of Mathematics, 54(2):296.
  • Sarma and Kay, (2020) Sarma, A. and Kay, M. (2020). Prior Setting in Practice: Strategies and Rationales Used in Choosing Prior Distributions for Bayesian Analysis. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery.
  • Schafer and Stark, (2009) Schafer, C. M. and Stark, P. B. (2009). Constructing confidence regions of optimal expected size. Journal of the American Statistical Association, 104(487):1080–1089.
  • Shannon, (1948) Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423.
  • Shen et al., (2003) Shen, T. J., Chao, A., and Lin, C. F. (2003). Predicting the number of new species in further taxonomic sampling. Ecology, 84(3):798–804.
  • Sion, (1958) Sion, M. (1958). On general minimax theorems. Pacific Journal of Mathematics, 8(1):171–176.
  • Spielman and Teng, (2004) Spielman, D. A. and Teng, S. H. (2004). Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385–463.
  • Torrey and Shavlik, (2009) Torrey, L. and Shavlik, J. (2009). Transfer learning. Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, 11:242–264.
  • v. Neumann, (1928) v. Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295–320.
  • van der Vaart and Wellner, (2000) van der Vaart, A. and Wellner, J. (2000). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer.
  • Vidakovic, (2000) Vidakovic, B. (2000). Γ\Gamma-Minimax: A Paradigm for Conservative Robust Bayesians. pages 241–259. Springer, New York, NY.
  • Wald, (1945) Wald, A. (1945). Statistical Decision Functions Which Minimize the Maximum Risk. The Annals of Mathematics, 46(2):265.
  • Zaheer et al., (2017) Zaheer, M., Kottur, S., Ravanbhakhsh, S., Póczos, B., Salakhutdinov, R., and Smola, A. J. (2017). Deep sets. Advances in Neural Information Processing Systems, 2017-Decem(ii):3392–3402.
  • Zhang et al., (2016) Zhang, Y., Lee, J. D., and Jordan, M. I. (2016). L1-regularized neural networks are improperly learnable in polynomial time. 33rd International Conference on Machine Learning, ICML 2016, 3:1555–1563.