Adversarial Meta-Learning of Gamma-Minimax Estimators
That Leverage Prior Knowledge
Abstract
Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity is defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We also introduce a neural network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
1 Introduction
A variety of principles can be used to guide the search for a suitable statistical estimator. Asymptotic efficiency (Pfanzagl, 1990), minimaxity (Wald, 1945), and Bayes optimality (Berger, 1985) are popular examples of such principles. Defining the performance criteria underlying these principles requires specifying a model space, that is, a collection of possible data-generating mechanisms known to contain the true, underlying distribution.
It is often desirable to incorporate prior information about the true data-generating mechanism into a statistical procedure. This might be done differently in different statistical paradigms. For frequentist methods, such as those based on the asymptotic efficiency or minimax principle, the primary way to incorporate this information is via the definition of the model space. In the Bayesian paradigm, such information may be represented by further specifying a prior distribution (or prior for short) over the model space and aiming for an estimator that minimizes the induced Bayes risk. However, in many cases, there may be several priors that are compatible with the available information, especially in the context of rich model spaces.
The Gamma-minimax paradigm, proposed by Robbins (1951), provides a principled means to overcome the challenge of specifying a single prior distribution. Under this paradigm, a statistician first specifies the set of all priors that are consistent with the available prior information and subsequently seeks an estimator that minimizes the worst-case Bayes risk over this set of priors. The Gamma-minimax paradigm may be viewed as a robust version of the Bayesian paradigm that is less sensitive to misspecification of a prior distribution (Vidakovic, 2000). When it is infeasible to specify a prior due to the complexity of the model space, the Gamma-minimax paradigm may also be viewed as a feasible substitute for the Bayesian paradigm. The Gamma-minimax paradigm is closely related to the Bayes and minimax paradigms: when the set of priors consists of a single prior, a Gamma-minimax estimator is Bayes with respect to that prior; when the set of priors is the entire set of possible prior distributions, a Gamma-minimax estimator is also minimax.
Gamma-minimax estimators have been studied for a variety of problems, and some explicit forms have been obtained. For example, Olman and Shmundak (1985) studied Gamma-minimax estimation of the mean of a normal distribution for the set of symmetric and unimodal priors on an interval and obtained an explicit form when this interval is sufficiently small. Eichenauer-Herrmann (1990) generalized this result to more general parametric models, and Eichenauer-Herrmann et al. (1994) obtained a further generalization in which the requirement of symmetry on the priors is dropped. Chen et al. (1988) studied Gamma-minimax estimation for multinomial distributions and the set of priors with bounded mean. Chen et al. (1991) studied Gamma-minimax estimation for one-parameter exponential families and the set of priors that place certain bounds on the first two moments. These results do not deal with general model spaces, such as semiparametric or nonparametric models, or with general forms of the set of priors that may not directly impose bounds on prior moments of the parameters of interest. One reason for this lack of generality might be that, in the existing literature, Gamma-minimaxity is defined only for parametric models. However, parametric models often fail to contain the true data-generating mechanism, in which case the output of the aforementioned statistical procedures may no longer be interpretable. Another possible reason is that it is typically intractable to analytically derive Gamma-minimax estimators, even for parametric models.
To overcome this lack of analytical tractability, meta-learning algorithms to compute a minimax or Gamma-minimax estimator have been proposed, though most of these works focus on parametric models. For example, Nelson (1966) and Kempthorne (1987) each proposed an algorithm to compute a minimax estimator. Bryan et al. (2007) and Schafer and Stark (2009) proposed algorithms to compute an approximate confidence region of optimal expected size in the minimax sense. Noubiap and Seidel (2001) proposed an iterative algorithm to compute a Gamma-minimax decision for a set of priors constrained by generalized moment conditions. More recent works have explored computing estimators under more general models. For example, Luedtke et al. (2020a) introduced an approach, termed adversarial Monte Carlo meta-learning (AMC), for constructing minimax estimators. In the special case of prediction problems with mean-squared error, Luedtke et al. (2020b) studied the invariance properties of the decision problem and their implications for AMC.
In this paper, we make the following contributions:
1. We propose iterative adversarial meta-learning algorithms for constructing Gamma-minimax estimators for a general model space and class of estimators. We further provide convergence guarantees for these algorithms.
To our best knowledge, this is the first algorithm to compute Gamma-minimax estimators under general models, including infinite-dimensional models. We also show that, for certain problems, there is a unique Gamma-minimax estimator and our computed estimator converges to this estimator as the number of iterations increases to infinity.
Like the approach proposed in Noubiap and Seidel (2001), we consider sets of priors characterized by (in)equality constraints on prior generalized moments, and our proposed iterative algorithm involves solving a discretized Gamma-minimax optimization problem in each intermediate step. However, we explicitly describe algorithms to solve these minimax problems, which facilitates the use of our approach by practitioners. When the space of estimators can be parameterized by a Euclidean space and gradients are available, we propose to use a gradient-based algorithm or a stochastic variant thereof. When gradients are unavailable, we propose to instead use fictitious play (Brown, 1951; Robinson, 1951) to compute a stochastic estimator, which is a mixture of deterministic estimators belonging to some specified collection. We also provide a convergence result that is applicable even when this collection has infinite cardinality. This is in contrast to the results in Robinson (1951), which require that each player have only finitely many possible deterministic strategies.
2. We propose a Markov chain Monte Carlo (MCMC) method to construct the approximating grids defining the discretized Gamma-minimax problems used in our iterative scheme.
Like the approach proposed in Noubiap and Seidel (2001), our proposed iterative algorithm relies on increasingly fine finite grids over the model space. However, since we allow the model space to be high- or even infinite-dimensional, randomly adding points to the grid may lead to unacceptably slow convergence. To overcome this challenge, we propose to use MCMC to construct such grids efficiently.
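As a rough sketch of how such a grid might be grown, the snippet below runs a random-walk Metropolis sampler over multinomial probability vectors and collects the visited states as candidate grid points. All specifics here — the simplex parameterization, the flat target density, and the step size — are illustrative assumptions, not the paper's actual construction; in practice the target would concentrate where the Bayes risk of the current estimator is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_grid(dim=5, n_points=200, step=0.05):
    """Random-walk Metropolis over the interior of the probability simplex.
    Every visited state is collected as a candidate grid point; with a flat
    target, any feasible proposal is accepted."""
    p = np.full(dim, 1.0 / dim)
    grid = [p.copy()]
    while len(grid) < n_points:
        prop = p + step * rng.normal(size=dim)
        prop -= (prop.sum() - 1.0) / dim  # project back onto the affine hull
        if np.all(prop > 0):              # flat target: accept feasible moves
            p = prop
        grid.append(p.copy())
    return np.array(grid)

grid = mh_grid()
```

A non-flat target (for example, one proportional to the current estimator's risk) would simply add the usual Metropolis accept/reject ratio to the feasibility check.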
Our theoretical results allow for many different choices of classes of estimators. Our final contribution concerns the introduction of one such class:
3. We introduce a new neural network architecture that can be used to parameterize statistical estimators, and we argue that it represents an appealing class to optimize over.
For this final point, we build on existing works in adversarial learning (e.g., Goodfellow et al., 2014; Luedtke et al., 2020a,b) and extreme learning machines (Huang et al., 2006b). Thanks to the universal approximation properties of neural networks (e.g., Hornik, 1991; Csáji, 2001) and extreme learning machines (Huang et al., 2006a), we also show that both of these parameterizations can achieve good performance for sufficiently large networks. Furthermore, inspired by pre-training (e.g., Erhan et al., 2010) and transfer learning (e.g., Torrey and Shavlik, 2009), we recommend leveraging knowledge of existing estimators as inputs to the network in settings where this is possible. Under such choices of the space of estimators, we can expect to obtain a useful estimator even if the associated nonconvex-concave minimax problems prove difficult.
This paper is organized as follows. In Section 2, we introduce the framework of Gamma-minimax estimation and the regularity conditions that we assume throughout the paper. In Section 3, we describe our proposed iterative adversarial meta-learning algorithms. In Section 4, we discuss considerations for choosing the algorithms' hyperparameters. In Section 5, we demonstrate our method in three simulation studies. We conclude with a discussion in Section 6. Proof sketches of key results are provided in the main text, and complete proofs can be found in the appendix. Table 7 in the appendix summarizes frequently used symbols. The code for our simulations is available on GitHub (Qiu, 2022).
2 Problem setup
Let the model space be a separable Hausdorff space of data-generating mechanisms that contains the truth and is equipped with a metric. Under a given data-generating mechanism, the full data are generated in a known sample space. A known coarsening mechanism maps the full data to the observed data. In some cases, the coarsening mechanism will be the identity map, whereas in other settings, such as those in which missing, censored, or truncated data are present, the coarsening mechanism will be nontrivial (e.g., Birmingham et al., 2003; Gill et al., 1997; Heitjan and Rubin, 1991; Heitjan, 1993, 1994). Let the space of estimators (or decision functions) be equipped with its own metric. In practice, for computational feasibility, we will mainly consider an estimator space containing estimators parameterized by a Euclidean space, such as linear estimators or neural networks, that approximates a more general space, for example, the space of all estimators satisfying certain smoothness conditions. We discuss considerations concerning the choice of the estimator space in Section 4.2 and note that our proposed methods may be applied to broader estimator classes. We treat the estimator space as fixed throughout this paper. Let the risk function measure the performance of an estimator under a data-generating mechanism, with smaller risks preferable. We suppose throughout that the model space and the estimator space are equipped with the topologies induced by their respective metrics.
We now present three examples in which we formulate statistical decision problems in the above form. The first example is a general example of point estimation. We use this example to illustrate how the Gamma-minimax estimation framework naturally fits into many statistical problems. The other two examples are more concrete and we will study them in the simulations and data analyses.
Example 1 (Point estimation).
Suppose that the model space is a statistical model, which may be parametric, semiparametric, or nonparametric (Bickel et al., 1993). The data consist of independent and identically distributed (iid) random variables following the true distribution. We take the coarsening mechanism to be the identity function, so that the observed data coincide with the full data. We wish to estimate a real-valued summary of the true distribution. We can then take the estimator space to be a set of functions of the data and use the mean-squared error risk. Some specific examples of estimands include:
i) the mean;
ii) the cumulative distribution function evaluated at a given point;
iii) the correlation between two coordinates of the observations.
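To make the mean-squared-error risk concrete, the sketch below approximates it by Monte Carlo for a single hypothetical data-generating mechanism; the normal model, sample size, and estimand are our own illustrative choices, not part of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse_risk(estimator, draw_data, estimand, n_mc=4000):
    """Monte Carlo approximation of the mean-squared-error risk of an
    estimator under one data-generating mechanism."""
    errors = [(estimator(draw_data()) - estimand) ** 2 for _ in range(n_mc)]
    return float(np.mean(errors))

# Hypothetical mechanism: 50 iid N(2, 1) draws; estimand: the mean, 2.0.
risk = mse_risk(lambda x: x.mean(), lambda: rng.normal(2.0, 1.0, size=50), 2.0)
# The exact risk of the sample mean here is sigma^2 / n = 1/50 = 0.02.
```

Evaluating this risk at every grid point, for the current estimator, is the basic building block of the algorithms in Section 3.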
Example 2 (Predicting the expected number of novel categories to be observed in a new sample).
Suppose that the model space consists of multinomial distributions with an unknown number of categories. Let an iid random sample of a given size be generated from the true multinomial distribution, so that the full data form a multiset containing the number of observations in each category. Suppose that only categories with nonzero occurrences are observed, so that the observed data are a left-truncated version of the full data; in other words, the observed data consist of the nonzero category counts. Then, we may wish to predict the number of new categories that would be observed if a new sample of a given size were collected. This problem has been extensively studied in the literature, with applications in microbiome data, species taxonomic surveys, and assessment of vocabulary size, among other areas (e.g., Shen et al., 2003; Bunge et al., 2014; Orlitsky et al., 2016). This prediction problem can be formulated in our framework: the estimand is the expected number of newly observed categories in the future sample, given the current full data. We take the estimator space to be a set of functions of the observed data and set the risk to be the mean-squared error. This prediction problem is known to be intrinsically difficult when the future sample size is greater than the observed sample size, and we might expect prior information to substantially improve prediction.
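The estimand in this example has a simple closed form conditional on the current sample: an as-yet-unseen category with probability p appears in a future sample of size m with probability 1 - (1 - p)^m. The snippet below evaluates this standard conditional expectation; note that the category probabilities are known here only for illustration (the estimand is a feature of the true mechanism, which an estimator must approximate from the observed counts).

```python
import numpy as np

def expected_new_categories(probs, observed, m):
    """Expected number of as-yet-unseen categories that would appear in a
    future sample of size m, given the true category probabilities and the
    set of categories observed so far."""
    probs = np.asarray(probs, dtype=float)
    unseen = np.array([x not in observed for x in range(len(probs))])
    return float(np.sum(unseen * (1.0 - (1.0 - probs) ** m)))

# Hypothetical case: four equally likely categories, two seen so far, m = 1.
val = expected_new_categories([0.25] * 4, {0, 1}, m=1)
# Each of the two unseen categories appears with probability 0.25: total 0.5.
```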
Example 3 (Entropy estimation).
Consider the same data-generating mechanism and observed data as in Example 2. We may wish to estimate the Shannon entropy (Shannon, 1948), a measure of diversity. We take the estimator space to be a set of functions of the observed data and set the risk to be the mean-squared error. Jiao et al. (2015) proposed a rate-minimax estimator, so, in contrast to Example 2, this is an example of a statistical problem with an existing satisfactory solution. For such problems, we might not expect prior information to substantially improve estimation.
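For concreteness, a naive plug-in estimator of Shannon entropy can be written in a few lines. This is not the rate-minimax estimator of Jiao et al. (2015); it is the simplest baseline, and it is known to be biased downward for small samples.

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in estimate of Shannon entropy -sum p log p computed from the
    observed nonzero category counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]  # unobserved categories contribute nothing to the plug-in
    return float(-np.sum(p * np.log(p)))

# Two equally frequent categories: the entropy is log 2.
assert abs(plugin_entropy([50, 50]) - np.log(2)) < 1e-12
```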
We now define Gamma-minimaxity within our decision-theoretic framework. We assume that the model space is equipped with the Borel σ-field and consider prior distributions π on the resulting measurable space. We also assume that, for each estimator d and each prior π, the risk R(d, ·) is π-integrable. The Bayes risk of an estimator d under a prior π is defined as r(d, π) := ∫ R(d, P) π(dP). Let Γ denote the set of all priors that are consistent with the available prior information. An estimator is called a Γ-minimax estimator if it lies in the set
arg min over estimators d of sup over π ∈ Γ of r(d, π). (1)
Throughout the rest of this paper, we assume that this solution set, and the other solution sets to minimax problems that we consider, exist, and that the worst-case Bayes risk sup over π ∈ Γ of r(d, π) is finite for every estimator d.
In this paper, we consider the case in which Γ is characterized by finitely many generalized moment conditions, that is,

Γ = {π : ∫ Φ_k(P) π(dP) ≤ c_k for k = 1, …, K},

where each Φ_k is a prespecified function that extracts an aspect of a data-generating mechanism and each c_k is a prespecified constant. The validity of our proposed template for finding a Γ-minimax estimator in Section 3.1 does not require Γ to take this form, but our proposed algorithms in Sections 3.2 and 3.3 are computationally feasible for such constraints because these linear constraints lead to linear programs, which can be solved efficiently (e.g., Jiang et al., 2020). In principle, more general constraints can be handled by suitable minimax problem solvers. Such constraints were considered in Noubiap and Seidel (2001) and can represent a variety of forms of prior information. For example, taking Φ_k to be a power of a summary of the data-generating mechanism imposes bounds on prior moments of that summary; taking Φ_k to be the indicator that the summary lies in a known interval imposes bounds on the prior probability of that interval. Similar prior information on other aspects of the data-generating mechanism can also be represented. In addition, since an equality can be equivalently expressed by two inequalities, Γ may also encode equality constraints on prior generalized moments. Such information is commonly used to choose prior distributions in Bayesian settings (Sarma and Kay, 2020). Since we require neither a parametric model nor an entire prior distribution for any finite-dimensional summary, specifying a set of priors of the above form is no more difficult, and often easier, than specifying a single prior distribution, as a Bayesian approach would require.
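On a finite grid, maximizing the Bayes risk over priors subject to such generalized moment constraints is exactly a linear program in the probability masses. The sketch below uses `scipy.optimize.linprog`; the grid risks, constraint values, and bound are toy numbers of our own choosing.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_bayes_risk(risks, phi, c):
    """Maximize sum_j pi_j * risks[j] over priors pi on a finite grid,
    subject to moment constraints sum_j pi_j * phi[k][j] <= c[k].
    linprog minimizes, so the objective is negated."""
    n = len(risks)
    res = linprog(
        c=-np.asarray(risks, dtype=float),
        A_ub=np.asarray(phi, dtype=float),
        b_ub=np.asarray(c, dtype=float),
        A_eq=np.ones((1, n)),   # masses sum to one
        b_eq=[1.0],
        bounds=[(0.0, 1.0)] * n,
    )
    return -res.fun, res.x

# Toy grid of 3 mechanisms; constraint: prior mass on the third is at most 0.5.
value, prior = worst_case_bayes_risk([0.2, 0.5, 1.0], [[0.0, 0.0, 1.0]], [0.5])
# Optimum puts mass 0.5 on each of the last two grid points: value 0.75.
```

The same routine serves as the "max-oracle" inside the gradient-based algorithms of Section 3.2.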
3 Proposed meta-learning algorithms to compute a Γ-minimax estimator
Since both the model space and the estimator space may be infinite, it is computationally infeasible to directly solve the minimax problem (1) defining a Γ-minimax estimator. Similarly to Noubiap and Seidel (2001), our general strategy is to discretize the model space and thus consider prior distributions with discrete supports. Once the supports of the prior distributions are discrete, the optimization over prior distributions involves only finitely many parameters, namely the probability masses at the support points, and thus is computationally feasible. We will show that, when the grid is sufficiently fine, a solution to the discretized minimax problem is close to a solution to the original minimax problem.
Our proposed algorithm consists of two main steps. The first step is to discretize the model space and consider an approximating grid instead of the original complicated model space. This discretization is illustrated in Fig. 1, and we describe the grid construction in more detail in Section 3.1. In the second step, we consider the set of priors in Γ with support contained in the grid and compute a corresponding discretized Γ-minimax estimator. We describe two classes of algorithms to solve this discretized minimax problem in Sections 3.2 and 3.3, respectively.

3.1 Grid-based approximation of Γ-minimax estimators
We first define the discretization of the model space that we will use. Let an increasing sequence of finite subsets of the model space be given whose union is dense in the model space; that is, the subsets form an increasingly fine grid. Since the model space is separable, such a sequence necessarily exists. For each grid, define the corresponding discretized set of priors as the subset of Γ consisting of priors supported on that grid.
Algorithm 1 describes how the grids are used to compute an approximately Γ-minimax estimator in our proposed algorithms. We will show that the approximation error decays to zero as the grid index grows to infinity. Here, and in the rest of the algorithms in the paper, whenever we assign an arg min or arg max of a real-valued function to a variable, we arbitrarily pick a minimizer or maximizer if there are multiple optimizers. In practice, the user may stop the iteration at some finite index and use the corresponding discretized Γ-minimax estimator as the output estimator. We discuss the stopping criterion in more detail at the end of this section.
We note that the minimax problem in Line 3 of Algorithm 1 is nontrivial to solve, and therefore we propose two algorithms that can solve this minimax problem in Sections 3.2 and 3.3.
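Our reading of this outer loop can be sketched as follows. The three callables are placeholders for the grid-refinement step, the discretized minimax solver of Sections 3.2-3.3, and the worst-case Bayes risk evaluation; the stopping rule matches the successive-difference criterion discussed at the end of this section.

```python
def gamma_minimax_outer_loop(enlarge_grid, solve_discrete_minimax,
                             worst_case_risk, tol=1e-4, max_iter=100):
    """Grid-based outer loop: refine the grid, re-solve the discretized
    minimax problem, and stop once the discretized minimax risk stabilizes."""
    grid = enlarge_grid(None)
    estimator = solve_discrete_minimax(grid)
    prev_value = worst_case_risk(estimator, grid)
    for _ in range(max_iter):
        grid = enlarge_grid(grid)
        estimator = solve_discrete_minimax(grid)
        value = worst_case_risk(estimator, grid)
        if abs(value - prev_value) < tol:  # stopping criterion
            break
        prev_value = value
    return estimator
```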
We assume that the following conditions hold throughout the rest of the paper.
Condition 1.
The sequence of discretized Γ-minimax estimators produced by Algorithm 1 has a limit point.
Condition 1 holds if the sequence eventually falls in a compact set. For example, if the estimator space is a space of neural networks and we take the metric to be the Euclidean norm on the coefficient space, then we expect Condition 1 to hold if all coefficients are restricted to a bounded set, which is a common restriction in theoretical analyses of neural networks (see, e.g., Goel et al., 2016; Zhang et al., 2016; Eckle and Schmidt-Hieber, 2019) and often leads to desirable generalization bounds (see, e.g., Bartlett, 1997; Bartlett et al., 2017; Neyshabur et al., 2017). Our theoretical results hold for any limit point in Condition 1, even if there is more than one.
Condition 2.
The map from an estimator to its risk is continuous at each limit point from Condition 1, for every data-generating mechanism.
Condition 2 also often holds. For example, when estimators are parameterized using neural networks, they are continuous functions of the coefficients for common activation functions such as the sigmoid or the rectified linear unit (ReLU) (Glorot et al., 2011), and therefore the map in Condition 2 is continuous everywhere.
We next present a sufficient condition ensuring that the limit point from Condition 1 is Γ-minimax, so that the discretized Γ-minimax estimator is approximately Γ-minimax for a sufficiently large grid index.
Condition 3.
We assume that there exists an increasing sequence of subsets of such that
1. ;
2. for all and all , define and . For any prior in Γ with finite support, there exists a sequence such that as .
We note that, in contrast to the finite grids, the subsets in Condition 3 may be infinite sets. We may expect Condition 3 to hold in many cases, especially when the risk is continuous in the data-generating mechanism and the grid contains a variety of distributions consistent with the prior information encoded by Γ. We illustrate this point with two counterexamples in Appendix A. We check the plausibility of Condition 3 for Example 2 in our simulation and data analysis in Section 5.1 for exemplar prior information; an almost identical argument establishes its plausibility for Example 3.
We now present the theorem on the Γ-minimaxity of the limit point from Condition 1.
To prove Theorem 1, we utilize a result in Pinelis (2016) to establish that any prior in Γ can be approximated arbitrarily well by a discrete prior supported on a sufficiently fine grid. This is a key ingredient in the proof of Lemma 1, which states that the optimal value of the discretized minimax problem converges to that of the original problem. We then show that the sequence of discretized minimax values is nondecreasing and bounded above by the Γ-maximal Bayes risk of the limit point from Condition 1, and therefore converges to a limit. We finally use a contradiction argument to show that this limit is no smaller than the optimal value of the original problem, which implies Theorem 1.
We have the following corollary on the uniqueness of the Γ-minimax estimator and the convergence of the discretized Γ-minimax estimators for certain problems.
Corollary 1 (Convergence of Γ-minimax estimators).
We prove Corollary 1 by establishing that the worst-case Bayes risk is strictly convex in the estimator.
In practice, the user also needs to specify a stopping criterion for Algorithm 1. Noubiap and Seidel (2001) recommended computing or approximating the exact Γ-minimax risk and stopping once the discretized minimax risk is sufficiently close to it. However, the procedure to approximate this quantity in that work relies on the compactness of the model space, a condition we do not wish to assume because it may restrict the applicability of the method. Therefore, we propose the following alternative criterion: stop when the increase in the discretized minimax risk between consecutive iterations falls below a prespecified tolerance level. This criterion was proposed but not recommended in Noubiap and Seidel (2001) because it does not guarantee that the discretized minimax risk is close to the exact Γ-minimax risk; for example, if the grid grows too slowly, the increase between iterations may be small even though the current estimator is far from Γ-minimax. In contrast, we recommend this criterion for our proposed methods because we allow more flexibility in model specification, that is, the model space need not be compact. We discuss this issue in more detail in Section 4.1.
We finally remark that the worst-case Bayes risk may be difficult to evaluate exactly. Since the risk is often an expectation, we recommend approximating the worst-case Bayes risk of a given estimator via Monte Carlo as follows: first, estimate the risks at all grid points with a large number of Monte Carlo runs; second, estimate the corresponding least favorable prior using the estimated risks; third, estimate the risks at the grid points again with independent Monte Carlo runs; and, finally, combine the re-estimated risks with the estimated least favorable prior. Using two independent estimates of the risk removes the positive bias that would otherwise arise from using the same data to estimate both the risks and the least favorable prior.
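This two-pass evaluation can be sketched as follows; `risk_draws` and `solve_lp` are hypothetical stand-ins for the per-grid-point risk simulator and the least-favorable-prior linear program, and the toy risks below are our own.

```python
import numpy as np

rng = np.random.default_rng(2)

def worst_case_risk_two_pass(risk_draws, solve_lp, n_grid, n_mc=5000):
    """First pass estimates the risks and picks the least favorable prior;
    an independent second pass re-estimates the risks used for evaluation,
    removing the selection-induced positive bias."""
    first = np.array([risk_draws(j, n_mc).mean() for j in range(n_grid)])
    prior = solve_lp(first)
    second = np.array([risk_draws(j, n_mc).mean() for j in range(n_grid)])
    return float(prior @ second)

# Toy example: true risks (0.2, 0.4); with no moment constraints the least
# favorable prior is a point mass on the largest estimated risk.
true_risks = np.array([0.2, 0.4])
val = worst_case_risk_two_pass(
    risk_draws=lambda j, n: true_risks[j] + 0.1 * rng.normal(size=n),
    solve_lp=lambda r: np.eye(2)[np.argmax(r)],
    n_grid=2,
)
```

Reusing `first` for the final evaluation would instead select on noise and overestimate the worst-case risk.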
3.2 Computation of an estimator on a grid via stochastic gradient descent with max-oracle
In this section, we present methods to compute the discretized Γ-minimax estimator, corresponding to Line 3 in Algorithm 1. Gradient descent with max-oracle (GDmax) and its stochastic variant (SGDmax), presented in Lin et al. (2020), can be used to solve general minimax problems in Euclidean spaces. We focus on SGDmax in the main text and present GDmax in Appendix B. To apply these algorithms to find a Γ-minimax estimator, we need to assume that the estimator space can be parameterized by a subset of a Euclidean space; that is, each estimator is indexed by a real vector of coefficients. For example, the estimator space may be a neural network class. More discussion of this parameterization can be found in Section 4.2. In this section, in a slight abuse of notation, we write the risk and the Bayes risk as functions of the coefficient vector. We assume that the risk is differentiable in the coefficients for every data-generating mechanism, and hence so is the Bayes risk for every prior.
It is often the case that the risk is expressed as an expectation. In this case, its gradient may be approximated using Monte Carlo techniques: with an exogenous source of randomness, one can construct an unbiased stochastic gradient of the Bayes risk with bounded variance. SGDmax (Algorithm 2) may then be used to find a (locally) Γ-minimax estimator. Note that Algorithm 2 generalizes the nested minimax AMC strategy in Luedtke et al. (2020a) to Γ-minimax problems.
We next present two conditions needed for the validity of Algorithm 2.
Condition 4.
For each and all , is Lipschitz continuous with a universal Lipschitz constant independent of .
Note that Condition 4 differs from Condition 2 in that the former relies on the parameterization of estimators in a Euclidean space equipped with the Euclidean norm, while the latter may rely on a different metric on the estimator space, such as a function-space distance.
Condition 5.
For each and all , is bounded; is Lipschitz continuous with a universal Lipschitz constant independent of .
Under these conditions, using the results in Lin et al. (2020), we can show that SGDmax yields an approximation to a local minimum of the worst-case Bayes risk when the algorithm's hyperparameters are suitably chosen. Before formally presenting the theorem, we introduce some definitions related to the local optimality of a potentially nondifferentiable and nonconvex function. A real-valued function f is called ℓ-weakly convex if β ↦ f(β) + (ℓ/2)‖β‖² is convex, where ‖·‖ denotes the Euclidean norm. The Moreau envelope of f with parameter λ > 0 is f_λ(β) := min over w of {f(w) + ‖w − β‖²/(2λ)}. A point β is an ε-stationary point of an ℓ-weakly convex function f if ‖∇f_{1/(2ℓ)}(β)‖ ≤ ε; similarly, a random point β is an ε-stationary point in expectation if E‖∇f_{1/(2ℓ)}(β)‖ ≤ ε. If β is an ε-stationary point in expectation, we may conclude that it is a stationary point up to a slightly larger tolerance with high probability by Markov's inequality. Lemma 3.8 in Lin et al. (2020) shows that, for small ε, an ε-stationary point of f is close to a point at which f has at least one small subgradient, and hence is close to a local minimum. In other words, if an algorithm outputs an estimator whose coefficient vector is an ε-stationary point of the worst-case Bayes risk, then that estimator is approximately a local minimizer.
We next present the validity result for Algorithm 2.
Theorem 2 (Validity of SGDmax (Algorithm 2)).
The assumption on the batch size is purely for convenience, since increasing the batch size corresponds to decreasing the variance of the stochastic gradient. To run Algorithm 2 in practice, the user only needs to specify the tuning parameters in Line 1; the other constants in Theorem 2 need not be known. In general, a small learning rate, a stringent accuracy level, and a large batch size make Algorithm 2 likely to eventually reach an approximation of a local minimum of the worst-case Bayes risk, at the price of increased computation time. As with most numerical optimization algorithms, fine-tuning is needed to balance the convergence guarantee against computation time, but a conservative choice of tuning parameters will typically yield convergence at the cost of additional computation.
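A stripped-down sketch of the SGDmax loop on a toy problem follows. The quadratic risks, the two-point grid, and the unconstrained max-oracle are our own illustrative choices, not the paper's setting; in the paper's setting the max-oracle would solve the constrained linear program over priors on the current grid.

```python
import numpy as np

rng = np.random.default_rng(3)

def sgdmax(grad_risk, max_oracle, beta0, lr=0.05, n_steps=300, batch=32):
    """At each step the max-oracle returns an (approximately) least favorable
    prior on the grid; the estimator coefficient then takes an averaged
    stochastic gradient step on the corresponding Bayes risk."""
    beta = float(beta0)
    for _ in range(n_steps):
        prior = max_oracle(beta)
        grads = [grad_risk(beta, prior) for _ in range(batch)]
        beta -= lr * float(np.mean(grads))
    return beta

# Toy grid {theta = -1, +1} with risk (beta - theta)^2: the least favorable
# prior is a point mass on the farther theta, and the minimax point is 0.
thetas = np.array([-1.0, 1.0])
worst = lambda beta: np.eye(2)[np.argmax((beta - thetas) ** 2)]
grad = lambda beta, pi: float(pi @ (2.0 * (beta - thetas))) + 0.1 * rng.normal()
beta_hat = sgdmax(grad, worst, beta0=0.8)
```

The iterates oscillate in a small neighborhood of the minimax point, with the oscillation radius controlled by the learning rate, as the theory for SGDmax suggests.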
We note that Line 3 in Algorithm 2 may be inconvenient to implement because linear program solvers often do not use stochastic optimization. Therefore, we propose a convenient variant (Algorithm 6 in Appendix B), in which the stochastic maximization step (Line 3 in Algorithm 2) is replaced by solving a linear program whose objective is approximated via Monte Carlo. This variant enjoys similar validity under similar conditions. We also note that the two uniform Lipschitz continuity conditions (Conditions 4 and 5) rely heavily on the finiteness of the grid and on the compactness of a set containing the coefficients. Nevertheless, the latter compactness restriction is common in theoretical analyses of neural networks (see, e.g., Goel et al., 2016; Zhang et al., 2016; Eckle and Schmidt-Hieber, 2019). Moreover, these two conditions are sufficient conditions for the validity of the gradient-based methods, namely SGDmax, our variant of SGDmax, and GDmax; similar guarantees might hold even when the two conditions are violated.
We finally remark that other algorithms similar to SGDmax can be applied, for example, (stochastic) gradient descent ascent with projection (Lin et al., 2020), (stochastic) mirror descent ascent, or accelerated (stochastic) mirror descent ascent (Huang et al., 2021). Developing gradient-based methods that solve minimax problems with convergence guarantees under weaker conditions is of future research interest.
3.3 Computation of an estimator on a grid via fictitious play
The algorithms in Section 3.2 may be convenient in many cases, but their requirements, such as parameterization of the space of estimators in a Euclidean space, differentiability of the risk function with respect to the coefficients, and uniform Lipschitz continuity, may be restrictive for certain problems. In this section, we propose an alternative algorithm, fictitious play, that avoids these requirements, and we present its convergence results.
Brown (1951) introduced fictitious play as a means to find the value of a zero-sum game, that is, the optimal mixed strategies of both players and their expected gains. Robinson (1951) then proved that fictitious play can be used to iteratively solve a two-player zero-sum game, in which both players have finitely many pure strategies, for a saddle point consisting of a pair of mixed strategies. Our problem of finding a Γ-minimax estimator may also be viewed as a two-player zero-sum game in which one player chooses a prior from Γ and the other chooses an estimator. If we assume that, for the Γ-minimax problem at hand, the pair of the two players' optimal strategies is a saddle point, which holds in many minimax problems (e.g., v. Neumann, 1928; Fan, 1953; Sion, 1958), then fictitious play may also be used to find a Γ-minimax estimator. Since the original problem may be too rich to allow a feasible implementation of fictitious play, we propose to use this algorithm to solve the discretized problem.
In the fictitious play algorithm in Robinson (1951), the two players take turns playing the best pure strategy against the mixture of the opponent's historical pure strategies, and the final output is a pair of mixtures of the two players' historical pure strategies. Since this algorithm aims to find minimax mixed strategies, we consider stochastic estimators. That is, we equip the estimator space with its Borel σ-field and consider distributions over estimators. A stochastic estimator takes the following form: first, draw an estimator according to such a distribution using an exogenous random mechanism, and then apply the drawn estimator to the data. We consider stochastic estimators throughout this section, with the definition of Γ-minimaxity extended in the natural way; we similarly extend all other definitions from Section 2. We assume that there exists a saddle point, that is, a stochastic estimator δ* and a prior π* such that
r(δ*, π) ≤ r(δ*, π*) ≤ r(δ, π*) for all stochastic estimators δ and all priors π ∈ Γ. (2)
In other words, (δ*, π*) is a saddle point of the Bayes risk over stochastic estimators and priors in Γ. Under this condition, together with the further conditions that the estimator space is convex and the risk is convex in the estimator for every data-generating mechanism, a Γ-minimax estimator over the richer class of stochastic estimators yields a Γ-minimax estimator in the original class. Indeed, by Jensen's inequality, the risk of the average of a stochastic estimator never exceeds the risk of the stochastic estimator itself. Therefore, we may use the fictitious play algorithm to compute a discretized Γ-minimax stochastic estimator for each grid, apply Algorithm 1, and take the average of the resulting stochastic estimator as the final output deterministic estimator.
Algorithm 3 presents the fictitious play algorithm for finding a Gamma-minimax estimator in . Note that is convex, and hence the iterates always lie in throughout the iterations. In practice, we may initialize as a point mass at an initial estimator in . In addition, similarly to Robinson, (1951), we may replace Line 5 with a version that minimizes the Bayes risk under the most recently updated prior rather than under the previous prior.
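As a concrete illustration of this best-response dynamic, the sketch below runs fictitious play on a small matrix game. The payoff matrix, iteration count, and finite strategy sets are our own illustrative choices; the paper's Algorithm 3 applies the same scheme with priors and estimators as the two players' strategies.

```python
import numpy as np

def fictitious_play(A, n_iter=5000):
    """Fictitious play for a zero-sum game with payoff matrix A.

    The row player maximizes x^T A y, the column player minimizes it.
    Each round, both players best-respond to the empirical mixture of the
    opponent's historic pure strategies, as in Robinson (1951).
    """
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] += 1  # arbitrary initial pure strategies
    col_counts[0] += 1
    for _ in range(n_iter):
        x = row_counts / row_counts.sum()  # row player's historic mixture
        y = col_counts / col_counts.sum()  # column player's historic mixture
        row_counts[np.argmax(A @ y)] += 1  # best response to y
        col_counts[np.argmin(x @ A)] += 1  # best response to x
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    return x, y, x @ A @ y

# Matching pennies: unique equilibrium (1/2, 1/2) for both players, value 0.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y, value = fictitious_play(A)
```

The empirical mixtures converge slowly (the historic frequencies, not the per-round plays, approach the equilibrium), which mirrors the convergence behavior discussed for Algorithm 3.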
We next present a convergence result for this algorithm.
Theorem 3 (Validity of fictitious play (Algorithm 3)).
Robinson, (1951) proved a similar result for two-player zero-sum games in which each player has finitely many pure strategies. In contrast, in our problem, each player may have infinitely many pure strategies. A natural attempt to prove Theorem 3 would be to consider finite covers of and , that is, and , such that the range of in each and is small (say, less than ), bin pure strategies into these subsets, and then apply the argument in Robinson, (1951) to these bins. The collections and may be viewed as finitely many approximate pure strategies for and , accurate up to . Unfortunately, we found that this approach fails. The problem arises because Robinson, (1951) inducted on and , and, after each induction step, the corresponding upper bound doubles. Unlike the case with finitely many pure strategies considered in Brown, (1951) and Robinson, (1951), as the desired approximation accuracy approaches zero, the numbers of approximate pure strategies, and , may diverge to infinity, and so does the number of induction steps. Therefore, the resulting final upper bound is of order and generally does not converge to zero as tends to zero. To overcome this challenge, we instead control the increase in the relevant upper bound after each induction step more carefully, so that the final upper bound converges to zero as decreases to zero, despite the fact that and may diverge to infinity.
We remark that, because Line 5 of Algorithm 3 typically involves another layer of iteration in addition to that over , this algorithm will often be more computationally intensive than SGDmax. Nevertheless, Algorithm 3 provides an approach to construct Gamma-minimax estimators in cases where gradient-based algorithms cannot be applied, for example, in settings where the risk is not differentiable in the parameters indexing the estimator or uniform Lipschitz conditions fail. In our numerical experiments, we implemented this algorithm in the context of mean estimation (Appendix C).
4 Considerations in implementation
4.1 Considerations when constructing the grid over the model space
By Theorem 1, convergence is guaranteed whenever Conditions 1–3 hold and the increasing sequence is such that is dense in . Though this guarantee holds for all such sequences , in practice, judiciously choosing this sequence of grids of distributions can lead to faster convergence. In particular, it is desirable that the least favorable prior put mass on some of the newly added distributions in since, if this is not the case, then the next computed estimator will be the same as the current one. While we may try to arrange for this to occur by adding many new points when enlarging to , it is unlikely that any of these points will actually modify the least favorable prior unless they are carefully chosen.
To better address this issue, we propose to add grid points using a Markov chain Monte Carlo (MCMC) method. Our intuition is that, given an estimator , the maximal Bayes risk is likely to increase significantly if we add distributions that (i) have a high risk for , and (ii) are consistent with prior information, so that there exists some prior under which these distributions lie in a high-probability region. We propose to use the MCMC algorithm to bias the selection of distributions in favor of those with the above characteristics. Let denote a function that takes larger values at distributions that are more consistent with prior information. For example, given a prior mean of some real-valued summary of and an interval that contains with prior probability at least 95%, we may choose this function to be the density of a normal distribution that has the given mean and places 95% of its probability mass in the given interval. We call this function a pseudo-prior. Then, with the current estimator being , we wish to select distributions at which both the risk and the pseudo-prior are large. We may use the Metropolis-Hastings-Green algorithm (Metropolis et al.,, 1953; Hastings,, 1970; Green,, 1995) to draw samples from a density proportional to such a criterion. We then let be the union of and the set containing all unique distributions in this sample.
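A minimal sketch of this grid-enlargement idea follows, assuming a one-dimensional parameter indexing the candidate distributions. The `risk` and `log_pseudo_prior` functions are hypothetical stand-ins for the paper's quantities, and the sampler targets a density proportional to their product (one way to favor both high risk and prior consistency).

```python
import numpy as np

rng = np.random.default_rng(0)

def risk(theta):
    # Stand-in for the risk of the current estimator under the distribution
    # indexed by theta; here a toy function favoring large |theta|.
    return 1.0 / (1.0 + np.exp(-np.abs(theta)))

def log_pseudo_prior(theta):
    # Stand-in pseudo-prior: a normal log-density, larger near theta = 0,
    # representing consistency with prior information.
    return -0.5 * theta**2

def log_target(theta):
    # Density proportional to risk * pseudo-prior: biases new grid points
    # toward distributions that are both high-risk and prior-consistent.
    return np.log(risk(theta)) + log_pseudo_prior(theta)

def metropolis_hastings(n_samples=5000, step=1.0, theta0=0.0):
    theta = theta0
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal()  # random-walk proposal
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal  # accept with the Metropolis-Hastings probability
        samples.append(theta)
    return np.array(samples)

draws = metropolis_hastings()
# The new grid points would be the unique states visited by the chain.
```

In the paper's setting the chain moves over (parameters of) distributions in the model space rather than a scalar, but the accept/reject structure is the same.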
Details of the proposed scheme are provided in Algorithm 4. To use this proposed algorithm, we rely on it being possible to define a sequence of parametric models such that is dense in —this is possible in many interesting examples (see, e.g., Chen,, 2007). When combined with separability of , this condition enables the definition of an increasing sequence of grids of distributions such that, for each , .
The following theorem on distributional convergence follows from that for the Metropolis-Hastings-Green algorithm (see Sections 3.2 and 3.3 of Green,, 1995).
Theorem 4 (Validity of MCMC algorithm (Algorithm 4)).
Suppose that is bounded and integrable with respect to some measure on and let denote the probability law on whose density function with respect to is proportional to this function. Suppose that the MCMC algorithm is constructed such that the Markov chain is irreducible and aperiodic. Then, converges weakly to as .
Therefore, if corresponds to a continuous distribution with nonzero density over the parameter space of , then Theorem 4 implies that is dense in , as required by Algorithm 1.
Implementing Algorithm 4 relies on the user making several decisions. These decisions include the choice of the pseudo-prior and the technique used to approximate the risk to a reasonable accuracy. Fortunately, regardless of the decisions made, Theorem 1 suggests convergence for a wide range of sequences . Indeed, all that the theorem requires of this sequence is that the grid becomes arbitrarily fine as increases. Though the final decisions made are not important when is large, we still comment briefly on the decisions that we have made in our experiments. First, we have found it effective to approximate the risk via a large number of Monte Carlo draws. Second, in a variety of settings, we have also identified, via numerical experiments, candidate pseudo-priors that balance high risk and consistency with prior information (see Sections 5.1 and 5.2 for details).
4.2 Considerations when choosing the space of estimators
It is desirable to consider a rich space of estimators so as to obtain an estimator with low maximal Bayes risk, and thus good general performance. However, to make numerically constructing these estimators computationally feasible, we usually have to consider a restricted space of estimators. This restriction is justified because, if estimators in the restricted space can approximate the Gamma-minimax estimator in the general space well, then we expect the resulting excess maximal Bayes risk to be small.
Feedforward neural networks (or neural networks for short) are natural options for the space of estimators because of their universal approximation property (e.g., Hornik,, 1991; Csáji,, 2001; Hanin and Sellke,, 2017; Kidger and Lyons,, 2020). However, training commonly used neural networks can be computationally intensive. Moreover, a space of neural networks is typically nonconvex, and hence it may be difficult to find a global minimizer of the maximal Bayes risk even if the risk is convex in the estimator. Therefore, the learned estimator might not perform well.
To help overcome this challenge, we advocate for utilizing available statistical knowledge when designing the space of estimators. In particular, if a simple estimator is already available, we propose to use neural networks with this estimator as a node connected to the output node; we call estimators that take this form statistical knowledge networks. An example of such an architecture is presented in Fig. 2. In this sample architecture, each node applies an activation function, such as the sigmoid or rectified linear unit (ReLU) (Glorot et al.,, 2011) function, to an affine transformation of the vector containing the ancestors of the node. The only exception is the output node, which again applies an affine transformation to its ancestors but uses the identity activation function. When training the neural network, we may initialize the affine transformation in the output layer to give weight only to the simple estimator. Under this approach, the space of estimators is a set of perturbations of an existing simple estimator. Although we may still face the challenge of nonconvexity and local optimality, we can at least expect to improve on the initial simple estimator.
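The initialization described above can be sketched as follows. The layer sizes, the ReLU nonlinearity, and the use of the sample mean as the simple-estimator node are illustrative assumptions of ours, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

class KnowledgeNetwork:
    """Sketch of a statistical knowledge network for an i.i.d. sample x.

    A small ReLU network whose output layer also receives a simple existing
    estimator (here, the sample mean) as an extra input node.
    """

    def __init__(self, n_inputs, n_hidden=8):
        self.W = 0.1 * rng.standard_normal((n_hidden, n_inputs))
        self.b = np.zeros(n_hidden)
        # The output layer sees [hidden units, simple estimator]; initialize
        # it to put all weight on the simple estimator, so the network starts
        # out exactly equal to the existing estimator.
        self.v = np.zeros(n_hidden + 1)
        self.v[-1] = 1.0
        self.c = 0.0

    def __call__(self, x):
        hidden = np.maximum(0.0, self.W @ x + self.b)  # ReLU hidden layer
        simple = np.mean(x)                            # knowledge node
        return self.v @ np.concatenate([hidden, [simple]]) + self.c

x = rng.uniform(size=10)
net = KnowledgeNetwork(n_inputs=10)
# At initialization the network reproduces the simple estimator exactly.
```

Training then perturbs the weights away from this initialization, so the learned estimator is a data-driven perturbation of the simple one.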
In the simulation described in Appendix C, we compared the empirical performance of several spaces of estimators. This simulation concerns the simple problem of estimating the mean of a true distribution whose support has known bounds (Example 1), and the existing simple estimator we use in the statistical knowledge network is the sample mean. Fig. 3 presents the trajectory of estimated Bayes risks. As shown in subfigures (b)–(d), with the statistical knowledge network, the estimator is almost Gamma-minimax after a few iterations; in contrast, it took about 1000 iterations for the plain feedforward neural network to reach an approximately Gamma-minimax estimator. Therefore, in this simple problem, where the true Gamma-minimax estimator is a shifted and scaled sample mean, statistical knowledge substantially reduced the number of iterations required to obtain an approximately Gamma-minimax estimator. For more complicated problems, we expect statistical knowledge to be even more helpful for improving the performance of the computed estimator.

We note that we might overcome the challenge of nonconvexity and local optimality by using an extreme learning machine (ELM) (Huang et al.,, 2006b) to parameterize the estimator. ELMs are neural networks for which the weights in hidden nodes are randomly generated and held fixed, and only the weights in the output layer are trained. Thus, the space of ELMs with a fixed architecture and fixed hidden layer weights is convex. Like traditional neural networks, ELMs have the universal approximation property (Huang et al.,, 2006a). In addition, Corollary 1 may be applied to an ELM so that the computed estimators may converge to the Gamma-minimax estimator. As with traditional neural networks, we may incorporate knowledge of existing statistical estimators into an ELM.
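Because only the output layer of an ELM is trained, fitting it reduces to a convex least-squares problem over random features. The sketch below illustrates this; the target function, feature count, and weight scales are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Extreme learning machine sketch: hidden weights are drawn once at random
# and frozen; only the linear output layer is fit, so training reduces to
# ridge-regularized least squares and the estimator space is convex.
n, n_features = 200, 100
X = np.linspace(0.0, np.pi, n).reshape(-1, 1)
y = np.sin(X).ravel()  # toy regression target

W = rng.uniform(-5.0, 5.0, size=(1, n_features))   # fixed random weights
b = rng.uniform(-5.0, 5.0, size=n_features)        # fixed random biases
H = np.tanh(X @ W + b)                             # random hidden features

# Only the output weights beta are trained (closed-form ridge solution).
lam = 1e-6
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_features), H.T @ y)
mse = np.mean((H @ beta - y) ** 2)
```

The convexity of this training problem is what removes the local-optimality concern raised above for fully trained networks.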
We finally remark that, besides the computational cost of constructing (i.e., learning) a Gamma-minimax estimator, another important factor to consider when choosing the space of estimators is the computational cost of evaluating the learned estimator at the observed dataset. This is another reason for choosing neural networks or ELMs as the space of estimators. Indeed, existing software packages (e.g., Paszke et al.,, 2019) make it easy to leverage graphics processing units to efficiently evaluate the output of neural networks for any given input. Therefore, if the existing estimator being used is not too difficult to compute, then estimators parameterized using architectures similar to that displayed in Fig. 2 can be computed efficiently in practice. This efficiency may be especially important in settings where the estimator will be applied to many datasets, so that the cost of learning the estimator is amortized and the main computational expense is evaluating the learned estimator.
5 Simulations and data analyses
We illustrate our methods in Examples 1–3. A toy example of Example 1 is presented in Appendix C. We focus on the more complex Examples 2 and 3 in this section.
5.1 Prediction of the expected number of new categories
We apply our proposed method to Example 2. In the simulation, we set the true population to be an infinite population with the same categories and same proportions as the sample studied in Miller and Wiegert, (1989), which consists of 1088 observations in 188 categories. This setting is the same as the simulation setting in Shen et al., (2003). We set the sample size to be and the size of the new sample to be . In this setting, the expected number of new categories in the new sample unconditionally on the observed sample, namely , can be analytically computed and equals . We note that this quantity can also be computed via simulation: (i) sample and individuals with replacement from the dataset in Miller and Wiegert, (1989), (ii) count the number of new categories in the second sample, and (iii) repeat steps (i) and (ii) many times and compute the average.
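The simulation-based computation in steps (i)–(iii) can be sketched as follows. The category proportions below are a small illustrative population of our own, not the Miller and Wiegert (1989) data; for sampling with replacement, the same expectation is also available in closed form as the sum over categories j of (1 - p_j)^n (1 - (1 - p_j)^m).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy population: category proportions (illustrative, sums to 1).
p = np.array([0.4, 0.2, 0.1, 0.1, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01])
n, m, reps = 30, 20, 4000  # first-sample size, new-sample size, repetitions

def new_categories_once():
    # (i) sample n and then m individuals with replacement;
    # (ii) count categories appearing only in the second sample.
    first = rng.choice(len(p), size=n, p=p)
    second = rng.choice(len(p), size=m, p=p)
    return len(set(second) - set(first))

# (iii) repeat and average.
mc_estimate = np.mean([new_categories_once() for _ in range(reps)])

# Closed-form expectation for this sampling scheme: a category is "new" iff
# it is absent from the first sample and present in the second.
analytic = np.sum((1 - p) ** n * (1 - (1 - p) ** m))
```

The Monte Carlo estimate agrees with the analytic value up to simulation error, which is the check the paper's remark suggests.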
It is well known that this prediction problem is difficult when , and we run this simulation to investigate the potential gain from leveraging prior information by computing a Gamma-minimax estimator for such difficult or even ill-posed problems. We consider three sets of prior information:
1. strongly informative: prior mean of in , prior probability that lies in ;
2. weakly informative: prior mean of in , prior probability that lies in ; and
3. almost noninformative: prior mean of in , prior probability that lies in .
We note that a traditional Bayesian approach would require specifying a prior on , including the total number of categories and the proportion of each category, which may be difficult in practice.
We check the plausibility of Condition 3 in this context, taking the strongly informative prior information as an example. Take to be the collection of multinomial distributions with at most categories. It is obvious that . Let be fixed and let be a fixed prior with finite support, that is, where denotes the point mass distribution, , and . Let be an arbitrarily small number such that or . Since is dense in and is continuous, there exists a sufficiently large such that, for every distribution , there exists satisfying the following:
- ;
- if , then ;
- .
Take to be . Then it is easy to verify that and thus ; moreover, implies that and therefore . Thus, . Moreover, . Therefore, as and Condition 3 holds.
We design the architecture of the neural network estimator as in Fig. 4. We choose two existing estimators (referred to as the OSW and SCL estimators, respectively) proposed by Orlitsky et al., (2016) and Shen et al., (2003) as human knowledge inputs to the architecture. We use the ReLU activation function. There are 50 hidden nodes in the first hidden layer. We initialize the neural network that we train to output the average of these two existing estimators.
We use Algorithm 4 to construct . There are 2000 grid points in , and we add 1000 grid points each time we enlarge the grid. When generating , we chose the starting point to be a distribution with 146 categories and . The choice of this starting point was quite arbitrary. We first generated a sample from and treated it as data from a pilot study. We then came up with a distribution such that five random samples generated from all appear qualitatively similar to the pilot data. In practice, this starting point can be chosen based on prior knowledge. Our chosen grid sizes for Algorithm 4 were also quite arbitrary. For , the generated distributions appear similar for all , and thus the initial grid size 2000 and the increment size 1000 appeared sufficient. Smaller grid sizes would simply lead to more iterations in Algorithm 1, which effectively increases the grid size. We selected the log pseudo-prior as a weighted sum of two log-density functions: (i) that of a normal distribution with mean equal to the midpoint of the interval constraint on the prior mean of and central 95% probability interval equal to the interval with at least 95% prior probability, and (ii) that of a negative-binomial distribution on the total number of categories, with success probability and number of failures chosen so that the mode and the variance are approximately and , respectively. These log-densities are given weights 30 and 10, respectively. We selected the weights based on the empirical observation that distributions with only a few categories tend to have high risks, but such distributions are relatively inconsistent with prior information and may well be given almost negligible probability weight in a computed least favorable prior, thus contributing little to computing a Gamma-minimax estimator. We chose the aforementioned weights so that Algorithm 4 can explore a fairly large range of distributions without generating too many distributions with too few categories.
We use Algorithm 6 with learning rate and batch size to compute Gamma-minimax estimators. The number of iterations is 4,000 for and 200 for (). The stopping criterion in Algorithm 1 is that the estimated maximal Bayes risk, computed with 2000 Monte Carlo runs, does not increase by more than 2% in relative terms or 0.0001 in absolute terms. We chose these tuning parameters based on the prior belief that at least one of the OSW and SCL estimators should perform reasonably well, but that the performance of SGDmax (Algorithm 6) and Algorithm 4 might be sensitive to tuning parameters. Thus, the network we used is neither deep nor wide. We chose a moderately small learning rate and a large number of iterations for SGDmax. The chosen learning rate and number of iterations led to a trajectory of estimated Bayes risks that approximately reached a plateau with small fluctuations, suggesting that the obtained estimator is approximately Gamma-minimax (see Fig. 5). In practice, such trajectory plots may help tune the learning rate and the number of iterations.
We also run additional simulations to investigate the sensitivity of our methods to tuning parameter selections. We present these simulations in Appendix D. The results suggest that our methods may be insensitive to tuning parameter selections.
We examine the performance of the OSW estimator, the SCL estimator, and our trained Gamma-minimax estimator by comparing their risks under the specified data-generating mechanism, computed with 20000 Monte Carlo runs. We also compare their Bayes risks under the prior computed by Algorithm 6 using the last and finest grid, again with 20000 Monte Carlo runs. We present the results in Table 1. In this simulation experiment, our Gamma-minimax estimator substantially reduces the risk compared to the two existing estimators. The Gamma-minimax estimator also has the lowest Bayes risk in all cases. Therefore, incorporating fairly informative prior knowledge into the estimator may lead to a significant improvement in predicting the number of new categories. We expect similarly substantial improvements for other difficult or even ill-posed statistical problems when prior knowledge is incorporated.
Strength of prior | Estimator | Risk | Bayes risk
---|---|---|---
strong | OSW | 265 | 303
 | SCL | 146 | 159
 | Gamma-minimax | 18 | 35
weak | OSW | 265 | 328
 | SCL | 146 | 184
 | Gamma-minimax | 17 | 61
almost none | OSW | 265 | 293
 | SCL | 146 | 124
 | Gamma-minimax | 24 | 81
Fig. 5 presents the unbiased estimate of the Bayes risk over iterations when computing a Gamma-minimax estimator. The Bayes risks appear to have a decreasing trend and to approach a limiting value. Over iterations, the Bayes risks decrease by a considerable amount. The limiting value of the Bayes risks appears to be slightly higher than the risk of the computed Gamma-minimax estimator under the true data-generating mechanism, which might indicate that this mechanism is not an extreme distribution yielding a high risk.

We also apply the above methods to analyze the dataset studied in Miller and Wiegert, (1989), which served as the true population in the simulation. Based on this sample consisting of observations in 188 categories, we use various methods to predict the number of new categories that would be observed if another observations were to be collected. We train Gamma-minimax estimators using exactly the same tuning parameters as in the above simulation, except that the starting point in Algorithm 4 has more categories. The predictions of all methods are presented in Table 2. The Gamma-minimax estimator outputs a prediction more similar to that of the SCL estimator. This similarity appears to differ from our observation in the simulation, but can be explained by the fact that having more observations ( vs ; vs ) decreases the variance of the number of newly observed categories and thus lowers the discrepancies between the predictions of these methods. Since the SCL estimator outperformed the OSW estimator in the above simulation, in which this dataset was the true population, we expect the SCL estimator to achieve reasonably good performance in this application. Moreover, given that the Gamma-minimax estimators outperformed the SCL estimator in the above simulation, we expect that 57 or 58 represents an improved prediction of the number of new categories as compared to the SCL prediction of 51 when limited prior information is available.
Estimator | Predicted # new categories
---|---
OSW | 72
SCL | 51
Gamma-minimax (strong) | 57
Gamma-minimax (weak) | 57
Gamma-minimax (almost none) | 58
The computation time to compute an approximate Gamma-minimax estimator was about five to seven hours on an AWS EC2 instance (Amazon,, 2019) with at least 4 vCPUs and at least 8 GiB of memory, depending on the number of times the grid was enlarged. As shown in Fig. 5, far fewer iterations are needed for SGDmax to output a good approximation of a Gamma-minimax estimator for the current grid, which is itself quite close to Gamma-minimax. Therefore, with suitably less conservative tuning parameters or more adaptive minimax problem solvers, the computation time might decrease drastically. Moreover, the computation time needed to evaluate the computed Gamma-minimax estimator at any given sample is negligible.
5.2 Estimation of the entropy
We also apply our method to estimating the entropy of a multinomial distribution (Example 3). The data-generating mechanism is the same as that described in Example 2, and the estimand of interest is the Shannon entropy (Shannon,, 1948), that is, . In the simulation, we choose the same true population and the same sample size as in Section 5.1. The true entropy is . As a reference, the entropy of the uniform distribution with the same number of categories—which is the maximum entropy over multinomial distributions with that total number of categories—is .
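For concreteness, the estimand and the uniform-distribution reference can be computed directly; the use of natural logarithms here is an assumption on our part about the scale of the reported entropy values.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_j p_j log p_j (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # the convention 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

# The uniform distribution maximizes entropy among distributions with the
# same number of categories: H(uniform over K categories) = log K.
K = 188
max_entropy = shannon_entropy(np.full(K, 1.0 / K))  # log(188)
```

Any multinomial with 188 categories therefore has entropy at most log(188) ≈ 5.24, which bounds the true entropy reported above.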
Jiao et al., (2015) developed a minimax rate optimal estimator of the Shannon entropy, and we run this simulation to investigate the potential gain of computing a Gamma-minimax estimator in well-posed problems with satisfactory solutions. As in Section 5.1, we consider three sets of prior information:
1. Strongly informative: Prior mean of in , probability that lies in ;
2. Weakly informative: Prior mean of in , probability that lies in ;
3. Almost noninformative: Prior mean of in , probability that lies in .
The architecture of our neural network estimator is almost identical to that in Section 5.1, except that the existing estimator being used is the one proposed in Jiao et al., (2015) (referred to as the JVHW estimator), and we initialize the network to return the JVHW estimator. We use Algorithm 4 to construct and Algorithm 6 to compute a Gamma-minimax estimator. The tuning parameters in the algorithms are identical to those used in Section 5.1 except that, in Algorithm 6, (i) the learning rate is , and (ii) the number of iterations is 6,000 for . We changed these tuning parameters because the JVHW estimator is already minimax in terms of its convergence rate (Jiao et al.,, 2015), and we may need to update the estimator more cautiously in Algorithm 6 to obtain any possible improvement. The trajectories of the estimated Bayes risks (Fig. 6) all appear to approximately reach a plateau, suggesting that the obtained estimator is approximately Gamma-minimax and that our choice of a smaller learning rate and a larger number of iterations is valid. Because of the additional complexity of the JVHW estimator, we ran our simulations on an AWS EC2 instance (Amazon,, 2019) with 4 vCPUs and 32 GiB of memory. The computation time was ten to seventeen hours, depending on the number of times the grid was enlarged. The longer computation time compared to Section 5.1 is primarily due to the larger number of iterations in SGDmax and the additional complexity of the JVHW estimator.
We compare the risks of the JVHW estimator and our trained Gamma-minimax estimator under the specified data-generating mechanism, computed with 20000 Monte Carlo runs. We also compare their Bayes risks under the prior computed by Algorithm 6 using the last and finest grid, again with 20000 Monte Carlo runs. The results are summarized in Table 3. In this simulation experiment, our Gamma-minimax estimator reduces the risk by a fair margin compared with the JVHW estimator and achieves a lower worst-case Bayes risk. Based on these simulation results, we conclude that incorporating informative prior knowledge into the estimator may yield some improvement in estimating entropy. Thus, for well-posed statistical problems that already have satisfactory solutions, we expect mild or no substantial improvement, but also little deterioration, from using a Gamma-minimax estimator.
Strength of prior | Estimator | Risk | Bayes risk
---|---|---|---
strong | JVHW | 0.041 | 0.035
 | Gamma-minimax | 0.036 | 0.021
weak | JVHW | 0.041 | 0.028
 | Gamma-minimax | 0.018 | 0.024
almost none | JVHW | 0.041 | 0.031
 | Gamma-minimax | 0.025 | 0.016
Fig. 6 presents the unbiased estimate of the Bayes risk over iterations when computing a Gamma-minimax estimator. With strongly informative prior information, the Bayes risks appear to fluctuate without an increasing or decreasing trend at the beginning and to decrease slowly after several thousand iterations. With weakly informative or almost no prior information, the Bayes risks also decrease slowly. A reason may be that the JVHW estimator is already minimax rate optimal (Jiao et al.,, 2015). The computed Gamma-minimax estimators also appear to be somewhat similar to the JVHW estimator: in the output layer for the three settings with different prior information, the coefficients for the JVHW estimator are , and , respectively; the coefficients for the previous hidden layer are , and , respectively; and the intercepts are , and , respectively.

We further use the above methods to estimate the entropy based on the dataset used as the true population in the simulation. The tuning parameters of the Gamma-minimax estimators are exactly the same as those in the above simulation, except that the starting point in Algorithm 4 has more categories. The estimates are presented in Table 4. All methods produce almost identical estimates. Because the sample size is more than ten times that in the simulation and the JVHW estimator is minimax rate optimal (Jiao et al.,, 2015), we expect the JVHW estimator to have little room for improvement, which explains why the three Gamma-minimax estimators perform similarly to the JVHW estimator. In other words, the Gamma-minimax estimators appear to maintain, if not improve, the performance of the original JVHW estimator.
Estimator | Estimated entropy
---|---
JVHW | 4.709
Gamma-minimax (strong) | 4.709
Gamma-minimax (weak) | 4.708
Gamma-minimax (almost none) | 4.703
6 Discussion
We propose adversarial meta-learning algorithms to compute a Gamma-minimax estimator with theoretical guarantees under fairly general settings. These algorithms still leave room for improvement. As discussed in Section 3.1, the stopping criterion we employ does not necessarily indicate that the maximal Bayes risk is close to the true minimax Bayes risk. In future work, it would be interesting to derive a criterion that does indicate this near optimality. Our algorithms also require the user to choose increasingly fine grids approximating the model space. Although we propose a heuristic algorithm for this step that performed well in our experiments, we have not yet provided optimality guarantees for this scheme. It may also be possible to improve the algorithms we proposed for solving the intermediate minimax problems in Section 3.1 by utilizing recent and ongoing advances from the machine learning literature on training generative adversarial networks.
We do not explicitly consider uncertainty quantification, such as confidence or credible intervals, under a Gamma-minimax framework. Uncertainty quantification is important in practice since it provides more information than a point estimate and can be used for decision-making. In principle, our method may be directly applied if such a problem can be formulated as a Gamma-minimax problem; however, such a formulation remains unclear. The most challenging part is to identify a suitable risk function that correctly balances the level of uncertainty and the size of the output interval or region. Though the risk function used in Schafer and Stark, (2009) appears to provide one possible starting point, it is not clear how to extend this approach to nonparametric settings.
It is possible to allow the space of estimators to grow as the grid grows. For example, we may specify an increasing sequence of estimator spaces whose limit is dense in a general space ; then, in Line 3 of Algorithm 1, we compute a Gamma-minimax estimator in the current space, that is, we replace with . This sequence of estimators might converge to a Gamma-minimax estimator in the general space. One possible choice of () in this approach is a space of statistical knowledge networks in which the given estimator is the previously computed Gamma-minimax estimator. It is of future interest to investigate the properties of such an approach.
In conclusion, we propose adversarial meta-learning algorithms to compute a Gamma-minimax estimator under general models that can incorporate prior information in the form of generalized moment conditions. These algorithms can be useful when a parametric model is undesirable, semiparametric efficiency theory does not apply, or we wish to utilize prior information to improve estimation.
Acknowledgements
Generous support was provided by Amazon through an AWS Machine Learning Research Award, the NIH under award number DP2-LM013340, and the NSF under award number DMS-2210216. The content is solely the responsibility of the authors and does not necessarily represent the official views of Amazon, the NIH, or the NSF.
Appendix A Two counterexamples of Condition 3
We provide two counterexamples to Condition 3 to illustrate that this condition fails only in extremely ill-behaved cases.
In the first counterexample, the risk is discontinuous: we set it to be zero for a fixed and to be one for all other . If we choose the grid to be dense in but to never contain , then Condition 3 does not hold, since for sufficiently large such that but for all and . This issue can be resolved by choosing a continuous risk function.
In the second counterexample, does not contain distributions that are consistent with prior information. Suppose that where . In other words, it is known that the true data-generating mechanism must be a point mass at zero, and thus also contains only a point mass at . If for every , then, even if is dense in , Condition 3 does not hold. This issue can be resolved by rewriting the problem so that these hard constraints are incorporated into the specification of the model space rather than the set of priors.
Appendix B Additional gradient-based algorithms
If we can evaluate exactly for all and , then the GDmax algorithm (Algorithm 5) may be used. Note that Line 3 can be formulated as a linear program, which can always be solved in polynomial time with an interior-point method (e.g., Jiang et al.,, 2020) and can often be solved efficiently with a simplex method (Spielman and Teng,, 2004).
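To illustrate this linear-programming structure, consider maximizing the Bayes risk over priors supported on a finite grid subject to a single generalized-moment equality constraint. With one moment constraint plus the simplex constraint, every vertex of the feasible set places mass on at most two grid points, so a small instance can be solved by enumerating pairs rather than calling an LP solver. The grid risks, moments, and constraint value below are illustrative numbers of ours.

```python
import numpy as np

def max_bayes_risk(r, m, c):
    """Maximize sum_i w_i r_i over priors w on a grid, subject to
    w >= 0, sum_i w_i = 1, and the moment constraint sum_i w_i m_i = c.

    A linear program; its optimum is attained at a vertex, which here has
    at most two support points, so we enumerate singletons and pairs.
    """
    n = len(r)
    best, support = -np.inf, None
    for i in range(n):
        for j in range(i, n):
            if i == j:
                if not np.isclose(m[i], c):
                    continue  # singleton feasible only if its moment equals c
                w_i, w_j = 1.0, 0.0
            else:
                if np.isclose(m[i], m[j]):
                    continue
                w_i = (m[j] - c) / (m[j] - m[i])  # solve the 2x2 system
                w_j = 1.0 - w_i
                if w_i < 0 or w_j < 0:
                    continue  # c must lie between m_i and m_j
            value = w_i * r[i] + w_j * r[j]
            if value > best:
                best, support = value, (i, j, w_i, w_j)
    return best, support

# Grid of 4 candidate distributions: risk under the current estimator and
# the value of the constrained summary (e.g., a prior mean) at each point.
r = np.array([0.2, 1.0, 0.5, 0.8])
m = np.array([0.1, 0.9, 0.5, 0.3])
best, support = max_bayes_risk(r, m, c=0.5)
```

With several moment constraints the vertex support size grows accordingly, and a general-purpose LP solver becomes the practical choice, as the text indicates.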
We have the following result on the validity of GDmax.
Theorem 5 (Validity of GDmax (Algorithm 5)).
Therefore, we propose a variant (Algorithm 6) obtained by replacing this line with Lines 3–4 so that ordinary linear program solvers can be directly applied. The validity of this variant of SGDmax is given in Theorem 6 below.
Theorem 6 (Validity of convenient variant of SGDmax (Algorithm 6)).
Suppose that is a -Glivenko-Cantelli class (van der Vaart and Wellner,, 2000). Then, for any , there exists a sufficiently large such that
for all , where the expectation is taken over and is fixed. Therefore, with the chosen parameters in Theorem 2, we may choose a sufficiently large so that is an -stationary point of in expectation for and is thus close to a local minimum of with high probability.
We prove Theorem 6 by showing that converges to as . The proof is essentially an application of empirical process theory to the study of an M-estimator.
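As a toy illustration of the uniform-convergence phenomenon that the proof relies on (our own sketch, not part of the paper's argument), consider the classical Glivenko-Cantelli class of half-line indicators: the supremum over the class of the deviation between empirical and true means shrinks as the sample size grows.

```python
import random

def sup_deviation(n, grid_size=200, seed=0):
    """Draw n Uniform(0, 1) samples and return the supremum, over a
    grid of thresholds t, of |empirical CDF(t) - t|, i.e. the maximal
    deviation over the function class {x -> 1{x <= t}}."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n))
    dev = 0.0
    for i in range(grid_size + 1):
        t = i / grid_size
        emp = sum(1 for x in xs if x <= t) / n  # empirical mean of 1{x <= t}
        dev = max(dev, abs(emp - t))
    return dev
```

By the Dvoretzky–Kiefer–Wolfowitz inequality this supremum is of order n^{-1/2}, which is why a sufficiently large Monte Carlo size makes the approximation error negligible.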
Appendix C Additional simulation: mean estimation
In this appendix, we illustrate our proposed methods via simulation in a special case of Example 1, namely for estimating the mean of a distribution. We assume that consists of all probability distributions defined on the Borel -algebra on and we observe , where . Here we take . The estimand is . We use the mean squared error risk introduced in Example 1. Suppose that we represent the prior information by , which corresponds to the set of prior distributions in that satisfy an equality constraint on the prior mean of .
We apply our method to three spaces of estimators separately. The first space, , is the set of affine transformations of the sample mean, that is, . As shown in Proposition 1 in Appendix E.5, there is an estimator in that is -minimax in the space of all estimators that are square-integrable with respect to all , so we consider this simple space to better compare our computed estimator with that theoretical -minimax estimator. When computing a -minimax estimator in , we initialize the estimator to be the sample mean, that is, we let and .
The second space, (statistical knowledge network), is a set of neural networks designed based on statistical knowledge that includes the sample mean as an input. We consider this space to illustrate our proposal in Section 4.2. More precisely, we use the architecture in Fig. 7, which is similar to the deep set architecture (Zaheer et al.,, 2017; Maron et al.,, 2019) and is a permutation invariant neural network. We use such an architecture to account for the fact that the sample is iid. In this architecture, the sample mean node is used as an augmenting node to an ordinary deep set network and is combined with the output of that ordinary network in the fourth hidden layer to obtain the final output. Note that . When computing a -minimax estimator for this class, we also initialize the network to be exactly the sample mean, which is a reasonable choice given that the sample mean is known to be a sensible estimator. In this simulation experiment, we choose the dimensionality of nodes in each hidden layer in Fig. 7 as follows: each node in the first, second, third and fourth hidden layer represents a vector in 10, 5, 10 and , respectively. We do not use larger architectures because usually the sample mean is already a good estimator, and we expect to obtain a useful estimator as a small perturbation of this estimator. We also use the ReLU as the activation function. We did not use ELMs in this and the following simulations because we found that neural networks perform well.
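The following dependency-free sketch (ours; the layer sizes and parameter names are illustrative, not the exact architecture of Fig. 7) shows the two ingredients described above: a permutation-invariant deep-set computation and a sample-mean augmenting node combined at the output.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in W]

def deep_set_with_mean(xs, params):
    """Permutation-invariant estimator: embed each observation with a
    shared map, mean-pool the embeddings, pass the pooled vector
    through a second map, then combine the result linearly with the
    sample mean (the 'augmenting node')."""
    W1, W2, w_out, w_mean, bias = params
    pooled = [0.0] * len(W1)
    for x in xs:  # shared per-element embedding, then mean pooling
        h = relu(matvec(W1, [x]))
        pooled = [p + hi / len(xs) for p, hi in zip(pooled, h)]
    h2 = relu(matvec(W2, pooled))
    xbar = sum(xs) / len(xs)
    return sum(wi * hi for wi, hi in zip(w_out, h2)) + w_mean * xbar + bias
```

Setting the deep-set branch's output weights to zero and the sample-mean weight to one reproduces the sample mean exactly, which mirrors the initialization strategy described above.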
The third space, , is a set of neural networks that do not utilize knowledge of the sample mean. We consider this space to illustrate our method without utilizing existing estimators. These estimators are also deep set networks with an architecture similar to that in Fig. 7; the main difference is that the explicit sample mean node and the fourth hidden layer are removed. When computing a -minimax estimator in , unlike for and , we randomly initialize the network so as not to inject statistical knowledge. Because the ReLU activation function is used, , and so we do not expect optimizing over to lead to a -minimax estimator with worse performance than those in and .
To construct the grid for this problem, we use a simpler method than Algorithm 4. As indicated by Lemma 6 in Appendix E.5, for estimators in , Bernoulli distributions tend to have high risks since all probability weights lie on the boundary of ; in addition, a prior for which is Bayes is a Beta prior over Bernoulli distributions. Therefore, we randomly generate 2000 Bernoulli distributions as grid points in . We also include two degenerate distributions in this grid, namely the distribution that places all of its mass at and that which places all of its mass at . When constructing from , we still add in more complicated distributions to make the grid dense in the limit: we first randomly generate 500 discrete distributions with support being those in ; then we randomly generate 10 new support points in and 1000 distributions with support points being the union of the new support points and the existing support points in .
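The grid-construction scheme described above can be sketched as follows (our illustration; the counts and the dictionary representation of a discrete distribution are simplifications):

```python
import random

def initial_grid(n_bernoulli=2000, seed=0):
    """Initial grid of candidate data-generating distributions on
    [0, 1], each represented as a dict mapping support points to
    weights: random Bernoulli laws plus the two degenerate
    endpoint point masses."""
    rng = random.Random(seed)
    grid = [{0.0: 1 - p, 1.0: p}
            for p in (rng.random() for _ in range(n_bernoulli))]
    grid += [{0.0: 1.0}, {1.0: 1.0}]
    return grid

def refine_grid(grid, n_new_support=10, n_new_dists=1000, seed=1):
    """Enlarge the grid so that it becomes dense in the limit: draw
    fresh support points in [0, 1] and add random discrete
    distributions supported on the union of old and new points."""
    rng = random.Random(seed)
    support = sorted({pt for d in grid for pt in d} |
                     {rng.random() for _ in range(n_new_support)})
    new = []
    for _ in range(n_new_dists):
        w = [rng.random() for _ in support]
        s = sum(w)
        new.append({pt: wi / s for pt, wi in zip(support, w)})
    return grid + new
```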
When computing the -minimax estimator, for each grid , we compute the -minimax estimator for all three estimator spaces with Algorithm 6. We set the learning rate , the batch size and the number of iterations to be 200 for (). The number of iterations for is larger because, in our experiments, we saw that a -minimax estimator is already close to a -minimax estimator, and using a large number of iterations in this step can improve the initial estimator substantially. For and , the number of iterations for is 2000; the corresponding number for is 6000 to account for the lack of human knowledge input. We also use Algorithm 3 with 10000 iterations to compute a -minimax estimator for for illustration. In this setup, as described in Section 3.3, we take the average of the computed -minimax stochastic estimator as the final output estimator in . We do not apply Algorithm 3 to or because it is computationally intractable for these estimator spaces.
We set the stopping criterion in Algorithm 1 as follows. When Algorithm 6 is used to compute -minimax estimators, we estimate both and with 2000 Monte Carlo runs as described in Section 3.1; when Algorithm 3 is used, and are computed exactly because has a closed-form expression for all and . We set the tolerance to be equal to so that we stop Algorithm 1 if .
After computation, we report the Bayes risk of the computed and theoretical -minimax estimators under , the prior such that . For the estimators in , we further report their coefficients. We also report two coefficients of the computed estimator in as follows. Since and we initialize the estimator to be the sample mean for , we would expect that the bias and the weight of the sample mean in the output layer for the computed -minimax estimator in may correspond to those in . Therefore, we also report these two coefficients and for . This may not be the case for because the sample mean is not explicit in its parameterization and all coefficients are randomly initialized, so we do not report any coefficients for .
Table 5 presents the computation results. By Theorem 7 in Appendix E.5, these computed estimators are all approximately -minimax since their Bayes risks for are all close to that of a theoretical -minimax estimator. The coefficients and of the computed estimators in and are also close to a theoretically derived estimator. For the computed estimator in , the weight of the other ancestor node in the output layer (i.e., the node in the 4th hidden layer in Fig. 7) is . Therefore, our computed -minimax estimator in is also close to a theoretically derived -minimax estimator.
| Estimator space | Method to obtain | | | |
| --- | --- | --- | --- | --- |
| Unrestricted space | Theoretical derivation | | | |
| | Algorithms 1 & 6 | | | |
| | Algorithms 1 & 6 | | | |
| | Algorithms 1 & 6 | — | — | |
| | Algorithms 1 & 3 | | | |
In our experiments, Algorithm 1 converged after computing a -minimax estimator, except when using Algorithm 6 for . Even in this exceptional case, the computed -minimax estimator is still approximately -minimax. We believe the algorithm fails to stop in this case because of Monte Carlo error in the computation of and .
Fig. 3 presents the Bayes risks (or its unbiased estimates) over iterations when computing a -minimax estimator. In all cases using Algorithm 6, the Bayes risks appear to decrease and converge. When using Algorithm 3, the upper and lower bounds both converge to the same limit. The limiting values of the Bayes risks in all cases are close to because can approximate well.
Appendix D Sensitivity analysis for tuning parameter selection
For the simulation in Section 5.1 with strongly informative prior information, we conduct three simulations to investigate the sensitivity of our proposed method to the selection of tuning parameters. In each simulation below, we vary one set of tuning parameters and rerun the algorithm to obtain an estimator. In the first simulation, we vary the starting point of Algorithm 4 used to construct the first grid . The new starting point is a distribution with 173 categories and , so this starting point is qualitatively different from the one chosen in the original simulation. In the second simulation, we vary the grid sizes: there are 500 grid points in , and we add 500 grid points each time we enlarge the grid. In the third simulation, we choose a wider and deeper statistical knowledge network (see Fig. 8): compared to the original simulation, we add one more hidden layer and increase the number of hidden nodes in the first two hidden layers to 100. As shown in Table 6, the results in these sensitivity simulations appear similar to those in Section 5.1, up to the variation due to randomness in MCMC (Algorithm 4) and SGDmax (Algorithm 6).
| Varied tuning parameter | | |
| --- | --- | --- |
| Initial distribution in MCMC | 19 | 44 |
| Grid size | 15 | 34 |
| Statistical knowledge network structure | 17 | 38 |
Appendix E Proofs
E.1 Proof of Theorem 1 and Corollary 1
Lemma 1.
If is an increasing sequence of subsets of such that , then, for any , ().
Proof of Lemma 1.
Since , it holds that , and so we only need to lower bound . Fix . By Corollary 5 of Pinelis, (2016), can be approximated arbitrarily well by priors with finite support; that is, there exists with finite support such that . For sufficiently large , contains all support points of and hence . The desired result follows. ∎
Lemma 2.
Under Condition 2, for any and , there exists such that for all such that .
Proof of Lemma 2.
Lemma 3.
Under Condition 3, it holds that .
Proof of Lemma 3.
Proof of Theorem 1.
Let . There exists such that
Moreover, there exists such that
Using the fact that is -minimax and the definition of , it holds that
Since this inequality holds for any , we have that . An almost identical argument shows that the sequence is nondecreasing. Therefore, this sequence converges to some limit .
We next prove that . Let . Without loss of generality, we may assume that for all in Condition 3. (Otherwise, we may instead consider the sequence where . Note that Condition 3 also holds for .) By Lemma 1, there exists such that . By Condition 3, there exists such that . Without loss of generality, suppose that (otherwise, pass to a convergent subsequence). This implies that there exists such that is small enough that, by Lemma 2, . Moreover, since , it holds that . Therefore, . Since the sequence is nondecreasing, it holds that for all . Since is arbitrary, we have that , and hence .
Combining the results from the preceding two paragraphs, . Consequently, is -minimax. Moreover, as increases to , this sequence also increases to . This concludes the proof. ∎
Proof of Corollary 1.
We first establish the strict convexity of for any , and then that of . Finally, we establish that has a unique minimizer and show that the desired result follows from Theorem 1.
Let and be arbitrary. Then, by the convexity of and the strict convexity of for each ,
Therefore, is strictly convex for any .
Let be distinct and be arbitrary. Since is attainable for any , there exists such that
Thus, is strictly convex.
As is strictly convex and is convex, this function achieves exactly one minimum on . By Theorem 1, any limit point of is a minimizer of , and so the sequence has a limit point, which is also the unique -minimax estimator. ∎
E.2 Proof of Theorems 2 & 5
We prove Theorems 2 and 5 by checking that Assumptions 3.1 and 3.6 in Lin et al., (2020) are satisfied and applying Theorems E.3 and E.4 in Lin et al., (2020), respectively. Since Assumption 3.1 is satisfied by our construction of , we focus on Assumption 3.6 for the rest of this section.
Let . For any , let denote the probability weight of on (). For the rest of this section, we also use to denote the vector . We also use to denote less than or equal to up to a universal positive constant that may depend on . Then, straightforward calculations imply that and .
For each , for any and , by Conditions 4 and 5,
and similarly for ,
This implies that for each , the gradient of (, ) is Lipschitz continuous.
For each , for any and , Condition 4 implies that
Therefore, is Lipschitz continuous with a universal Lipschitz constant independent of .
E.3 Proof of Theorem 6
Proof of Theorem 6.
Let denote a maximizer of . It holds that
Note that the right hand side does not depend on . Therefore,
where stands for outer expectation. We may apply Corollary 9.27 in Kosorok, (2008) to and show that is a -Glivenko-Cantelli class. Therefore,
as . Here, stands for the minimal measurable majorant with respect to of a (possibly non-measurable) mapping (van der Vaart and Wellner,, 2000).
By Problem 1 of Section 2.4 in van der Vaart and Wellner, (2000), there exists a random variable such that -almost surely and . Then,
-almost surely. By the dominated convergence theorem,
as , and so does . Thus, for any , there exists a sufficiently large such that for all . ∎
E.4 Proof of Theorem 3
Our proof of Theorem 3 builds on that of Robinson, (1951). Major modifications are needed to accommodate more general definitions allowing for potentially infinite spaces of pure strategies, and to control a bound on more carefully towards the end of the proof.
In this appendix, we slightly abuse the notation and use to denote the compact set that contains all (). We first introduce the notion of cumulative Bayes risk functions. Under Algorithm 3, we let and be any two continuous functions such that
(3) |
and recursively define
(4) |
for and . Here, we let and . Note that the choices of and in Algorithm 3 correspond to setting and , in which case and . In general,
(5) |
for some and . Later in this section, we will also make use of and with other initializations and .
To make the notation concise, we define for any , and define , and () similarly. We also drop the subscript denoting the set when the set is the whole space we consider, that is, or . Note that for any , under the setting of Algorithm 3 and (2), it holds that
Therefore, to prove the first result in Theorem 3, it suffices to show that .
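The finite-game analogue of the argument below is Robinson's classical result: under fictitious play for a zero-sum matrix game, the running upper and lower bounds on the game value converge to the value. The following minimal sketch (ours, not the paper's Algorithm 3, which operates on estimator and prior spaces rather than a finite matrix) illustrates the bounds:

```python
def fictitious_play(A, iters=2000):
    """Fictitious play for the zero-sum game with payoff matrix A
    (row player maximizes).  Each player best-responds to the
    opponent's empirical strategy; the running bounds satisfy
    lower <= value <= upper and, by Robinson (1951), both
    converge to the value of the game."""
    m, n = len(A), len(A[0])
    row_cum = [0.0] * m  # cumulative payoff of each row vs column history
    col_cum = [0.0] * n  # cumulative payoff of each column vs row history
    i = 0                # current row pure strategy
    for _ in range(iters):
        for k in range(n):
            col_cum[k] += A[i][k]
        j = min(range(n), key=lambda k: col_cum[k])  # column best response
        for k in range(m):
            row_cum[k] += A[k][j]
        i = max(range(m), key=lambda k: row_cum[k])  # row best response
    # row's empirical strategy guarantees at least min(col_cum)/iters;
    # column's empirical strategy concedes at most max(row_cum)/iters
    return min(col_cum) / iters, max(row_cum) / iters
```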
We next introduce additional definitions related to iterations. We say that is eligible in the interval if there exists such that ; we say that is eligible in the interval if there exists such that . We also define eligibility for sets. We say that is eligible in the interval if there exists that is eligible in that interval; we say that is eligible in the interval if there exists that is eligible in the interval . In addition, for any , we define maximum variation and similarly for any . By Condition 2, there exists such that . Note that by Condition 1 and Lemma 2, given an arbitrary desired approximation accuracy , can be covered by finitely many compact subsets with the maximum variation of each subset bounded by for all ; by Condition 2, since is parameterized by a compact subset of a simplex in a Euclidean space, can also be covered by finitely many compact subsets with the maximum variation of each subset bounded by for all . These covers can be viewed as discrete finite approximations to and , respectively.
All of the above definitions are associated with the space of estimators and the set of priors . We call a pair of cumulative Bayes risk functions constructed from this pair of estimator space and prior set; in the subsequent proof, we will also consider pairs of cumulative Bayes risk functions constructed from other such pairs, for which the above quantities are defined analogously.
The following lemma gives an upper bound on the maximum variation of and over the entire spaces from which they are constructed after iterations from , provided that essentially all parts of these spaces are eligible in .
Lemma 4.
Suppose that is a pair of cumulative Bayes risk functions constructed from . Suppose that and where
for . If all and are eligible in , then and .
Proof of Lemma 4.
Without loss of generality, assume that . Since is eligible in , there exists such that . By the recursive definition of the sequence in (4), the bound on the risk, and the assumption that , we have that . Letting , by the bound on the risk, we can derive that . Combining these two inequalities, we have that . An identical argument applied to the sequence shows that . ∎
The next lemma builds on the previous lemma and provides an upper bound on under the same conditions.
Lemma 5.
Under the same setup and conditions as in Lemma 4, .
Proof of Lemma 5.
Summing the two inequalities in Lemma 4 and rearranging the terms, we have that . It therefore suffices to show that .
Let . There exists and a stochastic strategy such that and for all and all . Therefore, for this choice of and , using (3), . ∎
Proof of Theorem 3.
It suffices to show that by letting and , which corresponds to Algorithm 3. Let . Note that is Lipschitz continuous by Lemma 2 and the fact that is an average of bounded risks with weights . Furthermore, and are both compact. In addition, and are both continuous. Therefore, there exist covers and such that (i) and are all compact, and (ii) , . (Note that and may depend on .) For index sets and , define and . We show that for an absolute constant and all sufficiently large via induction on the sizes of and .
Let be a pair of cumulative Bayes risk functions constructed from where . By (5) and the fact that and , we have that
Therefore, .
Let be arbitrary. Suppose that there exists such that, for any and such that or , for any pair of cumulative Bayes risk functions constructed from , it holds that for all . We next obtain a slightly greater bound on for all sufficiently large .
We first prove that if, for a given pair of cumulative Bayes risk functions constructed from , there exists or such that or is not eligible in an interval , then
(6) |
Suppose that is not eligible in . Then define and for all . It is straightforward to check that satisfies the recursive definition of a pair of cumulative Bayes risk functions constructed from . By the induction hypothesis, . Therefore, . A similar argument applies if is not eligible in .
Now we obtain a bound on . Let , and . There are two cases.
Case 1: There exists such that and are eligible in for all and . Take to be the largest such integer. Then, repeatedly applying (6) to intervals , we derive that
By Lemma 5, . Therefore,
Case 2: There is no integer satisfying the condition in Case 1. Then, repeatedly applying (6) to intervals , we derive that
By the bound on the risk, and . Hence,
Thus, in both cases, it holds that for . Let be any constant (which may depend on , the approximation error of the covers, that is, the bound on ). The following holds for any sufficiently large ,
(7) |
In other words, we have shown that, after the size of either index set increases by one, for all sufficiently large , the bound on grows by a multiplicative factor of relative to the original bound.
It takes finitely many, say , steps to induct from the initial case, where the sizes of both index sets are one, to the case of interest with index sets and . (Note that may also depend on through its dependence on and .) Taking in (7), we derive that, for all sufficiently large ,
where is the base of the natural logarithm. Since is arbitrary, this shows that . ∎
E.5 Derivation of -minimax estimator of the mean in Section C
In this section, we show that, for the problem of estimating the mean in Section C, one -minimax estimator lies in . This is formally presented below.
Proposition 1.
Let consist of all probability distributions defined on the Borel -algebra on . Let and be the observed data. Let denote the mean parameter and be the set of priors that represent prior information. Let denote the space of estimators that are square-integrable with respect to all . Consider the risk in Example 1, . Define and . Then is -minimax over .
We first present a theorem on a criterion of -minimaxity.
Theorem 7.
Suppose that is a Bayes estimator for and . Then is a -minimax estimator in .
Proof of Theorem 7.
Clearly . Fix . Then, . Since is arbitrary, this shows that . Thus, and is -minimax. ∎
We now present a lemma that is used to prove Proposition 1.
Lemma 6.
Let and suppose that denotes the model space that consists of all probability distributions defined on the Borel -algebra on with mean . Let denote a generic random variable generated from some . Then , where is defined by and .
Proof of Lemma 6.
Without loss of generality, we may assume that and . Note that for any , it holds that , where the equality is attained if . Therefore, the maximum variance is achieved at the distribution with the specified mean and support being , that is, at the distribution defined in the lemma statement. Straightforward calculations show that . ∎
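The claim of Lemma 6, that among distributions on the unit interval with a fixed mean the variance is maximized by placing all mass on the endpoints, can be spot-checked numerically; the following sketch (ours, with an illustrative mean of 0.3) compares the boundary two-point law against mean-matched competitors:

```python
def variance(dist):
    """Variance of a finite discrete distribution {support: weight}."""
    mean = sum(x * w for x, w in dist.items())
    return sum(w * (x - mean) ** 2 for x, w in dist.items())

mu = 0.3
extreme = {0.0: 1 - mu, 1.0: mu}      # all mass on the boundary of [0, 1]
# competitors on [0, 1] with the same mean but smaller variance
competitors = [
    {mu: 1.0},                        # point mass at the mean
    {0.1: 0.5, 0.5: 0.5},             # interior support, mean 0.3
    {0.0: 0.6, 0.5: 0.2, 1.0: 0.2},   # mixed support, mean 0.3
]
```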
Proof of Proposition 1.
Let and let be a prior distribution over such that the prior distribution on the success probability is . By Theorem 1.1 in Chapter 4 of Lehmann and Casella, (1998), a Bayes estimator for minimizes the risk under the posterior distribution, whose minimizer over is the posterior mean for our choice of risk. That is, is a Bayes estimator in for .
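The posterior-mean computation invoked in this proof has a standard closed form in the Beta-Bernoulli model: under a Beta(a, b) prior on the success probability, after observing s successes in n trials, the posterior mean is (s + a)/(n + a + b), an affine function of the sample mean. A quick numerical check (ours, with illustrative prior parameters rather than the specific Beta prior used in the proof):

```python
def beta_binomial_posterior_mean(n, s, a, b, grid=100000):
    """Posterior mean of the success probability p given s successes
    in n Bernoulli trials under a Beta(a, b) prior, computed by
    numerical integration of the unnormalized posterior density
    p^(s + a - 1) * (1 - p)^(n - s + b - 1)."""
    num = den = 0.0
    for k in range(1, grid):
        p = k / grid
        w = p ** (s + a - 1) * (1 - p) ** (n - s + b - 1)
        num += p * w
        den += w
    return num / den
```

That the Bayes estimator is affine in the sample mean is what places it inside the class of estimators considered in Proposition 1.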
| Symbol | Description |
| --- | --- |
| | True data-generating mechanism |
| | Space of data-generating mechanisms containing |
| and | Full generated data and coarsened data |
| | Space of candidate estimators or decision functions (e.g., neural networks) |
| | Risk function |
| | Bayes risk function |
| | Set of prior distributions consistent with prior knowledge |
| | Functional defining the estimand in Examples 1–3 |
| | An increasing sequence of finite subsets of |
| | Set of priors in with support in |
| | Worst-case Bayes risk function |
| | -minimax estimator in |
| | A limit point of sequence , which is -minimax in by Theorem 1 |
| | Coefficient of a finite-dimensional estimator (e.g., neural network) |
| | Exogenous randomness |
| | Unbiased approximation of |
| | Stochastic estimator following distribution over |
| | Space of stochastic estimators |
| | -minimax estimator in |
References
- Amazon, (2019) Amazon (2019). Amazon EC2 Instance Types - Amazon Web Services.
- Bartlett, (1997) Bartlett, P. L. (1997). For valid generalization, the size of the weights is more important than the size of the network. Advances in Neural Information Processing Systems, pages 134–140.
- Bartlett et al., (2017) Bartlett, P. L., Foster, D. J., and Telgarsky, M. (2017). Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems, 2017-December:6241–6250.
- Berger, (1985) Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. Springer New York, New York, NY.
- Bickel et al., (1993) Bickel, P. J., Klaassen, C. A., Bickel, P. J., Ritov, Y., Klaassen, J., Wellner, J. A., and Ritov, Y. (1993). Efficient and adaptive estimation for semiparametric models, volume 4. Johns Hopkins University Press Baltimore.
- Birmingham et al., (2003) Birmingham, J., Rotnitzky, A., and Fitzmaurice, G. M. (2003). Pattern-mixture and selection models for analysing longitudinal data with monotone missing patterns. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 65(1):275–297.
- Brown, (1951) Brown, G. W. (1951). Iterative solution of games by fictitious play. Activity analysis of production and allocation, 13(1):374–376.
- Bryan et al., (2007) Bryan, B., McMahan, H. B., Schafer, C. M., and Schneider, J. (2007). Efficiently computing minimax expected-size confidence regions. ACM International Conference Proceeding Series, 227:97–104.
- Bunge et al., (2014) Bunge, J., Willis, A., and Walsh, F. (2014). Estimating the Number of Species in Microbial Diversity Studies. Annual Review of Statistics and Its Application, 1(1):427–445.
- Chen et al., (1991) Chen, L., Eichenauer-Herrmann, J., Hofmann, H., and Kindler, J. (1991). Gamma-minimax estimators in the exponential family. Polska Akademia Nauk, Instytut Matematyczny.
- Chen et al., (1988) Chen, L., Eichenauer-Herrmann, J., and Lehn, J. (1988). Gamma-minimax estimators for the parameters of a multinomial distribution. Applicationes Mathematicae, 20(4):561–564.
- Chen, (2007) Chen, X. (2007). Chapter 76 Large Sample Sieve Estimation of Semi-Nonparametric Models. Handbook of Econometrics, 6(SUPPL. PART B):5549–5632.
- Csáji, (2001) Csáji, B. (2001). Approximation with artificial neural networks. Technical report.
- Eckle and Schmidt-Hieber, (2019) Eckle, K. and Schmidt-Hieber, J. (2019). A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Networks, 110:232–242.
- Eichenauer-Herrmann, (1990) Eichenauer-Herrmann, J. (1990). A gamma-minimax result for the class of symmetric and unimodal priors. Statistical Papers, 31(1):301–304.
- Eichenauer-Herrmann et al., (1994) Eichenauer-Herrmann, J., Ickstadt, K., and Weiß, E. (1994). Gamma-minimax results for the class of unimodal priors. Statistical Papers, 35(1):43–56.
- Erhan et al., (2010) Erhan, D., Courville, A., Bengio, Y., and Vincent, P. (2010). Why does unsupervised pre-training help deep learning? Technical report.
- Fan, (1953) Fan, K. (1953). Minimax theorems. Proceedings of the National Academy of Sciences of the United States of America, 39(1):42.
- Gill et al., (1997) Gill, R. D., van der Laan, M. J., and Robins, J. M. (1997). Coarsening at Random: Characterizations, Conjectures, Counter-Examples. pages 255–294. Springer, New York, NY.
- Glorot et al., (2011) Glorot, X., Bordes, A., and Bengio, Y. (2011). Deep sparse rectifier neural networks. Technical report.
- Goel et al., (2016) Goel, S., Kanade, V., Klivans, A., and Thaler, J. (2016). Reliably Learning the ReLU in Polynomial Time.
- Goodfellow et al., (2014) Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. Technical Report January.
- Green, (1995) Green, P. J. (1995). Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination. Biometrika, 82(4):711.
- Hanin and Sellke, (2017) Hanin, B. and Sellke, M. (2017). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278v2.
- Hastings, (1970) Hastings, W. K. (1970). Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika, 57(1):97.
- Heitjan, (1993) Heitjan, D. F. (1993). Ignorability and Coarse Data: Some Biomedical Examples. Biometrics, 49(4):1099.
- Heitjan, (1994) Heitjan, D. F. (1994). Ignorability in general incomplete-data models. Biometrika, 81(4):701–708.
- Heitjan and Rubin, (1991) Heitjan, D. F. and Rubin, D. B. (1991). Ignorability and Coarse Data. Technical Report 4.
- Hornik, (1991) Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257.
- Huang et al., (2021) Huang, F., Wu, X., and Huang, H. (2021). Efficient mirror descent ascent methods for nonsmooth minimax problems. Advances in Neural Information Processing Systems, 34.
- Huang et al., (2006a) Huang, G. B., Chen, L., and Siew, C. K. (2006a). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4):879–892.
- Huang et al., (2006b) Huang, G. B., Zhu, Q. Y., and Siew, C. K. (2006b). Extreme learning machine: Theory and applications. Neurocomputing, 70(1-3):489–501.
- Jiang et al., (2020) Jiang, S., Song, Z., Weinstein, O., and Zhang, H. (2020). Faster Dynamic Matrix Inverse for Faster LPs. arXiv preprint arXiv:2004.07470v1.
- Jiao et al., (2015) Jiao, J., Venkat, K., Han, Y., and Weissman, T. (2015). Minimax Estimation of Functionals of Discrete Distributions. IEEE Transactions on Information Theory, 61(5):2835–2885.
- Kempthorne, (1987) Kempthorne, P. J. (1987). Numerical Specification of Discrete Least Favorable Prior Distributions. SIAM Journal on Scientific and Statistical Computing, 8(2):171–184.
- Kidger and Lyons, (2020) Kidger, P. and Lyons, T. (2020). Universal Approximation with Deep Narrow Networks. Technical report.
- Kosorok, (2008) Kosorok, M. R. (2008). Introduction to Empirical Processes and Semiparametric Inference, volume 77 of Springer Series in Statistics. Springer New York.
- Lehmann and Casella, (1998) Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation. Springer.
- Lin et al., (2020) Lin, T., Jin, C., and Jordan, M. I. (2020). On gradient descent ascent for nonconvex-concave minimax problems. 37th International Conference on Machine Learning, ICML 2020, PartF168147-8:6039–6049.
- Luedtke et al., (2020a) Luedtke, A., Carone, M., Simon, N., and Sofrygin, O. (2020a). Learning to learn from data: Using deep adversarial learning to construct optimal statistical procedures. Science Advances, 6(9):eaaw2140.
- Luedtke et al., (2020b) Luedtke, A., Chung, I., and Sofrygin, O. (2020b). Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures. arXiv preprint arXiv:2002.11275v1.
- Maron et al., (2019) Maron, H., Fetaya, E., Segol, N., and Lipman, Y. (2019). On the Universality of Invariant Networks.
- Metropolis et al., (1953) Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092.
- Miller and Wiegert, (1989) Miller, R. I. and Wiegert, R. G. (1989). Documenting completeness, species-area relations, and the species-abundance distribution of a regional flora. Ecology, 70(1):16–22.
- Nelson, (1966) Nelson, W. (1966). Minimax Solution of Statistical Decision Problems by Iteration. The Annals of Mathematical Statistics, 37(6):1643–1657.
- Neyshabur et al., (2017) Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. (2017). Exploring generalization in deep learning. Advances in Neural Information Processing Systems, 2017-December:5948–5957.
- Noubiap and Seidel, (2001) Noubiap, R. F. and Seidel, W. (2001). An algorithm for calculating -minimax decision rules under generalized moment conditions. Annals of Statistics, 29(4):1094–1116.
- Olman and Shmundak, (1985) Olman, V. and Shmundak, A. (1985). Minimax Bayes estimation of mean of normal law for the class of unimodal a priori distributions. Proc. Acad. Sci. Estonian Physics Math, 34:148–153.
- Orlitsky et al., (2016) Orlitsky, A., Suresh, A. T., and Wu, Y. (2016). Optimal prediction of the number of unseen species. Proceedings of the National Academy of Sciences of the United States of America, 113(47):13283–13288.
- Paszke et al., (2019) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. (2019). Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R., editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc.
- Pfanzagl, (1990) Pfanzagl, J. (1990). Estimation in semiparametric models. pages 17–22. Springer, New York, NY.
- Pinelis, (2016) Pinelis, I. (2016). On the extreme points of moments sets. Mathematical Methods of Operations Research, 83(3):325–349.
- Qiu, (2022) Qiu, H. (2022). QIU-Hongxiang-David/Gamma-minimax-learning: Simulation code for “Constructing Gamma-minimax estimators to leverage vague prior information”. https://github.com/QIU-Hongxiang-David/Gamma-minimax-learning/. [Online; accessed 2022-03-14].
- Robbins, (1951) Robbins, H. (1951). Asymptotically Subminimax Solutions of Compound Statistical Decision Problems. In Proceedings of the second Berkeley symposium on mathematical statistics and probability, pages 131–149.
- Robinson, (1951) Robinson, J. (1951). An Iterative Method of Solving a Game. The Annals of Mathematics, 54(2):296–301.
- Sarma and Kay, (2020) Sarma, A. and Kay, M. (2020). Prior Setting in Practice: Strategies and Rationales Used in Choosing Prior Distributions for Bayesian Analysis. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery.
- Schafer and Stark, (2009) Schafer, C. M. and Stark, P. B. (2009). Constructing confidence regions of optimal expected size. Journal of the American Statistical Association, 104(487):1080–1089.
- Shannon, (1948) Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423.
- Shen et al., (2003) Shen, T. J., Chao, A., and Lin, C. F. (2003). Predicting the number of new species in further taxonomic sampling. Ecology, 84(3):798–804.
- Sion, (1958) Sion, M. (1958). On general minimax theorems. Pacific Journal of Mathematics, 8(1):171–176.
- Spielman and Teng, (2004) Spielman, D. A. and Teng, S. H. (2004). Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385–463.
- Torrey and Shavlik, (2009) Torrey, L. and Shavlik, J. (2009). Transfer learning. Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, 11:242–264.
- v. Neumann, (1928) v. Neumann, J. (1928). Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100(1):295–320.
- van der Vaart and Wellner, (2000) van der Vaart, A. and Wellner, J. (2000). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer.
- Vidakovic, (2000) Vidakovic, B. (2000). Γ-Minimax: A Paradigm for Conservative Robust Bayesians, pages 241–259. Springer, New York, NY.
- Wald, (1945) Wald, A. (1945). Statistical Decision Functions Which Minimize the Maximum Risk. The Annals of Mathematics, 46(2):265–280.
- Zaheer et al., (2017) Zaheer, M., Kottur, S., Ravanbakhsh, S., Póczos, B., Salakhutdinov, R., and Smola, A. J. (2017). Deep sets. Advances in Neural Information Processing Systems 30, pages 3392–3402.
- Zhang et al., (2016) Zhang, Y., Lee, J. D., and Jordan, M. I. (2016). L1-regularized neural networks are improperly learnable in polynomial time. 33rd International Conference on Machine Learning, ICML 2016, 3:1555–1563.