
Adaptive Crowdsourcing Via Self-Supervised Learning

Anmol Kagrecha (Department of Electrical Engineering, Stanford University), Henrik Marklund (Department of Computer Science, Stanford University), Benjamin Van Roy (Department of Electrical Engineering and Department of Management Science and Engineering, Stanford University), Hong Jun Jeon (Department of Computer Science, Stanford University), Richard Zeckhauser (John F. Kennedy School of Government, Harvard University)
Abstract

Common crowdsourcing systems average estimates of a latent quantity of interest provided by many crowdworkers to produce a group estimate. We develop a new approach – predict-each-worker – that leverages self-supervised learning and a novel aggregation scheme. This approach adapts weights assigned to crowdworkers based on estimates they provided for previous quantities. When skills vary across crowdworkers or their estimates correlate, the weighted sum offers a more accurate group estimate than the average. Existing algorithms such as expectation maximization can, at least in principle, produce similarly accurate group estimates. However, their computational requirements become onerous when complex models, such as neural networks, are required to express relationships among crowdworkers. Predict-each-worker accommodates such complexity as well as many other practical challenges. We analyze the efficacy of predict-each-worker through theoretical and computational studies. Among other things, we establish asymptotic optimality as the number of engagements per crowdworker grows.

1 Introduction

Aggregating opinions from a diverse group often yields more accurate estimates than relying on a single individual. Applications are wide-ranging. Intelligence agencies query crowds to help predict uncertain and significant events, showing that a collective effort can give better results (Tetlock and Gardner, 2016). Aggregating inputs has also been shown to improve answers to questions in the single-question crowd wisdom problem (Prelec et al., 2017) and in financial forecasting (Da and Huang, 2020). Modern artificial intelligence (AI) also relies on aggregation of input from human annotators for the development of image classifiers (Vaughan, 2017) and chatbots (Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023; Google, 2023; Anthropic, 2023).

The recent explosion of AI has led to considerable discussion of situations where computers could partly or wholly replace humans for crowdsourcing tasks (Zhu et al., 2023; Törnberg, 2023; Ollion et al., 2023; Boussioux et al., 2023). The algorithms we discuss in this paper could apply to such settings but were conceived to apply to groups of people in the spirit of initial work on crowdsourcing. The mid-nineteenth-century Smithsonian Institution Meteorological Project processed localized data into national weather maps. One hundred fifty volunteer respondents provided information by telegraph (Smithsonian Institution Archives, 2024), which was processed by a dozen-plus humans to create a national data map. By 1860, five hundred local weather stations were reporting (National Weather Service, 2024).

Many traditional crowdsourcing examples involve large numbers of participants. However, the underlying principles apply even with quite small groups, as would be the norm in many contexts, such as with boards of directors or R&D research teams.

A conventional approach to aggregation assigns equal importance to each crowdworker’s input. This approach, which we refer to as averaging, is simple and enjoys wide use. With a sufficiently large number of crowdworkers, averaging performs well. However, engaging so many crowdworkers can be costly or infeasible. Because of this, practical crowdsourcing systems work with a limited number of crowdworkers, and averaging leaves substantial room for improvement. We discuss two simple cases where averaging can require too many crowdworkers in order to produce accurate estimates. One arises when skills vary across crowdworkers; those who are more skilled ought to be assigned greater weight. Another example arises when crowdworkers share sources of information, perhaps because they watch the same news channels or follow similar accounts on social media, or utilize the same data sets. In this case, each crowdworker’s estimate does not provide independent information, and the crowdworker’s weight should decrease with the degree of dependence.

In principle, if each crowdworker’s level of skill and independence are known, it should be possible to derive weights that improve group estimates relative to averaging. Our work introduces an approach that approximates these weights given estimates of past outcomes provided by the same crowdworkers. The weights reflect what is learned about skills and independence.

If patterns among crowdworker estimates are expressed by simple – for example, linear – models, existing algorithms offer effective means for fitting to past observations. Expectation maximization (EM), in particular, offers a popular option used for this purpose (Zhang et al., 2016a; Zheng et al., 2017). However, in many contexts, the relationships are complex and call for flexible machine learning models such as neural networks. Computational requirements of existing algorithms such as EM become onerous.

To address the limitations of existing methods, we introduce a new approach called predict-each-worker, which leverages self-supervised learning and a novel aggregation scheme. This approach uses self-supervised learning (SSL) (Liu et al., 2021) to infer patterns among crowdworker estimates. In particular, for each crowdworker, predict-each-worker produces an SSL model that predicts the crowdworker’s estimate based on past observations and estimates of other crowdworkers. This enables modeling of complex patterns among crowdworker estimates, for example, by using neural network SSL models. The aggregation scheme leverages these SSL models to weight crowdworkers to a degree that increases with the crowdworker’s skill and independence.

In short, the predict-each-worker approach employs each individual crowdworker’s estimates, after processing by the center, as the basis for producing a group estimate. These individual estimates represent an atomic approach for building a group estimate. This approach contrasts sharply with other methods that instead proceed from a series of group estimates. For example, the famed Delphi method (Dalkey and Helmer, 1963) involves a panel of experts or crowdworkers who, through several rounds of interactions, refine their estimates. After each round, a facilitator provides an anonymous summary of the crowdworkers’ estimates. The crowdworkers are then encouraged to revise their earlier estimates in light of the replies of other members of their panel. This process is repeated until the group converges on a consensus. This method has been applied, for example, to military forecasting (Dalkey and Helmer, 1963), health research (de Meyrick, 2003), and graduate research (Skulmoski et al., 2007). While problem settings where crowdworkers can update their estimates are important and interesting, our approach, predict-each-worker, is not designed for such settings, and we leave it for future work to develop an extension.

To investigate the performance of our approach, we perform both computational and theoretical studies. For a Gaussian data-generating process, we prove that our algorithm is asymptotically optimal, and through simulations, we find that this method outperforms averaging crowdworker estimates. Moreover, our empirical results indicate that when the dataset is large, predict-each-worker performs as well as an EM-based algorithm. These studies offer a ‘sanity check’ for our approach and motivate future work for more complex data-generating processes as well as real-world problem settings.

2 Problem Formulation

We consider a crowdsourcing system as illustrated in Figure 1. In each round, indexed by positive integers $t$, $K$ distinct crowdworkers provide estimates $Y_t \in \Re^K$ for an independent outcome $Z_t \in \Re$. A center observes the estimates $Y_t$ to produce a group estimate $\hat{Z}_t \in \Re$ for the outcome $Z_t$. We assume that the center does not observe the outcomes.

To model uncertainty from the center’s perspective, we take $Z_t$ and $Y_t$ to be random variables. These and all random variables we consider are defined with respect to a probability space $(\Omega, \mathbb{F}, \mathbb{P})$. We assume that $(Y_t : t \in \mathbb{Z}_{++})$ is exchangeable and, for simplicity, that $\mathbb{E}[Y_t] = 0$.

While we could consider such a process for any sequence of outcomes $Z_t$, we will restrict attention to the case where $Z_t$ is the consensus of an infinite population of crowdworkers, in a sense we now define. For each $t$, we model crowdworker estimates $(\tilde{Y}_{t,k} : k \in \mathbb{Z}_{++})$ as an exchangeable stochastic process. We assume that the limit $\lim_{K\to\infty} \sum_{k=1}^{K} \tilde{Y}_{t,k}/K$ exists almost surely and that the consensus is given by

$$Z_t = \lim_{K\to\infty} \sum_{k=1}^{K} \frac{\tilde{Y}_{t,k}}{K}. \tag{1}$$

We define ZtZ_{t} this way because we believe that such a consensus estimate is often useful – this is motivated by work on the wisdom of crowds (Larrick and Soll, 2006). Another benefit of such a definition is that one can assess performance using observable estimates from out-of-sample crowdworkers without relying on subjective beliefs about the relationship between estimates and outcomes. The notion of assessment based on out-of-sample crowdworker estimates can be used for hyperparameter tuning, as explained in Appendix D.5.

We take $Y_t$ to be the first $K$ components $\tilde{Y}_{t,1:K}$. Hence, each of the $K$ crowdworkers can be thought of as sampled uniformly and independently from the infinite population. The center’s objective is to produce an accurate estimate of the consensus $Z_t$ based on estimates supplied by the finite sample of $K$ crowdworkers.

Figure 1: Crowdsourcing: each crowdworker provides an estimate of an unobserved quantity $Z_t$, and a center aggregates them to produce a group estimate.

Estimates $Y_t$ depend on unknown characteristics of the crowdworkers, such as their skills and relationships. By de Finetti’s Theorem, exchangeability across $t$ implies the existence of a random variable $\theta$ such that, conditioned on $\theta$, $(Y_t : t \in \mathbb{Z}_{++})$ is iid. Intuitively, this latent random variable $\theta$ encodes all relevant information about the crowdworkers and their relative behavior. While multiple latent variables can serve this purpose, we take $\theta$ to be one that is minimal. By this we mean that $\theta$ determines the distribution $\mathbb{P}(Y_t \in \cdot \mid \theta)$ and vice versa.

Before producing a group estimate $\hat{Z}_t$, the center observes past crowdworker estimates $Y_{1:t-1}$ as well as current estimates $Y_t$. The manner in which the center produces its estimate can be expressed in terms of a policy, which is a function $\pi$ that identifies a group estimate $\pi(Y_{1:t})$ based on the observed history $Y_{1:t}$ of crowdworker estimates. When particulars about the policy under discussion matter or when we consider alternatives, we express the dependence of group estimates on the policy using superscripts. In particular, $\hat{Z}^{\pi}_t = \pi(Y_{1:t})$. For each $t$, our objective is to minimize the expected loss $\mathbb{E}[\mathcal{L}(Z_t, \hat{Z}^{\pi}_t)]$ for some loss function $\mathcal{L}$. With real-valued group estimates, the squared error $\mathcal{L}(Z_t, \hat{Z}^{\pi}_t) = (Z_t - \hat{Z}^{\pi}_t)^2$ offers a natural loss function.

3 Crowdsourcing Based on Self-Supervised Learning

This section formally introduces our approach, predict-each-worker. We will do so in three steps. First, we present the algorithm in an abstract form, which is not efficiently implementable but applies across real-valued estimate distributions. Then, we describe an implementable instance specialized to a Gaussian data-generating process. Finally, we explain how our approach extends and scales to complex settings.

3.1 The General Approach

Our approach first learns via self-supervised learning (SSL) to predict each crowdworker’s estimate based on the estimates of all other crowdworkers. The resulting models are then used to produce weights for aggregating crowdworker estimates. We now describe each of these steps in greater detail.

Self-supervised learning

In the self-supervised learning (SSL) step, we aim to learn patterns exhibited by crowdworkers. This is accomplished by learning $K$ models, each of which predicts the estimate $Y_{t,k}$ of a crowdworker $k$ given the estimates $Y_{t,-k}$ supplied by all other crowdworkers. Let $\hat{P}^{(k)}_{t-1}$ denote the $k$th model learned from $Y_{1:t-1}$. Given $Y_{t,-k}$, this model generates a predictive distribution $\hat{P}^{(k)}_{t-1}(\cdot \mid Y_{t,-k})$ of $Y_{t,k}$. The intention is for these predictive distributions to accurately estimate a gold standard: the clairvoyant conditional probability $P^{(k)}_{*}(\cdot \mid Y_{t,-k}) := \mathbb{P}(Y_{t,k} \in \cdot \mid Y_{t,-k}, \theta)$.

Since $P^{(k)}_{*}$ produces a distribution over one crowdworker’s estimate given the estimates of all other crowdworkers, it clearly depends on patterns among crowdworker estimates. However, it may not be immediately clear that the set of models $\{P^{(k)}_{*}\}_{k=1}^{K}$ ought to encode all observable patterns. It turns out that this is indeed the case under weak technical conditions. The following term will be helpful in expressing these conditions.

Definition 3.1 (product-form support).

A probability density function $p$ over $\Re^M$ has product-form support if $\mathrm{support}(p) = \prod_{m=1}^{M} \mathrm{support}(p_m)$, where $p_m$ is the marginal probability density function of the $m$th component.

The following theorem establishes sufficiency.

Theorem 3.1 (sufficiency of SSL).

If $\mathbb{P}(Y_t \in \cdot \mid \theta)$ is absolutely continuous with respect to the Lebesgue measure and the corresponding density has product-form support, then $\{P^{(k)}_{*}\}_{k=1}^{K}$ determines $\theta$.

While we defer the proof of this theorem to Appendix A, we now provide some intuition for why it should be true. It is well known that Gibbs sampling can be used to sample from a joint distribution based only on its “leave-one-out” conditionals. Hence, one could use Gibbs sampling to sample from $\mathbb{P}(Y_t \in \cdot \mid \theta)$ – the joint distribution – using the corresponding “leave-one-out” conditionals $\{P^{(k)}_{*}\}_{k=1}^{K}$. This implies that the set $\{P^{(k)}_{*}\}_{k=1}^{K}$ determines $\mathbb{P}(Y_t \in \cdot \mid \theta)$. Because $\theta$ is minimal, in the sense that $\mathbb{P}(Y_t \in \cdot \mid \theta)$ determines $\theta$, $\{P^{(k)}_{*}\}_{k=1}^{K}$ determines $\theta$.
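To make this intuition concrete, here is a minimal sketch (ours, not from the paper) in which Gibbs sampling recovers a bivariate Gaussian joint distribution using only its leave-one-out conditionals; the correlation value and sample counts are illustrative.

```python
import numpy as np

# For a zero-mean bivariate Gaussian with correlation rho, the
# leave-one-out conditionals are Y1 | Y2 ~ N(rho*Y2, 1 - rho^2) and
# symmetrically for Y2 | Y1. Gibbs sampling from these conditionals
# alone recovers the joint, illustrating why the set of clairvoyant
# conditionals determines the joint distribution of Y_t given theta.
rng = np.random.default_rng(0)
rho = 0.8
n_samples, burn_in = 20000, 500

y1, y2 = 0.0, 0.0
samples = []
for i in range(n_samples + burn_in):
    y1 = rho * y2 + np.sqrt(1 - rho**2) * rng.standard_normal()
    y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    if i >= burn_in:
        samples.append((y1, y2))
samples = np.array(samples)

empirical_corr = np.corrcoef(samples.T)[0, 1]  # should be close to rho
```

The empirical correlation of the Gibbs samples matches the correlation of the joint distribution, even though the sampler only ever touched the conditionals.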

Aggregation

To compute the aggregation weights, we first compute two quantities:

  1. Expected Error. For each $k$, we compute the expected error $\hat{\ell}^{(k)}_{t} = \mathbb{V}[Y_{t,k} \mid Y_{t,-k}, P^{(k)}_{*} \leftarrow \hat{P}^{(k)}_{t-1}]$. The expected error tells us how predictable crowdworker $k$ is.

  2. SSL Gradient. For each $k$, we compute the mean $\hat{Y}^{(k)}_{t} = \mathbb{E}[Y_{t,k} \mid Y_{t,-k}, P^{(k)}_{*} \leftarrow \hat{P}^{(k)}_{t-1}]$. Then, we compute the gradient of $\hat{Y}^{(k)}_{t}$ with respect to $Y_{t,-k}$. This gradient $\nabla \hat{Y}^{(k)}_{t}$ estimates how much each other crowdworker contributed to the predictability of crowdworker $k$.

Finally, we compute the aggregation weights $\hat{\nu}_t \in \Re^K$ as follows. For each $k$,

$$\hat{\nu}_{t,k} = \overline{v}\, \frac{1 - \mathbf{1}^{\top} \nabla \hat{Y}^{(k)}_{t}}{\hat{\ell}^{(k)}_{t}}, \tag{2}$$

where $\overline{v} \in \Re_{+}$ is a hyperparameter that indicates the prior variance $\mathbb{V}[Z_t]$ of true outcomes. If this variance is not known, the hyperparameter can be tuned based on observed data, as we discuss further in Appendix D.5. In Section 3.2, we establish that this algorithm is asymptotically optimal for a Gaussian data-generating process.
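As a concrete illustration, the following sketch (ours, not from the paper) computes the weights of Equation 2 from generic predictive-mean and predictive-variance functions, approximating the gradient by finite differences. The linear predictors and all numeric values are hypothetical stand-ins for learned SSL models.

```python
import numpy as np

def aggregation_weights(Y_t, mean_fns, var_fns, v_bar):
    """Y_t: shape (K,). mean_fns[k](y_minus_k) returns the predictive mean
    of worker k's estimate; var_fns[k](y_minus_k) its predictive variance.
    Gradients are taken numerically with finite differences."""
    K = len(Y_t)
    nu = np.empty(K)
    eps = 1e-6
    for k in range(K):
        y_mk = np.delete(Y_t, k)          # estimates of all other workers
        grad = np.empty(K - 1)
        for j in range(K - 1):
            y_plus = y_mk.copy()
            y_plus[j] += eps
            grad[j] = (mean_fns[k](y_plus) - mean_fns[k](y_mk)) / eps
        # Eq. (2): weight grows with predictability (small variance) and
        # shrinks with sensitivity to the other workers (large gradient).
        nu[k] = v_bar * (1.0 - grad.sum()) / var_fns[k](y_mk)
    return nu

# Hypothetical example with K = 3 workers whose SSL predictors are linear.
u = [np.array([0.3, 0.3])] * 3                      # assumed coefficients
mean_fns = [(lambda y, u_k=u[k]: u_k @ y) for k in range(3)]
var_fns = [(lambda y: 0.5)] * 3                     # assumed predictive variance
nu = aggregation_weights(np.array([1.0, 0.8, 1.2]), mean_fns, var_fns, v_bar=1.0)
```

With these symmetric stand-in predictors, each worker receives the same weight $1 \cdot (1 - 0.6)/0.5 = 0.8$.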

Two principles may help in interpreting the aggregation formula:

  1. All else equal, estimate $k$ should be assigned greater weight if it is more predictable. Consider the SSL gradient and error of estimate $k$, and assume we assign this estimate an optimal aggregation weight. Fixing the gradient, if the error were to decrease, this aggregation weight ought to increase, because skilled crowdworkers tend to be more predictable. Equivalently, error-prone crowdworkers tend to be less predictable.

  2. All else equal, estimate $k$ should be assigned less weight if it is more sensitive to estimate $k'$. Consider the SSL gradient and error of estimate $k$, and assume we assign this estimate an optimal aggregation weight. An increase in the $k'$th component of the gradient indicates increased sensitivity of $k$ to $k'$. This implies an increase in either the covariance with or the skill of $k'$. In either case, this ought to reduce the aggregation weight.

3.2 A Concrete Algorithm for Gaussian Data-Generating Processes

In this section, we specialize our algorithm to a Gaussian data-generating process and establish that the algorithm is asymptotically optimal. Recall that $\theta$ represents a minimal expression of all useful information the center can glean from observing crowdworker estimates $Y_{1:\infty}$. Let $\Delta_t = Y_t - Z_t \mathbf{1}$ denote the vector of crowdworker errors. By Gaussian data-generating process, we mean that $Z_{1:\infty}$ is iid Gaussian, $\Delta_{1:\infty}$ is independent of $Z_{1:\infty}$, and, conditioned on $\theta$, $\Delta_{1:\infty}$ is iid zero-mean Gaussian. To interpret this data-generating process, it can be useful to imagine first sampling $Z_t$ and then, for each $k$th crowdworker, adding an independent perturbation $\Delta_{t,k}$ to arrive at an estimate $Y_{t,k} = Z_t + \Delta_{t,k}$. To simplify our analysis, we will further assume that the covariance matrix $\mathbb{V}[\Delta_t]$ is positive definite. Our methods and analysis can be extended to relax this assumption.

For any Gaussian process, the distribution of any specific variable conditioned on the others is Gaussian, with expectation linear in the other variables. It follows that, for our Gaussian data-generating process, conditioned on $\theta$, the dependence of any $k$th estimate on the others is perfectly expressed by a linear model with Gaussian noise. In particular, for each $k$, $\theta$ determines coefficients $u^{(k)}_{*} \in \Re^{K-1}$ and a noise variance $\ell^{(k)}_{*}$ such that

$$Y_{t,k} = (u^{(k)}_{*})^{\top} Y_{t,-k} + \eta_{t}, \tag{3}$$

with $\eta_t \mid \theta \sim \mathcal{N}(0, \ell^{(k)}_{*})$. This linear dependence motivates the use of linear SSL models in implementing our approach to crowdsourcing. With such models, for each $t$ and $k$, the center would produce estimates $(\hat{u}^{(k)}_{t}, \hat{\ell}^{(k)}_{t})$ of $(u^{(k)}_{*}, \ell^{(k)}_{*})$.

Aggregation with Linear SSL Models

With linear SSL models, our general aggregation formula (2) simplifies, because the gradient with respect to $Y_{t,-k}$ is simply $\hat{u}^{(k)}_{t-1}$. Consequently, the aggregation weights satisfy

$$\hat{\nu}_{t,k} = \overline{v}\, \frac{1 - \mathbf{1}^{\top} \hat{u}^{(k)}_{t-1}}{\hat{\ell}^{(k)}_{t-1}}, \tag{4}$$

where, as defined in Section 3.1, $\overline{v} = \mathbb{V}[Z_t]$. Section 3.1 offered intuition to motivate our general aggregation formula. For the special case of a Gaussian data-generating process, it is easy to corroborate this intuition with a formal mathematical result, which we now present.

No policy can produce a better estimate than $\mathbb{E}[Z_t \mid Y_t, \theta]$, which benefits from knowledge of $\theta$. In the Gaussian case, it is easy to show that this estimate is attained by the weight vector

$$\nu_{*} = \overline{v}\, S_{*}^{-1} \mathbf{1}, \tag{5}$$

where $S_{*}$ is the covariance matrix of $Y_t$ conditioned on $\theta$. The following result, which is proved in Appendix B, offers an alternative characterization of $\nu_{*}$ that shares the form of Equation 4.

Theorem 3.2.

For the Gaussian data-generating process, for each $k \in \{1, \cdots, K\}$,

$$\nu_{*,k} = \overline{v}\, \frac{1 - \mathbf{1}^{\top} u^{(k)}_{*}}{\ell^{(k)}_{*}}.$$

From this result, we see that if the linear SSL model parameters $(\hat{u}^{(k)}_{t}, \hat{\ell}^{(k)}_{t})$ converge to $(u^{(k)}_{*}, \ell^{(k)}_{*})$, then the aggregation weights $\hat{\nu}_t$ converge to $\nu_{*}$.
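The equivalence between Equation 5 and the characterization of Theorem 3.2 can be checked numerically. In the sketch below (ours), $(u^{(k)}_{*}, \ell^{(k)}_{*})$ are obtained from a conditional covariance $S$ via the standard Gaussian conditioning formulas; the matrix itself is randomly generated for illustration.

```python
import numpy as np

# Numerical check of Theorem 3.2: for a conditional covariance S of Y_t
# given theta, v_bar * (S^{-1} 1)_k coincides with
# v_bar * (1 - 1^T u_k) / ell_k, where (u_k, ell_k) are the coefficients
# and noise variance of the leave-one-out regression of worker k on the
# others.
rng = np.random.default_rng(1)
K, v_bar = 5, 1.0
A = rng.standard_normal((K, K))
Sigma = A @ A.T + K * np.eye(K)          # noise covariance (positive definite)
S = v_bar * np.ones((K, K)) + Sigma      # cov of Y_t = Z_t 1 + Delta_t given theta

nu_direct = v_bar * np.linalg.solve(S, np.ones(K))   # Eq. (5)

nu_loo = np.empty(K)
for k in range(K):
    idx = [j for j in range(K) if j != k]
    # Gaussian conditionals: u_k = S_{-k,-k}^{-1} S_{-k,k},
    # ell_k = S_{kk} - S_{k,-k} u_k.
    u_k = np.linalg.solve(S[np.ix_(idx, idx)], S[idx, k])
    ell_k = S[k, k] - S[k, idx] @ u_k
    nu_loo[k] = v_bar * (1.0 - u_k.sum()) / ell_k    # Theorem 3.2
```

The two weight vectors agree to numerical precision.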

Fitting Linear SSL Models

The center can generate the estimates $(\hat{u}^{(k)}_{t}, \hat{\ell}^{(k)}_{t})$ of coefficients and variances via Bayesian linear regression (BLR). This is accomplished by minimizing the loss function

$$f^{(k)}_{Y_{1:t-1},\Lambda,\lambda_{\ell},\overline{u},\overline{\ell}}(u,\ell) = \underbrace{\sum_{\tau<t}\frac{(Y_{\tau,k}-u^{\top}Y_{\tau,-k})^{2}}{2\ell}+\frac{\log(\ell)}{2}}_{\text{log likelihood}} + \underbrace{\phi_{\Lambda,\overline{u}}(u,\ell)+\varphi_{\lambda_{\ell},\overline{\ell}}(\ell)}_{\text{regularization}}, \tag{6}$$

to obtain an estimate of $(u^{(k)}_{*}, \ell^{(k)}_{*})$. The regularization term can be interpreted as inducing a prior, which is specified by four hyperparameters: $\Lambda \in \mathcal{S}^{K}_{+}$, $\lambda_{\ell} \in \Re_{+}$, $\overline{u} \in \Re$, and $\overline{\ell} \in \Re_{+}$. The term $\phi_{\Lambda,\overline{u}}(\cdot, \ell)$ is the negative log density of a Gaussian random variable with mean $\overline{u}\mathbf{1}$ and covariance matrix $\ell\Lambda^{-1}$, while $\varphi_{\lambda_{\ell},\overline{\ell}}(\cdot)$ is the negative log density of an inverse gamma distribution with shape parameter $\lambda_{\ell}/2$ and scale parameter $(\lambda_{\ell}+K+1)\overline{\ell}/2$. Note that each of the $K$ models shares the same hyperparameters. This means that, before the center has observed any data, each crowdworker will receive equal aggregation weight.

While BLR produces the posterior distribution over weights for each of the $K$ models, it ignores interdependencies across models. In the asymptotic regime of large $t$, this is inconsequential because the posterior concentrates. However, when $t$ is small, this imperfection induces error. To ameliorate this error, we introduce a form of aggregation weight regularization. Let $\tilde{\nu}_1 \in \Re^K$ be the vector of equal weights produced by our algorithm if applied before learning from any data. For any $t$, we generate aggregation weights by taking a convex combination $\hat{\nu}_{t,k} = \gamma_t \tilde{\nu}_{1,k} + (1-\gamma_t)\tilde{\nu}_{t,k}$, where $\tilde{\nu}_t$ is the vector of unregularized aggregation weights. The parameter $\gamma_t$ decays with $t$. Specifically, $\gamma_t = r/(r+t-1)$, where $r$ is a hyperparameter that governs the rate of decay. All steps we have described to produce an estimate $\hat{Z}_t$ are presented in Algorithm 1.

Algorithm 1 Predict-each-worker for a Gaussian data-generating process
1: procedure PredictEachWorkerGaussian($Y_{1:t}$, $\Lambda$, $\lambda_{\ell}$, $\overline{u}$, $\overline{\ell}$, $r$, $\overline{v}$)
2:     // SSL
3:     for $k \in \{1,\cdots,K\}$ do
4:         $\hat{u}^{(k)}_{t-1}, \hat{\ell}^{(k)}_{t-1} \in \operatorname{arg\,min}_{u \in \Re^{K-1},\, \ell \in \Re_{+}} f^{(k)}_{Y_{1:t-1},\Lambda,\lambda_{\ell},\overline{u},\overline{\ell}}(u,\ell)$
5:     end for
6:
7:     // Aggregation
8:     $\gamma = r/(r+t-1)$
9:     for $k \in \{1,\cdots,K\}$ do
10:         $\tilde{\nu}_{t,k} \leftarrow \overline{v}\,(1-\mathbf{1}^{\top}\hat{u}^{(k)}_{t-1})/\hat{\ell}^{(k)}_{t-1}$
11:         $\tilde{\nu}_{1,k} \leftarrow \overline{v}\,(1-(K-1)\overline{u})/\overline{\ell}$
12:         $\hat{\nu}_{t,k} \leftarrow \gamma\tilde{\nu}_{1,k} + (1-\gamma)\tilde{\nu}_{t,k}$
13:     end for
14:     $\hat{Z}_t \leftarrow \hat{\nu}_{t}^{\top} Y_t$
15:
16:     Return $\hat{Z}_t$
17: end procedure
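The listing above can be sketched in code. The version below (ours, simplified) substitutes a ridge regression toward the prior mean for the full BLR objective of Equation 6 and a shrunken residual variance for the inverse-gamma treatment of $\ell$; hyperparameter values are illustrative, not the paper's.

```python
import numpy as np

def predict_each_worker_gaussian(Y, lam=1.0, u_bar=0.0, ell_bar=1.0,
                                 r=10.0, v_bar=1.0):
    """Simplified sketch of Algorithm 1. Y has shape (t, K); the last row
    holds the current round's estimates. Returns the group estimate."""
    t, K = Y.shape
    past, current = Y[:-1], Y[-1]
    gamma = r / (r + t - 1)                          # weight regularization
    nu_1 = v_bar * (1 - (K - 1) * u_bar) / ell_bar   # prior-based weight
    nu_hat = np.empty(K)
    for k in range(K):
        X = np.delete(past, k, axis=1)               # others' past estimates
        y = past[:, k]                               # worker k's past estimates
        # Ridge regression toward u_bar, standing in for BLR.
        A = X.T @ X + lam * np.eye(K - 1)
        u_hat = np.linalg.solve(A, X.T @ y + lam * u_bar * np.ones(K - 1))
        resid = y - X @ u_hat
        ell_hat = (resid @ resid + lam * ell_bar) / (len(y) + lam)
        nu_t_k = v_bar * (1 - u_hat.sum()) / ell_hat  # Eq. (4)
        nu_hat[k] = gamma * nu_1 + (1 - gamma) * nu_t_k
    return nu_hat @ current
```

On simulated data with crowdworkers of unequal skill, this sketch produces group estimates with lower squared error than plain averaging once enough rounds have been observed.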

The following result formalizes our claim of asymptotic optimality.

Theorem 3.3.

If $\Lambda \succ 0$, $\overline{\ell} > 0$, and $\lambda_{\ell} \geq 0$, then, for any $\tau \in \mathbb{Z}_{++}$,

$$\lim_{t\to\infty}\Big|\underbrace{\hat{\nu}_{t}^{\top}Y_{t}}_{\text{estimate}} - \underbrace{\mathbb{E}[Z_{t} \mid Y_{1:\infty}]}_{\text{clairvoyant}}\Big| \overset{\text{a.s.}}{=} 0.$$

We defer the proof to Appendix C. While this theorem establishes asymptotic optimality, we empirically study finite-data performance in Section 4.

3.3 Extensions to Complex Settings using Neural Networks

Before conducting an empirical study with the Gaussian data-generating process, we first discuss how our general approach extends beyond it. In particular, with neural network SSL models, our approach can capture complex characteristics of, and relationships among, crowdworkers. Neural network models also allow our approach to accommodate scenarios with missing estimates or where additional context is provided, for example, a language model prompt or a crowdworker profile.

Concretely, this process involves training $K$ neural networks, where the $k$th network predicts the $k$th crowdworker's estimate from the others' estimates. Additional information, such as a context vector, can simply be appended to the neural network's input. Training can be done via standard supervised learning algorithms, such as stochastic gradient descent (SGD).

To reduce computational demands, rather than training $K$ separate models, we can train one neural network with $K$ so-called output heads. The $k$th head outputs an estimate of the mean and variance of the $k$th crowdworker's estimate. The input to the neural network is the vector of estimates with the $k$th crowdworker masked out. When training, we use SGD and randomly choose which crowdworker to mask at each SGD step. This training procedure, in which we try to predict a masked-out component, is prototypical of self-supervised learning.
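The masked training loop can be sketched as follows (our illustration): a single linear layer with one weight row per head stands in for a deep network, and all sizes, step counts, and learning rates are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 4000

# Synthetic correlated worker estimates: common signal plus noise.
Z = rng.standard_normal(T)
Y = Z[:, None] + 0.3 * rng.standard_normal((T, K))

W = np.zeros((K, K))          # W[k] holds the weights of head k
lr = 0.01
for step in range(20000):
    t = rng.integers(T)
    k = rng.integers(K)       # randomly choose which worker to mask
    x = Y[t].copy()
    x[k] = 0.0                # mask worker k's own estimate
    pred = W[k] @ x           # head k predicts the masked component
    grad = 2.0 * (pred - Y[t, k]) * x   # squared-error gradient
    W[k] -= lr * grad
```

After training, each head predicts its masked-out worker far better than the trivial zero predictor, since the workers share a common signal. (The variance estimate each head would also output is omitted here for brevity.)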

Incorporating Context

In practice, crowdworkers respond to specific inputs or contexts, such as images or text from a language model. To extend to such settings, we can transform the context into a vector. Such a transformation is naturally available for images. With text, we can use embedding vectors, which can be generated using established techniques (Kenter and De Rijke, 2015; Mars, 2022). By appending such context vectors to crowdworker estimates, the SSL step can learn context-specific patterns. For example, if we learn that certain crowdworkers are more skilled for particular types of inputs, it should be possible to produce better group estimates.

Including Crowdworker Metadata

Metadata about crowdworkers, like their educational background or expertise, can be used to improve group estimates. For example, if a crowdworker’s metadata takes the form of a feature vector, our approach can leverage this data. By concatenating crowdworker estimates with their feature vectors, the neural network can then account for their distinguishing characteristics and not merely the estimates that they provide. This can prove particularly beneficial when managing a transient crowdworker pool, as it allows us to generalize quickly to previously unseen crowdworkers.

Addressing Missing Estimates

With a sizable pool of crowdworkers, each task might be assigned to only a small subset. As such, a given crowdworker may or may not supply an estimate for each outcome $Z_t$. To learn from incomplete data gathered in this mode, we can use a neural network architecture in which the input associated with each crowdworker indicates whether the crowdworker has supplied an estimate, and if so, what that estimate is. Through this approach, the neural network can discern intricate patterns and interdependencies among the crowdworkers, such as when the significance of one crowdworker's input is contingent on the absence of another's due to overlapping expertise.
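One simple realization of this input encoding (ours; the function name and layout are illustrative) pairs each worker with an indicator feature and zeroes out missing values:

```python
import numpy as np

def encode_with_missing(estimates, observed):
    """estimates: shape (K,); observed: boolean mask of shape (K,).
    Returns a length-2K input vector [indicators, masked values], so the
    network can distinguish a missing estimate from an estimate of 0."""
    estimates = np.where(observed, estimates, 0.0)
    return np.concatenate([observed.astype(float), estimates])

x = encode_with_missing(np.array([1.2, -0.4, 0.7]),
                        np.array([True, False, True]))
# x == [1., 0., 1., 1.2, 0., 0.7]
```

The indicator half of the vector lets the network learn patterns such as "worker 3's estimate matters more when worker 2 is absent."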

Aggregating the Predictions

The general aggregation formula presented in Section 3.1 applies to neural network models. For each crowdworker $k$, we first use the neural network to estimate the mean and variance of their prediction. Then, we compute the gradient of the mean with respect to $Y_{t,-k}$. Using the estimated variance and the gradient, (2) produces aggregation weights.

We primarily designed our approach for problem settings where outcomes are not observed. However, this approach can be extended to settings where some or all outcomes are observed. Please see Appendix E for a brief discussion.

4 A Simulation Study

To understand how our algorithm compares to benchmarks, we perform a simulation study. We use a Gaussian data-generating process and three benchmarks: averaging, an EM-based policy, and the clairvoyant policy. The section first introduces the data-generating process, followed by the three policies, and finally presents the simulation results.

4.1 A Gaussian Data-Generating Process

In a prototypical crowdsourcing application, some crowdworkers may be more skilled than others. Further, because the crowdworkers may share information sources, their errors may end up being correlated. We consider a model that is simple yet captures these characteristics. For each crowdworker $k \in \mathbb{Z}_{++}$, this model generates a sequence of estimates according to

$$\tilde{Y}_{t,k} = \tilde{Z}_{t} + \tilde{\Delta}_{t,k},$$

where $\tilde{Z}_t \sim \mathcal{N}(0,1)$ and $\tilde{\Delta}_{t,k}$ is additive noise that is independent of $\tilde{Z}_t$. The additive noise is generated as follows:

$$\tilde{\Delta}_{t,k} = \sum_{n=1}^{N} C_{k,n} X_{t,n},$$

where $(X_t : t \in \mathbb{Z}_{++})$ is an iid sequence of standard Gaussian vectors, and $C_k \in \Re^N$ modulates the influence of $X_t$ on $\tilde{Y}_{t,k}$. It may be helpful to think of each component $X_{t,n}$ as a random factor and the coefficient $C_{k,n}$ as a factor loading.

For each crowdworker $k \in \mathbb{Z}_{++}$ and factor index $n \in \{1, \dots, N\}$, we sample $C_{k,n} \sim \mathcal{N}(0, n^{-q})$. Each $C_{k,n}$ is sampled independently of $\tilde{Z}_t$, the factors $X_t$, and any other coefficient $C_{k',n'}$ with $(k', n') \neq (k, n)$.
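This data-generating process is straightforward to simulate. The sketch below (ours) draws the factor loadings, factors, and outcomes as described above; the particular sizes $K$, $N$, $T$, and $q$ are illustrative.

```python
import numpy as np

# Simulate the factor-model data-generating process: a common outcome
# Z_t plus factor-structured noise with loadings C[k, n] ~ N(0, n^{-q}).
rng = np.random.default_rng(0)
K, N, T, q = 10, 5, 2000, 2.0

C = rng.standard_normal((K, N)) * np.arange(1, N + 1) ** (-q / 2)  # loadings
Z = rng.standard_normal(T)                   # consensus outcomes, N(0, 1)
X = rng.standard_normal((T, N))              # shared random factors
Y = Z[:, None] + X @ C.T                     # worker estimates, shape (T, K)

# Conditioned on the loadings (i.e., on theta), the noise covariance is:
Sigma_star = C @ C.T
```

Because the factors are shared, the rows of $C$ induce correlated errors across workers, and the empirical covariance of $Y - Z\mathbf{1}^\top$ concentrates around $\Sigma_*$ as $T$ grows.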

We show in Appendix D.1 that this data-generating process satisfies the three assumptions of our general problem formulation, as presented in Section 2. In particular, $(\tilde{Y}_t : t \in \mathbb{Z}_{++})$ is exchangeable, and for each $t$, $(\tilde{Y}_{t,k} : k \in \mathbb{Z}_{++})$ is exchangeable, with a sample mean that converges almost surely. It is easy to verify that the sample mean $Z_t = \lim_{K\to\infty} \sum_{k=1}^{K} \tilde{Y}_{t,k}/K$ is almost surely equal to $\tilde{Z}_t$.

Recall that we denote by \theta a minimal random variable conditioned on which (\tilde{Y}_{t}:t\in\mathbb{Z}_{++}) is iid. For any K, conditioned on \theta, \tilde{\Delta}_{t,1:K} and \tilde{Y}_{t,1:K} are distributed Gaussian. It is helpful to consider the covariance matrix of \tilde{\Delta}_{t,1:K} conditioned on \theta, which we denote by \Sigma_{*}. Each diagonal element of \Sigma_{*} expresses the reciprocal of a crowdworker's skill. Off-diagonal elements indicate how noise covaries across crowdworkers.

At this point, the data-generating process may feel abstract. In Section 4.2, after defining a few benchmarks, we use them to provide some intuition about the process.

4.2 Benchmarks

We compare our method against three other methods: averaging, a clairvoyant policy, and an EM-based policy.

Averaging

Our first benchmark averages across crowdworker estimates to produce a group estimate \pi(Y_{1:t})=\frac{1}{K}\sum_{k=1}^{K}Y_{t,k}, where Y_{t}=\tilde{Y}_{t,1:K}. For the Gaussian data-generating process, group estimates produced in this manner converge to Z_{t} almost surely as K grows.

Clairvoyant Policy

The clairvoyant policy offers an upper bound on performance. This policy has access to \theta, which encapsulates all useful information that can be garnered from any number of observations Y_{1},Y_{2},\ldots. Equivalently, the policy has access to \mathbb{P}(Y_{t}\in\cdot|\theta), and it produces a group estimate \mathbb{E}[Z_{t}|Y_{t},\theta]. For the Gaussian data-generating process, this means having access to the noise covariance matrix \Sigma_{*} introduced earlier. In particular, the group estimate can be written as \mathbb{E}[Z_{t}|Y_{t},\theta]=\mathbb{E}[Z_{t}|Y_{t},\Sigma_{*}]=\nu_{*}^{\top}Y_{t}, where standard Gaussian conditioning gives \nu_{*}=\Sigma_{*}^{-1}\mathbf{1}/(1+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}). This attains mean-squared error

\mathbb{E}[(Z_{t}-\nu_{*}^{\top}Y_{t})^{2}]=\mathbb{E}\left[\frac{1}{1+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right].

This level of error is lower than what is achievable by any implementable policy, which lacks the privileged access to \Sigma_{*} granted to the clairvoyant policy. Yet it represents a useful target in policy design, as this level of error is approachable as t grows.
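The clairvoyant weights and the error formula can be checked numerically. The sketch below (our own, with illustrative names such as `nu_star`) verifies that the two standard forms of the posterior-mean weights agree, and checks the mean-squared-error formula by Monte Carlo for one fixed covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5
A = rng.normal(size=(K, K))
Sigma = A @ A.T + np.eye(K)            # an arbitrary positive-definite noise covariance
ones = np.ones(K)

# Clairvoyant weights: nu_* = Sigma^{-1} 1 / (1 + 1^T Sigma^{-1} 1).
Sinv1 = np.linalg.solve(Sigma, ones)
nu_star = Sinv1 / (1.0 + ones @ Sinv1)

# Equivalent form via Cov(Y) = 1 1^T + Sigma (Sherman-Morrison identity).
nu_alt = np.linalg.solve(Sigma + np.outer(ones, ones), ones)
assert np.allclose(nu_star, nu_alt)

# Monte Carlo check of MSE = 1 / (1 + 1^T Sigma^{-1} 1) for this fixed Sigma.
T = 200_000
Z = rng.normal(size=T)
L = np.linalg.cholesky(Sigma)
Delta = rng.normal(size=(T, K)) @ L.T  # noise with covariance Sigma
Y = Z[:, None] + Delta
mse_mc = np.mean((Z - Y @ nu_star) ** 2)
mse_formula = 1.0 / (1.0 + ones @ Sinv1)
assert abs(mse_mc - mse_formula) < 1e-2
```

The agreement of the two weight formulas also explains line 8 of Algorithm 2, which uses the \Sigma^{-1}\mathbf{1} form.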

Having defined the averaging and clairvoyant policies, we now offer some perspective on the benefits of clairvoyance or, equivalently, of learning from many interactions with the same crowdworkers. In Figure 2, we compare these two policies and a third, which we refer to as only-skills-clairvoyant. As the name suggests, to generate aggregation weights, only-skills-clairvoyant uses the diagonal elements (inverse skills) of \Sigma_{*} but not the off-diagonal elements (error covariances). The clairvoyant policy offers the greatest benefit over averaging, though only-skills-clairvoyant also offers some benefit. For example, to match the performance attained by averaging over one hundred crowdworkers, the clairvoyant policy requires about twenty crowdworkers, while only-skills-clairvoyant needs about seventy. Moreover, the concavity of the plot indicates that the percentage reduction in the number of required crowdworkers afforded by clairvoyance grows with the desired level of performance. Hence, for a data-generating process like ours, it is beneficial to learn and leverage patterns among crowdworker estimates.

Figure 2: The number of crowdworkers required by clairvoyant and only-skills-clairvoyant policies to match the performance of averaging over various numbers of crowdworkers. These results were generated using the Gaussian data-generating process with N=1000 factors and factor concentration parameter q=1.7. Performance was measured in terms of mean-squared error.

EM-Based Policy

We now discuss a policy based on the EM framework. At a high level, EM alternates between estimating the covariance matrix \Sigma_{*} and estimating past outcomes Z_{1:t-1}. Upon termination of this iterative procedure, the estimate \hat{\Sigma} of \Sigma_{*} is used to produce a group estimate \mathbb{E}[Z_{t}|Y_{t},\Sigma_{*}\leftarrow\hat{\Sigma}]. Here, \Sigma_{*}\leftarrow\hat{\Sigma} indicates that the expectation is evaluated as though \hat{\Sigma} were the realized value of \Sigma_{*}.

Each iteration of the EM algorithm proceeds in two steps. In the first step, the algorithm computes values \hat{z}_{1:t-1} and \hat{v}_{1:t-1}, which represent estimates of \mathbb{E}[Z_{1:t-1}|Y_{1:t-1}] and \mathbb{V}(Z_{1:t-1}|Y_{1:t-1}), respectively. These estimates are generated as though the estimated covariance matrix \hat{\Sigma}\in\mathcal{S}^{K}_{++} from the previous iteration were \Sigma_{*}; in particular, \hat{z}_{1:t-1}\leftarrow\mathbb{E}[Z_{1:t-1}|Y_{1:t-1},\Sigma_{*}=\hat{\Sigma}] and \hat{v}_{1:t-1}\leftarrow\mathbb{V}[Z_{1:t-1}|Y_{1:t-1},\Sigma_{*}=\hat{\Sigma}]. The initial estimate of \Sigma_{*} is taken to be \overline{\Sigma}, a hyperparameter that can be interpreted as the prior mean of \Sigma_{*}.

In the second step, the estimates \hat{z}_{1:t-1} and \hat{v}_{1:t-1} from the previous step are used to compute an approximation of the log posterior of \Sigma_{*}. This approximation g_{y,\overline{\Sigma},c,\hat{z},\hat{v}}(\hat{\Sigma}) is known as the evidence lower bound (ELBO). The goal is to find the matrix \hat{\Sigma} that maximizes the ELBO. For the Gaussian data-generating process, in particular, the ELBO is

g_{y,\overline{\Sigma},c,\hat{z},\hat{v}}(\hat{\Sigma})= -(c+2K+1+t)\log(|\hat{\Sigma}|)-\text{trace}(c\overline{\Sigma}\hat{\Sigma}^{-1})
-\sum_{\tau=1}^{t-1}\left((y_{\tau}-\hat{z}_{\tau}\mathbf{1})^{\top}\hat{\Sigma}^{-1}(y_{\tau}-\hat{z}_{\tau}\mathbf{1})+\hat{v}_{\tau}\mathbf{1}^{\top}\hat{\Sigma}^{-1}\mathbf{1}+2\mathbf{d}_{\mathrm{KL}}\left(\mathcal{N}(\hat{z}_{\tau},\hat{v}_{\tau})\,\|\,\mathcal{N}(0,1)\right)\right),

where \mathbf{d}_{\mathrm{KL}}\left(\mathcal{N}(\hat{z}_{\tau},\hat{v}_{\tau})\,\|\,\mathcal{N}(0,1)\right) is the Kullback-Leibler divergence between the distributions \mathcal{N}(\hat{z}_{\tau},\hat{v}_{\tau}) and \mathcal{N}(0,1). The scalar c is a hyperparameter that expresses the concentration of the prior distribution of \Sigma_{*}. It is easy to verify that the maximizer of g_{y,\overline{\Sigma},c,\hat{z},\hat{v}} admits a closed-form solution, which is stated in step 11 of Algorithm 2.

The iterative EM process continues until either of two stopping conditions is satisfied. The first stopping condition is triggered when consecutive estimates \hat{z}_{1:t-1} and \hat{z}^{\prime}_{1:t-1} are sufficiently similar, in the sense that \|\hat{z}_{1:t-1}-\hat{z}^{\prime}_{1:t-1}\|^{2}/(t-1)<\epsilon. This is a natural stopping condition, as it represents a measure of near-convergence. The second stopping condition places a cap on the number of iterations, which prevents the algorithm from looping indefinitely. The EM steps are presented concisely in Algorithm 2.

Selecting EM Hyperparameters

In our experiments, we set \epsilon=10^{-10} and M=10{,}000. For each value of K and t, we perform a grid search to choose the values of \overline{\Sigma} and c. For each point in the grid, we run the EM algorithm over multiple seeds, where each seed corresponds to one training set. We choose the values of \overline{\Sigma} and c that result in the lowest estimate of MSE (the estimation procedure is discussed in Appendix D.3). We let \overline{\Sigma} be a matrix with diagonal elements \overline{\sigma}^{2} and correlation \overline{\rho}, varying \overline{\sigma}^{2}\in\{0.2,2,20\}, \overline{\rho}\in\{0,0.1\}, and c\in\{0.1,1,10\}.

Algorithm 2 expectation-maximization-based policy
1: procedure EM_Aggregation(Y_{1:t}, \overline{\Sigma}, c, \epsilon, M)
2:     \hat{\Sigma}\leftarrow\overline{\Sigma}
3:     \hat{z}\leftarrow\infty
4:     m\leftarrow 0
5:     repeat
6:         \hat{z}^{\prime}\leftarrow\hat{z}
7:         for \tau\in\{1,\cdots,t-1\} do
8:             \hat{z}_{\tau}\leftarrow\frac{\mathbf{1}^{\top}\hat{\Sigma}^{-1}Y_{\tau}}{1+\mathbf{1}^{\top}\hat{\Sigma}^{-1}\mathbf{1}}
9:             \hat{v}_{\tau}\leftarrow\frac{1}{1+\mathbf{1}^{\top}\hat{\Sigma}^{-1}\mathbf{1}}
10:         end for
11:         \hat{\Sigma}\leftarrow\frac{c\overline{\Sigma}+\sum_{\tau<t}(Y_{\tau}-\hat{z}_{\tau}\mathbf{1})(Y_{\tau}-\hat{z}_{\tau}\mathbf{1})^{\top}+\sum_{\tau<t}\hat{v}_{\tau}\mathbf{1}\mathbf{1}^{\top}}{c+2K+t+1}
12:         m\leftarrow m+1
13:     until \|\hat{z}^{\prime}-\hat{z}\|^{2}/(t-1)<\epsilon or m=M
14:     \hat{Z}_{t}\leftarrow\mathbf{1}^{\top}\left(\hat{\Sigma}+\mathbf{1}\mathbf{1}^{\top}\right)^{-1}Y_{t}
15:     return \hat{Z}_{t}
16: end procedure
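Algorithm 2 can be transcribed directly into Python. The following is a sketch under our own naming conventions, not the authors' reference implementation; note that, within an iteration, \hat{v}_{\tau} is the same scalar for every \tau, so the sum \sum_{\tau<t}\hat{v}_{\tau}\mathbf{1}\mathbf{1}^{\top} collapses to (t-1)\hat{v}\,\mathbf{1}\mathbf{1}^{\top}:

```python
import numpy as np

def em_aggregation(Y, Sigma_bar, c, eps=1e-10, M=10_000):
    """EM-based group estimate for the latest round.

    Y: array of shape (t, K); rows 0..t-2 are past rounds, row t-1 is current.
    Returns the scalar group estimate for round t. Assumes t >= 2.
    """
    t, K = Y.shape
    ones = np.ones(K)
    Sigma_hat = Sigma_bar.copy()
    z_hat = np.full(t - 1, np.inf)
    m = 0
    while True:
        z_prev = z_hat.copy()
        Sinv1 = np.linalg.solve(Sigma_hat, ones)       # Sigma_hat^{-1} 1
        denom = 1.0 + ones @ Sinv1
        z_hat = (Y[: t - 1] @ Sinv1) / denom           # posterior means of Z_tau (line 8)
        v_hat = 1.0 / denom                            # shared posterior variance (line 9)
        R = Y[: t - 1] - z_hat[:, None]                # residuals Y_tau - z_hat_tau * 1
        Sigma_hat = (c * Sigma_bar + R.T @ R           # covariance update (line 11)
                     + (t - 1) * v_hat * np.outer(ones, ones)) / (c + 2 * K + t + 1)
        m += 1
        if np.sum((z_prev - z_hat) ** 2) / (t - 1) < eps or m == M:
            break
    # Group estimate for the current round: 1^T (Sigma_hat + 1 1^T)^{-1} Y_t (line 14).
    return ones @ np.linalg.solve(Sigma_hat + np.outer(ones, ones), Y[-1])

# Illustrative usage on synthetic data with independent worker noise.
rng = np.random.default_rng(0)
K, t = 4, 200
Z = rng.normal(size=t)
Y = Z[:, None] + 0.3 * rng.normal(size=(t, K))
estimate = em_aggregation(Y, np.eye(K), c=1.0)
```

With ample data, the resulting group estimate for the current round should land close to the latent outcome Z_{t}.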

4.3 Performance Comparisons

We now compare our method to the three benchmarks described above. We do so using the Gaussian data-generating process, with N=1000 factors and factor concentration parameter q=1.7. This choice of parameters results in expected noise variance \mathbb{E}[\Sigma_{*,k,k}]\approx 2. For each policy \pi, we use mean squared error, defined as \mathbb{E}[(Z_{t}-\hat{Z}_{t}^{\pi})^{2}], as the evaluation metric.

To study how robust our algorithm is to the number of crowdworkers and the dataset size, we evaluate performance for K\in\{10,20,30\} crowdworkers and multiple dataset sizes t\in\{1,K,10K,100K,1000K\}. We plot results in Figure 3.

We find that predict-each-worker, our algorithm, outperforms averaging for all K and t. Moreover, for large t, it performs as well as the clairvoyant and EM policies, corroborating Theorem 3.3. For smaller values of t, EM offers some advantage, though in the extreme case of t=1, our algorithm is again competitive with EM. Details of how the hyperparameters of predict-each-worker are tuned can be found in Appendix D.2.

Our results demonstrate that, like EM, predict-each-worker performs as well as possible as the dataset grows. Another important feature, as discussed in Section 3.3, is that predict-each-worker can learn from complex patterns using neural network models. Taken together, these two features make predict-each-worker an attractive choice relative to EM.

(a) K=10 (b) K=20 (c) K=30
Figure 3: Performance comparison of different policies for different values of K and t. The performance metric is mean squared error (MSE), but we plot root mean squared error (RMSE) because RMSE has the same units as the outcomes and estimates, while MSE does not.

5 Literature Review

Our literature review is structured around two main categories of prior work. Initially, we examine research directly related to our problem setting, where true outcomes are not observed, and a center interacts with the same crowdworkers multiple times. We aim to succinctly outline the range of algorithms developed for this setting and delineate how these differ from our proposed method, predict-each-worker. For a more comprehensive treatment of this problem setting, see the survey papers by Zhang et al. (2016a); Sheng and Zhang (2019); Zhang et al. (2022). Following this, we address research pertaining to different but related settings, such as those where true outcomes could be observed or interaction with crowdworkers is limited to a single round. This part of the review highlights that our assumptions do not hold for all problem settings of interest and recognizes methods more suited to these alternative settings. Such a comparison is useful, as it informs the reader about the boundaries of applicability of our method and aids in assessing its suitability for different application contexts.

Our problem setting. One of the most popular approaches for this problem setting is to posit a probabilistic model that captures crowdworker patterns and then approximate the maximum a posteriori (MAP) estimates of the parameters that characterize the model. The EM algorithm is one way to approximate these MAP estimates and has been applied to a variety of problems. Li et al. (2014), Kara et al. (2015), and Cai et al. (2020) use EM to learn a noise covariance matrix for the case where crowdworker estimates are continuous, whereas Dawid and Skene (1979) and Zhang et al. (2016b) use EM to learn a so-called confusion matrix for the case where crowdworker estimates are categorical. Other algorithms for approximating MAP estimates have been considered by Moreno et al. (2015), Bach et al. (2017), and Li et al. (2019); these estimate correlations when crowdworker estimates are categorical. Some other works assume that linear models sufficiently capture patterns among crowdworkers conditioned on relevant contexts, such as images or text (Raykar et al., 2010; Rodrigues and Pereira, 2018; Li et al., 2021; Chen et al., 2021); the algorithms proposed in these works can be interpreted as computing MAP estimates of the parameters of crowdworker models conditioned on the context. However, unlike our approach, all the algorithms discussed here either are difficult to extend or become computationally onerous if representing complex patterns requires flexible machine learning models like neural networks.

Cheng et al. (2022) consider fitting parameters of a stylized model that characterizes skills and dependencies among crowdworkers and then using the model to improve group estimates. They demonstrate that resulting group estimates can turn out to be worse than averaging if the model is misspecified. This casts doubt on the practicality of approaches that aim to improve on averaging. Predict-each-worker overcomes this limitation by enabling the use of neural networks, which behave similarly to nonparametric models and thus do not suffer from misspecification to the extent that stylized models do.

Other problem settings. Some prior works consider a problem setting where some or all of the outcomes can be observed. Tang and Lease (2011), Atarashi et al. (2018), and Gaunt et al. (2016) develop so-called semi-supervised techniques to improve aggregation for machine learning problems. Another line of work (Budescu and Chen, 2015; Merkle et al., 2017; Satopää et al., 2021) focuses on generating improved forecasts of consequential future events. Because they can observe some or all true outcomes, these works propose algorithms that make less restrictive assumptions about how crowdworkers relate. Hence, for settings where ground-truth information is available for a large number of crowdsourcing tasks, the methods in these prior works may be more appropriate; if ground-truth information is sparse, then our approach, predict-each-worker, could be more helpful.

Now, we discuss a setting where the center only gets to interact with crowdworkers for one round. To gauge the skill level in this setting, prior works (Prelec et al., 2017; Palley and Soll, 2019; Palley and Satopää, 2023) let each crowdworker not only guess the true quantity but also guess what the mean of the other crowdworkers’ estimates will be. The intuition is that crowdworkers who can accurately predict how others will respond will tend to be more skilled. In problem settings where each crowdsourcing task is very different from the others, using methods in the works above might be more beneficial, but if crowdsourcing tasks are similar and allow generalization, then our method could perform better.

While we treat the mechanism for seeking crowdworker estimates as given, several works try to improve the method for collecting crowdworker estimates. We believe that the suggestions given in these works should be followed when applying our method. There are three bodies of work that we discuss. First, works by Da and Huang (2020) and Frey and Van de Rijt (2021) suggest that crowdworkers should not have access to the estimates of others before they generate their own. These works, using theoretical models and experiments, show that if this requirement is not met, crowdworker errors can covary substantially and may not average out to zero, even with many crowdworkers. As our work relies on the usefulness of averaging, it may be necessary to blind crowdworkers to each other's estimates.

Second, most crowdworkers get paid. Accordingly, some studies examine how monetary incentives can raise the quality of crowdworkers' estimates. Mason and Watts (2009) observed that increasing pay did not increase quality but only the number of tasks completed. Later, Ho et al. (2015) pointed out that this observation usually holds when tasks are easy but not when tasks are "effort responsive." Their experimental study verified this claim.

Third, works (Faltings et al., 2014; Eickhoff, 2018; Geiger et al., 2020) discuss how cognitive biases can negatively affect the quality of crowdsourced data. To detect and mitigate such biases, the work by Draws et al. (2021) proposes a checklist that designers of crowdsourcing tasks should use.

6 Concluding Remarks

In this work, we proposed a novel approach to crowdsourcing that we call predict-each-worker. We showed that it performs better than sample averaging, matches the performance of EM with a large dataset, and is asymptotically optimal for a Gaussian data-generating process. Additionally, we discussed how our algorithm can accommodate complex patterns between crowdworkers using neural networks. Taken together, our algorithm’s performance and flexibility make it an attractive alternative to EM and other prior work.

Our work motivates many new directions for future work. First, our aggregation rule is limited to cases where the estimates take continuous values. We leave it for future work to develop simple aggregation rules that can be applied when the estimates are categorical, thereby making our approach directly applicable to many machine-learning problems like supervised learning (Zhang et al., 2016a) and fine-tuning language models (Ouyang et al., 2022). It would also be interesting to extend our approach to handle cases where estimates take complex forms, like in the single-question crowdsourcing problem (Prelec et al., 2017) or when they take forms like text or bounding boxes (Braylan and Lease, 2020). Second, for simplicity and clarity, we restricted our focus to simulated data. We leave it for future work to conduct a more thorough experimental evaluation using real-world datasets.

Acknowledgements

This work was partly inspired by conversations with Morteza Ibrahimi and John Maggs. The paper also benefited from discussions with Saurabh Kumar and Wanqiao Xu. We gratefully acknowledge the Robert Bosch Stanford Graduate Fellowship, the NSF GRFP Fellowship, and financial support from the Army Research Office through grant W911NF2010055 and SCBX through the Stanford Institute for Human-Centered Artificial Intelligence.

References

  • Anthropic (2023) Anthropic. Introducing claude. https://www.anthropic.com/index/introducing-claude, 2023.
  • Atarashi et al. (2018) Atarashi, K., Oyama, S., and Kurihara, M. Semi-supervised learning from crowds using deep generative models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
  • Bach et al. (2017) Bach, S. H., He, B., Ratner, A., and Ré, C. Learning the structure of generative models without labeled data. In International Conference on Machine Learning, pages 273–282. PMLR, 2017.
  • Boussioux et al. (2023) Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., and Lakhani, K. R. The Crowdless Future? How Generative AI is Shaping the Future of Human Crowdsourcing. Technical Report 24-005, Harvard Business School Technology & Operations Management Unit, 2023. URL https://ssrn.com/abstract=4533642.
  • Braylan and Lease (2020) Braylan, A. and Lease, M. Modeling and aggregation of complex annotations via annotation distances. In Proceedings of The Web Conference 2020, pages 1807–1818, 2020.
  • Budescu and Chen (2015) Budescu, D. V. and Chen, E. Identifying expertise to extract the wisdom of crowds. Management science, 61(2):267–280, 2015.
  • Cai et al. (2020) Cai, D., Nguyen, D. T., Lim, S. H., and Wynter, L. Variational bayesian inference for crowdsourcing predictions. In 2020 59th IEEE Conference on Decision and Control (CDC), pages 3166–3172. IEEE, 2020.
  • Chen et al. (2021) Chen, Z., Wang, H., Sun, H., Chen, P., Han, T., Liu, X., and Yang, J. Structured probabilistic end-to-end learning from crowds. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 1512–1518, 2021.
  • Cheng et al. (2022) Cheng, C., Asi, H., and Duchi, J. How many labelers do you have? A closer look at gold-standard labels. arXiv preprint arXiv:2206.12041, 2022.
  • Cosma and Evers (2010) Cosma, I. A. and Evers, L. Markov chains and Monte Carlo methods. African Institute for Mathematical Sciences, 2010.
  • Da and Huang (2020) Da, Z. and Huang, X. Harnessing the wisdom of crowds. Management Science, 66(5):1847–1867, 2020.
  • Dalkey and Helmer (1963) Dalkey, N. and Helmer, O. An experimental application of the Delphi method to the use of experts. Management science, 9(3):458–467, 1963.
  • Dawid and Skene (1979) Dawid, A. P. and Skene, A. M. Maximum likelihood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20–28, 1979.
  • de Meyrick (2003) de Meyrick, J. The Delphi method and health research. Health education, 103(1):7–16, 2003.
  • Draws et al. (2021) Draws, T., Rieger, A., Inel, O., Gadiraju, U., and Tintarev, N. A checklist to combat cognitive biases in crowdsourcing. In Proceedings of the AAAI conference on human computation and crowdsourcing, volume 9, pages 48–59, 2021.
  • Eickhoff (2018) Eickhoff, C. Cognitive biases in crowdsourcing. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 162–170, 2018.
  • Faltings et al. (2014) Faltings, B., Jurca, R., Pu, P., and Tran, B. D. Incentives to counter bias in human computation. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 2, pages 59–66, 2014.
  • Frey and Van de Rijt (2021) Frey, V. and Van de Rijt, A. Social influence undermines the wisdom of the crowd in sequential decision making. Management science, 67(7):4273–4286, 2021.
  • Gallier (2010) Gallier, J. H. Notes on the Schur complement. 2010.
  • Gaunt et al. (2016) Gaunt, A., Borsa, D., and Bachrach, Y. Training deep neural nets to aggregate crowdsourced responses. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence. AUAI Press, volume 242251, 2016.
  • Geiger et al. (2020) Geiger, R. S., Yu, K., Yang, Y., Dai, M., Qiu, J., Tang, R., and Huang, J. Garbage in, garbage out? Do machine learning application papers in social computing report where human-labeled training data comes from? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 325–336, 2020.
  • Google (2023) Google. Bard. https://bard.google.com/, 2023.
  • Ho et al. (2015) Ho, C.-J., Slivkins, A., Suri, S., and Vaughan, J. W. Incentivizing high quality crowdwork. In Proceedings of the 24th International Conference on World Wide Web, pages 419–429, 2015.
  • Kara et al. (2015) Kara, Y. E., Genc, G., Aran, O., and Akarun, L. Modeling annotator behaviors for crowd labeling. Neurocomputing, 160:141–156, 2015.
  • Kenter and De Rijke (2015) Kenter, T. and De Rijke, M. Short text similarity with word embeddings. In Proceedings of the 24th ACM international on conference on information and knowledge management, pages 1411–1420, 2015.
  • Kwan (2014) Kwan, C. C. A regression-based interpretation of the inverse of the sample covariance matrix. Spreadsheets in Education, 7(1), 2014.
  • Larrick and Soll (2006) Larrick, R. P. and Soll, J. B. Intuitions about combining opinions: Misappreciation of the averaging principle. Management science, 52(1):111–127, 2006.
  • Li et al. (2014) Li, Q., Li, Y., Gao, J., Su, L., Zhao, B., Demirbas, M., Fan, W., and Han, J. A confidence-aware approach for truth discovery on long-tail data. Proceedings of the VLDB Endowment, 8(4):425–436, 2014.
  • Li et al. (2021) Li, S.-Y., Huang, S.-J., and Chen, S. Crowdsourcing aggregation with deep Bayesian learning. Science China Information Sciences, 64:1–11, 2021.
  • Li et al. (2019) Li, Y., Rubinstein, B., and Cohn, T. Exploiting worker correlation for label aggregation in crowdsourcing. In International conference on machine learning, pages 3886–3895. PMLR, 2019.
  • Liu et al. (2021) Liu, X., Zhang, F., Hou, Z., Mian, L., Wang, Z., Zhang, J., and Tang, J. Self-supervised Learning: Generative or Contrastive. IEEE transactions on knowledge and data engineering, 35(1):857–876, 2021.
  • Mars (2022) Mars, M. From word embeddings to pre-trained language models: A state-of-the-art walkthrough. Applied Sciences, 12(17):8805, 2022.
  • Mason and Watts (2009) Mason, W. and Watts, D. J. Financial incentives and the “performance of crowds”. In Proceedings of the ACM SIGKDD workshop on human computation, pages 77–85, 2009.
  • Merkle et al. (2017) Merkle, E. C., Steyvers, M., Mellers, B., and Tetlock, P. E. A neglected dimension of good forecasting judgment: The questions we choose also matter. International Journal of Forecasting, 33(4):817–832, 2017.
  • Moreno et al. (2015) Moreno, P. G., Artés-Rodríguez, A., Teh, Y. W., and Perez-Cruz, F. Bayesian nonparametric crowdsourcing. Journal of Machine Learning Research, 16(48):1607–1627, 2015.
  • National Weather Service (2024) National Weather Service. Timeline of the national weather service. https://www.weather.gov/timeline, 2024. Accessed: 2024-01-28.
  • Ollion et al. (2023) Ollion, E., Shen, R., Macanovic, A., and Chatelain, A. ChatGPT for Text Annotation? Mind the Hype! 2023.
  • OpenAI (2023) OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2:13, 2023.
  • Ouyang et al. (2022) Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
  • Palley and Satopää (2023) Palley, A. B. and Satopää, V. A. Boosting the wisdom of crowds within a single judgment problem: Weighted averaging based on peer predictions. Management Science, 2023.
  • Palley and Soll (2019) Palley, A. B. and Soll, J. B. Extracting the wisdom of crowds when information is shared. Management Science, 65(5):2291–2309, 2019.
  • Prelec et al. (2017) Prelec, D., Seung, H. S., and McCoy, J. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532–535, 2017.
  • Raykar et al. (2010) Raykar, V. C., Yu, S., Zhao, L. H., Valadez, G. H., Florin, C., Bogoni, L., and Moy, L. Learning from crowds. Journal of machine learning research, 11(4), 2010.
  • Rodrigues and Pereira (2018) Rodrigues, F. and Pereira, F. Deep learning from crowds. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
  • Satopää et al. (2021) Satopää, V. A., Salikhov, M., Tetlock, P. E., and Mellers, B. Bias, information, noise: The bin model of forecasting. Management Science, 67(12):7599–7618, 2021.
  • Sheng and Zhang (2019) Sheng, V. S. and Zhang, J. Machine learning with crowdsourcing: A brief summary of the past research and future directions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 9837–9843, 2019.
  • Skulmoski et al. (2007) Skulmoski, G. J., Hartman, F. T., and Krahn, J. The Delphi method for graduate research. Journal of Information Technology Education: Research, 6(1):1–21, 2007.
  • Smithsonian Institution Archives (2024) Smithsonian Institution Archives. Meteorology: Joseph henry’s forecasts. https://siarchives.si.edu/history/featured-topics/henry/meteorology, 2024. Accessed: 2024-01-28.
  • Tang and Lease (2011) Tang, W. and Lease, M. Semi-supervised consensus labeling for crowdsourcing. In SIGIR 2011 workshop on crowdsourcing for information retrieval (CIR), pages 1–6, 2011.
  • Tetlock and Gardner (2016) Tetlock, P. E. and Gardner, D. Superforecasting: The art and science of prediction. Random House, 2016.
  • Törnberg (2023) Törnberg, P. ChatGPT-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588, 2023.
  • Touvron et al. (2023) Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
  • Vaughan (2017) Vaughan, J. W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res., 18(1):7026–7071, 2017.
  • Yuan (2010) Yuan, M. High dimensional inverse covariance matrix estimation via linear programming. The Journal of Machine Learning Research, 11:2261–2286, 2010.
  • Zhang et al. (2022) Zhang, J., Hsieh, C.-Y., Yu, Y., Zhang, C., and Ratner, A. A survey on programmatic weak supervision. arXiv preprint arXiv:2202.05433, 2022.
  • Zhang et al. (2016a) Zhang, J., Wu, X., and Sheng, V. S. Learning from crowdsourced labeled data: a survey. Artificial Intelligence Review, 46:543–576, 2016a.
  • Zhang et al. (2016b) Zhang, Y., Chen, X., Zhou, D., and Jordan, M. I. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. The Journal of Machine Learning Research, 17(1):3537–3580, 2016b.
  • Zheng et al. (2017) Zheng, Y., Li, G., Li, Y., Shan, C., and Cheng, R. Truth inference in crowdsourcing: Is the problem solved? Proceedings of the VLDB Endowment, 10(5):541–552, 2017.
  • Zhu et al. (2023) Zhu, Y., Zhang, P., Haq, E.-U., Hui, P., and Tyson, G. Can ChatGPT reproduce human-generated labels? A study of social computing tasks. arXiv preprint arXiv:2304.10145, 2023.

Appendix A Proof of Theorem 3.1

To prove Theorem 3.1, we first state the Hammersley-Clifford Theorem. It says that, under minor technical conditions, the "leave-one-out" conditional distributions specify the joint distribution.

Lemma A.1 (Hammersley-Clifford Theorem).

Let X\in\Re^{d} be a random variable. Suppose its probability density function {\bf p}_{X} has product-form support. Then, for all \xi\in{\rm support}({\bf p}_{X}) and x\in{\rm support}({\bf p}_{X}),

{\bf p}_{X}(x)\propto\prod_{i=1}^{d}\frac{{\bf p}_{X_{i}|X_{-i}}(x_{i}|x_{1},\cdots,x_{i-1},\xi_{i+1},\cdots,\xi_{d})}{{\bf p}_{X_{i}|X_{-i}}(\xi_{i}|x_{1},\cdots,x_{i-1},\xi_{i+1},\cdots,\xi_{d})}.

The proof is given below. It is adapted from Cosma and Evers (2010).

Proof.
{\bf p}_{X}(x_{1},\cdots,x_{d})\overset{(a)}{=}{\bf p}_{X_{-d}}(x_{1},\cdots,x_{d-1})\,{\bf p}_{X_{d}|X_{-d}}(x_{d}|x_{-d})
\overset{(b)}{=}\frac{{\bf p}_{X}(x_{1},\cdots,x_{d-1},\xi_{d})}{{\bf p}_{X_{d}|X_{-d}}(\xi_{d}|x_{-d})}\,{\bf p}_{X_{d}|X_{-d}}(x_{d}|x_{-d})
={\bf p}_{X}(x_{1},\cdots,x_{d-1},\xi_{d})\,\frac{{\bf p}_{X_{d}|X_{-d}}(x_{d}|x_{-d})}{{\bf p}_{X_{d}|X_{-d}}(\xi_{d}|x_{-d})}
=\cdots
={\bf p}_{X}(\xi_{1},\cdots,\xi_{d})\prod_{i=1}^{d}\frac{{\bf p}_{X_{i}|X_{-i}}(x_{i}|x_{1},\cdots,x_{i-1},\xi_{i+1},\cdots,\xi_{d})}{{\bf p}_{X_{i}|X_{-i}}(\xi_{i}|x_{1},\cdots,x_{i-1},\xi_{i+1},\cdots,\xi_{d})}.

Here, (a) and (b) follow from Bayes' rule. The product-form support ensures that for all $i$, ${\bf p}_{X_{i}|X_{-i}}(x_{i}|x_{1:i-1},\xi_{i+1:d})$ and ${\bf p}_{X_{i}|X_{-i}}(\xi_{i}|x_{1:i-1},\xi_{i+1:d})$ are both well-defined and non-zero. ∎
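As a concrete check of Lemma A.1, the sketch below verifies the identity numerically for a bivariate Gaussian, whose leave-one-out conditionals have closed form. The covariance $S$ and the points $x$, $\xi$ are arbitrary choices of ours, not quantities from the paper.

```python
import numpy as np

def gauss_pdf(x, mean, var):
    # Univariate normal density.
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def joint_pdf(x, S):
    # Bivariate zero-mean normal density with covariance S.
    quad = x @ np.linalg.inv(S) @ x
    return np.exp(-quad / 2) / (2 * np.pi * np.sqrt(np.linalg.det(S)))

def conditional(i, val, other_val, S):
    # Density of X_i at val, given that the other coordinate equals other_val.
    j = 1 - i
    mean = S[i, j] / S[j, j] * other_val
    var = S[i, i] - S[i, j] ** 2 / S[j, j]
    return gauss_pdf(val, mean, var)

S = np.array([[2.0, 0.6], [0.6, 1.0]])   # an arbitrary covariance matrix
x = np.array([0.3, -1.2])                # evaluation point
xi = np.array([0.5, 0.8])                # reference point xi

# For d = 2, the telescoping product in the proof reads:
#   p(x) = p(xi) * [p(x1 | xi2) / p(xi1 | xi2)] * [p(x2 | x1) / p(xi2 | x1)].
ratio1 = conditional(0, x[0], xi[1], S) / conditional(0, xi[0], xi[1], S)
ratio2 = conditional(1, x[1], x[0], S) / conditional(1, xi[1], x[0], S)
reconstructed = joint_pdf(xi, S) * ratio1 * ratio2

assert abs(reconstructed - joint_pdf(x, S)) < 1e-12
```

Here the proportionality constant is exactly ${\bf p}_{X}(\xi)$, matching the telescoping argument above.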

Now, we will prove Theorem 3.1; its statement appears in Section 3.

Proof.

As $\mathbb{P}(Y_{t}\in\cdot|\theta)$ is absolutely continuous and the corresponding density has product-form support, Lemma A.1 implies that $\{P_{*}^{(k)}\}_{k=1}^{K}$ determine $\mathbb{P}(Y_{t}\in\cdot|\theta)$. Moreover, as $\theta$ is minimal, in the sense that $\mathbb{P}(Y_{t}\in\cdot|\theta)$ determines $\theta$, we conclude that $\{P_{*}^{(k)}\}_{k=1}^{K}$ determines $\theta$. ∎

Appendix B Proof of Theorem 3.2

The proof is based on the following lemma, a well-known result in the covariance-estimation literature (Yuan, 2010; Kwan, 2014). It shows how the inverse of a covariance matrix can be expressed in terms of the coefficients and errors of certain linear regression problems.

Lemma B.1.

Consider a covariance matrix $S\in\mathcal{S}^{K}_{++}$ and suppose $A\sim\mathcal{N}(0,S)$. For each $k\in\{1,\cdots,K\}$, let

u^{(k)}\in\operatorname*{arg\,min}_{u\in\Re^{K-1}}\mathbb{E}[(A_{k}-u^{\top}A_{-k})^{2}]
\ell^{(k)}=\min_{u\in\Re^{K-1}}\mathbb{E}[(A_{k}-u^{\top}A_{-k})^{2}].

Then,

S^{-1}_{k,k^{\prime}}=\begin{cases}\frac{1}{\ell^{(k)}}&\text{if }k=k^{\prime}\\ -\frac{u^{(k,k^{\prime})}}{\ell^{(k)}}&\text{if }k\neq k^{\prime},\end{cases}

where $u^{(k,k^{\prime})}$ denotes the component of $u^{(k)}$ that multiplies $A_{k^{\prime}}$.

We defer the proof to Appendix B.1.
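Lemma B.1 can also be verified numerically. The sketch below, an illustration of ours (any positive-definite $S$ works), rebuilds $S^{-1}$ entry by entry from the population regression coefficients $u^{(k)}$ and residual variances $\ell^{(k)}$.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4
A = rng.normal(size=(K, K))
S = A @ A.T + K * np.eye(K)   # an arbitrary positive-definite covariance

P = np.zeros((K, K))          # S^{-1} rebuilt from regression quantities
for k in range(K):
    idx = [j for j in range(K) if j != k]
    # Population regression of A_k on A_{-k}: coefficients and residual variance.
    u = np.linalg.solve(S[np.ix_(idx, idx)], S[idx, k])
    l = S[k, k] - S[idx, k] @ u
    P[k, k] = 1.0 / l
    P[k, idx] = -u / l        # off-diagonal entries -u^{(k,k')} / l^{(k)}

assert np.allclose(P, np.linalg.inv(S))
```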

Now, we prove Theorem 3.2; its statement appears in Section 3.

Proof.

As $Y_{t}|S_{*}\sim\mathcal{N}(0,S_{*})$ for any $t$, we have

u^{(k)}_{*}\in\operatorname*{arg\,min}_{u\in\Re^{K-1}}\mathbb{E}[(Y_{t,k}-u^{\top}Y_{t,-k})^{2}|S_{*}];
\ell^{(k)}_{*}=\min_{u\in\Re^{K-1}}\mathbb{E}[(Y_{t,k}-u^{\top}Y_{t,-k})^{2}|S_{*}].

Next, by Lemma B.1,

S^{-1}_{*,k,k^{\prime}}=\begin{cases}\frac{1}{\ell^{(k)}_{*}}&\text{if }k=k^{\prime}\\ -\frac{u_{*}^{(k,k^{\prime})}}{\ell^{(k)}_{*}}&\text{if }k\neq k^{\prime}.\end{cases}

Finally, recall that $\nu_{*}=\overline{v}S_{*}^{-1}\mathbf{1}$. Substituting the expression for $S^{-1}_{*}$ above into $\overline{v}S_{*}^{-1}\mathbf{1}$ yields the result. ∎

B.1 Proof of Lemma B.1

To prove Lemma B.1, we state another lemma that allows us to write the inverse of a block matrix in terms of its blocks.

Lemma B.2.

Let $\mathfrak{M}$ be an invertible matrix with the block form

\mathfrak{M}=\begin{pmatrix}A&B\\ C&D\end{pmatrix}.

If $D$ is invertible, then

\mathfrak{M}^{-1}=\begin{pmatrix}(A-BD^{-1}C)^{-1}&-(A-BD^{-1}C)^{-1}BD^{-1}\\ -D^{-1}C(A-BD^{-1}C)^{-1}&D^{-1}+D^{-1}C(A-BD^{-1}C)^{-1}BD^{-1}\end{pmatrix}.

For a proof of this lemma, refer to the discussion of Schur complements on pages 1 and 2 of Gallier (2010).
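The block-inverse formula of Lemma B.2 is likewise easy to check numerically; the sketch below uses an arbitrary 5x5 matrix of our choosing with a 2+3 partition.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5)) + 5 * np.eye(5)  # almost surely invertible
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

Dinv = np.linalg.inv(D)
schur = np.linalg.inv(A - B @ Dinv @ C)      # (A - B D^{-1} C)^{-1}
Minv = np.block([
    [schur, -schur @ B @ Dinv],
    [-Dinv @ C @ schur, Dinv + Dinv @ C @ schur @ B @ Dinv],
])

assert np.allclose(Minv, np.linalg.inv(M))
```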

Now, we prove Lemma B.1, stated above.

Proof.

To prove this lemma, we begin by writing $S$ in block form:

S=\begin{bmatrix}S_{1,1}&S_{1,-1}\\ S_{-1,1}&S_{-1,-1}\end{bmatrix}.

As $S$ is positive definite, it is easy to show that $u^{(1)}$ is unique. This unique value and the corresponding error $\ell^{(1)}$ can be expressed in terms of the blocks of $S$. Specifically, we have

u^{(1)}=S_{-1,-1}^{-1}S_{-1,1};
\ell^{(1)}=S_{1,1}-S_{1,-1}S_{-1,-1}^{-1}S_{-1,1}.

Then, we apply Lemma B.2 to the matrix $S$ to obtain the first row of $S^{-1}$:

S^{-1}_{1,:}=\left[\frac{1}{\ell^{(1)}},~-\frac{(u^{(1)})^{\top}}{\ell^{(1)}}\right].

In other words, for $k=1$,

S^{-1}_{k,k^{\prime}}=\begin{cases}\frac{1}{\ell^{(k)}}&\text{if }k=k^{\prime}\\ -\frac{u^{(k,k^{\prime})}}{\ell^{(k)}}&\text{if }k\neq k^{\prime}.\end{cases}

With some simple algebra, it is straightforward to extend this argument to other values of $k$. ∎


Appendix C Proof of Theorem 3.3

To prove Theorem 3.3, we use two results. The first is Theorem 3.2, which characterizes $\nu_{*}$ in terms of the clairvoyant SSL parameters. The second shows that Bayesian linear regression is consistent for our problem.

The following lemma formalizes our claim that Bayesian linear regression converges for our problem.

Lemma C.1.

If $\Lambda\succ 0$, $\overline{\ell}>0$, and $\lambda_{\ell}\geq 0$, then for each $k\in\{1,\cdots,K\}$, $\lim_{t\to\infty}\hat{u}^{(k)}_{t-1}\overset{{\rm a.s.}}{=}u^{(k)}_{*}$ and $\lim_{t\to\infty}\hat{\ell}^{(k)}_{t-1}\overset{{\rm a.s.}}{=}\ell^{(k)}_{*}$.

The proof is deferred to Appendix C.1. We assume $\Lambda\succ 0$ and $\overline{\ell}>0$ to simplify the proof, but a similar result can be proved assuming only $\Lambda\succeq 0$ and $\overline{\ell}\geq 0$.

Now, we will prove Theorem 3.3; its statement appears in Section 3.

Proof.

As $\mathbb{E}[Z_{t}|Y_{1:\infty}]=\nu_{*}^{\top}Y_{t}$, to prove the theorem, it suffices to show that for each $k$, $\lim_{t\to\infty}\hat{\nu}_{t,k}=\nu_{*,k}$ almost surely.

Before proving this, we show that $\lim_{t\to\infty}\tilde{\nu}_{t,k}=\nu_{*,k}$ almost surely:

\lim_{t\to\infty}\tilde{\nu}_{t,k}\overset{(a)}{=}\lim_{t\to\infty}\overline{v}\frac{1-\mathbf{1}^{\top}\hat{u}^{(k)}_{t-1}}{\hat{\ell}^{(k)}_{t-1}}
\overset{(b)}{=}\overline{v}\frac{1-\mathbf{1}^{\top}u^{(k)}_{*}}{\ell^{(k)}_{*}}\quad{\rm a.s.}
\overset{(c)}{=}\nu_{*,k}\quad{\rm a.s.}

(a) uses the definition of $\tilde{\nu}_{t,k}$. (b) holds by Lemma C.1. (c) holds by Theorem 3.2.

Now, we show $\lim_{t\to\infty}\hat{\nu}_{t,k}=\nu_{*,k}$:

\lim_{t\to\infty}\hat{\nu}_{t,k}\overset{(a)}{=}\lim_{t\to\infty}\left(\frac{b}{b+t-1}\tilde{\nu}_{1,k}+\left(1-\frac{b}{b+t-1}\right)\tilde{\nu}_{t,k}\right)
\overset{(b)}{=}\lim_{t\to\infty}\frac{b}{b+t-1}\tilde{\nu}_{1,k}+\lim_{t\to\infty}\left(1-\frac{b}{b+t-1}\right)\tilde{\nu}_{t,k}\quad{\rm a.s.}
\overset{(c)}{=}0+\lim_{t\to\infty}\tilde{\nu}_{t,k}\quad{\rm a.s.}
\overset{(d)}{=}\nu_{*,k}\quad{\rm a.s.}

(a) uses the definition of $\hat{\nu}_{t,k}$. (b) is valid because the first limit exists surely and the second limit exists almost surely. (c) holds because $\lim_{t\to\infty}\left(1-\frac{b}{b+t-1}\right)=1$ and because $\lim_{t\to\infty}\tilde{\nu}_{t,k}$ exists almost surely. (d) uses the consistency of $\tilde{\nu}_{t,k}$ established above. ∎

C.1 Proof of Lemma C.1

Proof.

Recall that $(\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1})$ is a minimizer of the following function, where the minimization is over values $(u,\ell)\in\Re^{K-1}\times\Re_{+}$.

f^{(k)}_{Y_{1:t-1},\Lambda,\lambda_{\ell},\overline{u},\overline{\ell}}(u,\ell)=\underbrace{\frac{1}{\ell}\left(u-\overline{u}\mathbf{1}\right)^{\top}\Lambda\left(u-\overline{u}\mathbf{1}\right)+\log\left(\left|\ell\Lambda^{-1}\right|\right)}_{\textrm{Normal prior},~\mathcal{N}(\overline{u}\mathbf{1},\ell\Lambda^{-1})}+\underbrace{\left(\lambda_{\ell}+2\right)\log\ell+\frac{(\lambda_{\ell}+K+1)\overline{\ell}}{\ell}}_{\textrm{Inverse-Gamma prior}}
\underbrace{+\sum_{\tau<t}\frac{(Y_{\tau,k}-u^{\top}Y_{\tau,-k})^{2}}{\ell}+(t-1)\log(\ell)}_{{\rm log\ likelihood}}+\mathfrak{c},

where $\mathfrak{c}$ is an additive constant that does not depend on $u$ and $\ell$.

We claim that if $\Lambda\succ 0$ and $\overline{\ell}>0$, the following pair is the unique minimizer of $f^{(k)}_{Y_{1:t-1},\Lambda,\lambda_{\ell},\overline{u},\overline{\ell}}$ for all $t$:

\hat{u}^{(k)}_{t-1}=\left(\Lambda+\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}\right)^{-1}\left(\Lambda\mathbf{1}\overline{u}+\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}\right)
\hat{\ell}^{(k)}_{t-1}=\frac{\lambda_{\ell}+K+1}{\lambda_{\ell}+K+t}\overline{\ell}+\frac{1}{\lambda_{\ell}+K+t}(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})^{\top}\Lambda(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})+\frac{\sum_{\tau<t}\big(Y_{\tau,k}-(\hat{u}^{(k)}_{t-1})^{\top}Y_{\tau,-k}\big)^{2}}{\lambda_{\ell}+K+t}

To establish this uniqueness, we compute the Hessian of $f^{(k)}_{Y_{1:t-1},\Lambda,\lambda_{\ell},\overline{u},\overline{\ell}}$ at $(\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1})$ and show that it is positive definite for all $t$. This matrix is equal to

\begin{bmatrix}\frac{2}{\hat{\ell}^{(k)}_{t-1}}\left(\Lambda+\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}\right)&{\bf 0}^{\top}\\ {\bf 0}&\frac{\lambda_{\ell}+K+t}{\left(\hat{\ell}^{(k)}_{t-1}\right)^{2}}\end{bmatrix}.

First, this matrix is well-defined because $\hat{\ell}^{(k)}_{t-1}>0$ for all $t$ (a consequence of $\overline{\ell}$ being positive). Second, this matrix is positive definite because $\Lambda+\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}\succ 0$ and $\lambda_{\ell}+K+t>0$.

Now we will show that for each $k$, $\hat{u}^{(k)}_{t-1}\overset{a.s.}{\to}u^{(k)}_{*}$.

\lim_{t\to\infty}\hat{u}^{(k)}_{t-1}=\lim_{t\to\infty}\left(\Lambda+\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}\right)^{-1}\left(\Lambda\mathbf{1}\overline{u}+\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}\right)
=\lim_{t\to\infty}\left(\frac{\Lambda}{t-1}+\frac{\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}}{t-1}\right)^{-1}\left(\frac{\Lambda\mathbf{1}\overline{u}}{t-1}+\frac{\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}}{t-1}\right)
\overset{(a)}{=}\lim_{t\to\infty}\left(\frac{\Lambda}{t-1}+\frac{\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}}{t-1}\right)^{-1}\lim_{t\to\infty}\left(\frac{\Lambda\mathbf{1}\overline{u}}{t-1}+\frac{\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}}{t-1}\right)\quad{\rm a.s.}
\overset{(b)}{=}\lim_{t\to\infty}\left(\frac{\Lambda}{t-1}+\frac{\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}}{t-1}\right)^{-1}S_{*,k,-k}\quad{\rm a.s.}
\overset{(c)}{=}S_{*,-k,-k}^{-1}S_{*,k,-k}\quad{\rm a.s.}
=u^{(k)}_{*}\quad{\rm a.s.}

(a) is a valid step because both limits exist almost surely. (b) is true because $\frac{\Lambda\mathbf{1}\overline{u}}{t-1}\to 0$ and the law of large numbers implies that $\frac{\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}}{t-1}\to S_{*,k,-k}$ almost surely. (c) is true because $\frac{\Lambda}{t-1}\to 0$ and the law of large numbers implies that $\frac{\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}}{t-1}\to S_{*,-k,-k}$ almost surely.

Now we will show that for each $k$, $\hat{\ell}^{(k)}_{t-1}\overset{a.s.}{\to}\ell^{(k)}_{*}$.

\lim_{t\to\infty}\hat{\ell}^{(k)}_{t-1}=\lim_{t\to\infty}\bigg(\frac{\lambda_{\ell}+K+1}{\lambda_{\ell}+K+t}\overline{\ell}+\frac{1}{\lambda_{\ell}+K+t}(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})^{\top}\Lambda(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})+\frac{\sum_{\tau<t}\big(Y_{\tau,k}-(\hat{u}^{(k)}_{t-1})^{\top}Y_{\tau,-k}\big)^{2}}{\lambda_{\ell}+K+t}\bigg)
\overset{(a)}{=}\lim_{t\to\infty}\frac{\lambda_{\ell}+K+1}{\lambda_{\ell}+K+t}\overline{\ell}+\lim_{t\to\infty}\frac{1}{\lambda_{\ell}+K+t}(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})^{\top}\Lambda(\hat{u}^{(k)}_{t-1}-\overline{u}\mathbf{1})+\lim_{t\to\infty}\frac{\sum_{\tau<t}\big(Y_{\tau,k}-(\hat{u}^{(k)}_{t-1})^{\top}Y_{\tau,-k}\big)^{2}}{\lambda_{\ell}+K+t}\quad{\rm a.s.}
=0+0+\lim_{t\to\infty}\frac{\sum_{\tau<t}\big(Y_{\tau,k}-(\hat{u}^{(k)}_{t-1})^{\top}Y_{\tau,-k}\big)^{2}}{\lambda_{\ell}+K+t}\quad{\rm a.s.}
\overset{(b)}{=}\lim_{t\to\infty}\frac{t-1}{\lambda_{\ell}+K+t}\times\bigg(\lim_{t\to\infty}\frac{\sum_{\tau<t}Y_{\tau,k}^{2}}{t-1}-2\lim_{t\to\infty}(\hat{u}^{(k)}_{t-1})^{\top}\times\lim_{t\to\infty}\frac{\sum_{\tau<t}Y_{\tau,k}Y_{\tau,-k}}{t-1}+\lim_{t\to\infty}(\hat{u}^{(k)}_{t-1})^{\top}\times\lim_{t\to\infty}\frac{\sum_{\tau<t}Y_{\tau,-k}Y_{\tau,-k}^{\top}}{t-1}\times\lim_{t\to\infty}\hat{u}^{(k)}_{t-1}\bigg)
\overset{(c)}{=}S_{*,k,k}-2\left(u^{(k)}_{*}\right)^{\top}S_{*,k,-k}+\left(u^{(k)}_{*}\right)^{\top}S_{*,-k,-k}u^{(k)}_{*}
\overset{(d)}{=}S_{*,k,k}-S_{*,k,-k}S_{*,-k,-k}^{-1}S_{*,-k,k}
\overset{(e)}{=}\ell^{(k)}_{*}.

(a) is valid because each limit exists surely or almost surely. (b) is valid because each limit exists almost surely. (c) is true because $\lim_{t\to\infty}\frac{t-1}{\lambda_{\ell}+K+t}=1$, $\hat{u}^{(k)}_{t-1}\overset{a.s.}{\to}u^{(k)}_{*}$, and $\frac{\sum_{\tau<t}Y_{\tau}Y_{\tau}^{\top}}{t-1}\overset{a.s.}{\to}S_{*}$. (d) and (e) follow from the definitions of $u^{(k)}_{*}$ and $\ell^{(k)}_{*}$. ∎
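To exercise Lemma C.1, the sketch below simulates data from a fixed $S_{*}$, evaluates the closed-form $(\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1})$ above, and compares them to the clairvoyant $(u^{(k)}_{*},\ell^{(k)}_{*})$. The covariance, prior hyperparameters, sample size, and tolerances are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 3, 50000
A = rng.normal(size=(K, K))
S_star = A @ A.T + np.eye(K)                  # a fixed covariance S_*
Y = rng.multivariate_normal(np.zeros(K), S_star, size=T)

Lam = np.eye(K - 1)                           # hypothetical prior hyperparameters
u_bar, l_bar, lam_l = 0.1, 1.0, 2.0

k = 0
idx = [j for j in range(K) if j != k]
Yk, Ymk = Y[:, k], Y[:, idx]

# Closed-form MAP estimates from the proof (here t - 1 = T samples).
u_hat = np.linalg.solve(Lam + Ymk.T @ Ymk,
                        Lam @ (u_bar * np.ones(K - 1)) + Ymk.T @ Yk)
resid = Yk - Ymk @ u_hat
d = u_hat - u_bar * np.ones(K - 1)
l_hat = ((lam_l + K + 1) * l_bar + d @ Lam @ d + resid @ resid) / (lam_l + K + T + 1)

# Clairvoyant regression coefficients and residual variance.
u_star = np.linalg.solve(S_star[np.ix_(idx, idx)], S_star[idx, k])
l_star = S_star[k, k] - S_star[idx, k] @ u_star

assert np.allclose(u_hat, u_star, atol=0.1)
assert abs(l_hat - l_star) < 0.1 * l_star
```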

Appendix D Simulation Study Details

D.1 Data-Generating Process Satisfies Problem-Formulation Assumptions

Recall that we generate the data in the following manner. For each crowdworker $k\in\mathbb{Z}_{++}$,

\tilde{Y}_{t,k}=\tilde{Z}_{t}+\sum_{n=1}^{N}C_{k,n}X_{t,n}.

Here, $\tilde{Z}_{t}\sim\mathcal{N}(0,1)$, $(X_{t}:t\in\mathbb{Z}_{++})$ is an iid sequence of standard Gaussian vectors that is independent of $\tilde{Z}_{t}$, and the coefficients satisfy the following: $C_{k,n}\sim\mathcal{N}(0,n^{-q})$, and each $C_{k,n}$ is sampled independently of $\tilde{Z}_{t}$, the factors $X_{t}$, and every other coefficient $C_{k^{\prime},n^{\prime}}$ with $(k^{\prime},n^{\prime})\neq(k,n)$.
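A minimal simulation of this process follows (the values of $N$, $q$, and the crowd size are arbitrary choices of ours). It also previews the property verified below: the crowd average concentrates around $\tilde{Z}_{t}$.

```python
import numpy as np

rng = np.random.default_rng(3)
N, q, K = 5, 1.0, 400000

Z = rng.normal()                            # latent outcome Z~_t
X = rng.normal(size=N)                      # shared factors X_t
stds = np.arange(1, N + 1) ** (-q / 2.0)    # std of C_{k,n} is n^{-q/2}
C = rng.normal(size=(K, N)) * stds          # loadings C_{k,n} ~ N(0, n^{-q})

Y = Z + C @ X                               # K crowdworker estimates for one task

# Averaging over a large crowd recovers the latent outcome.
assert abs(Y.mean() - Z) < 0.05
```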

Let us verify that this data-generating process satisfies the assumptions made in the problem formulation. First, we check that for any value of $t$, the elements of $(\tilde{Y}_{t,k}:k=1,2,\cdots)$ are exchangeable. This holds because the elements of $(\tilde{Y}_{t,k}:k=1,2,\cdots)$ are iid conditioned on $\tilde{Z}_{t}$ and $X_{t}$, and conditionally iid sequences are exchangeable. Second, we check that $(\tilde{Y}_{t}:t=1,2,\cdots)$ are exchangeable. This is true because, conditioned on the coefficients $C$, $(\tilde{Y}_{t}:t=1,2,\cdots)$ are iid. Next, we check that $\sum_{k=1}^{K}\tilde{Y}_{t,k}/K$ converges almost surely.

\lim_{K\to\infty}\sum_{k=1}^{K}\frac{\tilde{Y}_{t,k}}{K}\overset{(a)}{=}\lim_{K\to\infty}\frac{1}{K}\left(\tilde{Z}_{t}+\sum_{k=1}^{K}\sum_{n=1}^{N}C_{k,n}X_{t,n}\right)
\overset{(b)}{=}\tilde{Z}_{t}+\lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{K}\sum_{n=1}^{N}C_{k,n}X_{t,n}\quad{\rm a.s.}
=\tilde{Z}_{t}+\lim_{K\to\infty}\sum_{n=1}^{N}X_{t,n}\frac{1}{K}\sum_{k=1}^{K}C_{k,n}\quad{\rm a.s.}
\overset{(c)}{=}\tilde{Z}_{t}+\sum_{n=1}^{N}X_{t,n}\lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{K}C_{k,n}\quad{\rm a.s.}
\overset{(d)}{=}\tilde{Z}_{t}.

Here, (a) follows from the definition of $\tilde{Y}_{t,k}$. (b) follows because $\tilde{Z}_{t}$ exists surely and the second term will be shown to exist almost surely. (c) is valid because the elements of $\{X_{t,n}\}_{n=1}^{N}$ are finite almost surely and because $N<\infty$. (d) is true because of the strong law of large numbers.

D.2 Hyperparameter Tuning for Predict-Each-Worker

Here, we describe hyperparameter tuning for predict-each-worker. For each $K$, we perform a separate hyperparameter search: we first tune the hyperparameters of the SSL step and then tune the hyperparameter of the aggregation step using the best hyperparameters from the SSL step.

The performance metric that we use to select hyperparameters in the SSL step judges the quality of the learned SSL models. This metric is an expectation of a loss function. The intuition behind this loss function is as follows. Given a history Y1:t1Y_{1:t-1}, the SSL models approximate the clairvoyant joint distribution over YtY_{t}. The loss function is equal to the KL divergence between the clairvoyant distribution over YtY_{t} and the distribution implied by these SSL models (details in Appendix D.4).

Empirically, when selecting hyperparameters, we noticed that no hyperparameter setting dominates across different values of $t$. Therefore, we choose the hyperparameters in two steps. First, we evaluate performance for $t\in\{1,K,10\times K,100\times K\}$ and filter out all hyperparameter combinations for which performance does not monotonically improve as $t$ increases. Then, among the remaining hyperparameter combinations, we select the one that performs best at a large value of $t$, which we call $t_{*}$; we let $t_{*}=100\times K$.

We let $\overline{u}=\frac{1}{K+1}$ and $\overline{\ell}=2+\frac{2}{K+1}$. We let $\Lambda=\lambda((1-\rho)I+\rho\mathbf{1}\mathbf{1}^{\top})$, where $\lambda$ is the diagonal element and $\rho\lambda$ is the off-diagonal element. We search over $\rho\in\{0,0.2,0.4,0.6,0.8\}$, $\lambda\in\{\frac{0\times K}{5},\frac{2\times K}{5},\cdots,\frac{10\times K}{5}\}$, and $\lambda_{\ell}\in\{0,2,4,6\}$.

When tuning the aggregation hyperparameter $r$, we use the best SSL parameters and select based on an estimate of the MSE (the estimation procedure is discussed in Appendix D.3). As with the SSL parameters, we enforce that performance monotonically improves as $t$ increases; specifically, we evaluate for $t\in\{1,K,3\times K,5\times K,7\times K,10\times K\}$. Finally, we choose the best hyperparameter based on performance at $t_{*}=10\times K$. Note that the values of $t$ we evaluate and the value of $t_{*}$ differ between the SSL step and the aggregation step.

For each value of $t$, we estimate MSE by averaging across multiple training seeds. We vary $r\in\{2.5\times K,5\times K,\cdots,20\times K\}$. The final hyperparameters can be found in Table 1.

$K$    $\lambda$    $\rho$    $\lambda_{\ell}$    $r$
10     16           0.4       0                   75
20     24           0.6       0                   150
30     36           0.6       0                   300

Table 1: Final hyperparameters for different values of $K$

To tune hyperparameters for both EM and predict-each-worker, we assume access to the clairvoyant distribution $\mathbb{P}(Y_{t}\in\cdot|\theta)$. We do this for simplicity. In Appendix D.5, we discuss how hyperparameter tuning can be performed without access to $\mathbb{P}(Y_{t}\in\cdot|\theta)$.

D.3 Estimating MSE

The MSE is given by $\mathbb{E}[(Z_{t}-\hat{Z}_{t}^{\pi})^{2}]$, where $\hat{Z}_{t}^{\pi}$ is the group estimate generated by a policy $\pi$ after observing $Y_{1:t}$. Computing the MSE exactly is typically computationally expensive, so it must be estimated. Here, we discuss how we estimate MSE in the context of our experiments. We exploit three features of our experiments to simplify the estimation. First, the data is drawn from the Gaussian data-generating process. Second, we know the noise covariance matrices from which the data is drawn. Third, all policies we consider generate a group estimate that is linear in $Y_{t}$, i.e., $\hat{Z}_{t}^{\pi}=\hat{\nu}_{t}^{\top}Y_{t}$, where $\hat{\nu}_{t}$ depends only on $Y_{1:t-1}$.

\mathbb{E}[(Z_{t}-\hat{Z}_{t}^{\pi})^{2}]\overset{(a)}{=}\mathbb{E}\left[\mathbb{E}\left[(Z_{t}-\hat{Z}_{t}^{\pi})^{2}|Y_{1:t},\Sigma_{*}\right]\right];
\overset{(b)}{=}\mathbb{E}\left[\mathbb{V}[Z_{t}|Y_{1:t},\Sigma_{*}]\right]+\mathbb{E}[(\mathbb{E}[Z_{t}|Y_{1:t},\Sigma_{*}]-\hat{Z}_{t}^{\pi})^{2}];
\overset{(c)}{=}\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right]+\mathbb{E}[(\mathbb{E}[Z_{t}|Y_{1:t},\Sigma_{*}]-\hat{Z}_{t}^{\pi})^{2}];
\overset{(d)}{=}\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right]+\mathbb{E}\left[\mathbb{E}[(\mathbb{E}[Z_{t}|Y_{1:t},\Sigma_{*}]-\hat{Z}_{t}^{\pi})^{2}|\Sigma_{*},Y_{1:t-1}]\right];
=\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right]+\mathbb{E}\left[\mathbb{E}[(\nu_{*}^{\top}Y_{t}-\hat{\nu}_{t}^{\top}Y_{t})^{2}|\Sigma_{*},Y_{1:t-1}]\right];
\overset{(e)}{=}\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right]+\mathbb{E}\left[(\nu_{*}-\hat{\nu}_{t})^{\top}\mathbb{E}\left[Y_{t}Y_{t}^{\top}|\Sigma_{*},Y_{1:t-1}\right](\nu_{*}-\hat{\nu}_{t})\right];
=\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}\right]+\mathbb{E}\left[(\nu_{*}-\hat{\nu}_{t})^{\top}(\Sigma_{*}+\overline{v}\mathbf{1}\mathbf{1}^{\top})(\nu_{*}-\hat{\nu}_{t})\right]

(a) is true by the tower property. (b) uses the law of total variance. (c) uses the fact that the estimates are drawn from the Gaussian data-generating process. (d) uses the tower property. (e) is true because $\nu_{*}$ is determined by $\Sigma_{*}$ and $\hat{\nu}_{t}$ is determined by $Y_{1:t-1}$.

The calculation above implies that

\mathbb{E}[(Z_{t}-\hat{Z}_{t}^{\pi})^{2}]=\mathbb{E}\left[\frac{1}{\overline{v}^{-1}+\mathbf{1}^{\top}\Sigma_{*}^{-1}\mathbf{1}}+(\nu_{*}-\hat{\nu}_{t})^{\top}(\Sigma_{*}+\overline{v}\mathbf{1}\mathbf{1}^{\top})(\nu_{*}-\hat{\nu}_{t})\right]. (7)

We use Monte-Carlo simulation to estimate the right-hand side. Specifically, we sample a set of $\Sigma_{*}$ values; for each $\Sigma_{*}$, we sample one history $Y_{1:t-1}$ and then compute the quantity inside the expectation.
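A minimal sketch of this estimator follows; the function and variable names are ours. As a sanity check, when $\hat{\nu}_{t}=\nu_{*}$ the estimate reduces to the irreducible first term inside the expectation of Equation (7).

```python
import numpy as np

def mse_estimate(sigmas, nu_hats, v_bar):
    # Average of Eq. (7)'s integrand over sampled (Sigma_*, nu_hat) pairs.
    vals = []
    for Sigma, nu_hat in zip(sigmas, nu_hats):
        ones = np.ones(Sigma.shape[0])
        # Irreducible error: posterior variance of Z_t given Y_t and Sigma_*.
        irr = 1.0 / (1.0 / v_bar + ones @ np.linalg.solve(Sigma, ones))
        # Clairvoyant weights nu_* = v_bar S_*^{-1} 1, with S_* = Sigma_* + v_bar 1 1^T.
        S = Sigma + v_bar * np.outer(ones, ones)
        nu_star = v_bar * np.linalg.solve(S, ones)
        # Excess error from using nu_hat instead of nu_*.
        diff = nu_star - nu_hat
        vals.append(irr + diff @ S @ diff)
    return float(np.mean(vals))

# Sanity check: with nu_hat = nu_*, only the irreducible term remains.
rng = np.random.default_rng(4)
K, v_bar = 3, 1.0
A = rng.normal(size=(K, K))
Sigma = A @ A.T + np.eye(K)
ones = np.ones(K)
nu_star = v_bar * np.linalg.solve(Sigma + v_bar * np.outer(ones, ones), ones)
est = mse_estimate([Sigma], [nu_star], v_bar)
irreducible = 1.0 / (1.0 / v_bar + ones @ np.linalg.solve(Sigma, ones))
assert abs(est - irreducible) < 1e-9
```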

Number of seeds.

  • To perform hyperparameter tuning for EM and predict-each-worker, we sample sixty values of $\Sigma_{*}$.

  • To generate the plots in Figure 3, we sample fifty values of $\Sigma_{*}$. These values are different from the values of $\Sigma_{*}$ used in tuning.

D.4 Metric for Tuning SSL Hyperparameters

Here, we discuss the metric that we use for tuning SSL hyperparameters. This metric is the expectation of a loss function: the KL divergence between the clairvoyant distribution over $Y_{t}$ and the distribution implied by the learned SSL models. For the Gaussian data-generating process, recall that the clairvoyant distribution satisfies $\mathbb{P}(Y_{t}\in\cdot|\theta)\overset{{\rm d}}{=}\mathcal{N}(0,S_{*})$. This implies that estimating $S_{*}$ is equivalent to estimating $\mathbb{P}(Y_{t}\in\cdot|\theta)$. The matrix $S_{*}$ can be estimated from the learned SSL models; we provide details of the estimation procedure below.

For the Gaussian data-generating process, our SSL models are parameterized by $\{\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1}\}_{k=1}^{K}$. We provide a two-step transformation from $\{\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1}\}_{k=1}^{K}$ to a covariance matrix. The first step applies a function $h:\{\Re^{(K-1)}\times\Re_{+}\}^{K}\to\Re^{K\times K}$ whose inputs are parameters of SSL models and whose output is a matrix. Given inputs $\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}$, the output $h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)$ satisfies the following.

\left[h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)\right]_{k,k^{\prime}}=\begin{cases}1/\ell^{(k)}&\text{if }k=k^{\prime};\\ -u^{(k,k^{\prime})}/\ell^{(k)}&\text{if }k\neq k^{\prime}.\end{cases}

The output of this function can be a covariance matrix. For example, if the inputs are the clairvoyant SSL parameters $\{u^{(k)}_{*},\ell^{(k)}_{*}\}_{k=1}^{K}$, then by Lemma B.1 the output is $S_{*}^{-1}$, which determines $S_{*}$. However, the output of $h$ need not be a covariance matrix; the second step of our transformation addresses this.

In the second step, we compute a covariance matrix $\hat{S}_{t}$ that we treat as the estimate of $S_{*}$. If the determinant $\left|h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)\right|$ is negative, we set $\hat{S}_{t}$ to the all-zeros matrix. If the determinant is non-negative, we find a covariance matrix that has the same determinant as the output of $h$ and is closest to this output in the 2-norm sense. We discuss below why such a covariance matrix must exist; first, we define $\hat{S}_{t}$ formally. $\hat{S}_{t}$ is a minimizer of the optimization problem

\min_{S\in\mathcal{S}^{K}_{++}}\ \left\|h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)-S\right\|_{2}
\text{subject to}\ |S|=\left|h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)\right|.

The optimization problem is feasible because the diagonal matrix with diagonal elements $\left\{\left|h\left(\{u^{(k)},\ell^{(k)}\}_{k=1}^{K}\right)\right|,1,\cdots,1\right\}$ satisfies the constraint.

Having computed $\hat{S}_{t}$, we state the loss function used to judge the quality of the SSL parameters:

\mathbf{d}_{\mathrm{KL}}\left(\mathcal{N}(0,S_{*})\,||\,\mathcal{N}(0,\hat{S}_{t})\right).

Number of seeds. To perform hyperparameter tuning, we sample sixty values of $S_{*}$ and average the loss function above to estimate the performance metric.
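The loss above has a closed form for zero-mean Gaussians, which is what one evaluates in practice. A sketch (our helper, not code from the paper; both covariance matrices are assumed positive definite):

```python
import numpy as np

def kl_gauss(S0, S1):
    # KL( N(0, S0) || N(0, S1) ) = 0.5 * ( tr(S1^{-1} S0) - K + log|S1| - log|S0| ).
    K = S0.shape[0]
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(np.linalg.solve(S1, S0)) - K + logdet1 - logdet0)

rng = np.random.default_rng(6)
K = 3
A = rng.normal(size=(K, K))
S_star = A @ A.T + np.eye(K)

assert abs(kl_gauss(S_star, S_star)) < 1e-12     # zero when the estimate is exact
assert kl_gauss(S_star, S_star + np.eye(K)) > 0  # positive otherwise
```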

D.5 Hyperparameter Tuning with Unobserved Outcomes

For our simulation study, we tuned hyperparameters assuming knowledge of the clairvoyant distribution $\mathbb{P}(Y_{t}\in\cdot|\theta)$. Specifically, we used this knowledge to estimate metrics that judge the quality of SSL models and group estimates. In practical situations, however, the clairvoyant distribution $\mathbb{P}(Y_{t}\in\cdot|\theta)$ will not be known. We discuss how to tune hyperparameters in such a situation. Though our ideas generalize, we make two simplifying assumptions. First, we assume the data is drawn from a Gaussian data-generating process. Second, we assume the estimates are zero-mean, i.e., $\mathbb{E}[Y_{t}]=0$.

Tuning SSL hyperparameters. Suppose we have a dataset of estimates $\mathcal{D}$. We divide this dataset into a training dataset $\mathcal{D}_{\rm train}$ and a test dataset $\mathcal{D}_{\rm test}$. Given a set of SSL hyperparameters, we first compute $\{\hat{u}^{(k)}_{t-1},\hat{\ell}^{(k)}_{t-1}\}_{k=1}^{K}$ based on $\mathcal{D}_{\rm train}$ and then use the procedure in Appendix D.4 to compute $\hat{S}_{\rm train}$, the estimate of $S_{*}$ based on the SSL models. We then compute the sample covariance matrix $\hat{S}_{\rm test}$ based on $\mathcal{D}_{\rm test}$ and judge the SSL hyperparameters based on

\mathbf{d}_{\mathrm{KL}}\left(\mathcal{N}(0,\hat{S}_{\rm train})\,||\,\mathcal{N}(0,\hat{S}_{\rm test})\right).

Instead of a single train-test split, one could also perform $k$-fold cross-validation.

Tuning aggregation hyperparameters. The aggregation step has two hyperparameters: $\overline{v}$ and $r$. Hyperparameter $\overline{v}$ is the prior variance $\mathbb{V}[Z_{t}]$ of true outcomes, and $r$ controls the rate at which regularization toward the prior aggregation weights decays. We first discuss a method to set $\overline{v}$ and then discuss two methods to tune $r$.

To set $\overline{v}$, we use the fact that we defined $Z_{t}$ to be the average of estimates produced by a large population of crowdworkers. Hence, $\overline{v}$ can be estimated by computing the sample variance of the group estimates produced by averaging. Ideally, this variance would be computed on a dataset where both $K$ and $t$ are large; since this is typically impractical, one must choose values of $K$ and $t$ that are large enough yet practically feasible.

We now discuss two methods to tune rr. The first method involves estimating MSE using a so-called golden dataset. This dataset contains crowdsourcing tasks for which the true outcomes can be estimated with high accuracy. For our problem, the golden dataset can be constructed by having many crowdworker estimates for each outcome. This dataset would only be used for evaluation and not for training. As it would only be used for evaluation, it should be okay to have a limited number of engagements with each crowdworker when constructing this dataset. The rationale is that it takes more data to train but less data to evaluate.

The second method to tune $r$ involves estimating a quantity that differs from the MSE by an additive constant. Suppose we want to evaluate on the crowdsourcing tasks in $\mathcal{D}_{\rm eval}$. We first compute a group estimate for each crowdsourcing task in $\mathcal{D}_{\rm eval}$. Then, for each crowdsourcing task, we sample a crowdworker uniformly at random from a large pool of out-of-sample crowdworkers, obtain an estimate from the sampled crowdworker, and treat that estimate as the ground truth. For each task, the crowdworker must be sampled independently. Formally, this procedure allows us to estimate

𝔼[(Y~t,K+1Z^tπ)2],\displaystyle\mathbb{E}[(\tilde{Y}_{t,K+1}-\hat{Z}_{t}^{\pi})^{2}],

where Y~t,K+1\tilde{Y}_{t,K+1} is the estimate of an out-of-sample crowdworker and Z^tπ\hat{Z}_{t}^{\pi} is the group estimate. The index K+1K+1 is a placeholder for a randomly sampled out-of-sample crowdworker and should not be interpreted as referring to some fixed crowdworker. The squared error above differs from MSE by an additive constant because Y~t,K+1\tilde{Y}_{t,K+1} is a single-sample Monte Carlo estimate of limKk=K+1K+KY~t,kK\lim_{K^{\prime}\to\infty}\frac{\sum_{k=K+1}^{K+K^{\prime}}\tilde{Y}_{t,k}}{K^{\prime}}, and ZtZ_{t} is equal to this limit when the limit exists. We give more details below.

𝔼[(Y~t,K+1Z^tπ)2]=(a)\displaystyle\mathbb{E}[(\tilde{Y}_{t,K+1}-\hat{Z}_{t}^{\pi})^{2}]\overset{(a)}{=} 𝔼[𝔼[(Y~t,K+1Z^tπ)2|Y1:t,Σ]]\displaystyle\mathbb{E}\left[\mathbb{E}\left[(\tilde{Y}_{t,K+1}-\hat{Z}_{t}^{\pi})^{2}|Y_{1:t},\Sigma_{*}\right]\right]
=(b)\displaystyle\overset{(b)}{=} 𝔼[𝕍[Y~t,K+1|Y1:t,Σ]]+𝔼[(𝔼[Y~t,K+1|Y1:t,Σ]Z^tπ)2]\displaystyle\mathbb{E}[\mathbb{V}[\tilde{Y}_{t,K+1}|Y_{1:t},\Sigma_{*}]]+\mathbb{E}\left[\left(\mathbb{E}\left[\tilde{Y}_{t,K+1}|Y_{1:t},\Sigma_{*}\right]-\hat{Z}_{t}^{\pi}\right)^{2}\right]
=(c)\displaystyle\overset{(c)}{=} 𝔼[𝕍[Y~t,K+1|Y1:t,Σ]]+𝔼[(𝔼[limKk=K+1K+KY~t,kK|Y1:t,Σ]Z^tπ)2]\displaystyle\mathbb{E}[\mathbb{V}[\tilde{Y}_{t,K+1}|Y_{1:t},\Sigma_{*}]]+\mathbb{E}\left[\left(\mathbb{E}\left[\lim_{K^{\prime}\to\infty}\frac{\sum_{k=K+1}^{K^{\prime}+K}\tilde{Y}_{t,k}}{K^{\prime}}|Y_{1:t},\Sigma_{*}\right]-\hat{Z}_{t}^{\pi}\right)^{2}\right]
=(d)\displaystyle\overset{(d)}{=} 𝔼[𝕍[Y~t,K+1|Y1:t,Σ]]+𝔼[(𝔼[Zt|Y1:t,Σ]Z^tπ)2]\displaystyle\mathbb{E}[\mathbb{V}[\tilde{Y}_{t,K+1}|Y_{1:t},\Sigma_{*}]]+\mathbb{E}\left[\left(\mathbb{E}\left[Z_{t}|Y_{1:t},\Sigma_{*}\right]-\hat{Z}_{t}^{\pi}\right)^{2}\right]
=\displaystyle= 𝔼[𝕍[Y~t,K+1|Y1:t,Σ]]𝔼[𝕍[Zt|Y1:t,Σ]]+𝔼[(ZtZ^tπ)2].\displaystyle\mathbb{E}[\mathbb{V}[\tilde{Y}_{t,K+1}|Y_{1:t},\Sigma_{*}]]-\mathbb{E}[\mathbb{V}[Z_{t}|Y_{1:t},\Sigma_{*}]]+\mathbb{E}\left[\left(Z_{t}-\hat{Z}_{t}^{\pi}\right)^{2}\right].

Here, (a)(a) is true because of the tower property. (b)(b) uses the law of total variance. (c)(c) uses the fact that (Y~t,k:k=K+1,K+2,)(\tilde{Y}_{t,k}:k=K+1,K+2,\cdots) are exchangeable a priori and remain exchangeable even when conditioned on Y1:tY_{1:t} and Σ\Sigma_{*}. (d)(d) uses the assumption that limKk=K+1K+KY~t,kK\lim_{K^{\prime}\to\infty}\frac{\sum_{k=K+1}^{K+K^{\prime}}\tilde{Y}_{t,k}}{K^{\prime}} exists almost surely and ZtZ_{t} is defined to be equal to this limit when it exists. The final equality makes it clear that 𝔼[(Y~t,K+1Z^tπ)2]\mathbb{E}[(\tilde{Y}_{t,K+1}-\hat{Z}_{t}^{\pi})^{2}] and 𝔼[(ZtZ^tπ)2]\mathbb{E}\left[\left(Z_{t}-\hat{Z}_{t}^{\pi}\right)^{2}\right], the MSE, differ by an additive constant.
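The sampling procedure behind this estimate can be sketched as follows; the array shapes and the `proxy_mse` helper are illustrative assumptions, not part of our method:

```python
import numpy as np

def proxy_mse(group_estimates, holdout_estimates, rng=None):
    """Estimate MSE up to an additive constant.

    `group_estimates` is a length-T array of group estimates Z_hat.
    `holdout_estimates` is a (T, M) array of estimates from a pool of
    M out-of-sample crowdworkers.  For each task, one worker is drawn
    uniformly and independently, and their estimate stands in for the
    true outcome.
    """
    if rng is None:
        rng = np.random.default_rng()
    T, M = holdout_estimates.shape
    picks = rng.integers(0, M, size=T)               # independent draw per task
    proxies = holdout_estimates[np.arange(T), picks]  # proxy ground truths
    return float(np.mean((proxies - group_estimates) ** 2))

# Toy check: if every out-of-sample estimate matches the group
# estimate, the proxy MSE is zero.
err = proxy_mse(np.zeros(5), np.zeros((5, 3)))
```

Because the additive constant does not depend on rr, minimizing this proxy over candidate values of rr selects the same rr as minimizing the true MSE.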

Appendix E Extension of Predict-Each-Worker for Settings with Observed Outcomes

Here, we describe an extension of our algorithm that can be used when some or all outcomes are observed. Note that if a large fraction of outcomes is observed, one might be better off using standard supervised learning to learn a mapping from estimates to outcomes.

In our extension, we keep the SSL step the same but change the aggregation step. First, we train the SSL models on all estimate vectors. Then, for each estimate vector whose outcome is known, and for each SSL model, we compute two statistics, the expected error and the SSL gradient, as in Section 3.1. These SSL statistics and the corresponding outcomes are then used to train a neural network, which takes a set of expected errors and SSL gradients as input and produces an estimate of the true outcome. The rationale for using SSL statistics rather than the estimate vectors as inputs is that this leverages all observed estimates, including those for which the outcomes are not observed. We denote the neural network obtained after training by 𝔣\mathfrak{f}.
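The training step can be sketched as follows. For brevity, a linear least-squares fit stands in for the neural network 𝔣\mathfrak{f}, and all shapes and synthetic data are illustrative assumptions:

```python
import numpy as np

# Hypothetical shapes: for each of the t_o tasks with observed
# outcomes, `ssl_stats` stacks the expected errors and SSL-gradient
# summaries of all workers into one feature vector; `outcomes` holds
# the corresponding observed true outcomes.
rng = np.random.default_rng(0)
t_o, n_features = 200, 20
ssl_stats = rng.normal(size=(t_o, n_features))
true_weights = rng.normal(size=n_features)
outcomes = ssl_stats @ true_weights  # stand-in targets

# Fit f by least squares; a neural network trained on the same
# (ssl_stats, outcomes) pairs would replace this in practice.
w, *_ = np.linalg.lstsq(ssl_stats, outcomes, rcond=None)
z_hat_f = ssl_stats[0] @ w  # f's estimate for the first task
```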

Now, we discuss how to produce a group estimate for a new estimate vector YtY_{t}. We first compute a set of expected errors and SSL gradients based on YtY_{t}. Then, we feed these SSL statistics to the neural network 𝔣\mathfrak{f} to produce an estimate Z^t(𝔣)\hat{Z}_{t}^{(\mathfrak{f})}\in\Re of the outcome. Additionally, we use these statistics to compute ν^tYt\hat{\nu}_{t}^{\top}Y_{t}, where ν^t\hat{\nu}_{t} is based on our aggregation formula (2). Finally, we compute the group estimate by taking a convex combination of Z^t(𝔣)\hat{Z}_{t}^{(\mathfrak{f})} and ν^tYt\hat{\nu}_{t}^{\top}Y_{t},

Z^taa+tZ^t(𝔣)+(1aa+t)ν^tYt.\hat{Z}_{t}\leftarrow\frac{a}{a+t}\hat{Z}_{t}^{(\mathfrak{f})}+\left(1-\frac{a}{a+t}\right)\hat{\nu}_{t}^{\top}Y_{t}.

Here, a+a\in\Re_{+} is a hyperparameter that can be tuned on a validation set. We expect aa to increase with tot_{o}, the number of estimates for which the outcome is known: the larger tot_{o} is, the more we can trust Z^t(𝔣)\hat{Z}_{t}^{(\mathfrak{f})}.
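The convex combination above can be sketched as follows, with all numerical values hypothetical:

```python
import numpy as np

def combined_group_estimate(z_hat_f, nu_hat, Y_t, t, a):
    """Convex combination of the supervised estimate and nu_hat^T Y_t.

    The weight a/(a+t) on z_hat_f shrinks as t grows, so the
    aggregation-formula estimate dominates once many estimate vectors
    have been observed; a larger `a` (tuned on a validation set) keeps
    more trust in z_hat_f.
    """
    w = a / (a + t)
    return w * z_hat_f + (1.0 - w) * float(nu_hat @ Y_t)

# Toy usage: with t = 3 and a = 1, the weight on z_hat_f is 1/4.
z_hat = combined_group_estimate(
    z_hat_f=4.0, nu_hat=np.array([0.5, 0.5]), Y_t=np.array([1.0, 3.0]),
    t=3, a=1.0)
```

Here `nu_hat @ Y_t` equals 2.0, so the combination is 0.25 * 4.0 + 0.75 * 2.0 = 2.5.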