Adaptive Crowdsourcing Via Self-Supervised Learning
Abstract
Common crowdsourcing systems average estimates of a latent quantity of interest provided by many crowdworkers to produce a group estimate. We develop a new approach – predict-each-worker – that leverages self-supervised learning and a novel aggregation scheme. This approach adapts weights assigned to crowdworkers based on estimates they provided for previous quantities. When skills vary across crowdworkers or their estimates correlate, the weighted sum offers a more accurate group estimate than the average. Existing algorithms such as expectation maximization can, at least in principle, produce similarly accurate group estimates. However, their computational requirements become onerous when complex models, such as neural networks, are required to express relationships among crowdworkers. Predict-each-worker accommodates such complexity as well as many other practical challenges. We analyze the efficacy of predict-each-worker through theoretical and computational studies. Among other things, we establish asymptotic optimality as the number of engagements per crowdworker grows.
1 Introduction
Aggregating opinions from a diverse group often yields more accurate estimates than relying on a single individual. Applications are wide-ranging. Intelligence agencies query crowds to help predict uncertain and significant events, showing that a collective effort can give better results (Tetlock and Gardner, 2016). Aggregating inputs has also been shown to improve answers to questions in the single-question crowd wisdom problem (Prelec et al., 2017) and in financial forecasting (Da and Huang, 2020). Modern artificial intelligence (AI) also relies on aggregation of input from human annotators for the development of image classifiers (Vaughan, 2017) and chatbots (Ouyang et al., 2022; OpenAI, 2023; Touvron et al., 2023; Google, 2023; Anthropic, 2023).
The recent explosion of AI has led to considerable discussion of situations where computers could partly or wholly replace humans for crowdsourcing tasks (Zhu et al., 2023; Törnberg, 2023; Ollion et al., 2023; Boussioux et al., 2023). The algorithms we discuss in this paper could apply to such settings but were conceived to apply to groups of people in the spirit of initial work on crowdsourcing. The mid-nineteenth-century Smithsonian Institution Meteorological Project processed localized data into national weather maps. One hundred fifty volunteer respondents provided information by telegraph (Smithsonian Institution Archives, 2024), which was processed by a dozen-plus humans to create a national data map. By 1860, five hundred local weather stations were reporting (National Weather Service, 2024).
Many traditional crowdsourcing examples involve large numbers of participants. However, the underlying principles apply even with quite small groups, as would be the norm in many contexts, such as with boards of directors or R&D research teams.
A conventional approach to aggregation assigns equal importance to each crowdworker’s input. This approach, which we refer to as averaging, is simple and enjoys wide use. With a sufficiently large number of crowdworkers, averaging performs well. However, engaging so many crowdworkers can be costly or infeasible. Because of this, practical crowdsourcing systems work with a limited number of crowdworkers, and averaging leaves substantial room for improvement. We discuss two simple cases where averaging can require too many crowdworkers in order to produce accurate estimates. One arises when skills vary across crowdworkers; those who are more skilled ought to be assigned greater weight. Another example arises when crowdworkers share sources of information, perhaps because they watch the same news channels or follow similar accounts on social media, or utilize the same data sets. In this case, each crowdworker’s estimate does not provide independent information, and the crowdworker’s weight should decrease with the degree of dependence.
In principle, if each crowdworker’s level of skill and independence are known, it should be possible to derive weights that improve group estimates relative to averaging. Our work introduces an approach that approximates these weights given estimates of past outcomes provided by the same crowdworkers. The weights reflect what is learned about skills and independence.
If patterns among crowdworker estimates are expressed by simple – for example, linear – models, existing algorithms offer effective means for fitting to past observations. Expectation maximization (EM), in particular, offers a popular option used for this purpose (Zhang et al., 2016a; Zheng et al., 2017). However, in many contexts, the relationships are complex and call for flexible machine learning models such as neural networks. Computational requirements of existing algorithms such as EM become onerous.
To address the limitations of existing methods, we introduce a new approach called predict-each-worker, which leverages self-supervised learning and a novel aggregation scheme. This approach uses self-supervised learning (SSL) (Liu et al., 2021) to infer patterns among crowdworker estimates. In particular, for each crowdworker, predict-each-worker produces an SSL model that predicts the crowdworker’s estimate based on past observations and estimates of other crowdworkers. This enables modeling of complex patterns among crowdworker estimates, for example, by using neural network SSL models. The aggregation scheme leverages these SSL models to weight crowdworkers to a degree that increases with the crowdworker’s skill and independence.
In short, the predict-each-worker approach employs each individual crowdworker’s estimates, after processing by the center, as the basis for producing a group estimate. These individual estimates represent an atomic approach for building a group estimate. This approach contrasts sharply with other methods that instead proceed from a series of group estimates. For example, the famed Delphi method (Dalkey and Helmer, 1963) involves a panel of experts or crowdworkers who, through several rounds of interactions, refine their estimates. After each round, a facilitator provides an anonymous summary of the crowdworkers’ estimates. The crowdworkers are then encouraged to revise their earlier estimates in light of the replies of other members of their panel. This process is repeated until the group converges on a consensus. This method has been applied, for example, to military forecasting (Dalkey and Helmer, 1963), health research (de Meyrick, 2003), and as a method to do graduate research (Skulmoski et al., 2007). While problem settings where crowdworkers could update their estimates are important and interesting, our approach, predict-each-worker, is not designed for such settings, and we leave it for future work to develop an extension.
To investigate the performance of our approach, we perform both computational and theoretical studies. For a Gaussian data-generating process, we prove that our algorithm is asymptotically optimal, and through simulations, we find that this method outperforms averaging crowdworker estimates. Moreover, our empirical results indicate that when the dataset is large, predict-each-worker performs as well as an EM-based algorithm. These studies offer a ‘sanity check’ for our approach and motivate future work for more complex data-generating processes as well as real-world problem settings.
2 Problem Formulation
We consider a crowdsourcing system as illustrated in Figure 1. In each round, indexed by a positive integer t, n distinct crowdworkers provide estimates x_{t,1}, ..., x_{t,n} for an independent outcome y_t. A center observes these estimates and produces a group estimate for the outcome y_t. We assume that the center does not observe the outcomes.
To model uncertainty from the center’s perspective, we take and to be random variables. These and all random variables we consider are defined with respect to a probability space . We assume that is exchangeable, and for simplicity, we assume that .
While we could consider such a process for any sequence of outcomes, we will restrict attention to the case where each outcome y_t is the consensus of an infinite population of crowdworkers, in a sense we now define. For each t, we model the crowdworker estimates (x_{t,i} : i = 1, 2, ...) as an exchangeable stochastic process. We assume that the limit exists almost surely and the consensus is given by
y_t = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} x_{t,i}.     (1)
We define this way because we believe that such a consensus estimate is often useful – this is motivated by work on the wisdom of crowds (Larrick and Soll, 2006). Another benefit of such a definition is that one can assess performance using observable estimates from out-of-sample crowdworkers without relying on subjective beliefs about the relationship between estimates and outcomes. The notion of assessment based on out-of-sample crowdworker estimates can be used for hyperparameter tuning, as explained in Appendix D.5.
We take to be the first components . Hence, each of the crowdworkers can be thought of as sampled uniformly and independently from the infinite population. The center’s objective is to produce an accurate estimate of the consensus based on estimates supplied by the finite sample of crowdworkers.

Estimates depend on unknown characteristics of the crowdworkers, such as their skills and relationships. By de Finetti’s Theorem, exchangeability across implies existence of a random variable such that, conditioned on , is iid. Intuitively, this latent random variable encodes all relevant information about the crowdworkers and their relative behavior. While multiple latent variables can serve this purpose, we take to be one that is minimal. By this we mean that determines the distribution and vice versa.
Before producing a group estimate , the center observes past crowdworker estimates as well as current estimates . The manner in which the center produces its estimate can be expressed in terms of a policy, which is a function that identifies a group estimate based on the observed history of crowdworker estimates. When particulars about the policy under discussion matter or when we consider alternatives, we express the dependence of group estimates on the policy using superscripts. In particular, . For each , our objective is to minimize the expected loss for some loss function . With real-valued group estimates, the squared error offers a natural loss function.
3 Crowdsourcing Based on Self-Supervised Learning
This section formally introduces our approach, predict-each-worker. We will do so in three steps. First, we present the algorithm in an abstract form, which is not efficiently implementable but applies across real-valued estimate distributions. Then, we describe an implementable instance specialized to a Gaussian data-generating process. Finally, we explain how our approach extends and scales to complex settings.
3.1 The General Approach
Our approach first learns via self-supervised learning (SSL) to predict each crowdworker’s estimate based on the estimates of all other crowdworkers. The resulting models are then used to produce weights for aggregating crowdworker estimates. We now describe each of these steps in greater detail.
Self-supervised learning
In the self-supervised learning (SSL) step, we aim to learn patterns exhibited by crowdworkers. This is accomplished by learning models, each of which predicts the estimate of a crowdworker given estimates supplied by all other crowdworkers. Let denote the th model learned from . Given , this model generates a predictive distribution of . The intention is for these predictive distributions to accurately estimate a gold standard: the clairvoyant conditional probability .
Since each model produces a distribution over one crowdworker's estimate given the estimates of all other crowdworkers, it clearly depends on patterns among crowdworker estimates. However, it may not immediately be clear that the set of models ought to encode all observable patterns. It turns out that this is indeed the case under weak technical conditions. The following term will be helpful in expressing these conditions.
Definition 3.1 (product-form support).
A probability density function p over R^n has product-form support if p(x) > 0 if and only if p_i(x_i) > 0 for each i, where p_i is the marginal probability density function of the ith component.
The following theorem establishes sufficiency.
Theorem 3.1 (sufficiency of SSL).
If is absolutely continuous with respect to the Lebesgue measure and the corresponding density has product-form support, then determines .
While we defer the proof for this theorem to Appendix A, we now provide some intuition for why it should be true. It is well-known that Gibbs sampling can be used to sample from a joint distribution based only on its “leave-one-out” conditionals. Hence, one could use Gibbs sampling to sample from – the joint distribution – using the corresponding “leave-one-out” conditionals . This implies that the set determines . Because is minimal, in the sense that determines , determines .
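To make this intuition concrete, the following numpy sketch (an illustration, not part of the paper's algorithm) reconstructs the covariance matrix of a three-dimensional Gaussian using only its leave-one-out conditionals, via Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# A joint distribution we pretend we can only access through its
# leave-one-out conditionals.
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
n = Sigma.shape[0]
P = np.linalg.inv(Sigma)  # precision matrix

def sample_conditional(x, i):
    """Sample x_i given x_{-i}, using only the leave-one-out conditional."""
    # For a zero-mean Gaussian with precision P:
    #   x_i | x_{-i} ~ N( -(1/P_ii) * sum_{j != i} P_ij x_j , 1/P_ii ).
    idx = [j for j in range(n) if j != i]
    mu = -(P[i, idx] @ x[idx]) / P[i, i]
    return rng.normal(mu, np.sqrt(1.0 / P[i, i]))

# Gibbs sampling: repeatedly resample each coordinate from its conditional.
x, samples = np.zeros(n), []
for sweep in range(20000):
    for i in range(n):
        x[i] = sample_conditional(x, i)
    if sweep >= 1000:  # discard burn-in
        samples.append(x.copy())

print(np.round(np.cov(np.array(samples).T), 2))  # approximately recovers Sigma
```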
Aggregation
To compute the aggregation weights, we first compute two quantities:
1. Expected Error. For each , we compute the expected error . The expected error tells us how predictable crowdworker is.
2. SSL Gradient. For each , we compute the mean . Then, we compute the gradient of with respect to . This gradient estimates how much each of the other crowdworkers contributes to the predictability of crowdworker .
Finally, we compute the aggregation weights as follows. For each ,
(2)
where is a hyperparameter that indicates the prior variance of true outcomes. If the variance is not known, this hyperparameter can be tuned based on observed data. We further discuss in Appendix D.5 how to tune . In Section 3.2, we establish that this algorithm is asymptotically optimal for a Gaussian data-generating process.
Two principles may help in interpreting the aggregation formula:
1. All else equal, estimate should be assigned greater weight if it is more predictable. Consider the SSL gradient and error of estimate . Assume we assign an optimal aggregation weight to this estimate. Fixing the gradient, if the error were to decrease, this aggregation weight ought to increase because skilled crowdworkers tend to be more predictable. Equivalently, error-prone crowdworkers tend to be less predictable.
2. All else equal, estimate should be assigned less weight if it is more sensitive to estimate . Consider the SSL gradient and error of estimate . Assume we assign an optimal aggregation weight to this estimate. An increase in the th component of the gradient indicates increased sensitivity of to . This implies an increase in either the covariance with or skill of . In either case, this ought to reduce the aggregation weight.
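As a concrete illustration, the sketch below computes aggregation weights from the two quantities above. The specific functional form (prior outcome variance times one minus the summed gradient, divided by the expected error) is an assumption chosen to match the linear special case derived in Section 3.2; Equation (2) gives the exact formula.

```python
import numpy as np

def aggregation_weights(ssl_errors, ssl_gradients, prior_variance):
    """Hypothetical aggregation rule, consistent with the linear case in Section 3.2.

    ssl_errors:     length-n array; entry i is the expected squared error of model i.
    ssl_gradients:  n x (n-1) array; row i is the gradient of model i's mean prediction
                    with respect to the other crowdworkers' estimates.
    prior_variance: scalar hyperparameter (prior variance of the true outcome).
    """
    ssl_errors = np.asarray(ssl_errors, dtype=float)
    ssl_gradients = np.asarray(ssl_gradients, dtype=float)
    # Weight i grows as crowdworker i becomes more predictable (small error) and
    # shrinks as model i becomes more sensitive to the other estimates (large gradient).
    return prior_variance * (1.0 - ssl_gradients.sum(axis=1)) / ssl_errors

def group_estimate(estimates, weights):
    """Weighted sum of the current crowdworker estimates."""
    return float(np.dot(weights, estimates))
```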
3.2 A Concrete Algorithm for Gaussian Data-Generating Processes
In this section, we specialize our algorithm to a Gaussian data-generating process and establish that the algorithm is asymptotically optimal. Recall that represents a minimal expression of all useful information the center can glean from observing crowdworker estimates . Let denote the vector of crowdworker errors. By Gaussian data-generating process, we mean that is iid Gaussian, is independent from , and, conditioned on , is iid zero-mean Gaussian. To interpret this data-generating process, it can be useful to imagine first sampling , then for each th crowdworker, adding an independent perturbation to arrive at an estimate . To simplify our analysis, we will further assume that the covariance matrix is positive definite. Our methods and analysis can be extended to relax this assumption.
For any Gaussian process, the distribution of any specific variable conditioned on the others is Gaussian, with expectation linear in the other variables. It follows that, for our Gaussian data-generating process, conditioned on the latent variable, the dependence of any ith estimate on the others is perfectly expressed by a linear model with Gaussian noise. In particular, for each i, the latent variable determines coefficients \beta_i and a noise variance \sigma_i^2 such that
x_{t,i} = \beta_i^\top x_{t,-i} + \epsilon_{t,i},     (3)
with \epsilon_{t,i} \sim N(0, \sigma_i^2), where x_{t,-i} denotes the estimates of all crowdworkers other than i. This linear dependence motivates the use of linear SSL models in implementing our approach to crowdsourcing. With such models, the center would produce, for each crowdworker i and round t, estimates of \beta_i and \sigma_i^2.
Aggregation with Linear SSL Models
With linear SSL models, our general aggregation formula (2) simplifies. This is because the gradient with respect to is simply . Consequently, the aggregation weights satisfy
(4)
where, as defined in Section 3.1, . We explained intuition in Section 3.1 to motivate our general aggregation formula. For the special case of a Gaussian data-generating process, it is easy to corroborate this intuition with a formal mathematical result, which we now present.
No policy can produce a better estimate than , which benefits from knowledge of . In the Gaussian case, it is easy to show this estimate is attained by the weight vector
(5)
where is the covariance matrix of conditioned on . The following result, which is proved in Appendix B, offers an alternative characterization of that shares the form of Equation 4.
Theorem 3.2.
For the Gaussian data-generating process, for each ,
From this result, we see that if the linear SSL model parameters converge to then the aggregation weights converge to .
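The following numpy sketch verifies this relationship numerically for a randomly generated noise covariance matrix. The notation (lam for the prior variance of the outcome, Sigma_z for the noise covariance) is ours, and the weight formula, lam times one minus the summed coefficients divided by the residual variance, reflects our reading of Theorem 3.2.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 5, 1.0  # number of crowdworkers; prior variance of the outcome

# A positive-definite noise covariance (covariance of the errors z).
A = rng.normal(size=(n, n))
Sigma_z = A @ A.T + 0.5 * np.eye(n)

# Covariance of the estimates x = y * 1 + z, conditioned on the latent variable.
Sigma_x = lam * np.ones((n, n)) + Sigma_z
ones = np.ones(n)

# Bayes-optimal weights: E[y | x] = w_star @ x in this zero-mean Gaussian model.
w_star = lam * np.linalg.solve(Sigma_x, ones)

# The same weights from the clairvoyant SSL parameters: for each i, regress x_i
# on x_{-i} under Sigma_x to get coefficients beta_i and a residual variance.
w_from_ssl = np.zeros(n)
for i in range(n):
    idx = [j for j in range(n) if j != i]
    beta_i = np.linalg.solve(Sigma_x[np.ix_(idx, idx)], Sigma_x[idx, i])
    var_i = Sigma_x[i, i] - Sigma_x[i, idx] @ beta_i
    w_from_ssl[i] = lam * (1.0 - beta_i.sum()) / var_i

print(np.allclose(w_star, w_from_ssl))  # prints True
```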
Fitting Linear SSL Models
The center can generate the estimates of coefficients and variances via Bayesian linear regression (BLR). This is accomplished by minimizing the loss function
(6)
to obtain an estimate . The regularization term can be interpreted as inducing a prior, which is specified by four hyperparameters: , , , . The term is the negative log density of a Gaussian random variable with mean and covariance matrix , while is the negative log density of an inverse gamma distribution with shape parameter and scale parameter . Note that each of the models shares the same hyperparameters. This means that, before the center has observed any data, each crowdworker will receive equal aggregation weight.
While BLR produces the posterior distribution over weights for each of the models, it ignores interdependencies across models. In the asymptotic regime of large , this is inconsequential because the posterior concentrates. However, when is small, this imperfection induces error. To ameliorate this error, we introduce a form of aggregation weight regularization. Let be a vector of equal weights produced by our algorithm if applied before learning from any data. For any , we generate aggregation weights by taking a convex combination , where is the vector of unregularized aggregation weights. The parameter decays with . Specifically, , where is a hyperparameter that governs the rate of decay. All steps we have described to produce an estimate are presented in Algorithm 1.
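The sketch below assembles these pieces into a simplified version of Algorithm 1. It replaces the full Bayesian linear regression with a ridge-style point estimate and a smoothed variance estimate, uses uniform weights as the no-data baseline, and assumes a particular decay schedule for the mixing coefficient; these are simplifications on our part rather than the paper's exact specification.

```python
import numpy as np

def fit_linear_ssl(X, ridge=1.0, alpha0=2.0, gamma0=1.0):
    """Crude stand-in for the Bayesian linear regression step.

    X: T x n array of past crowdworker estimates.
    Returns per-worker coefficients (n x (n-1)) and residual variances (n,).
    """
    T, n = X.shape
    betas, variances = np.zeros((n, n - 1)), np.zeros(n)
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        A, b = X[:, idx], X[:, i]
        # Ridge (Gaussian-prior, MAP-style) estimate of the coefficients.
        beta = np.linalg.solve(A.T @ A + ridge * np.eye(n - 1), A.T @ b)
        resid = b - A @ beta
        # Inverse-gamma-smoothed estimate of the residual variance.
        variances[i] = (gamma0 + 0.5 * resid @ resid) / (alpha0 + 0.5 * T)
        betas[i] = beta
    return betas, variances

def predict_each_worker(X, x_now, prior_variance=1.0, c=1.0):
    """Group estimate for the current round from a history X of past estimates."""
    T, n = X.shape
    betas, variances = fit_linear_ssl(X)
    w = prior_variance * (1.0 - betas.sum(axis=1)) / variances
    w_uniform = np.full(n, 1.0 / n)  # equal weights, standing in for the no-data weights
    mix = c / (c + T)                # assumed decay schedule for the mixing coefficient
    w_reg = mix * w_uniform + (1.0 - mix) * w
    return float(w_reg @ x_now)
```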
The following result formalizes our claim of asymptotic optimality.
Theorem 3.3.
If , , and , then, for any ,
3.3 Extensions to Complex Settings using Neural Networks
Before conducting an empirical study with the Gaussian data-generating process, we first discuss how our general approach extends beyond the Gaussian data-generating process. In particular, with neural network SSL models, our approach can capture complex characteristics of and relationships among crowdworkers. Moreover, with neural network models, our approach extends to accommodate scenarios with missing estimates or where additional context is provided, for example, a language model prompt or crowdworker profile.
Concretely, this process involves training neural networks, where the th network predicts the th crowdworker using others’ estimates. Additional information, such as a context vector, can just be appended to the neural network’s input. Training can be done via standard supervised learning algorithms, such as stochastic gradient descent (SGD).
To reduce computational demands, rather than training separate models, we can train one neural network with so-called output heads. The th head would output an estimate of the mean and variance of the th crowdworker. The input to the neural network is the vector of estimates with the th crowdworker masked out. When training, we would use SGD and randomly choose which crowdworker to mask at each SGD step. This training procedure, in which we predict a masked-out component, is prototypical of self-supervised learning.
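One possible PyTorch realization of this masked, multi-head network is sketched below; the architecture, loss, and hyperparameters are illustrative choices rather than a specification from the paper.

```python
import torch
import torch.nn as nn

class MaskedSSLNet(nn.Module):
    """A single network with n output heads; head i predicts crowdworker i's estimate."""

    def __init__(self, n_workers, hidden=64):
        super().__init__()
        # Input: the n estimates (with the target worker zeroed out) plus an n-dim mask.
        self.body = nn.Sequential(
            nn.Linear(2 * n_workers, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_heads = nn.Linear(hidden, n_workers)
        self.logvar_heads = nn.Linear(hidden, n_workers)

    def forward(self, x, mask):
        h = self.body(torch.cat([x * (1 - mask), mask], dim=-1))
        return self.mean_heads(h), self.logvar_heads(h)

def train_step(model, optimizer, batch):
    """One SGD step: mask a random crowdworker per example and predict that estimate."""
    b, n = batch.shape
    target = torch.randint(n, (b,))
    mask = torch.zeros(b, n).scatter_(1, target.unsqueeze(1), 1.0)
    mean, logvar = model(batch, mask)
    m = mean.gather(1, target.unsqueeze(1)).squeeze(1)
    lv = logvar.gather(1, target.unsqueeze(1)).squeeze(1)
    x_i = batch.gather(1, target.unsqueeze(1)).squeeze(1)
    # Gaussian negative log-likelihood of the masked-out estimate.
    loss = (0.5 * lv + 0.5 * (x_i - m) ** 2 / lv.exp()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training then amounts to repeatedly calling train_step with minibatches of past estimate vectors, using an optimizer such as torch.optim.Adam(model.parameters(), lr=1e-3).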
Incorporating Context
In practice, crowdworkers respond to specific inputs or contexts, such as images or text from a language model. To extend to such settings, we can transform the context into a vector. Such a transformation is naturally available for images. With text, we can use embedding vectors, which can be generated using established techniques (Kenter and De Rijke, 2015; Mars, 2022). By appending such context vectors to crowdworker estimates, the SSL step can learn context-specific patterns. For example, if we learn that certain crowdworkers are more skilled for particular types of inputs, it should be possible to produce better group estimates.
Including Crowdworker Metadata
Metadata about crowdworkers, like their educational background or expertise, can be used to improve group estimates. For example, if a crowdworker’s metadata takes the form of a feature vector, our approach can leverage this data. By concatenating crowdworker estimates with their feature vectors, the neural network can then account for their distinguishing characteristics and not merely the estimates that they provide. This can prove particularly beneficial when managing a transient crowdworker pool, as it allows us to generalize quickly to previously unseen crowdworkers.
Addressing Missing Estimates
With a sizable pool of crowdworkers, each task might only be assigned to a small subset. As such, each crowdworker may or may not supply an estimate for any given outcome. To learn from incomplete data gathered in this mode, we can use a neural network architecture where the input associated with each crowdworker indicates whether the crowdworker has supplied an estimate, and if so, what that estimate is. Through this approach, the neural network can discern intricate patterns and interdependencies among the crowdworkers, such as when the significance of one's input may be contingent on the absence of another due to overlapping expertise.
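These three extensions amount to enriching the network's input. The sketch below shows one way such an input vector might be assembled; the encoding choices, such as zeroing out missing estimates and appending response indicators, are ours.

```python
import numpy as np

def build_input(estimates, observed, context_vec, worker_features):
    """Assemble one input vector for the SSL network.

    estimates:       length-n array (arbitrary values where not observed).
    observed:        length-n 0/1 array indicating who supplied an estimate.
    context_vec:     embedding of the task context (e.g., an image or a prompt).
    worker_features: n x d array of crowdworker metadata.
    """
    per_worker = np.concatenate(
        [estimates[:, None] * observed[:, None],  # zero out missing estimates
         observed[:, None],                       # "did worker i respond?" flags
         worker_features],
        axis=1).ravel()
    return np.concatenate([per_worker, context_vec])
```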
Aggregating the Predictions
The general aggregation formula presented in Section 3.1 applies to neural network models. For each crowdworker , we first use the neural network to estimate the mean and variance of their prediction. Then, we compute the gradient of the mean with respect to . Using the estimated variance and the gradient, (2) produces aggregation weights.
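Continuing the PyTorch sketch from earlier in this section, these quantities can be obtained with automatic differentiation; as before, the specific weight formula is our assumed reading of Equation (2).

```python
import torch

def nn_aggregation_weights(model, x_now, prior_variance=1.0):
    """Aggregation weights from a trained MaskedSSLNet (assumed weight formula)."""
    n = x_now.shape[0]
    weights = torch.zeros(n)
    for i in range(n):
        x = x_now.detach().clone().requires_grad_(True)
        mask = torch.zeros(n)
        mask[i] = 1.0
        mean, logvar = model(x.unsqueeze(0), mask.unsqueeze(0))
        # Gradient of crowdworker i's predicted mean with respect to the estimates.
        grad = torch.autograd.grad(mean[0, i], x)[0]
        grad_sum = grad.sum() - grad[i]  # the masked coordinate contributes zero anyway
        w_i = prior_variance * (1.0 - grad_sum) / logvar[0, i].exp()
        weights[i] = w_i.detach()
    return weights
```

The group estimate is then the inner product of these weights with the current estimates, optionally after mixing with uniform weights as in Section 3.2.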
We primarily designed our approach for problem settings where outcomes are not observed. However, this approach can be extended to settings where some or all outcomes are observed. Please see Appendix E for a brief discussion.
4 A Simulation Study
To understand how our algorithm compares to benchmarks, we perform a simulation study. We use a Gaussian data-generating process and three benchmarks: averaging, an EM-based policy, and the clairvoyant policy. The section first introduces the data-generating process, followed by the three policies, and finally presents the simulation results.
4.1 A Gaussian Data-Generating Process
In a prototypical crowdsourcing application, some crowdworkers may be more skilled than others. Further, because the crowdworkers may share information sources, their errors may end up being correlated. We consider a model that is simple yet captures these characteristics. For each crowdworker , this model generates a sequence of estimates according to
where , and is additive noise that is independent from . The additive noise is generated as follows:
where is an iid sequence of standard Gaussian vectors, and modulates the influence of on . It may be helpful to think of each component as a random factor and the coefficient as a factor loading.
For each crowdworker and factor index , we sample . Each is sampled independently from , the factor , and any other coefficient , where .
We show in Appendix D.1 that this data-generating process satisfies the three assumptions of our general problem formulation, as presented in Section 2. In particular, is exchangeable, and for each , is exchangeable, with a sample mean that converges almost surely. It is easy to verify that the sample mean is almost surely equal to .
Recall that we denote by a minimal random variable conditioned on which is iid. For any , conditioned on , and are distributed Gaussian. It is helpful to consider the covariance matrix of conditioned on , which we denote by . Each diagonal element of expresses the reciprocal of a crowdworker’s skill. Off-diagonal elements indicate how noise covaries across crowdworkers.
At this point, this data-generating process can feel abstract and non-intuitive. In Section 4.2, after we define a few benchmarks, we will provide some intuition about our data-generating process using these benchmarks.
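As a complement, the following numpy sketch shows one concrete way to simulate such a factor-model process. The distribution of the factor loadings (i.i.d. Gamma draws) and the small idiosyncratic noise term, included here so that the noise covariance is positive definite, are assumptions for illustration; the paper's exact specification may differ.

```python
import numpy as np

def sample_dgp(T, n_workers, n_factors=3, concentration=1.0, sigma_y=1.0, seed=0):
    """Sample T rounds of estimates from a factor-model data-generating process.

    The loading distribution below (i.i.d. Gamma) and the idiosyncratic noise
    level are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    # Factor loadings: one row per crowdworker, one column per factor.
    B = rng.gamma(shape=concentration, scale=1.0 / n_factors,
                  size=(n_workers, n_factors))
    idio = 0.1                            # keeps the noise covariance positive definite
    y = rng.normal(0.0, sigma_y, size=T)  # latent outcomes
    F = rng.normal(size=(T, n_factors))   # shared random factors
    Z = F @ B.T + idio * rng.normal(size=(T, n_workers))  # correlated noise
    X = y[:, None] + Z                    # crowdworker estimates
    Sigma_z = B @ B.T + idio ** 2 * np.eye(n_workers)     # noise covariance given theta
    return X, y, Sigma_z
```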
4.2 Benchmarks
We compare our method against three other methods: averaging, a clairvoyant policy, and an EM-based policy.
Averaging
Our first benchmark averages across crowdworker estimates to produce a group estimate , where . For the Gaussian data-generating process, group estimates produced in this manner converge to almost surely as grows.
Clairvoyant Policy
The clairvoyant policy offers an upper bound on the performance. This policy has access to , which encapsulates all useful information that can be garnered from any number of observations . Equivalently, the policy has access to , and it produces a group estimate . For the Gaussian data-generating process, this means having access to noise covariance matrix introduced earlier. In particular, the group estimate can be written as . This attains mean-squared error
This level of error is lower than what is achievable by any implementable policy, which would not have the privileged access to that is granted to the clairvoyant policy. Yet this represents a useful target in policy design as this level of error is approachable as grows.
Having defined averaging and clairvoyant policies, we will now provide some perspective on the benefits of clairvoyance or, equivalently, learning from many interactions with the same crowdworkers. In Figure 2, we compare these two policies and a third, which we refer to as only-skills-clairvoyant. As the name suggests, in order to generate aggregation weights, only-skills-clairvoyant uses diagonal elements (inverse-skill) of but not the off-diagonal elements (error covariances). The clairvoyant policy offers the greatest benefit over averaging, though only-skills-clairvoyant also offers some benefit. For example, to match the performance attained by averaging over one hundred crowdworkers, the clairvoyant policy requires about twenty crowdworkers, while only-skills-clairvoyant needs about seventy. Moreover, the concavity of the plot indicates that the percentage reduction afforded by clairvoyance in the number of crowdworkers required grows with the desired level of performance. Hence, for a data-generating process like ours, it is beneficial to learn and leverage patterns among crowdworker estimates.

EM-Based Policy
We now discuss a policy based on the EM framework. At a high level, EM iterates between estimating the covariance matrix and estimating past outcomes . Upon termination of this iterative procedure, the estimate of is used to produce a group estimate . Here, indicates that the expectation is evaluated as though were the realized value of .
Each iteration of the EM algorithm proceeds in two steps. In the first step, the algorithm computes values and , which represent estimates of and , respectively. These estimates are generated as though the estimated covariance matrix , from the previous iteration, is ; in particular, and . The initial estimate of is taken to be , which is a hyperparameter that can be interpreted as the prior mean for .
In the second step, estimates and , from the previous step, are used to compute an approximation of the log posterior of . This approximation is known as the evidence lower bound (ELBO). The goal is to find the matrix that maximizes the ELBO. For the Gaussian data-generating process, in particular, the ELBO is
where is the Kullback-Leibler divergence between distributions and . The scalar is a hyperparameter that expresses the concentration of the prior distribution of . It is easy to verify that the maximizer of admits a closed-form solution, and this maximizer is stated in step 11 of Algorithm 2.
The iterative EM process continues until either of two stopping conditions is satisfied. The first stopping condition is triggered when consecutive estimates and are sufficiently similar, in the sense that . This is a natural stopping condition, as it represents a measure of near-convergence. The second stopping condition places a cap on the number of iterations. This prevents the algorithm from looping indefinitely. The EM steps are presented concisely in Algorithm 2.
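The sketch below gives a schematic version of these EM steps. The E-step follows from standard Gaussian conditioning; the M-step update assumes an inverse-Wishart-style regularizer with prior mean Sigma_prior and concentration kappa, and may differ in detail from the closed-form maximizer in Algorithm 2.

```python
import numpy as np

def em_noise_covariance(X, Sigma_prior, kappa=10.0, lam=1.0, tol=1e-6, max_iters=200):
    """Schematic EM for the noise covariance; see Algorithm 2 for the exact steps.

    X: T x n past crowdworker estimates. Sigma_prior: prior mean for the covariance.
    kappa: prior concentration. lam: prior variance of the true outcomes.
    """
    T, n = X.shape
    ones = np.ones(n)
    Sigma = Sigma_prior.copy()
    for _ in range(max_iters):
        # E-step: posterior mean and variance of each past outcome, treating the
        # current Sigma as the true noise covariance.
        Sigma_x = lam * np.outer(ones, ones) + Sigma
        k = lam * np.linalg.solve(Sigma_x, ones)  # weights of the posterior mean
        mu = X @ k                                # posterior means of past outcomes
        v = lam * (1.0 - k.sum())                 # posterior variance (same for all rounds)
        # M-step: regularized update of the noise covariance (assumed form).
        R = X - np.outer(mu, ones)                # residuals x_t - mu_t * 1
        S = R.T @ R + T * v * np.outer(ones, ones)
        Sigma_next = (kappa * Sigma_prior + S) / (kappa + T)
        if np.linalg.norm(Sigma_next - Sigma) <= tol:
            return Sigma_next
        Sigma = Sigma_next
    return Sigma
```

Given the final estimate, the group estimate for the current round is the posterior mean of the outcome under that estimate, computed with the same weight vector k as in the E-step.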
Selecting EM Hyperparameters
In our experiments, we set and . For each value of and , we perform a grid search to choose the values of and . For each value in the grid search, we run the EM algorithm over multiple seeds, where each seed corresponds to one training set. We choose the values of and that result in the lowest estimated MSE (estimation procedure discussed in Appendix D.3). We let be a matrix with diagonal elements and correlation . We vary and . We vary .
4.3 Performance Comparisons
We now compare our method to the three benchmarks described above. We do so using the Gaussian data-generating process, with factors and factor concentration parameter . This choice of parameters results in expected noise variance . For each policy, we use mean squared error (MSE) as the evaluation metric.
To study how robust our algorithm is to the number of crowdworkers and the dataset size, we evaluate performance for crowdworkers and multiple dataset sizes . We plot results in Figure 3.
We find that predict-each-worker, our algorithm, outperforms averaging for all and . Moreover, our algorithm performs as well as the clairvoyant and EM policies for large . This corroborates Theorem 3.3. For smaller values of , EM offers some advantage, though in the extreme case of , our algorithm is again competitive with EM. The details of how the hyperparameters of predict-each-worker are tuned can be found in Appendix D.2.
Our results demonstrate that, like EM, predict-each-worker performs as well as possible as the dataset grows. Another important feature, as discussed in Section 3.3, is that predict-each-worker can learn from complex patterns using neural network models. Taken together, these two features make predict-each-worker an attractive choice relative to EM.



5 Literature Review
Our literature review is structured around two main categories of prior work. Initially, we examine research directly related to our problem setting, where true outcomes are not observed, and a center interacts with the same crowdworkers multiple times. We aim to succinctly outline the range of algorithms developed for this setting and delineate how these differ from our proposed method, predict-each-worker. For a more comprehensive treatment of this problem setting, see the survey papers by Zhang et al. (2016a); Sheng and Zhang (2019); Zhang et al. (2022). Following this, we address research pertaining to different but related settings, such as those where true outcomes could be observed or interaction with crowdworkers is limited to a single round. This part of the review highlights that our assumptions do not hold for all problem settings of interest and recognizes methods more suited to these alternative settings. Such a comparison is useful, as it informs the reader about the boundaries of applicability of our method and aids in assessing its suitability for different application contexts.
Our problem setting. One of the most popular approaches for this problem setting is to posit a probabilistic model that captures crowdworker patterns and then approximate the maximum a posteriori (MAP) estimates of the parameters that characterize this probabilistic model. The EM algorithm is one way to approximate these MAP estimates and has been applied to a variety of problems. Li et al. (2014), Kara et al. (2015), and Cai et al. (2020) use EM to learn a noise covariance matrix for the case where crowdworker estimates are continuous whereas Dawid and Skene (1979) and Zhang et al. (2016b) use EM to learn a so-called confusion matrix for the case where crowdworker estimates are categorical. Other algorithms to approximate the MAP estimates have been considered in (Moreno et al., 2015; Bach et al., 2017; Li et al., 2019). These algorithms estimate correlations when crowdworker estimates are categorical. Some other works assume that linear models sufficiently capture patterns between crowdworkers conditioned on relevant contexts like images or text (Raykar et al., 2010; Rodrigues and Pereira, 2018; Li et al., 2021; Chen et al., 2021). The algorithms proposed in these works can be interpreted as estimating the MAP estimates of the parameters of crowdworker models conditioned on the context. However, unlike our approach, all the algorithms discussed here are either difficult to extend or become computationally onerous if representing complex patterns requires flexible machine learning models like neural networks.
Cheng et al. (2022) consider fitting parameters of a stylized model that characterizes skills and dependencies among crowdworkers and then using the model to improve group estimates. They demonstrate that resulting group estimates can turn out to be worse than averaging if the model is misspecified. This casts doubt on the practicality of approaches that aim to improve on averaging. Predict-each-worker overcomes this limitation by enabling the use of neural networks, which behave similarly to nonparametric models and thus do not suffer from misspecification to the extent that stylized models do.
Other problem settings. Some prior works consider a problem setting where some or all the outcomes could be observed. Tang and Lease (2011), Atarashi et al. (2018), and Gaunt et al. (2016) develop so-called semi-supervised techniques to improve aggregation for machine learning problems. Another line of work (Budescu and Chen, 2015; Merkle et al., 2017; Satopää et al., 2021) focuses on generating improved forecasts of consequential future events. By having the ability to observe some or all true outcomes, these works propose algorithms that make fewer restrictive assumptions about how crowdworkers relate. Hence, for settings where ground truth information is available for a large number of crowdsourcing tasks, methods in these prior works could be more appropriate, but if ground truth information is sparse, then our approach predict-each-worker could be more helpful.
Now, we discuss a setting where the center only gets to interact with crowdworkers for one round. To gauge the skill level in this setting, prior works (Prelec et al., 2017; Palley and Soll, 2019; Palley and Satopää, 2023) let each crowdworker not only guess the true quantity but also guess what the mean of the other crowdworkers’ estimates will be. The intuition is that crowdworkers who can accurately predict how others will respond will tend to be more skilled. In problem settings where each crowdsourcing task is very different from the others, using methods in the works above might be more beneficial, but if crowdsourcing tasks are similar and allow generalization, then our method could perform better.
While we treat the mechanism for seeking crowdworker estimates as given, there are several works that try to improve the method for collecting crowdworker estimates. We believe that the suggestions given in these works should be used when applying our method. There are three bodies of work that we discuss. First, works by Da and Huang (2020) and Frey and Van de Rijt (2021) suggest that crowdworkers should not get access to estimates of others before they generate their own estimates. These works, using theoretical models and experiments, show that if this requirement is not met, errors made by crowdworkers can covary a lot and may not average out to zero, even with many crowdworkers. As our work relies on the usefulness of averaging, it may be necessary to blind crowdworkers to each other’s estimates.
Second, most crowdworkers get paid. Accordingly, some studies examine how monetary incentives can raise the quality of crowdworkers' estimates. Mason and Watts (2009) observed that increasing pay did not improve the quality of work but only increased the number of tasks completed. Later, Ho et al. (2015) pointed out that this observation usually holds when tasks are easy but not when tasks are "effort responsive." Their experimental study verified this claim.
6 Concluding Remarks
In this work, we proposed a novel approach to crowdsourcing that we call predict-each-worker. We showed that it performs better than sample averaging, matches the performance of EM with a large dataset, and is asymptotically optimal for a Gaussian data-generating process. Additionally, we discussed how our algorithm can accommodate complex patterns between crowdworkers using neural networks. Taken together, our algorithm’s performance and flexibility make it an attractive alternative to EM and other prior work.
Our work motivates many new directions for future work. First, our aggregation rule is limited to cases where the estimates take continuous values. We leave it for future work to develop simple aggregation rules that can be applied when the estimates are categorical, thereby making our approach directly applicable to many machine-learning problems like supervised learning (Zhang et al., 2016a) and fine-tuning language models (Ouyang et al., 2022). It would also be interesting to extend our approach to handle cases where estimates take complex forms, like in the single-question crowdsourcing problem (Prelec et al., 2017) or when they take forms like text or bounding boxes (Braylan and Lease, 2020). Second, for simplicity and clarity, we restricted our focus to simulated data. We leave it for future work to conduct a more thorough experimental evaluation using real-world datasets.
Acknowledgements
This work was partly inspired by conversations with Morteza Ibrahimi and John Maggs. The paper also benefited from discussions with Saurabh Kumar and Wanqiao Xu. We gratefully acknowledge the Robert Bosch Stanford Graduate Fellowship, the NSF GRFP Fellowship, and financial support from the Army Research Office through grant W911NF2010055 and SCBX through the Stanford Institute for Human-Centered Artificial Intelligence.
References
- Anthropic (2023) Anthropic. Introducing claude. https://www.anthropic.com/index/introducing-claude, 2023.
- Atarashi et al. (2018) Atarashi, K., Oyama, S., and Kurihara, M. Semi-supervised learning from crowds using deep generative models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
- Bach et al. (2017) Bach, S. H., He, B., Ratner, A., and Ré, C. Learning the structure of generative models without labeled data. In International Conference on Machine Learning, pages 273–282. PMLR, 2017.
- Boussioux et al. (2023) Boussioux, L., Lane, J. N., Zhang, M., Jacimovic, V., and Lakhani, K. R. The Crowdless Future? How Generative AI is Shaping the Future of Human Crowdsourcing. Technical Report 24-005, Harvard Business School Technology & Operations Management Unit, 2023. URL https://ssrn.com/abstract=4533642.
- Braylan and Lease (2020) Braylan, A. and Lease, M. Modeling and aggregation of complex annotations via annotation distances. In Proceedings of The Web Conference 2020, pages 1807–1818, 2020.
- Budescu and Chen (2015) Budescu, D. V. and Chen, E. Identifying expertise to extract the wisdom of crowds. Management science, 61(2):267–280, 2015.
- Cai et al. (2020) Cai, D., Nguyen, D. T., Lim, S. H., and Wynter, L. Variational bayesian inference for crowdsourcing predictions. In 2020 59th IEEE Conference on Decision and Control (CDC), pages 3166–3172. IEEE, 2020.
- Chen et al. (2021) Chen, Z., Wang, H., Sun, H., Chen, P., Han, T., Liu, X., and Yang, J. Structured probabilistic end-to-end learning from crowds. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pages 1512–1518, 2021.
- Cheng et al. (2022) Cheng, C., Asi, H., and Duchi, J. How many labelers do you have? A closer look at gold-standard labels. arXiv preprint arXiv:2206.12041, 2022.
- Cosma and Evers (2010) Cosma, I. A. and Evers, L. Markov chains and Monte Carlo methods. African Institute for Mathematical Sciences, 2010.
- Da and Huang (2020) Da, Z. and Huang, X. Harnessing the wisdom of crowds. Management Science, 66(5):1847–1867, 2020.
- Dalkey and Helmer (1963) Dalkey, N. and Helmer, O. An experimental application of the Delphi method to the use of experts. Management science, 9(3):458–467, 1963.
- Dawid and Skene (1979) Dawid, A. P. and Skene, A. M. Maximum likelihood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20–28, 1979.
- de Meyrick (2003) de Meyrick, J. The Delphi method and health research. Health education, 103(1):7–16, 2003.
- Draws et al. (2021) Draws, T., Rieger, A., Inel, O., Gadiraju, U., and Tintarev, N. A checklist to combat cognitive biases in crowdsourcing. In Proceedings of the AAAI conference on human computation and crowdsourcing, volume 9, pages 48–59, 2021.
- Eickhoff (2018) Eickhoff, C. Cognitive biases in crowdsourcing. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 162–170, 2018.
- Faltings et al. (2014) Faltings, B., Jurca, R., Pu, P., and Tran, B. D. Incentives to counter bias in human computation. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 2, pages 59–66, 2014.
- Frey and Van de Rijt (2021) Frey, V. and Van de Rijt, A. Social influence undermines the wisdom of the crowd in sequential decision making. Management science, 67(7):4273–4286, 2021.
- Gallier (2010) Gallier, J. H. Notes on the Schur complement. 2010.
- Gaunt et al. (2016) Gaunt, A., Borsa, D., and Bachrach, Y. Training deep neural nets to aggregate crowdsourced responses. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence. AUAI Press, volume 242251, 2016.
- Geiger et al. (2020) Geiger, R. S., Yu, K., Yang, Y., Dai, M., Qiu, J., Tang, R., and Huang, J. Garbage in, garbage out? Do machine learning application papers in social computing report where human-labeled training data comes from? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 325–336, 2020.
- Google (2023) Google. Bard, 2023. https://bard.google.com/, 2023.
- Ho et al. (2015) Ho, C.-J., Slivkins, A., Suri, S., and Vaughan, J. W. Incentivizing high quality crowdwork. In Proceedings of the 24th International Conference on World Wide Web, pages 419–429, 2015.
- Kara et al. (2015) Kara, Y. E., Genc, G., Aran, O., and Akarun, L. Modeling annotator behaviors for crowd labeling. Neurocomputing, 160:141–156, 2015.
- Kenter and De Rijke (2015) Kenter, T. and De Rijke, M. Short text similarity with word embeddings. In Proceedings of the 24th ACM international on conference on information and knowledge management, pages 1411–1420, 2015.
- Kwan (2014) Kwan, C. C. A regression-based interpretation of the inverse of the sample covariance matrix. Spreadsheets in Education, 7(1), 2014.
- Larrick and Soll (2006) Larrick, R. P. and Soll, J. B. Intuitions about combining opinions: Misappreciation of the averaging principle. Management science, 52(1):111–127, 2006.
- Li et al. (2014) Li, Q., Li, Y., Gao, J., Su, L., Zhao, B., Demirbas, M., Fan, W., and Han, J. A confidence-aware approach for truth discovery on long-tail data. Proceedings of the VLDB Endowment, 8(4):425–436, 2014.
- Li et al. (2021) Li, S.-Y., Huang, S.-J., and Chen, S. Crowdsourcing aggregation with deep Bayesian learning. Science China Information Sciences, 64:1–11, 2021.
- Li et al. (2019) Li, Y., Rubinstein, B., and Cohn, T. Exploiting worker correlation for label aggregation in crowdsourcing. In International conference on machine learning, pages 3886–3895. PMLR, 2019.
- Liu et al. (2021) Liu, X., Zhang, F., Hou, Z., Mian, L., Wang, Z., Zhang, J., and Tang, J. Self-supervised Learning: Generative or Contrastive. IEEE transactions on knowledge and data engineering, 35(1):857–876, 2021.
- Mars (2022) Mars, M. From word embeddings to pre-trained language models: A state-of-the-art walkthrough. Applied Sciences, 12(17):8805, 2022.
- Mason and Watts (2009) Mason, W. and Watts, D. J. Financial incentives and the ”performance of crowds”. In Proceedings of the ACM SIGKDD workshop on human computation, pages 77–85, 2009.
- Merkle et al. (2017) Merkle, E. C., Steyvers, M., Mellers, B., and Tetlock, P. E. A neglected dimension of good forecasting judgment: The questions we choose also matter. International Journal of Forecasting, 33(4):817–832, 2017.
- Moreno et al. (2015) Moreno, P. G., Artés-Rodríguez, A., Teh, Y. W., and Perez-Cruz, F. Bayesian nonparametric crowdsourcing. Journal of Machine Learning Research, 16(48):1607–1627, 2015.
- National Weather Service (2024) National Weather Service. Timeline of the national weather service. https://www.weather.gov/timeline, 2024. Accessed: 2024-01-28.
- Ollion et al. (2023) Ollion, E., Shen, R., Macanovic, A., and Chatelain, A. ChatGPT for Text Annotation? Mind the Hype! 2023.
- OpenAI (2023) OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2:13, 2023.
- Ouyang et al. (2022) Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
- Palley and Satopää (2023) Palley, A. B. and Satopää, V. A. Boosting the wisdom of crowds within a single judgment problem: Weighted averaging based on peer predictions. Management Science, 2023.
- Palley and Soll (2019) Palley, A. B. and Soll, J. B. Extracting the wisdom of crowds when information is shared. Management Science, 65(5):2291–2309, 2019.
- Prelec et al. (2017) Prelec, D., Seung, H. S., and McCoy, J. A solution to the single-question crowd wisdom problem. Nature, 541(7638):532–535, 2017.
- Raykar et al. (2010) Raykar, V. C., Yu, S., Zhao, L. H., Valadez, G. H., Florin, C., Bogoni, L., and Moy, L. Learning from crowds. Journal of machine learning research, 11(4), 2010.
- Rodrigues and Pereira (2018) Rodrigues, F. and Pereira, F. Deep learning from crowds. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
- Satopää et al. (2021) Satopää, V. A., Salikhov, M., Tetlock, P. E., and Mellers, B. Bias, information, noise: The bin model of forecasting. Management Science, 67(12):7599–7618, 2021.
- Sheng and Zhang (2019) Sheng, V. S. and Zhang, J. Machine learning with crowdsourcing: A brief summary of the past research and future directions. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 9837–9843, 2019.
- Skulmoski et al. (2007) Skulmoski, G. J., Hartman, F. T., and Krahn, J. The Delphi method for graduate research. Journal of Information Technology Education: Research, 6(1):1–21, 2007.
- Smithsonian Institution Archives (2024) Smithsonian Institution Archives. Meteorology: Joseph henry’s forecasts. https://siarchives.si.edu/history/featured-topics/henry/meteorology, 2024. Accessed: 2024-01-28.
- Tang and Lease (2011) Tang, W. and Lease, M. Semi-supervised consensus labeling for crowdsourcing. In SIGIR 2011 workshop on crowdsourcing for information retrieval (CIR), pages 1–6, 2011.
- Tetlock and Gardner (2016) Tetlock, P. E. and Gardner, D. Superforecasting: The art and science of prediction. Random House, 2016.
- Törnberg (2023) Törnberg, P. ChatGPT-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588, 2023.
- Touvron et al. (2023) Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
- Vaughan (2017) Vaughan, J. W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res., 18(1):7026–7071, 2017.
- Yuan (2010) Yuan, M. High dimensional inverse covariance matrix estimation via linear programming. The Journal of Machine Learning Research, 11:2261–2286, 2010.
- Zhang et al. (2022) Zhang, J., Hsieh, C.-Y., Yu, Y., Zhang, C., and Ratner, A. A survey on programmatic weak supervision. arXiv preprint arXiv:2202.05433, 2022.
- Zhang et al. (2016a) Zhang, J., Wu, X., and Sheng, V. S. Learning from crowdsourced labeled data: a survey. Artificial Intelligence Review, 46:543–576, 2016a.
- Zhang et al. (2016b) Zhang, Y., Chen, X., Zhou, D., and Jordan, M. I. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. The Journal of Machine Learning Research, 17(1):3537–3580, 2016b.
- Zheng et al. (2017) Zheng, Y., Li, G., Li, Y., Shan, C., and Cheng, R. Truth inference in crowdsourcing: Is the problem solved? Proceedings of the VLDB Endowment, 10(5):541–552, 2017.
- Zhu et al. (2023) Zhu, Y., Zhang, P., Haq, E.-U., Hui, P., and Tyson, G. Can ChatGPT reproduce human-generated labels? A study of social computing tasks. arXiv preprint arXiv:2304.10145, 2023.
Appendix A Proof of Theorem 3.1
To prove Theorem 3.1, we first state the Hammersley-Clifford Theorem, which says that, under minor technical conditions, the "leave-one-out" conditional distributions determine the joint distribution.
Lemma A.1 (Hammersley-Clifford Theorem).
Let be a random variable. Suppose its probability density function has product-form support. Then, for all and ,
The proof is given below. It is adapted from Cosma and Evers (2010).
Proof.
Here, and follow from Bayes' rule. The product-form support ensures that, for all , and are both well-defined and non-zero. ∎
Proof.
As is absolutely continuous and the corresponding density has product-form support, Lemma A.1 implies that determine . Moreover, as is minimal, in the sense that determines , we have that determines .
∎
Appendix B Proof of Theorem 3.2
The proof is based on the following lemma, a well-known result in the literature for covariance matrix estimation (Yuan, 2010; Kwan, 2014). It shows how a covariance matrix can be expressed in terms of coefficients and errors of certain linear regression problems.
Lemma B.1.
Consider a covariance matrix . Suppose . For each , let
Then,
We defer the proof to Appendix B.1.
Proof.
As for any , , we have
Next, by Lemma B.1,
Second, recall that . By substituting from above in , we get the result. ∎
B.1 Proof of Lemma B.1
To prove Lemma B.1, we state another lemma that allows us to write the inverse of a block matrix in terms of its blocks.
Lemma B.2.
Let be an invertible matrix. Suppose it has the following block form:
If is invertible, then
Please refer to pages 1 and 2 of (Gallier, 2010) on Schur complements for a proof of this lemma.
Proof.
To prove this lemma, we begin by writing as follows.
As is positive definite, it is easy to show that is unique. This unique value and the corresponding error can be expressed in terms of elements of . Specifically, we have
Then, we apply Lemma B.2 on matrix to get that the first row of is the following.
In other words, for ,
With some simple algebra, it is straightforward to extend this argument for other values of .
∎
Appendix C Proof of Theorem 3.3
To prove Theorem 3.3, we use two results. The first is Theorem 3.2, which offers a characterization of in terms of the clairvoyant SSL parameters. The second is a result showing that Bayesian linear regression is consistent for our problem.
The following lemma formalizes our claim that Bayesian linear regression converges for our problem.
Lemma C.1.
If , , and , then for each , and .
The proof is deferred to Appendix C.1. We assume and to make the proof simpler, but it is possible to prove a similar result by only assuming and .
Proof.
As , to prove the theorem, it suffices to prove that almost surely.
Before we prove this, we show that almost surely.
uses the definition of . is true because of Lemma C.1. is true because of Theorem 3.2.
Now, we show .
is true using the definition of . is true because the first limit exists surely and because the second limit exists almost surely. is true and because exists almost surely. uses consistency of .
∎
C.1 Proof of Lemma C.1
Proof.
Recall that is a minimizer of the following function, where the minimization happens over values .
where is an additive constant that does not depend on and .
We claim that the following pair is the unique minimizer of for all if and .
To establish this uniqueness, we find the Hessian matrix for at and show that it is positive definite for all . This matrix is equal to
First, this matrix is well-defined because for all (consequence of being positive). Second, this matrix is positive-definite because and .
Now we will show that for each , .
is a valid step because both limits exist almost surely. is true because and law of large numbers implies that almost surely. is true because and law of large numbers implies that almost surely.
Now we will show that for each , .
is valid because each limit exists surely or almost surely. is valid because each limit exists surely. is true because , , and . and follow from definitions of and .
∎
Appendix D Simulation Study Details
D.1 Data-Generating Process Satisfies Problem-Formulation Assumptions
Recall that we generate the data in the following manner. For each crowdworker ,
Here, , is an iid sequence of standard Gaussian vectors that is independent of , and satisfies the following. , is sampled independently from , the factor , and any other coefficient , where .
Let us verify if this data-generating process satisfies the assumptions that we made in the problem formulation. First, we will check that for any value of , the elements of are exchangeable. As the elements of are iid conditioned on and , by de Finetti’s theorem, they are exchangeable. Second, we will check that are exchangeable. This is true because conditioned on coefficients , are iid. Next, we will check that converges almost surely.
Here, follows from the definition of . follows because exists surely, and the second term will be shown to exist almost surely. is valid because elements of are finite almost surely and because . is true because of the strong law of large numbers.
D.2 Hyperparameter Tuning for Predict-Each-Worker
Here, we describe how we tune hyperparameters for predict-each-worker. For each , we perform a separate hyperparameter search. We first tune the hyperparameters for the SSL step, and then we tune the hyperparameter for the aggregation step using the best hyperparameters from the SSL step.
The performance metric that we use to select hyperparameters in the SSL step judges the quality of the learned SSL models. This metric is an expectation of a loss function. The intuition behind this loss function is as follows. Given a history , the SSL models approximate the clairvoyant joint distribution over . The loss function is equal to the KL divergence between the clairvoyant distribution over and the distribution implied by these SSL models (details in Appendix D.4).
Empirically, when selecting hyperparameters, we noticed that there is no hyperparameter setting that dominates across different values of . Therefore, we choose the hyperparameters in two steps. First, we evaluate performance for and filter out all hyperparameter combinations for which performance does not monotonically improve as increases. Then, amongst the remaining hyperparameter combinations, we select the hyperparameters that result in the best performance for some large value of , which we call . We let .
We let and . We let where is the diagonal element and is the off-diagonal element. We search over , and .
When tuning the aggregation hyperparameter , we use the best SSL parameters, and we select based on an estimate of the MSE (estimation procedure discussed in Appendix D.3). As with the SSL parameters, we enforce that performance should monotonically improve as increases. Specifically, we evaluate for . Then, finally, we choose the best hyperparameters based on performance at . Note that the values we evaluate for and the value of are different for the SSL step and the aggregation step.
For each value of , we estimate MSE by averaging across multiple training seeds. We vary . The final hyperparameters can be found in Table 1.
To tune hyperparameters for both EM and predict-each-worker, we assume access to the clairvoyant distribution . We do this for simplicity. In Appendix D.5, we discuss how hyperparameter tuning can be performed without access to .
D.3 Estimating MSE
The MSE is given by , where is the group estimate generated by a policy after observing . Computing the MSE exactly is typically computationally expensive, so the MSE must be estimated. Here, we discuss how we estimate the MSE in the context of our experiments. We exploit three features of our experiments that make the MSE easier to estimate. First, we use the fact that the data is drawn from the Gaussian data-generating process. Second, we use knowledge of the noise covariance matrices from which the data is drawn. Third, we use the fact that all policies we consider generate a group estimate that is linear in , i.e., , and that depends only on .
is true by the tower property. uses the law of total variance. uses the fact that the estimates are drawn from the Gaussian data-generating process. uses the tower property. is true because is determined by and is determined by .
The calculation above implies that
(7)
We use Monte-Carlo simulation to estimate the right-hand side. Specifically, we sample a set of , for each we sample one history , and then compute the quantity inside the expectation.
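A hedged sketch of this Monte Carlo loop is given below; the three callables are hypothetical placeholders for drawing the data-generating-process parameters, simulating a history of estimate vectors, and evaluating the quantity inside the expectation in (7), none of which are spelled out here.

```python
import numpy as np

def monte_carlo_mse(sample_params, sample_history, inner_quantity,
                    num_samples: int, t: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the MSE: one simulated history per sampled
    parameter vector. All three callables are hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(num_samples):
        theta = sample_params(rng)                # draw data-generating parameters
        history = sample_history(theta, t, rng)   # simulate t estimate vectors
        values.append(inner_quantity(theta, history))
    return float(np.mean(values))
```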
Number of seeds.
• To perform hyperparameter tuning for EM and predict-each-worker, we sample sixty values of .
• To generate plots in Figure 3, we sample fifty values of . These values are different from the values of that are used in tuning.
D.4 Metric for Tuning SSL Hyperparameters
Here, we discuss the metric that we use for tuning SSL hyperparameters. This metric is an expectation of a loss function. The loss function is equal to the KL divergence between the clairvoyant distribution over and the distribution implied by the learned SSL models. For the Gaussian data-generating process, recall that the clairvoyant distribution satisfies . This implies that estimating is equivalent to estimating . The matrix can be estimated from the learned SSL models, and we provide details of the estimation procedure below.
For the Gaussian data-generating process, our SSL models are parameterized by . We provide a two-step transformation to go from to a covariance matrix. The first step is to apply a function . The inputs to are the parameters of the SSL models, and the output is a matrix. If we input , then the output satisfies the following.
The output of this function can be a covariance matrix. For example, if the inputs are the prescient SSL parameters , the output is . However, the output of is not guaranteed to be a covariance matrix. The second step of our transformation addresses this.
In the second step, we compute a covariance matrix that we treat as the estimate of . If the determinant is negative, then we set to be the all-zeros matrix. If is non-negative, then we find a covariance matrix that has the same determinant as the output of and is closest to this output in the 2-norm sense. We will discuss why such a covariance matrix must exist, but before doing so, we formally define . is a minimizer of the following optimization problem:
subject to
The optimization problem is feasible because the diagonal matrix with elements satisfies the constraint.
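The exact matrix is defined by the optimization problem above; the snippet below is only a rough eigenvalue-based heuristic (not the exact minimizer): it symmetrizes the input, clips negative eigenvalues, and rescales the spectrum so that the determinant matches the target, mirroring the all-zeros fallback when the determinant is negative.

```python
import numpy as np

def heuristic_covariance(M: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Heuristic stand-in for the projection described above: returns a
    positive-semidefinite matrix whose determinant matches det(M)."""
    target_det = np.linalg.det(M)
    if target_det < 0:
        return np.zeros_like(M)                 # mirror the all-zeros fallback
    d = M.shape[0]
    sym = 0.5 * (M + M.T)                       # symmetrize the input
    eigvals, eigvecs = np.linalg.eigh(sym)
    clipped = np.clip(eigvals, eps, None)       # enforce positive eigenvalues
    # Rescale the spectrum uniformly so the determinant matches the target.
    scale = (target_det / np.prod(clipped)) ** (1.0 / d)
    return eigvecs @ np.diag(clipped * scale) @ eigvecs.T
```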
Having computed , we state the loss function used to judge the quality of the SSL parameters:
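Since both the clairvoyant distribution and the distribution implied by the SSL models are zero-mean Gaussians in this setting, the KL divergence underlying this loss has a standard closed form. The sketch below computes it, with the clairvoyant covariance as the first argument (matching the order in the text); the variable names are ours.

```python
import numpy as np

def gaussian_kl(sigma_true: np.ndarray, sigma_hat: np.ndarray) -> float:
    """KL( N(0, sigma_true) || N(0, sigma_hat) ) via the standard closed form."""
    d = sigma_true.shape[0]
    trace_term = np.trace(np.linalg.solve(sigma_hat, sigma_true))
    _, logdet_true = np.linalg.slogdet(sigma_true)
    _, logdet_hat = np.linalg.slogdet(sigma_hat)
    return 0.5 * (trace_term - d + logdet_hat - logdet_true)
```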
Number of seeds. To perform hyperparameter tuning, we sample sixty values of and average the loss function above to estimate the performance metric.
D.5 Hyperparameter Tuning with Unobserved Outcomes
For our simulation study, we tuned hyperparameters assuming knowledge of the clairvoyant distribution . Specifically, we used this knowledge to estimate metrics that judge the quality of SSL models and group estimates. In practice, however, the clairvoyant distribution will not be known. We discuss how to tune hyperparameters in such a situation. Though our ideas generalize, we make two simplifying assumptions. First, we assume the data is drawn from a Gaussian data-generating process. Second, we assume the estimates are zero-mean, i.e., .
Tuning SSL hyperparameters. Suppose we have a dataset of estimates . We divide this dataset into a training dataset and a test dataset . Given a set of SSL hyperparameters, we first compute based on and then use the procedure in Appendix D.4 to compute – the estimate of based on the SSL models. We then compute the sample covariance matrix based on and judge the SSL hyperparameters based on
Instead of a single train-test split, one could also perform -fold cross-validation.
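One plausible, self-contained instantiation of this comparison (the exact criterion is the one stated above) scores a hyperparameter setting by the average Gaussian negative log-likelihood of the held-out estimates under the SSL-implied covariance, which amounts to comparing that covariance against the test-set sample covariance; the array names below are ours.

```python
import numpy as np

def ssl_holdout_score(test_estimates: np.ndarray, sigma_hat: np.ndarray) -> float:
    """Score an SSL hyperparameter setting without observed outcomes.
    test_estimates has shape (num_tasks, num_workers); sigma_hat is the
    covariance implied by SSL models fit on the training split (Appendix D.4).
    Lower is better; the Gaussian negative log-likelihood is an assumed
    stand-in for the criterion stated in the text."""
    n, d = test_estimates.shape
    sigma_test = test_estimates.T @ test_estimates / n   # zero-mean sample covariance
    _, logdet_hat = np.linalg.slogdet(sigma_hat)
    trace_term = np.trace(np.linalg.solve(sigma_hat, sigma_test))
    return 0.5 * (d * np.log(2 * np.pi) + logdet_hat + trace_term)
```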
Tuning aggregation hyperparameters. The aggregation step has two hyperparameters: and . Hyperparameter indicates the prior variance of true outcomes, and controls the rate of decay of regularization towards the prior aggregation weights. We first discuss a method to set and then discuss two methods to tune .
To set , we use the fact that we defined to be the average of estimates produced by a large population of crowdworkers. Hence, can be estimated by computing the sample variance of the group estimates produced by averaging. Ideally, this variance should be computed on a dataset where both and are large. However, this typically would not be practical. Hence, one would have to choose large enough values of and that are also practically feasible.
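A minimal sketch of this estimate, assuming a matrix of crowdworker estimates with one row per task and one column per crowdworker (the array name and shape are ours):

```python
import numpy as np

def estimate_prior_variance(estimates: np.ndarray) -> float:
    """Estimate the prior variance of outcomes as the sample variance of the
    group estimates produced by averaging; estimates has shape (tasks, workers)."""
    group_estimates = estimates.mean(axis=1)     # averaging aggregation per task
    return float(group_estimates.var(ddof=1))    # sample variance across tasks
```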
We now discuss two methods to tune . The first method involves estimating the MSE using a so-called golden dataset. This dataset contains crowdsourcing tasks for which the true outcomes can be estimated with high accuracy. For our problem, a golden dataset can be constructed by collecting many crowdworker estimates for each outcome. This dataset would be used only for evaluation and not for training. Because it is used only for evaluation, a limited number of engagements with each crowdworker should suffice when constructing it; the rationale is that training requires more data than evaluation.
The second method to tune involves estimating a quantity that differs from the MSE by an additive constant. Suppose we want to evaluate on the crowdsourcing tasks in . We first compute a group estimate for each crowdsourcing task in . Then, for each crowdsourcing task, we sample a crowdworker uniformly at random from a large pool of out-of-sample crowdworkers, obtain an estimate from the sampled crowdworker, and treat their estimate as the ground truth. For each task, the crowdworker must be sampled independently. Formally, this procedure allows us to estimate
where is the estimate of an out-of-sample crowdworker and is the group estimate. The index acts as a placeholder for a random out-of-sample crowdworker and should not be interpreted as some fixed crowdworker’s estimate. The squared error above differs from the MSE by an additive constant because is a single-sample Monte-Carlo estimate of and because is equal to when the limit exists. We give more details below.
Here, is true because of the tower property. uses the law of total variance. uses the fact that are exchangeable a priori and remain exchangeable even when conditioned on and . uses the assumption that exists almost surely and is defined to be equal to this limit when it exists. The final equality makes it clear that and , the MSE, differ by an additive constant.
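The proxy objective can be estimated with a short loop like the one below; `group_estimates[i]` is the policy's estimate for task i and `oos_pools[i]` holds the estimates available from the out-of-sample crowdworker pool for that task (both names are ours).

```python
import numpy as np

def proxy_mse(group_estimates: np.ndarray, oos_pools: list, seed: int = 0) -> float:
    """Average squared error against one randomly sampled out-of-sample
    crowdworker per task; differs from the true MSE by an additive constant."""
    rng = np.random.default_rng(seed)
    errors = []
    for z, pool in zip(group_estimates, oos_pools):
        pseudo_truth = rng.choice(pool)          # independent draw for each task
        errors.append((pseudo_truth - z) ** 2)
    return float(np.mean(errors))
```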
Appendix E Extension of Predict-Each-Worker for Settings with Observed Outcomes
Here, we describe an extension of our algorithm that can be used when some or all outcomes are observed. However, if a large fraction of outcomes are observed, one might be better off using standard supervised-learning methods to learn a mapping from estimates to an outcome.
In our extension, we keep the SSL step the same but change the aggregation step. First, we train the SSL models on all estimate vectors. Then, for each estimate vector whose outcome is known, and for each SSL model, we compute the following statistics: the expected error and the SSL gradient, similar to what we do in Section 3.1. These SSL statistics and the corresponding outcomes are then used to train a neural network. This neural network takes a set of expected errors and SSL gradients as input and produces an estimate of the true outcome. The rationale for using these SSL statistics rather than the raw estimate vectors as inputs is that doing so allows us to leverage all observed estimates, even those for which the outcomes are not observed. We denote the neural network obtained after training by .
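A hedged sketch of this training step is shown below, using scikit-learn's MLPRegressor as a stand-in for the paper's network; `ssl_statistics` is a hypothetical helper that returns the concatenated expected errors and SSL gradients for a single estimate vector.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_outcome_network(labeled_estimate_vectors, outcomes, ssl_models, ssl_statistics):
    """Fit a network mapping SSL statistics to outcomes, using only the
    estimate vectors whose outcomes are observed. ssl_statistics is a
    hypothetical helper computing expected errors and SSL gradients."""
    features = np.stack([ssl_statistics(x, ssl_models) for x in labeled_estimate_vectors])
    network = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    network.fit(features, np.asarray(outcomes))
    return network
```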
We now discuss how to produce a group estimate for a new estimate vector . We first compute a set of expected errors and SSL gradients based on . Then, we feed these SSL statistics into the neural network to produce an estimate of the outcome. Additionally, we use these statistics to compute , where is based on our aggregation formula (2). Finally, we compute the group estimate by taking a convex combination of and ,
Here, is a hyperparameter that can be tuned on a validation set. We should expect to increase with , the number of estimates for which the outcome is known: the larger the value of , the more we can trust .
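For concreteness, the combination step can be written as the one-liner below, under the assumption that weights the network's estimate (consistent with the remark that it should grow as more outcomes are observed).

```python
def combined_group_estimate(nn_estimate: float, agg_estimate: float, alpha: float) -> float:
    """Convex combination of the network's estimate and the estimate from
    aggregation formula (2); alpha is tuned on a validation set."""
    assert 0.0 <= alpha <= 1.0
    return alpha * nn_estimate + (1.0 - alpha) * agg_estimate
```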