Learning Losses for Strategic Classification
Abstract
Strategic classification, i.e. classification under possible strategic manipulations of features, has received a lot of attention from both the machine learning and the game theory community. Most works focus on analysing properties of the optimal decision rule under such manipulations. In our work we take a learning-theoretic perspective, focusing on the sample complexity needed to learn a good decision rule which is robust to strategic manipulation. We perform this analysis by introducing a novel loss function, the strategic manipulation loss, which takes into account both the accuracy of the final decision rule and its vulnerability to manipulation. We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph. Additionally, we initiate the study of learning under unknown manipulation capabilities of the involved agents. Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph. Lastly, we analyse the sample complexity of learning the manipulation capabilities of agents with respect to this similarity measure, providing novel guarantees for strategic classification under an unknown manipulation graph.
1 Introduction
In many scenarios where a decision rule is learned from data, the publication of this decision rule has an effect on the distribution of the underlying population that may harm the quality of the rule. For example, applicants for a loan may change details in their bank account to receive a better score, people may join a gym or sports club without ever intending to participate, in order to get a better health insurance policy, or students may employ different strategies such as registering to volunteer, or joining rare clubs (without attending either) to appear better on college applications.
Effects and incentives resulting from strategic behavior in classification scenarios have received substantial attention from both machine learning and game-theoretic perspectives in recent years (Hardt et al. 2016; Milli et al. 2019; Haghtalab et al. 2020; Tsirtsis and Rodriguez 2020; Zhang and Conitzer 2021). Most works study this as a two-player game between an institution that publishes a decision rule and a population of best-responding agents to be classified. Given the classifier, these agents may change their feature representations in order to obtain a more favorable classification outcome. To prevent the induced additional classification error, the institution will publish a modified predictor that does not transparently reflect the underlying intent and potentially causes additional harm to sub-populations that may be less equipped to perform the required changes to their representations (Hu, Immorlica, and Vaughan 2019).
In this work, we propose a learning theoretic take on this scenario. In machine learning, it is common to model desiderata for a learning outcome in form of a loss function. The goal of the learning process is then to identify a predictor that minimizes this loss in expectation over a data-generating distribution. Thus, we here define a novel loss function for learning under strategic manipulations. The aim of this loss is to induce a combination of two (potentially competing) requirements: achieving low classification error taking into account that individuals being classified may manipulate their features, and discouraging such feature manipulations overall. Prior work has shown that these may be conflicting requirements (Zhang and Conitzer 2021). Our proposed loss function thus aims to induce a balanced combination of these requirements rather than strictly enforcing one, and then only observing the effect on the other (as is implied by frameworks that aim to minimize classification error under best-responding agents (Hardt et al. 2016; Milli et al. 2019) or enforcing incentive-compatibility (Zhang and Conitzer 2021)).
To define our strategic manipulation loss we employ an abstraction of the plausible feature manipulations in form of a manipulation graph (Zhang and Conitzer 2021). An edge $(x,x')$ in this graph indicates that an individual with feature vector $x$ may change their features to present as $x'$ if this leads to a positive classification, for example since the utility of this change in classification exceeds the cost of the change between these vectors. We define our strategic loss in dependence of this graph and carefully motivate the proposed loss in terms of requirements and effects from previous literature. We then analyze the sample complexity of learning with this loss function. We identify sufficient conditions for proper learnability that take into account the interplay between a hypothesis class and an underlying manipulation graph. Moreover, we show that every class that has finite VC-dimension is learnable with respect to this loss by drawing a connection to results in the context of learning under adversarial perturbations (Montasser, Hanneke, and Srebro 2019). This effect may be surprising, since it presents a contrast to learning VC-classes with the sole requirement of minimizing classification error under strategic feature manipulations, which has been shown to render some VC-classes not learnable (Zhang and Conitzer 2021). Thus, our analysis shows that balancing classification error with disincentivizing feature manipulations can reduce the complexity of the learning problem.
Moreover, we show that the quality of learning outcomes under our loss function is robust to inaccuracies in the manipulation graph. Such a robustness property is important, since an assumed graph might not exactly reflect agents’ responses. In fact, it has recently been argued that the model of best-responding agents is not backed up by empirical observations on agent distributions after strategic responses (Jagadeesan, Mendler-Dünner, and Hardt 2021). Moreover, different sub-populations may have differences in their manipulation graphs (different capabilities to manipulate their features) or a manipulation graph may be inferred from data and therefore exhibit statistical errors. We introduce a novel distance measure between manipulation graphs by drawing connections to learning bounds in transfer learning (Ben-David et al. 2010; Mansour, Mohri, and Rostamizadeh 2009) and show that the strategic loss of a learned predictor when employing a different manipulation graph can be bounded in terms of this distance measure. Finally, we present some initial results on how manipulation graphs may be learned from data.
1.1 Related work
That learning outcomes might be compromised by agents responding to published classification rules with strategic manipulations of their feature vectors was first pointed out over a decade ago (Dalvi et al. 2004; Brückner and Scheffer 2011) and has received substantial interest from the research community in recent years, initiated by a study by Hardt et al. that differentiated the field from the more general context of learning under adversarial perturbations (Hardt et al. 2016). That study considered strategic responses being induced by separable cost functions for utility maximizing agents and studied the resulting decision boundaries for certain classes of classifiers. Recent years have seen a lot of interest in better understanding the interplay of various incentives in settings where a decision rule is published and thereby has an effect on how the entities that are to be classified might present themselves to the decision-maker. In particular, various externalities to this scenario have been analyzed. A general cost to society, formalized in the form of a “social burden” incurred by the costs of enforced feature manipulation, has been shown to occur when institutions anticipate strategic responses (Milli et al. 2019; Jagadeesan, Mendler-Dünner, and Hardt 2021). Further, it has been demonstrated how such a burden may be suffered to differing degrees by various subgroups of a population that may differ in their capabilities to adapt their features in ways that are favorable to them (Milli et al. 2019; Hu, Immorlica, and Vaughan 2019), raising concerns over fairness in such scenarios.
Recent studies have extended the original game-theoretic model of a classifier-publishing institution and best-responding subjects. For example, a recent work studied how strategic modification can also have a positive effect and how that should be taken into consideration by the institution (Haghtalab et al. 2020). Such a perspective has been connected to underlying causal relations between features and classification outcome and resulting strategic recommendations (Miller, Milli, and Hardt 2020; Tsirtsis and Rodriguez 2020). Further, a very recent study has explored how the model of a best-responding agent may be relaxed to better reflect empirically observed phenomena (Jagadeesan, Mendler-Dünner, and Hardt 2021).
Much of previous work considers the scenario of classification with strategic agents on a population level. A few recent studies have also analyzed how phenomena observed on samples reflect the underlying population events (Haghtalab et al. 2020). Notably, very recent studies provided first analyses of learning with strategically responding agents in a PAC framework (Zhang and Conitzer 2021; Sundaram et al. 2021). The former work studied the sample complexity of learning VC-classes in this setup and analyzed effects on sample complexity of enforcing incentive compatibility for the learned classification rules. Our work can be viewed as an extension of this analysis. We propose to combine aspects of incentive compatibility and minimizing negative externalities such as social burden in form of a novel loss function that may serve as a learning objective when strategic responses are to be expected.
Our sample complexity analysis hinges on techniques developed in the context of learning under adversarial perturbations, a learning scenario which has received considerable research attention in recent years (Feige, Mansour, and Schapire 2015; Cullina, Bhagoji, and Mittal 2018; Montasser, Hanneke, and Srebro 2019, 2021). While the learning problems are not identical, we show how strategic behaviour can be modeled as a form of “one-sided adversarial perturbation” and how the resulting learning guarantees can be inherited.
1.2 Overview on contributions
In Section 2 we review our notation and then introduce our new notion of strategic loss and motivate it. Our main contributions can be summarized as follows:
- Strategic manipulation loss: We propose a novel loss function for learning in the presence of strategic feature manipulations. We carefully motivate this loss by relating it to concepts of social burden and incentive compatibility (and their potential trade-offs with accuracy) in prior literature.
- Sample complexity analysis: We analyze (PAC-type) learnability of VC-classes with the strategic loss. We provide sufficient conditions (and examples of when they are satisfied) for learnability with a proper learner. By drawing connections and adapting results from learning under adversarial perturbations to our setup, we also show that, while proper learnability cannot always be guaranteed, every VC-class is learnable under the strategic loss with an improper learner.
- Robustness to inaccurate manipulation information: We investigate the impact of using an approximate manipulation graph to yield a surrogate strategic loss function for cases where the true manipulation graph is not known or not accessible. For this, we introduce a novel similarity measure on graphs and show that if graphs are similar with respect to our notion then they yield reasonable surrogate strategic losses for each other (Theorem 6).
- Learning the manipulation graph: We explore the question of whether it is possible to learn a manipulation graph that yields a good surrogate strategic loss. We identify a sufficient condition for a class of graphs to be learnable with respect to our previously defined similarity measure for graphs (Theorem 7), which in turn guarantees learning a reasonable surrogate loss.
All proofs can be found in the appendix of the full version.
2 Setup
2.1 Basic Learning Theoretic Notions for Classification
We employ a standard setup of statistical learning theory for classification. We let $\mathcal{X}$ denote the domain and $\mathcal{Y}$ (mostly $\mathcal{Y} = \{0,1\}$) a (binary) label space. We model the data generating process as a distribution $P$ over $\mathcal{X}\times\mathcal{Y}$ and let $P_{\mathcal{X}}$ denote the marginal of $P$ over $\mathcal{X}$. We use the notation $(x,y)\sim P$ to indicate that $(x,y)$ is a sample from distribution $P$ and $S\sim P^{n}$ to indicate that the set $S = ((x_1,y_1),\ldots,(x_n,y_n))$ is a sequence (for example a training or test data set) of $n$ i.i.d. samples from $P$. Further, we use the notation $f_P(x) = \mathbb{P}[y = 1\mid x]$ to denote the regression or conditional labeling function of $P$. We say that the distribution $P$ has deterministic labels if $f_P(x)\in\{0,1\}$ for all $x\in\mathcal{X}$.
A classifier or hypothesis is a function $h:\mathcal{X}\to\{0,1\}$. A classifier $h$ can naturally be viewed as a subset of $\mathcal{X}$, namely $\{x\in\mathcal{X} : h(x)=1\}$. We let $\mathcal{F}$ denote the set of all Borel measurable functions from $\mathcal{X}$ to $\{0,1\}$ (or all such functions in case of a countable domain). A hypothesis class is a subset of $\mathcal{F}$, often denoted by $\mathcal{H}$. For a loss function $\ell$ we denote the expected loss of a classifier $h$ for a distribution $P$ as $\mathcal{L}^{\ell}_{P}(h) = \mathbb{E}_{(x,y)\sim P}[\ell(h,(x,y))]$ and the empirical loss for a sample $S$ as $\mathcal{L}^{\ell}_{S}(h) = \frac{1}{|S|}\sum_{(x,y)\in S}\ell(h,(x,y))$. We use standard definitions of PAC learnability, sample complexity, and approximation error. For further elaborations on these definitions, we refer the reader to the appendix for an extended definitions section or a textbook (Shalev-Shwartz and Ben-David 2014).
2.2 Strategic Classification
Learning objectives in prior work
The possibilities for strategic manipulations of a feature vector are often modeled in terms of a cost function $c:\mathcal{X}\times\mathcal{X}\to\mathbb{R}_{\ge 0}$, so that $c(x,x')$ indicates how expensive it is for an individual with feature vector $x$ to present as $x'$. A natural minimal assumption a cost function should satisfy is $c(x,x) = 0$ for all feature vectors $x$. It is then typically assumed that instances best-respond to a published classifier, in that the individual would choose to pay the cost of presenting as $x'$ as long as the cost does not exceed the utility that would be gained from the difference in classification outcome. Assuming the benefit of individual $x$ receiving classification $1$ over classification $0$ is $v(x) > 0$, the manipulation would happen if $h(x) = 0$ and $h(x') = 1$ while $c(x,x')\le v(x)$ for a given classifier $h$. That is, we can define the best response of an individual with feature vector $x$ facing classifier $h$ as
$$\Delta_h(x) \in \operatorname{argmax}_{x'\in\mathcal{X}}\ \left(v(x)\, h(x') - c(x,x')\right),$$
with ties broken arbitrarily, and assuming that, if the original feature vector $x$ is among those maximizing the above, then the individual would choose to maintain the original features. An often assumed learning goal is then performative optimality (Perdomo et al. 2020; Jagadeesan, Mendler-Dünner, and Hardt 2021), which stipulates that a learner should aim to maximize accuracy on the distribution it induces via the agent responses. That is, this objective can be phrased as minimizing
$$\mathbb{E}_{(x,y)\sim P}\left[\,\mathbb{1}\left[h(\Delta_h(x)) \ne y\right]\,\right].$$
An alternative view on this setup, if the agent responses are deterministic, is to view the above as minimizing the binary loss of the effective hypothesis $h_{\Delta}$ that is induced by $h$ and the agents’ best responses (Zhang and Conitzer 2021), defined as
$$h_{\Delta}(x) = h\left(\Delta_h(x)\right). \qquad (1)$$
The goal of performative optimality has been combined with the notion of social burden that is induced by a classifier (Milli et al. 2019; Jagadeesan, Mendler-Dünner, and Hardt 2021). This notion reflects that it is undesirable for a (truly) positive instance to be forced to manipulate its features to obtain a (rightfully) positive classification. This is modeled by considering the burden on a positive individual $x$ to be the cost that is incurred by reaching for a positive classification and the social burden incurred by a classifier $h$ to be the expectation with respect to the data-generating process over these costs:
$$\mathcal{B}_{P}(h) = \mathbb{E}_{(x,y)\sim P}\left[\ \min_{x' :\, h(x') = 1} c(x,x')\ \middle|\ y = 1\right].$$
It has been shown that optimizing for performative optimality (under the assumption of deterministic best-responses) also incurs maximal social burden (Jagadeesan, Mendler-Dünner, and Hardt 2021).
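For concreteness, the following Python sketch computes a best response, the induced effective classification, and the social burden on a finite one-dimensional domain. It is purely illustrative: the domain, cost function, value function and classifier below are hypothetical choices, not prescribed by the formal model above.

```python
# Illustrative sketch on a finite 1-d domain; the cost function, value
# function and classifier below are hypothetical choices.
X = [0, 1, 2, 3, 4]                       # domain points
value = lambda x: 2.0                     # v(x): benefit of a positive classification
cost = lambda x, xp: abs(x - xp)          # c(x, x'): cost of presenting x as x'

def best_response(h, x):
    """Delta_h(x): a utility-maximizing presentation, keeping x on ties."""
    best, best_util = x, value(x) * h(x)  # utility of not manipulating
    for xp in X:
        util = value(x) * h(xp) - cost(x, xp)
        if util > best_util:
            best, best_util = xp, util
    return best

def effective(h):
    """h_Delta: the classification an agent starting at x effectively receives."""
    return lambda x: h(best_response(h, x))

def social_burden(h, positives, weights):
    """Expected cost that truly positive agents pay to reach a positive label."""
    total = 0.0
    for x, w in zip(positives, weights):
        reachable = [cost(x, xp) for xp in X if h(xp) == 1]
        total += w * (min(reachable) if reachable else float("inf"))
    return total

h = lambda x: 1 if x >= 3 else 0          # a threshold classifier
print(best_response(h, 2))                # 3: paying cost 1 for a gain of 2
print(effective(h)(2))                    # 1: the effective classification of x = 2
print(social_burden(h, positives=[2, 4], weights=[0.5, 0.5]))  # 0.5
```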
A new loss function for learning under strategic manipulations
Arguably, to seek performative optimality (or minimize the binary loss over the effective hypothesis class) the cost function $c$ as well as the value $v(x)$ (or function $v$) of positive classification need to be known (or at least approximately known). To take best responses into account, a learner needs to know what these best responses may look like. In that case, we may ignore the details of the cost function $c$ and value $v$, and simply represent the collection of plausible manipulations as a directed graph structure $G = (\mathcal{X},E)$ over the feature space (Zhang and Conitzer 2021). The edge set $E$ consists of all pairs $(x,x')$ with $c(x,x')\le v(x)$, and we will also use the notation $x\to x'$ for $(x,x')\in E$, and write $N_G(x) = \{x'\in\mathcal{X} : (x,x')\in E\}$ for the set of plausible manipulations of $x$. We note that this formalism is valid for both countable (discrete) and uncountable domains.
Given the information in the so obtained manipulation graph $G$, we now design a loss function for classification in the presence of strategic manipulation that reflects both classification errors and the goal of disincentivizing manipulated features as much as possible. Our proposed loss function below models that it is undesirable for a classifier $h$ to assign $h(x) = 0$ and $h(x') = 1$ if feature vector $x$ can present as $x'$. This is independent of a true label $y$ (e.g. if $(x,y)$ is sampled from the data generating process). If the label is not positive, the point gets misclassified when $x$ presents as $x'$. On the other hand, if the true label is $1$, then either a true positive instance is forced to manipulate their features to obtain a rightly positive outcome (and this contributes to the social burden), or, if the choice is to not manipulate the features, the instance will be misclassified (prior work has also considered models where true positive instances are “honest” and will not manipulate their features (Dong et al. 2018)). Here, we propose to incorporate both misclassification and contributions to social burden into a single loss function that a learner may aim to minimize.
Definition 1.
We define the strategic loss with respect to manipulation graph $G$ as follows:
$$\ell^{G}_{strat}\left(h,(x,y)\right) = \begin{cases} 1 & \text{if } h(x)\ne y,\\ 1 & \text{if } h(x) = 0 \text{ and } \exists x'\in N_G(x) \text{ with } h(x') = 1,\\ 0 & \text{otherwise.}\end{cases}$$
Note that the first two cases are not mutually exclusive. The above loss function discretizes the social burden by assigning a loss of $1$ whenever a positive individual is required to manipulate features. As for the standard classification loss, the above point-wise definition of a loss function allows us to define the true strategic loss $\mathcal{L}^{G}_{P}(h) = \mathbb{E}_{(x,y)\sim P}\left[\ell^{G}_{strat}(h,(x,y))\right]$ and empirical strategic loss $\mathcal{L}^{G}_{S}(h) = \frac{1}{|S|}\sum_{(x,y)\in S}\ell^{G}_{strat}(h,(x,y))$ of a classifier $h$ with respect to a distribution $P$ or a data sample $S$.
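As a minimal illustration, when the manipulation graph is given explicitly as a neighborhood map over a finite domain, the strategic loss and its empirical version can be computed directly from Definition 1. The graph, classifier and sample below are hypothetical.

```python
# Minimal sketch of the strategic loss on a finite domain; the graph,
# sample and classifier below are hypothetical illustrations.
N_G = {0: [1], 1: [2], 2: [3], 3: [4], 4: []}    # N_G(x): plausible manipulations of x

def strategic_loss(h, x, y, N_G):
    if h(x) != y:                                # misclassification
        return 1
    if h(x) == 0 and any(h(xp) == 1 for xp in N_G[x]):
        return 1                                 # x is incentivized to manipulate
    return 0

def empirical_strategic_loss(h, sample, N_G):
    return sum(strategic_loss(h, x, y, N_G) for x, y in sample) / len(sample)

h = lambda x: 1 if x >= 3 else 0
sample = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 1)]
print(empirical_strategic_loss(h, sample, N_G))  # 0.2: x = 2 is labeled correctly
                                                 # but can manipulate to reach 3
```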
2.3 Comparison with Alternative Formalisms for Strategic Classification
To motivate our proposed loss, we here discuss several scenarios where, we would argue, minimizing the strategic loss leads to a more desirable learning outcome than learning with a binary loss while taking strategic manipulations into account. As discussed above, a common approach to modeling classification in a setting where strategic manipulations may occur is to assume that all agents will best-respond to a published classifier. That is, if $h(x) = 0$, $h(x') = 1$, and $(x,x')\in E$, then the agent with initial feature vector $x$ will effectively receive classification $1$. A natural modeling is then to consider the effective hypothesis $h_{\Delta}$ induced by $h$ (see Equation 1) and aim to minimize the classification error with the effective class $\mathcal{H}_{\Delta} = \{h_{\Delta} : h\in\mathcal{H}\}$ (Zhang and Conitzer 2021). However, it has been shown that the VC-dimension of $\mathcal{H}_{\Delta}$ may be arbitrarily larger than the VC-dimension of $\mathcal{H}$, and may even become infinite (Zhang and Conitzer 2021). When learning this effective class with respect to the binary loss (which corresponds to aiming for performative optimality), this will imply that the class is not learnable. By contrast, we will show below that any class of finite VC-dimension remains learnable with respect to the strategic loss (Theorem 5).
It has also been shown that the negative effects in terms of sample complexity of considering the effective hypothesis class can be avoided by considering only incentive-compatible hypotheses in $\mathcal{H}$, that is, outputting only hypotheses that will not induce any feature manipulations in response to the published classifier (Zhang and Conitzer 2021). While this avoids the growth in terms of VC-dimension, it may prohibitively increase the approximation error of the resulting (pruned) class, as we show in the example below. We would argue that this illustrates how low sample complexity, in itself, is not a sufficient criterion for learning success.
Example 1.
Consider $\mathcal{X} = \mathbb{R}$ and a manipulation graph $G$ that includes edges $(x,x')$ for all pairs with $|x - x'|\le 1$. This is a reasonable structure, considering that the cost of moving the (one-dimensional) feature by at most $1$ is worth a positive classification outcome. However, the only two hypotheses that are incentive compatible in this case are the two constant functions $h\equiv 0$ and $h\equiv 1$. Thus, requiring incentive compatibility forces the learner to assign all points in the space the same label. This class, in fact, has low sample complexity. However, arguably, restricting the learning to such a degree (and suffering the resulting classification error, which will be close to $1/2$ for distributions with balanced classes) is, in most cases, not a reasonable price to pay for dis-incentivising feature manipulations.
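The incentive-compatibility requirement in this example can also be checked mechanically: a hypothesis is incentive compatible exactly when no negatively classified point has a positively classified neighbor. The following sketch does this on a hypothetical integer discretization of the example.

```python
# Hypothetical discretization of Example 1: an integer grid with edges to x-1 and x+1.
X = list(range(-10, 11))
N_G = {x: [xp for xp in (x - 1, x + 1) if xp in X] for x in X}

def incentive_compatible(h):
    # No negatively classified point may have a positively classified neighbor.
    return all(not (h(x) == 0 and any(h(xp) == 1 for xp in N_G[x])) for x in X)

thresholds = [lambda x, t=t: 1 if x >= t else 0 for t in range(-9, 11)]
constants = [lambda x: 0, lambda x: 1]
print(sum(incentive_compatible(h) for h in thresholds))  # 0: no non-constant threshold is IC
print([incentive_compatible(h) for h in constants])      # [True, True]
```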
The following example illustrates how our loss function can be viewed as incorporating the notion of social burden directly into the loss.
Example 2.
Let us again consider the domain $\mathcal{X} = \mathbb{R}$ and a manipulation graph $G$ with edges $(x,x')$ for all pairs with $0 < x' - x \le 1$. We consider distributions whose support consists of four points $x_1 < x_2 < x_3 < x_4$, where consecutive points are within distance $1$ of each other and non-consecutive points are further than $1$ apart; thus only these four points have positive probability mass. We further consider a hypothesis class of thresholds $h_t = \mathbb{1}[x\ge t]$ with $t\in\mathbb{R}$. The true labeling on these distributions is $0$ on $x_1, x_2$ and $1$ on $x_3, x_4$. On all distributions where all four points have positive mass, the performatively optimal hypothesis (or effective hypothesis of minimal binary loss) however is a threshold above $x_3$ (so that $x_2$ can no longer reach the positive region). The social burden incurred then is the cost that $x_3$ pays to reach a positive classification. It is important to note that the performative optimality of this threshold is independent of the distribution over the points. A learner that minimizes the strategic loss, on the other hand, will take the distribution into account and output a threshold between $x_2$ and $x_3$ if $P(x_2) < P(x_3)$, while outputting the higher threshold if $P(x_3) < P(x_2)$. If the difference in mass of these points (or the margin areas in a more general setting) is significant, then minimizing the strategic loss will opt for allowing a small amount of manipulation in return for outputting a correct classification rule in case $P(x_2)\ll P(x_3)$; and it will opt for changing the classification rule, accepting a small amount of social burden in exchange for preventing a large amount of manipulations and resulting classification errors, in case $P(x_3)\ll P(x_2)$. We would argue that this reflects a desirable learning outcome.
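A hypothetical numeric instantiation of this example (the support points, labels, masses and thresholds below are illustrative choices, not fixed by the example itself) makes the trade-off explicit:

```python
# Hypothetical instantiation: support {1, 2, 3, 4} with true labels 0, 0, 1, 1,
# manipulation edges from x to x + 1, and two candidate thresholds.
N_G = {1: [2], 2: [3], 3: [4], 4: []}
label = {1: 0, 2: 0, 3: 1, 4: 1}

def strategic_loss(h, x, y):
    return int(h(x) != y or (h(x) == 0 and any(h(xp) == 1 for xp in N_G[x])))

def true_strategic_loss(h, mass):
    return sum(m * strategic_loss(h, x, label[x]) for x, m in mass.items())

h_low  = lambda x: 1 if x >= 2.5 else 0   # correct labels, but x = 2 can reach 3
h_high = lambda x: 1 if x >= 3.5 else 0   # blocks x = 2, but burdens/misclassifies x = 3

mass_a = {1: 0.4, 2: 0.1, 3: 0.4, 4: 0.1}  # little mass on the manipulating point 2
mass_b = {1: 0.4, 2: 0.4, 3: 0.1, 4: 0.1}  # little mass on the burdened point 3
for mass in (mass_a, mass_b):
    print(true_strategic_loss(h_low, mass), true_strategic_loss(h_high, mass))
# mass_a: 0.1 vs 0.4 -> the lower threshold wins (tolerate a little manipulation)
# mass_b: 0.4 vs 0.1 -> the higher threshold wins (accept a little social burden)
```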
3 Learnability with the Strategic Loss
3.1 Warm up: Loss Classes and Learnability
It is well known that a class $\mathcal{H}$ is learnable (with respect to the set of all distributions) if the loss class induced by a $\{0,1\}$-valued loss function has finite VC-dimension. In the case of the binary classification loss, this is in fact a characterization of learnability (and the VC-dimension of the loss class is identical to the VC-dimension of the hypothesis class $\mathcal{H}$). In general, bounded VC-dimension of the loss class is a sufficient condition for learnability (the VC-dimension provides an upper bound on the sample complexity), but it is not a necessary condition (it does not, in general, yield a lower bound on the sample complexity of learning a class with respect to some loss $\ell$). We start by reviewing these notions for the classification loss and then take a closer look at the loss class induced by the strategic loss.
Let $\ell$ be a loss function and $h$ be a classifier. We define the loss set $M^{\ell}_{h} = \{(x,y)\in\mathcal{X}\times\mathcal{Y} : \ell(h,(x,y)) = 1\}$ as the set of all labeled instances on which $h$ suffers loss $1$. The loss class is the collection of all loss sets $\{M^{\ell}_{h} : h\in\mathcal{H}\}$ (in the literature, the loss class is often described as the function class of indicator functions over these sets). In the case of the binary classification loss $\ell^{0/1}$, the loss set of a classifier $h$ is exactly the complement of (the graph of) $h$ in $\mathcal{X}\times\mathcal{Y}$. That is, in this case the loss set of $h$ is also a binary function over the domain (namely the function $1-h$). For the strategic loss on the other hand, the loss set of a classifier is not a function, since it can contain both $(x,0)$ and $(x,1)$ for some points $x$, namely if $h(x) = 0$ and there exists an $x'$ with $(x,x')\in E$ and $h(x') = 1$. For a class $\mathcal{H}$ we let $\mathcal{B}_{\mathcal{H}}$ denote the loss class with respect to the binary loss and $\mathcal{S}^{G}_{\mathcal{H}}$ the loss class with respect to the strategic loss.
Definition 2.
Let $\mathcal{Z}$ be some set and $\mathcal{M}$ be a collection of subsets of $\mathcal{Z}$. We say that a set $Z\subseteq\mathcal{Z}$ is shattered by $\mathcal{M}$ if
$$\{Z\cap M : M\in\mathcal{M}\} = 2^{Z},$$
that is, every subset of $Z$ can be obtained by intersecting $Z$ with some set from the collection $\mathcal{M}$. The VC-dimension $VC(\mathcal{M})$ of $\mathcal{M}$ is the largest size of a set $Z$ that is shattered by $\mathcal{M}$ (or $\infty$ if $\mathcal{M}$ can shatter arbitrarily large sets).
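Since shattering is stated here for a collection of sets rather than for a binary function class, it can be checked by brute-force enumeration of intersections on small examples. The sketch below (with a hypothetical toy collection) does exactly that.

```python
from itertools import chain, combinations

def is_shattered(candidate, collection):
    """Check whether every subset of `candidate` arises as an intersection with some B."""
    candidate = frozenset(candidate)
    subsets = {frozenset(s) for s in chain.from_iterable(
        combinations(candidate, r) for r in range(len(candidate) + 1))}
    traces = {candidate & frozenset(B) for B in collection}
    return subsets <= traces

# A hypothetical toy collection of subsets of {1, 2, 3}.
collection = [{1}, {2}, {1, 2}, set()]
print(is_shattered({1, 2}, collection))     # True: all four subsets appear as traces
print(is_shattered({1, 2, 3}, collection))  # False: e.g. {3} is never obtained
```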
It is easy to verify that for the binary loss, the VC-dimension of $\mathcal{B}_{\mathcal{H}}$ as a collection of subsets of $\mathcal{X}\times\mathcal{Y}$ is identical to the VC-dimension of $\mathcal{H}$ (and this also coincides with the VC-dimension of $\mathcal{H}$ as a binary function class (Shalev-Shwartz and Ben-David 2014); VC-dimension is often defined for binary functions rather than for collections of subsets, however, this is limiting for cases where the loss class is not a class of functions).
We now show that the VC-dimension of a class $\mathcal{H}$ and its loss class $\mathcal{S}^{G}_{\mathcal{H}}$ with respect to the strategic loss can have an arbitrarily large difference. Similar results have been shown for the binary loss class of the effective class $\mathcal{H}_{\Delta}$ induced by a manipulation graph (Zhang and Conitzer 2021). However, the binary loss class of $\mathcal{H}_{\Delta}$ is different from the strategic loss class of $\mathcal{H}$ and, as we will see, the implications for learnability are also different.
Observation 1.
For any $n\in\mathbb{N}\cup\{\infty\}$ there exists a class $\mathcal{H}$ and a manipulation graph $G$ with $VC(\mathcal{H}) = 1$ and $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge n$.
On the other hand, we prove that the VC-dimension of the strategic loss class is always at least as large as the VC-dimension of the original class.
Observation 2.
For any hypothesis class $\mathcal{H}$ and any manipulation graph $G$, we have $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge VC(\mathcal{H})$.
Standard VC-theory tells us that, for the binary classification loss, any learner that acts according to the ERM (Empirical Risk Minimization) principle is a successful learner for classes of bounded VC-dimension. For a brief recap of the underpinnings of this result, we refer the reader to the supplementary material or for further details to (Shalev-Shwartz and Ben-David 2014). In the case of general loss classes with values in $\{0,1\}$, the VC-dimension does not characterize learnability. In particular, we next show that the VC-dimension of the strategic loss class does not imply a lower bound on the sample complexity.
Theorem 3.
For every $n\in\mathbb{N}\cup\{\infty\}$, there exists a hypothesis class $\mathcal{H}$ (and a manipulation graph $G$) with $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge n$ that is learnable with respect to the strategic loss with sample complexity $O\!\left(\frac{\log(1/\delta)}{\epsilon}\right)$ in the realizable case.
3.2 Sufficient Conditions for Strategic Loss Learnability
In the previous section, we have seen that the loss class having a finite VC-dimension is a sufficient (but not necessary) condition for learnability with respect to the strategic loss. We have also seen that the VC-dimension of $\mathcal{S}^{G}_{\mathcal{H}}$ can be arbitrarily larger than the VC-dimension of $\mathcal{H}$. To start exploring what determines learnability under the strategic loss, we provide a sufficient condition for a class to be properly learnable with respect to the strategic loss.
Note that for a hypothesis $h$, the strategic loss set can be decomposed into the loss set of $h$ with respect to the binary loss and the component that comes from the strategic manipulations. Formally, we can define the strategic component loss.
Definition 3.
We let the strategic component loss with respect to manipulation graph $G$ be defined as
$$\ell^{G}_{comp}(h,x) = \begin{cases} 1 & \text{if } h(x) = 0 \text{ and } \exists x'\in N_G(x) \text{ with } h(x') = 1,\\ 0 & \text{otherwise.}\end{cases}$$
We note that $\ell^{G}_{strat}(h,(x,y)) = \max\left\{\ell^{0/1}(h,(x,y)),\ \ell^{G}_{comp}(h,x)\right\}$. We will denote the true strategic component loss with respect to marginal distribution $P_{\mathcal{X}}$ as $\mathcal{L}^{G}_{comp,P_{\mathcal{X}}}(h) = \mathbb{E}_{x\sim P_{\mathcal{X}}}\left[\ell^{G}_{comp}(h,x)\right]$.
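The relation between the three losses is easy to check point-wise. The following sketch (on the same kind of hypothetical finite domain and graph as before) verifies that the strategic loss of Definition 1 equals the maximum of the binary loss and the strategic component loss.

```python
# Point-wise check that the strategic loss is the maximum of the binary loss
# and the strategic component loss (hypothetical finite domain and graph).
N_G = {0: [1], 1: [2], 2: [3], 3: [4], 4: []}

def strategic_loss(h, x, y):
    if h(x) != y:
        return 1
    if h(x) == 0 and any(h(xp) == 1 for xp in N_G[x]):
        return 1
    return 0

binary_loss = lambda h, x, y: int(h(x) != y)
component_loss = lambda h, x: int(h(x) == 0 and any(h(xp) == 1 for xp in N_G[x]))

h = lambda x: 1 if x >= 3 else 0
for x in N_G:
    for y in (0, 1):
        assert strategic_loss(h, x, y) == max(binary_loss(h, x, y), component_loss(h, x))
print(binary_loss(h, 2, 0), component_loss(h, 2), strategic_loss(h, 2, 0))  # 0 1 1
```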
For the loss sets, we then get
$$M^{\ell^{0/1}}_{h} = \{(x,y) : h(x)\ne y\}$$
and
$$C^{G}_{h} := \{x\in\mathcal{X} : \ell^{G}_{comp}(h,x) = 1\}.$$
This implies
$$M^{\ell^{G}_{strat}}_{h} = M^{\ell^{0/1}}_{h} \cup \left(C^{G}_{h}\times\mathcal{Y}\right)$$
for all classifiers $h$, and thereby
$$\mathcal{S}^{G}_{\mathcal{H}} \subseteq \left\{B\cup(C\times\mathcal{Y}) : B\in\mathcal{B}_{\mathcal{H}},\ C\in\mathcal{C}^{G}_{\mathcal{H}}\right\},\quad\text{where } \mathcal{C}^{G}_{\mathcal{H}} = \{C^{G}_{h} : h\in\mathcal{H}\}.$$
By standard counting arguments on the VC-dimension of such unions (see, for example, Chapter 6 of (Shalev-Shwartz and Ben-David 2014) and the exercises in that chapter), it can be shown that this decomposition implies $VC(\mathcal{S}^{G}_{\mathcal{H}}) \in O(d\log d)$ for $d = VC(\mathcal{H}) + VC(\mathcal{C}^{G}_{\mathcal{H}})$. Thus, if both the class $\mathcal{H}$ itself and the class $\mathcal{C}^{G}_{\mathcal{H}}$ of strategic components have finite VC-dimension, then $\mathcal{H}$ is properly learnable by any learner that is an ERM for the strategic loss:
Theorem 4.
Let $\mathcal{H}$ be a hypothesis class with finite $VC(\mathcal{H})$ and finite $VC(\mathcal{C}^{G}_{\mathcal{H}})$. Then $\mathcal{H}$ is properly PAC learnable with respect to the strategic loss $\ell^{G}_{strat}$ (both in the realizable and the agnostic case).
Whether the class of strategic components has finite VC-dimension intrinsically depends on the interplay between the hypothesis class and the structure of the manipulation graph. In Observation 1, we have seen that the graph structure can cause the strategic component sets to have much larger complexity than the original class. In the appendix, Section B, we provide a few natural examples where the VC-dimension of the strategic components is finite.
Theorem 4 provides a strong sufficient condition under which any empirical risk minimizer for the strategic loss is a successful agnostic learner for a class of finite VC-dimension. We believe that in many natural situations the conditions in that theorem will hold; analyzing in more detail which graph structures, combinations of graph structures and hypothesis classes, or classes of cost functions lead to the strategic component sets having finite VC-dimension is an intriguing direction for further research.
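In the regime of Theorem 4, plain empirical risk minimization with the strategic loss suffices. For a finite (or finitely discretized) hypothesis class this is a direct search; the class of thresholds, the graph and the sample below are hypothetical.

```python
# ERM for the strategic loss over a small hypothetical class of thresholds.
N_G = {x: [x + 1] for x in range(10)}
N_G[10] = []                                        # agents may move one step up

def strategic_loss(h, x, y):
    return int(h(x) != y or (h(x) == 0 and any(h(xp) == 1 for xp in N_G[x])))

def erm_strategic(hypotheses, sample):
    # Return a hypothesis of minimal empirical strategic loss.
    return min(hypotheses,
               key=lambda h: sum(strategic_loss(h, x, y) for x, y in sample))

thresholds = [lambda x, t=t: int(x >= t) for t in range(12)]
sample = [(2, 0), (3, 0), (4, 0), (7, 1), (8, 1), (9, 1)]
best = erm_strategic(thresholds, sample)
print([best(x) for x in range(11)])  # threshold at 6: a one-step buffer above the last
                                     # negative point removes the incentive to manipulate
```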
We close this section with two results, both stated in Theorem 5, on learnability under the strategic loss in the general case where the VC-dimension of the strategic component sets may be infinite. First, there are classes and manipulation graphs for which no proper learner is (PAC-)successful, even in the realizable case. Second, for any class of finite VC-dimension and any manipulation graph, there exists an improper PAC learner. These results follow by drawing a connection from learning under the strategic loss to learning under an adversarial loss (Montasser, Hanneke, and Srebro 2019). In the general adversarial loss setup, every domain instance $x$ is assigned a set of potential perturbations $\mathcal{U}(x)\subseteq\mathcal{X}$, and the adversarial loss of a hypothesis $h$ is then defined as
$$\ell_{adv}(h,(x,y)) = \sup_{x'\in\mathcal{U}(x)}\ \mathbb{1}\left[h(x')\ne y\right].$$
The strategic loss can be viewed as a one-sided version of the adversarial loss, where the perturbation sets differ conditional on the label of a point, and where $\mathcal{U}(x,0) = \{x\}\cup N_G(x)$, while $\mathcal{U}(x,1) = \{x\}$. The following results on learnability with the strategic loss then follow by slight modifications of the corresponding proofs for learning under adversarial loss.
Theorem 5.
(Adaptation of Theorem 1 and Theorem 4 in (Montasser, Hanneke, and Srebro 2019))
There exists a hypothesis class $\mathcal{H}$ of finite VC-dimension that is not learnable with respect to the strategic loss by any proper learner for $\mathcal{H}$, even in the realizable case. On the other hand, every class of finite VC-dimension is learnable (by some improper learner).
4 Strategic loss with respect to an approximate manipulation graph
In many situations one might not have direct access to the true manipulation graph $G$, but only to some approximate graph $\tilde{G}$. In this section, we will investigate how this change of manipulation graph impacts the corresponding loss function. We define a criterion for measuring the similarity of graphs with respect to a hypothesis class $\mathcal{H}$ and show that similar graphs will yield similar strategic losses. That is, we show an upper bound on the true strategic loss of a hypothesis (i.e., the strategic loss with respect to the true manipulation graph $G$) in terms of the graph similarity and the surrogate strategic loss of the hypothesis (i.e., the strategic loss with respect to the approximate graph $\tilde{G}$). We will use $E$ and $\tilde{E}$ to denote the edge sets of $G$ and $\tilde{G}$. As the set of vertices is always equal to $\mathcal{X}$ in our setting, the graphs $G$ and $\tilde{G}$ are uniquely defined by $E$ and $\tilde{E}$ respectively. We will therefore use $G$ and $E$, as well as $\tilde{G}$ and $\tilde{E}$, interchangeably.
We now define the distance between graphs with respect to a hypothesis class $\mathcal{H}$ by the impact a change of manipulation graph has on the strategic component loss of elements of $\mathcal{H}$. This definition and its later use are inspired by works on domain adaptation (Ben-David et al. 2010; Mansour, Mohri, and Rostamizadeh 2009).
Definition 4.
For two manipulation graphs, given by their edge sets $E$ and $\tilde{E}$, we let their $(\mathcal{H},P_{\mathcal{X}})$-distance be defined as
$$d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) = \sup_{h\in\mathcal{H}}\ \mathbb{E}_{x\sim P_{\mathcal{X}}}\left[\,\left|\ell^{G}_{comp}(h,x) - \ell^{\tilde{G}}_{comp}(h,x)\right|\,\right].$$
We will now bound the strategic manipulation loss with respect to the true graph $G$ in terms of the strategic manipulation loss with respect to the approximate graph $\tilde{G}$ and the $(\mathcal{H},P_{\mathcal{X}})$-distance between $G$ and $\tilde{G}$.
Theorem 6.
Let $\mathcal{H}$ be any hypothesis class and $G, \tilde{G}$ two manipulation graphs. Then for any distribution $P$ over $\mathcal{X}\times\mathcal{Y}$ and any $h\in\mathcal{H}$ we have
$$\mathcal{L}^{G}_{P}(h) \;\le\; \mathcal{L}^{\tilde{G}}_{P}(h) + d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}).$$
Furthermore, by rearranging the result, we get
$$\mathcal{L}^{\tilde{G}}_{P}(h) - d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) \;\le\; \mathcal{L}^{G}_{P}(h) \;\le\; \mathcal{L}^{\tilde{G}}_{P}(h) + d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}).$$
We note that the expression $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$ is independent of the labeling and can therefore be estimated using data without any label information. Furthermore, we have seen that a small $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$ tightens the upper as well as the lower bound on $\mathcal{L}^{G}_{P}(h)$. Therefore, $d_{\mathcal{H},P_{\mathcal{X}}}$ is a suitable distance measure for approximating the structure of the manipulation graph. In the following section, we will explore learning a graph with low $d_{\mathcal{H},P_{\mathcal{X}}}$ to the true graph from finite samples.
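For a finite hypothesis class (or a finite cover of it), one simple plug-in estimator of this distance replaces the expectation by an unlabeled sample average and the supremum by a maximum; this is an assumption of the sketch below (anticipating the empirical distance of Definition 5 in the next section), and the two graphs and threshold class are hypothetical.

```python
# Plug-in estimate of the distance between two hypothetical graphs from an
# unlabeled sample, for a finite class of thresholds (sup replaced by max).
component_loss = lambda h, x, N: int(h(x) == 0 and any(h(xp) == 1 for xp in N[x]))

def estimated_distance(hypotheses, unlabeled, N_G, N_Gp):
    return max(
        sum(abs(component_loss(h, x, N_G) - component_loss(h, x, N_Gp))
            for x in unlabeled) / len(unlabeled)
        for h in hypotheses)

N_G = {x: [x + 1] for x in range(10)}                          # moves of one step
N_G[10] = []
N_Gp = {x: [x + 1, x + 2] for x in range(9)}                   # moves of up to two steps
N_Gp.update({9: [10], 10: []})

thresholds = [lambda x, t=t: int(x >= t) for t in range(12)]
unlabeled = list(range(11))                                    # no labels needed
print(estimated_distance(thresholds, unlabeled, N_G, N_Gp))    # ~0.09: the graphs disagree
                                                               # on one point per threshold
```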
5 Learning a manipulation graph
In the last section, we have assumed to be given an approximate manipulation graph $\tilde{G}$ which we can use to learn a classifier with low strategic loss. We now want to go one step further and pose the goal of learning a manipulation graph $\tilde{G}$ from a predefined class of graphs $\mathcal{G}$ such that $\ell^{\tilde{G}}_{strat}$ serves as a good strategic surrogate loss for $\ell^{G}_{strat}$. From Theorem 6 we already know that $\ell^{\tilde{G}}_{strat}$ is a good surrogate loss for $\ell^{G}_{strat}$ if $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$ is small. This section will thus focus on learning an approximate manipulation graph $\tilde{G}\in\mathcal{G}$ with small $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$.
In order to further specify our learning problem, we will now describe what the input of such a learning procedure will look like. For a manipulation graph $G$, let $N_G:\mathcal{X}\to 2^{\mathcal{X}}$ be the function that maps an instance to its set of children, i.e., $N_G(x) = \{x'\in\mathcal{X} : (x,x')\in E\}$. We note that a manipulation graph is uniquely defined by $N_G$. Thus we will sometimes use $G$ and $N_G$ interchangeably. The input to our learning procedure will be of the form of samples $(x, N_G(x))$ from the true manipulation graph $G$.
As a next step in formulating our learning problem, we will need to define a loss function. As stated above, our goal is to learn $\tilde{G}$ with small $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$. As the definition of $d_{\mathcal{H},P_{\mathcal{X}}}$ contains a supremum over all $h\in\mathcal{H}$, we cannot use it as a loss directly (as a loss needs to be defined point-wise). However, we can formulate a loss that is closely related and will serve to guarantee a low $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$. We define the graph loss for a manipulation graph $\tilde{G}$, a domain point $x$, a manipulation set $B\subseteq\mathcal{X}$ and a hypothesis $h$ as
$$\ell^{graph}\left(\tilde{G},(x,B),h\right) = \left|\,\mathbb{1}\left[h(x) = 0 \wedge \exists x'\in N_{\tilde{G}}(x): h(x')=1\right] - \mathbb{1}\left[h(x) = 0 \wedge \exists x'\in B: h(x')=1\right]\,\right|.$$
This loss is indeed closely related to the $(\mathcal{H},P_{\mathcal{X}})$-distance, as $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) = \sup_{h\in\mathcal{H}}\mathbb{E}_{x\sim P_{\mathcal{X}}}\left[\ell^{graph}\left(\tilde{G},(x,N_{G}(x)),h\right)\right]$.
The true graph loss of $\tilde{G}$ and $h$ with respect to some marginal $P_{\mathcal{X}}$ and true manipulation graph $G$ is then defined by
$$\mathcal{L}^{graph}_{P_{\mathcal{X}},G}(\tilde{G},h) = \mathbb{E}_{x\sim P_{\mathcal{X}}}\left[\ell^{graph}\left(\tilde{G},(x,N_G(x)),h\right)\right].$$
Furthermore, for a sample $S = \left((x_1,N_G(x_1)),\ldots,(x_m,N_G(x_m))\right)$ we define the empirical graph loss as
$$\mathcal{L}^{graph}_{S}(\tilde{G},h) = \frac{1}{m}\sum_{i=1}^{m}\ell^{graph}\left(\tilde{G},(x_i,N_G(x_i)),h\right).$$
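A natural (if naive) learner for this setting selects, from a finite graph class, the graph minimizing the worst-case empirical graph loss over a finite hypothesis class. The sketch below is one such illustration; the graph class, hypothesis class and true graph are hypothetical choices, and taking the worst case over hypotheses is a design choice of this sketch.

```python
# Selecting a graph from a hypothetical finite graph class by minimizing the
# worst-case (over a finite hypothesis class) empirical graph loss.
component = lambda h, x, B: int(h(x) == 0 and any(h(xp) == 1 for xp in B))

def graph_loss(N_G, x, B, h):
    # Disagreement of the component loss under the candidate and the true neighborhood B.
    return abs(component(h, x, N_G[x]) - component(h, x, B))

def empirical_graph_loss(N_G, sample, hypotheses):
    return max(sum(graph_loss(N_G, x, B, h) for x, B in sample) / len(sample)
               for h in hypotheses)

# Graph class: "agents can move up by at most k steps", for k = 0, 1, 2.
graph_class = [{x: [x + j for j in range(1, k + 1) if x + j <= 10] for x in range(11)}
               for k in (0, 1, 2)]
thresholds = [lambda x, t=t: int(x >= t) for t in range(12)]

true_k = 1   # samples (x, N_G(x)) drawn from the (hypothetical) true one-step graph
sample = [(x, [x + j for j in range(1, true_k + 1) if x + j <= 10]) for x in range(11)]
best = min(graph_class, key=lambda N_G: empirical_graph_loss(N_G, sample, thresholds))
print(best[0])  # [1]: the one-step graph is recovered
```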
Similar to previous sections, we now want to define a loss class for $\ell^{graph}$. We define $M^{graph}_{\tilde{G},h}$ to be the set of all pairs $(x,B)$ on which $\ell^{graph}(\tilde{G},(x,B),h) = 1$. Then the graph loss class of $\mathcal{G}$ (with respect to $\mathcal{H}$) is defined as
$$\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}} = \left\{M^{graph}_{\tilde{G},h} : \tilde{G}\in\mathcal{G},\ h\in\mathcal{H}\right\}.$$
We will now show that if the VC-dimension of the loss class $\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}$ is finite, we can indeed learn a graph $\tilde{G}\in\mathcal{G}$ with respect to $d_{\mathcal{H},P_{\mathcal{X}}}$. For some examples and more discussion on the VC-dimension of the loss class $\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}$, we refer the reader to the appendix.
Lemma 1.
Let $VC(\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}) = d < \infty$. Then there is a function $m_{\mathcal{G},\mathcal{H}}:(0,1)^2\to\mathbb{N}$ such that for any marginal distribution $P_{\mathcal{X}}$ and any manipulation graph $G$, for a sample $S = \left((x_1,N_G(x_1)),\ldots,(x_m,N_G(x_m))\right)$ of size $m\ge m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)$, we have with probability at least $1-\delta$ over the sample generation, for any $\tilde{G}\in\mathcal{G}$ and any $h\in\mathcal{H}$,
$$\left|\mathcal{L}^{graph}_{S}(\tilde{G},h) - \mathcal{L}^{graph}_{P_{\mathcal{X}},G}(\tilde{G},h)\right| \le \epsilon.$$
Furthermore, $m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)\in O\!\left(\frac{d+\log(1/\delta)}{\epsilon^{2}}\right)$.
We note that the above lemma is agnostic in the sense that it does not require $G\in\mathcal{G}$. We will now introduce an empirical version of the $(\mathcal{H},P_{\mathcal{X}})$-distance. This will allow us to state the main theorem of this section and show that it is indeed possible to learn a graph $\tilde{G}\in\mathcal{G}$ with low $d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G})$ if $VC(\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}})$ is finite.
Definition 5.
Given a sample $S = (x_1,\ldots,x_m)$ of domain elements and two manipulation graphs $G$ and $G'$, we can define the empirical $(\mathcal{H},S)$-distance as
$$\hat{d}_{\mathcal{H},S}(G,G') = \sup_{h\in\mathcal{H}}\ \frac{1}{m}\sum_{i=1}^{m}\left|\ell^{G}_{comp}(h,x_i) - \ell^{G'}_{comp}(h,x_i)\right|.$$
Theorem 7.
Let $VC(\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}) = d < \infty$. Then there is a function $m_{\mathcal{G},\mathcal{H}}:(0,1)^2\to\mathbb{N}$ such that for every marginal distribution $P_{\mathcal{X}}$ and every manipulation graph $G$, for a sample $S = \left((x_1,N_G(x_1)),\ldots,(x_m,N_G(x_m))\right)$ of size $m\ge m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)$, we have with probability at least $1-\delta$ over the sample generation, for any $\tilde{G}\in\mathcal{G}$,
$$d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) \;\le\; \hat{d}_{\mathcal{H},S}(G,\tilde{G}) + \epsilon.$$
Furthermore, $m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)\in O\!\left(\frac{d+\log(1/\delta)}{\epsilon^{2}}\right)$.
6 Conclusion
In this paper, we introduced a new strategic loss, which incentivizes correct classification under strategic manipulations. We also incorporate the idea of social burden into our notion of loss. We differentiated this loss from previous formal frameworks designed to mitigate strategic manipulation. In particular, we showed that optimizing for our strategic loss can yield satisfactory classification rules, even if there is no incentive-compatible hypothesis in the class that performs well on the classification task at hand. In addition, the loss formulation yields desirable effects in terms of sample complexity. Our work opens various avenues for further investigations and we hope it will inspire follow-up studies on the connections between a hypothesis class and the underlying manipulation graphs, effects of these connections, as well as learnability of the manipulation graph.
7 Acknowledgements
Ruth Urner is also a faculty affiliate member of Toronto’s Vector Institute. Her research is supported by an NSERC Discovery grant. Tosca Lechner is also a graduate student of the Vector Institute and was supported by a Vector Research Grant.
References
- Ben-David et al. (2010) Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F.; and Vaughan, J. W. 2010. A theory of learning from different domains. Mach. Learn., 79(1-2): 151–175.
- Blumer et al. (1989) Blumer, A.; Ehrenfeucht, A.; Haussler, D.; and Warmuth, M. K. 1989. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM), 36(4): 929–965.
- Brückner and Scheffer (2011) Brückner, M.; and Scheffer, T. 2011. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM International Conference on Knowledge Discovery and Data Mining SIGKDD, 547–555.
- Cullina, Bhagoji, and Mittal (2018) Cullina, D.; Bhagoji, A. N.; and Mittal, P. 2018. PAC-learning in the presence of adversaries. In Advances in Neural Information Processing Systems, 230–241.
- Dalvi et al. (2004) Dalvi, N. N.; Domingos, P. M.; Mausam; Sanghai, S. K.; and Verma, D. 2004. Adversarial classification. In Proceedings of the Tenth ACM International Conference on Knowledge Discovery and Data Mining SIGKDD, 99–108.
- Dong et al. (2018) Dong, J.; Roth, A.; Schutzman, Z.; Waggoner, B.; and Wu, Z. S. 2018. Strategic Classification from Revealed Preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, EC, 55–70.
- Feige, Mansour, and Schapire (2015) Feige, U.; Mansour, Y.; and Schapire, R. 2015. Learning and inference in the presence of corrupted inputs. In Conference on Learning Theory, 637–657.
- Haghtalab et al. (2020) Haghtalab, N.; Immorlica, N.; Lucier, B.; and Wang, J. Z. 2020. Maximizing Welfare with Incentive-Aware Evaluation Mechanisms. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI, 160–166.
- Hardt et al. (2016) Hardt, M.; Megiddo, N.; Papadimitriou, C. H.; and Wootters, M. 2016. Strategic Classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, ITCS, 111–122.
- Haussler and Welzl (1987) Haussler, D.; and Welzl, E. 1987. epsilon-Nets and Simplex Range Queries. Discret. Comput. Geom., 2: 127–151.
- Hu, Immorlica, and Vaughan (2019) Hu, L.; Immorlica, N.; and Vaughan, J. W. 2019. The Disparate Effects of Strategic Manipulation. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT*, 259–268.
- Jagadeesan, Mendler-Dünner, and Hardt (2021) Jagadeesan, M.; Mendler-Dünner, C.; and Hardt, M. 2021. Alternative Microfoundations for Strategic Classification. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 4687–4697.
- Mansour, Mohri, and Rostamizadeh (2009) Mansour, Y.; Mohri, M.; and Rostamizadeh, A. 2009. Domain Adaptation: Learning Bounds and Algorithms. In The 22nd Conference on Learning Theory, COLT.
- Miller, Milli, and Hardt (2020) Miller, J.; Milli, S.; and Hardt, M. 2020. Strategic Classification is Causal Modeling in Disguise. In Proceedings of the 37th International Conference on Machine Learning, ICML, 6917–6926.
- Milli et al. (2019) Milli, S.; Miller, J.; Dragan, A. D.; and Hardt, M. 2019. The Social Cost of Strategic Classification. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT*, 230–239.
- Montasser, Hanneke, and Srebro (2019) Montasser, O.; Hanneke, S.; and Srebro, N. 2019. VC Classes are Adversarially Robustly Learnable, but Only Improperly. In Conference on Learning Theory, COLT, 2512–2530.
- Montasser, Hanneke, and Srebro (2021) Montasser, O.; Hanneke, S.; and Srebro, N. 2021. Adversarially Robust Learning with Unknown Perturbation Sets. In Conference on Learning Theory, COLT 2021, 3452–3482.
- Moran and Yehudayoff (2016) Moran, S.; and Yehudayoff, A. 2016. Sample compression schemes for VC classes. Journal of the ACM (JACM), 63(3): 1–10.
- Perdomo et al. (2020) Perdomo, J. C.; Zrnic, T.; Mendler-Dünner, C.; and Hardt, M. 2020. Performative Prediction. In Proceedings of the 37th International Conference on Machine Learning, ICML, 7599–7609.
- Shalev-Shwartz and Ben-David (2014) Shalev-Shwartz, S.; and Ben-David, S. 2014. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.
- Sundaram et al. (2021) Sundaram, R.; Vullikanti, A.; Xu, H.; and Yao, F. 2021. PAC-Learning for Strategic Classification. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 9978–9988.
- Tsirtsis and Rodriguez (2020) Tsirtsis, S.; and Rodriguez, M. G. 2020. Decisions, Counterfactual Explanations and Strategic Behavior. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems NeurIPS.
- Valiant (1984) Valiant, L. G. 1984. A Theory of the Learnable. Commun. ACM, 27(11): 1134–1142.
- Vapnik and Chervonenkis (1971) Vapnik, V. N.; and Chervonenkis, A. Y. 1971. On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities. Theory of Probability & Its Applications, 16(2): 264–280.
- Zhang and Conitzer (2021) Zhang, H.; and Conitzer, V. 2021. Incentive-Aware PAC Learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI, 5797–5804.
Appendix A Statistical Learning Theory basics
In this section, we detail the standard setup of statistical learning theory for classification which we employ in our work.
A.1 Learning theoretic setup and definitions
We let $\mathcal{X}$ denote the domain and $\mathcal{Y}$ (mostly $\mathcal{Y} = \{0,1\}$) a (binary) label space. We model the data generating process as a distribution $P$ over $\mathcal{X}\times\mathcal{Y}$ and let $P_{\mathcal{X}}$ denote the marginal of $P$ over $\mathcal{X}$. We use the notation $(x,y)\sim P$ to indicate that $(x,y)$ is a sample from distribution $P$ and $S\sim P^{n}$ to indicate that the set $S = ((x_1,y_1),\ldots,(x_n,y_n))$ is a sequence (for example a training or test data set) of $n$ i.i.d. samples from $P$. Further, we use the notation $f_P(x) = \mathbb{P}[y = 1\mid x]$ to denote the regression or conditional labeling function of $P$. We say that the distribution $P$ has deterministic labels if $f_P(x)\in\{0,1\}$ for all $x\in\mathcal{X}$.
A classifier or hypothesis is a function $h:\mathcal{X}\to\{0,1\}$. A classifier $h$ can naturally be viewed as a subset of $\mathcal{X}$, namely $\{x\in\mathcal{X} : h(x)=1\}$. We let $\mathcal{F}$ denote the set of all Borel measurable functions from $\mathcal{X}$ to $\{0,1\}$ (or all such functions in case of a countable domain). A hypothesis class is a subset of $\mathcal{F}$, often denoted by $\mathcal{H}$.
The quality of prediction of a hypothesis $h$ on an input/output pair $(x,y)$ is measured by a loss function $\ell$. For classification problems, the quality of prediction is typically measured with the binary or classification loss
$$\ell^{0/1}(h,(x,y)) = \mathbb{1}\left[h(x)\ne y\right],$$
where $\mathbb{1}[p]$ denotes the indicator function for predicate $p$.
We denote the expected loss (or true loss) of a hypothesis $h$ with respect to the distribution $P$ and loss function $\ell$ by
$$\mathcal{L}^{\ell}_{P}(h) = \mathbb{E}_{(x,y)\sim P}\left[\ell(h,(x,y))\right].$$
In particular, we will denote the true binary loss of a classifier $h$ by $\mathcal{L}^{0/1}_{P}(h)$. The quality of an output classifier is typically compared with the best possible (binary) loss achievable on distribution $P$ with hypotheses from a fixed class $\mathcal{H}$. This quantity is referred to as the approximation error of $\mathcal{H}$ with respect to $P$ and denoted by
$$\operatorname{opt}_{P}(\mathcal{H}) = \inf_{h\in\mathcal{H}}\mathcal{L}^{0/1}_{P}(h).$$
The empirical loss of a hypothesis $h$ with respect to loss function $\ell$ and a sample $S = ((x_1,y_1),\ldots,(x_n,y_n))$ is defined as
$$\mathcal{L}^{\ell}_{S}(h) = \frac{1}{n}\sum_{i=1}^{n}\ell(h,(x_i,y_i)).$$
A learner $\mathcal{A}$ is a function that takes in a finite sequence of labeled instances $S = ((x_1,y_1),\ldots,(x_n,y_n))$ and outputs a hypothesis $\mathcal{A}(S)$. The following is a standard notion of (PAC-)learnability of a hypothesis class from finite samples (Vapnik and Chervonenkis 1971; Valiant 1984; Blumer et al. 1989; Shalev-Shwartz and Ben-David 2014).
Definition 6 (PAC learnability).
We say that a learner $\mathcal{A}$ PAC learns (or simply learns) hypothesis class $\mathcal{H}$ with respect to a set of distributions $\mathcal{P}$ and loss function $\ell$ if, for every $(\epsilon,\delta)\in(0,1)^2$ there is a sample-size $m(\epsilon,\delta)\in\mathbb{N}$ such that, for all $P\in\mathcal{P}$ and all $n\ge m(\epsilon,\delta)$, we have
$$\mathbb{P}_{S\sim P^{n}}\left[\mathcal{L}^{\ell}_{P}(\mathcal{A}(S)) > \inf_{h\in\mathcal{H}}\mathcal{L}^{\ell}_{P}(h) + \epsilon\right] < \delta.$$
We say that $\mathcal{A}$ agnostically learns $\mathcal{H}$ if the above holds with respect to the class of all data-generating distributions (subject to some mild, standard measurability conditions). We say that $\mathcal{A}$ learns $\mathcal{H}$ in the realizable case if the above holds with respect to the class of distributions $P$ for which there exists a classifier $h\in\mathcal{H}$ with $\mathcal{L}^{\ell}_{P}(h) = 0$.
The smallest function $m:(0,1)^2\to\mathbb{N}$ for which there exists a learner that learns class $\mathcal{H}$ in the above sense is called the sample complexity of $\mathcal{H}$.
Definition 7 (Proper versus improper learning).
If a learner $\mathcal{A}$ always outputs a function from the hypothesis class $\mathcal{H}$, we call $\mathcal{A}$ a proper learner for $\mathcal{H}$ (and otherwise we call $\mathcal{A}$ an improper learner for $\mathcal{H}$). If Definition 6 for learnability holds with a proper learner for $\mathcal{H}$, we call the class properly (PAC) learnable.
It is well known that a binary hypothesis class $\mathcal{H}$ is learnable (with a proper learner; both agnostically and in the realizable case) in the above sense if and only if $\mathcal{H}$ has finite VC-dimension (see Definition 2 in the main part of the paper, and more details on the background of this in the section on loss classes); and that the sample complexity is $\Theta\!\left(\frac{VC(\mathcal{H})+\log(1/\delta)}{\epsilon}\right)$ in the realizable case and $\Theta\!\left(\frac{VC(\mathcal{H})+\log(1/\delta)}{\epsilon^{2}}\right)$ in the agnostic case (Shalev-Shwartz and Ben-David 2014).
A.2 The role of VC-dimension of loss classes for learnability
Standard VC-theory tells us that, for the binary classification loss, the sample complexity of learning a hypothesis class is determined by the VC-dimension of the class (and thus the VC-dimension of the loss class, since these are identical for the binary loss). Any learner that acts according to the ERM (Empirical Risk Minimization) principle is a successful learner with respect to the binary loss for classes of bounded VC-dimension $d$. We here briefly recap the underpinnings of this result. For the realizable case, a sample of size at least $O\!\left(\frac{d\log(1/\epsilon)+\log(1/\delta)}{\epsilon}\right)$ is an $\epsilon$-net for the loss class (with probability at least $1-\delta$) (Haussler and Welzl 1987); thus, for every hypothesis that has loss at least $\epsilon$, there would be a sample point indicating that. Choosing a hypothesis of zero empirical loss then guarantees true loss bounded by $\epsilon$. In the agnostic case, a sample of size $O\!\left(\frac{d+\log(1/\delta)}{\epsilon^{2}}\right)$ is an $\epsilon$-approximation for the loss class, that is, we have $\left|\mathcal{L}^{0/1}_{S}(h) - \mathcal{L}^{0/1}_{P}(h)\right|\le\epsilon$ for every $h\in\mathcal{H}$ with high probability, which in turn implies that any ERM learner is a successful agnostic PAC learner. For the binary loss there are complementing lower bounds for the sample complexity of learning VC-classes. However, unlike for the binary loss, the VC-dimension of the strategic loss class does not imply a lower bound on the sample complexity (as we see in Theorem 3).
Appendix B Example for bounded VC dimension of strategic component
To illustrate the conditions in Theorem 4, we here provide a few natural examples, where the VC-dimension of the strategic components can be bounded:
Example 8.
1. $G$ being complete implies $VC(\mathcal{C}^{G}_{\mathcal{H}}) \le VC(\mathcal{H})$.
2. If $G$ corresponds to a partial order over $\mathcal{X}$ (that is, $G$ in particular is acyclic) and the class $\mathcal{H}$ is a subset of complements of initial segments in $G$ (that is, if for some $h\in\mathcal{H}$ and $x\in\mathcal{X}$ we have $h(x) = 1$, then we also have $h(x') = 1$ for all $x'$ that precede $x$ in the partial order), then the VC-dimension of $\mathcal{C}^{G}_{\mathcal{H}}$ is bounded by the size of a largest antichain in $G$.
3. If $\mathcal{X} = \mathbb{R}^{d}$, $\mathcal{H}$ consists of linear classifiers and the plausible manipulations are determined by two points being close in some standard norm, then the VC-dimension of $\mathcal{C}^{G}_{\mathcal{H}}$ is bounded in terms of the dimension $d$.
4. If $\mathcal{X} = \mathbb{R}^{d}$, $\mathcal{H}$ consists of linear classifiers and we have a plausible manipulation $(x,x')\in E$ if and only if $x$ and $x'$ differ on only one coordinate, then the VC-dimension of the resulting class of strategic components is $d+1$ (since for every $h\in\mathcal{H}$ and every point $x$ such that $h(x) = 0$, there is a $x'$ such that $(x,x')$ is an edge and $h(x') = 1$; thus, for every half-space $h$, the set of zeros of $h$ that are connected to a one is exactly $\{x : h(x) = 0\}$, and therefore $\mathcal{C}^{G}_{\mathcal{H}} = \{\{x : h(x) = 0\} : h\in\mathcal{H}\}$).
Appendix C The VC dimension of the graph loss class
In Section 5 we analyse the learnability of an approximate manipulation graph from a predefined set of manipulation graphs $\mathcal{G}$ in $(\mathcal{H},P_{\mathcal{X}})$-distance in terms of the VC-dimension of the graph loss class $\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}$. We now want to give some examples for when this VC-dimension is indeed finite.
Example 9.
1. If for every $G\in\mathcal{G}$ and every $x\in\mathcal{X}$, we have and then .
2. Let $\mathcal{H}$ be the class of linear separators and $\mathcal{G}$ be the graph class consisting of graphs such that all their neighborhood sets are balls whose radius may depend linearly on the feature vector. Then the VC-dimension of $\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}$ is finite.
Appendix D Proofs
Observation 1.
For any $n\in\mathbb{N}\cup\{\infty\}$ there exists a class $\mathcal{H}$ and a manipulation graph $G$ with $VC(\mathcal{H}) = 1$ and $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge n$.
Proof of Observation 1: Let $\mathcal{X}$ be some infinite domain. We treat the case $n\in\mathbb{N}$ first. We consider a set $Z\subseteq\mathcal{X}$ of $n$ domain points and a disjoint set $W\subseteq\mathcal{X}$ of $2^{n}$ domain points. We associate each subset of $Z$ with exactly one point in $W$. We design a manipulation graph $G$ by adding an edge $(z,w)$ if and only if $z$ is a member of the subset associated with $w$, and include no other edges. Now we consider the class of singletons over $W$, that is, $\mathcal{H}$ consists of the functions $h_{w} = \mathbb{1}[x = w]$ for $w\in W$. That is, $h_{w}$ assigns label $0$ to every point in the domain except for $w$. Then we have $VC(\mathcal{H}) = 1$. However, the loss class $\mathcal{S}^{G}_{\mathcal{H}}$ shatters the set $Z\times\{0\}$ (and the class of strategic components $\mathcal{C}^{G}_{\mathcal{H}}$ shatters the set $Z$). Thus, $VC(\mathcal{S}^{G}_{\mathcal{H}})\ge n$. For the case $n = \infty$, we will use the same construction with the set $Z$ being countably infinite and $W$ being an uncountable set, again indexed by the power-set of $Z$. ∎
Observation 2.
For any hypothesis class $\mathcal{H}$ and any manipulation graph $G$, we have $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge VC(\mathcal{H})$.
Proof of Observation 2: Let $Z$ be a set of points shattered by the hypothesis class $\mathcal{H}$. Since all elements of $\mathcal{H}$ are functions, we can assume that the points in $Z$ are pairwise distinct. The same set of domain points with all labels set to $1$ is then shattered by the loss class $\mathcal{S}^{G}_{\mathcal{H}}$. To see this, note that for a function $h$ and point $z$, if $h(z) = 1$, then $(z,1)$ is not in the loss set of $h$. Thus, if the class can shatter those points, the loss class will shatter them as well. ∎
Theorem 3.
For every $n\in\mathbb{N}\cup\{\infty\}$, there exists a hypothesis class $\mathcal{H}$ (and a manipulation graph $G$) with $VC(\mathcal{S}^{G}_{\mathcal{H}}) \ge n$ that is learnable with respect to the strategic loss with sample complexity $O\!\left(\frac{\log(1/\delta)}{\epsilon}\right)$ in the realizable case.
Proof of Theorem 3: We consider the class of singletons from the proof of Observation 1. Note that, if the class is realizable with respect to the strategic loss, then there exists at most one $w\in W$ such that the point $(w,1)$ has a positive weight under this distribution. Further, the distribution cannot assign any weight to points $(z,1)$ with $z\in Z$, and it can only assign a positive weight to a point $(z,0)$, $z\in Z$, if $z$ is not a member of the subset corresponding to $w$. Further, if the point $(w,1)$ is a member of the training sample, the learner can output hypothesis $h_{w}$ and this hypothesis then has true strategic loss $0$. If the training sample does not contain any point with label $1$, the learner can output a hypothesis that assigns label $0$ to all points. In that case, we get a true strategic loss of at most the probability weight of $(w,1)$. If this probability was larger than $\epsilon$, then, by the standard argument, a sample of the stated size being an $\epsilon$-net for singletons would have contained the point (with high probability at least $1-\delta$). ∎
Theorem 5.
(Adaptation of Theorem 1 and Theorem 4 in (Montasser, Hanneke, and Srebro 2019))
There exists a hypothesis class $\mathcal{H}$ of finite VC-dimension that is not learnable with respect to the strategic loss by any proper learner for $\mathcal{H}$, even in the realizable case. On the other hand, every class of finite VC-dimension is learnable (by some improper learner).
Proof of Theorem 5: The construction to prove the first claim can be taken exactly as is in the proof of Theorem 1 by Montasser et al. (Montasser, Hanneke, and Srebro 2019), identifying the negative label there with $0$ and the positive label there with $1$. The positive result on general learnability with an improper learner is derived by Montasser et al. (Montasser, Hanneke, and Srebro 2019) by means of adapting a general compression scheme for VC-classes developed by Moran and Yehudayoff (Moran and Yehudayoff 2016). We can use the same adaptation to show learnability with respect to the strategic loss with the following modification: instead of just compressing a data sample $S$, the compression scheme for the adversarial loss first creates an inflated sample which adds the perturbation sets (and then discretizes these using Sauer’s lemma to obtain a new finite data set). For the strategic loss, we would add the label-conditioned perturbation sets (as described above); that is, we inflate the sample by adding the points $(x',0)$ for $x'\in N_G(x_i)$, with $x_i$ being the data point of smallest index with $(x_i,0)\in S$ and $x'\in N_G(x_i)$. The discretization step and remaining compression and decompression technique can be used identically as in the adversarial loss case. ∎
Theorem 6.
Let $\mathcal{H}$ be any hypothesis class and $G, \tilde{G}$ two manipulation graphs. Then for any distribution $P$ over $\mathcal{X}\times\mathcal{Y}$ and any $h\in\mathcal{H}$ we have
$$\mathcal{L}^{G}_{P}(h) \;\le\; \mathcal{L}^{\tilde{G}}_{P}(h) + d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}).$$
Furthermore, by rearranging the result, we get
$$\mathcal{L}^{\tilde{G}}_{P}(h) - d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) \;\le\; \mathcal{L}^{G}_{P}(h) \;\le\; \mathcal{L}^{\tilde{G}}_{P}(h) + d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}).$$
Lemma 1.
Let $VC(\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}) = d < \infty$. Then there is a function $m_{\mathcal{G},\mathcal{H}}:(0,1)^2\to\mathbb{N}$ such that for any marginal distribution $P_{\mathcal{X}}$ and any manipulation graph $G$, for a sample $S = \left((x_1,N_G(x_1)),\ldots,(x_m,N_G(x_m))\right)$ of size $m\ge m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)$, we have with probability at least $1-\delta$ over the sample generation, for any $\tilde{G}\in\mathcal{G}$ and any $h\in\mathcal{H}$,
$$\left|\mathcal{L}^{graph}_{S}(\tilde{G},h) - \mathcal{L}^{graph}_{P_{\mathcal{X}},G}(\tilde{G},h)\right| \le \epsilon.$$
Furthermore, $m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)\in O\!\left(\frac{d+\log(1/\delta)}{\epsilon^{2}}\right)$.
Proof of Lemma 1: The result follows directly from the finite VC-dimension of $\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}$ and the fact that a finite VC-dimension of a class yields uniform convergence for the class (Theorem 6.7 and Theorem 6.8 from (Shalev-Shwartz and Ben-David 2014)). ∎
Theorem 7.
Let $VC(\mathcal{L}^{graph}_{\mathcal{G},\mathcal{H}}) = d < \infty$. Then there is a function $m_{\mathcal{G},\mathcal{H}}:(0,1)^2\to\mathbb{N}$ such that for any marginal distribution $P_{\mathcal{X}}$ and any manipulation graph $G$, for a sample $S = \left((x_1,N_G(x_1)),\ldots,(x_m,N_G(x_m))\right)$ of size $m\ge m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)$, we have with probability at least $1-\delta$ over the sample generation, for any $\tilde{G}\in\mathcal{G}$,
$$d_{\mathcal{H},P_{\mathcal{X}}}(G,\tilde{G}) \;\le\; \hat{d}_{\mathcal{H},S}(G,\tilde{G}) + \epsilon.$$
Furthermore, $m_{\mathcal{G},\mathcal{H}}(\epsilon,\delta)\in O\!\left(\frac{d+\log(1/\delta)}{\epsilon^{2}}\right)$.