Optimizing the Performative Risk under
Weak Convexity Assumptions
Abstract
In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, called the performative risk (PR), captures the expected loss incurred by a predictive model after deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that together imply convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the minimization problem to iterative optimization methods.
1 Introduction
Predictions in the real world are often performative. This means that predictions, because they are made public or because they inform downstream actions, influence the distribution of the features or the target variable they aim to predict, thereby shifting the data distribution the model has been trained on. Prominent examples include stock price prediction, credit scoring, and predictive policing.
As a result, the performance of a model should not be measured with respect to a fixed underlying distribution, as in classical supervised learning, but instead on the very distribution the model induces. This has been conceptualized by Perdomo et al. (2020) through the framework of performative prediction. This dynamic setting gives rise to interesting new optimization challenges. The objective function in performative prediction is the risk incurred by the learner on the distribution induced after deploying the model:
$$\mathrm{PR}(\theta) := \mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \ell(z; \theta), \qquad (1)$$
where $\Theta$ is a closed and convex set in $\mathbb{R}^d$. $\mathrm{PR}(\theta)$ is referred to as the performative risk and measures the loss of a predictor $\theta$ on the distribution $\mathcal{D}(\theta)$ resulting from its deployment. We focus on finding the minima of the performative risk (1), known as performative optimal points. The main challenge of performative risk minimization from an optimization perspective is that the dependence of the performative risk on the model parameters is two-fold: it appears both in the distribution $\mathcal{D}(\theta)$ and in the loss $\ell(z;\theta)$. To decouple these two dependencies, let us define the decoupled performative risk
$$\mathrm{DPR}(\theta, \theta') := \mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \ell(z; \theta')$$
to denote the loss of $\theta'$ measured over the distribution $\mathcal{D}(\theta)$. According to this definition we have $\mathrm{PR}(\theta) = \mathrm{DPR}(\theta, \theta)$.
Optimizing the performative risk directly is challenging because the structure of the objective is not under the control of the learner and can be highly non-convex. Thus, to tackle performative risk minimization by building on classical tools from the optimization literature, we need to better understand the structure of the objective function and how properties of the distribution map $\mathcal{D}(\cdot)$ and the loss $\ell$ interact.
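To make this non-convexity concrete, here is a minimal numeric sketch of our own (the loss is standard; the distribution map is a hypothetical choice, not an example from the cited works): with the squared loss $\ell(z;\theta)=(z-\theta)^2$, which is convex in $\theta$, and the non-linear map $\mathcal{D}(\theta)=\mathcal{N}(2\sin\theta, 1)$, the performative risk has the closed form $\mathrm{PR}(\theta)=(2\sin\theta-\theta)^2+1$ and fails a midpoint convexity check:

```python
import numpy as np

# Illustrative only: squared loss l(z; theta) = (z - theta)^2 with a
# hypothetical non-linear distribution map D(theta) = N(2*sin(theta), 1),
# giving the closed form PR(theta) = (2*sin(theta) - theta)^2 + 1.
def performative_risk(theta):
    return (2.0 * np.sin(theta) - theta) ** 2 + 1.0

# Midpoint convexity check on a grid: convexity would require
# PR((t1 + t2)/2) <= (PR(t1) + PR(t2))/2 everywhere.
thetas = np.linspace(-4.0, 4.0, 201)
pr = performative_risk(thetas)
midpoint_gaps = performative_risk((thetas[:-2] + thetas[2:]) / 2) - (pr[:-2] + pr[2:]) / 2
print("PR convex on grid:", bool(np.all(midpoint_gaps <= 1e-12)))  # False
```

The check fails (e.g., around $\theta \approx 1$ the second derivative of this $\mathrm{PR}$ is negative), even though for each fixed distribution the loss is a convex quadratic in $\theta$.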
As an important first step, Miller et al. (2021) identified a natural set of properties of the loss function and of the model-induced distribution shift under which the performative risk is convex. In particular, they showed that linear dependence of the distribution map $\mathcal{D}(\theta)$ on the model parameters $\theta$, together with strong convexity of the loss function, implies global convexity of $\mathrm{PR}$. In this work, we are interested in studying problems beyond this specific setting, and we address the following question:
How and under what conditions can we optimize the performative risk?
In particular, we study how to optimize the performative risk through first-order information and analyze the suboptimality gap of stable points. Noting that convexity can be relaxed to weaker conditions while gradient-based optimizers preserve the same convergence guarantees, in Section 3 we show that weaker conditions on the loss $\ell$ still imply desirable local properties of $\mathrm{PR}$. These properties enjoy generality since they do not rely on global structural restrictions. In Section 4, we discuss what information about $\mathrm{PR}$ we can obtain when starting from a stable point. The underlying idea is to make use of previous retraining methods (Perdomo et al., 2020; Mendler-Dünner et al., 2020) that converge to stable points $\theta_{PS}$, while the ultimate goal is still to reach performative optima $\theta_{PO}$.
2 Background
Performativity is a phenomenon that is widely recognized in finance (Clarke, 2012), economics (Callon et al., 2007; Cochoy et al., 2010) and other social sciences (Roberts, 2009). Following the seminal work by Perdomo et al. (2020), performativity has increasingly gained attention from the machine learning community. Performative distribution shifts constitute an instance of a more general distribution shift problem, where the shift is endogenous to the problem and entirely caused by the deployed predictive model. In other words, the deployment of a predictive model represents an intervention on the population (Mendler-Dünner et al., 2022). The resulting causal feedback effects invalidate the standard assumption of supervised learning that there is a static, model-independent data distribution describing the population, surfacing as distribution shifts.
2.1 Key concepts
We first give an overview of two existing concepts characterizing whether a model $\theta$ is a desirable solution of performative prediction.
Performative stability.
Performative prediction can be viewed as a game between the decision-maker and the population that responds to the deployed model. The first concept of optimality in this context is performative stability:
$$\theta_{PS} \in \operatorname*{argmin}_{\theta \in \Theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \theta). \qquad (2)$$
The expected loss over $\mathcal{D}(\theta_{PS})$ is minimized at $\theta_{PS}$ itself. As such, stable points constitute fixed points of so-called retraining methods, which are heuristically adopted to deal with distribution shifts. More specifically, at the $t$-th iteration, these methods find a set of parameters that minimizes the risk on the previously generated distribution $\mathcal{D}(\theta_t)$; formally,
$$\theta_{t+1} = \operatorname*{argmin}_{\theta \in \Theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_t)}\, \ell(z; \theta).$$
Several works have studied the convergence of population-based and stochastic retraining methods to stable points in the standard framework (Perdomo et al., 2020; Mendler-Dünner et al., 2020; Drusvyatskiy and Xiao, 2022) as well as in state-dependent extensions (Brown et al., 2022; Li and Wai, 2022; Harris et al., 2021).
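As a sketch of such retraining under assumed toy dynamics (our own illustration): with the squared loss and a hypothetical Gaussian location map $\mathcal{D}(\theta) = \mathcal{N}(\mu_0 + \varepsilon\theta, 1)$, each retraining step has the closed form $\theta_{t+1} = \mu_0 + \varepsilon\theta_t$, a contraction whenever $\varepsilon < 1$:

```python
# Sketch of retraining under a hypothetical Gaussian location map
# D(theta) = N(mu0 + eps*theta, 1) with squared loss: the risk on D(theta_t)
# is minimized by the mean, so the update is theta_{t+1} = mu0 + eps*theta_t.
mu0, eps = 1.0, 0.5          # eps < 1 makes the update a contraction
theta = 0.0
for _ in range(60):
    theta = mu0 + eps * theta

theta_ps = mu0 / (1.0 - eps)              # closed-form fixed point (= 2.0)
print(abs(theta - theta_ps) < 1e-12)      # retraining converged to theta_PS
```

The geometric convergence rate mirrors the sensitivity-based contraction arguments in the works cited above.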
Performative optimality.
An alternative solution concept is performative optimality. Performative optimal points describe models that achieve the minimal loss on the distribution they induce; in this sense, they are globally optimal:
$$\theta_{PO} \in \operatorname*{argmin}_{\theta \in \Theta} \; \mathrm{PR}(\theta).$$
It follows from the definitions that $\mathrm{PR}(\theta_{PO}) \le \mathrm{PR}(\theta_{PS})$, while in general stable points are not necessarily optimal and optimal points are not necessarily stable. Furthermore, the two solutions $\theta_{PO}$ and $\theta_{PS}$ can be arbitrarily far apart from each other; see (Perdomo et al., 2020; Miller et al., 2021). Thus, retraining does not, in general, serve as a reliable heuristic for finding optimal points. With this in mind, we focus on finding performative optima in this work. There are only a handful of works that study approaches for finding optimal points. The first to tackle performative optimality were Miller et al. (2021). They proposed an iterative optimization method for minimizing the performative risk directly by first learning a model of the distribution map and then applying a gradient-based procedure to the resulting estimate of the performative risk. To guarantee the global convergence of such an approach, they studied structural assumptions under which the performative risk is convex. Around the same time, Izzo et al. (2021) studied performative gradient descent, an algorithm that directly estimates the gradient of the performative risk. Technically, they assumed the existence of an estimator with which samples from the induced distribution can be used to infer how $\mathcal{D}(\theta)$ depends on $\theta$, an assumption we will revisit later in this manuscript. To prove convergence to optimal points, they directly assume convexity of the performative risk, an assumption that is hard to verify.
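A small closed-form toy of our own (not taken from the cited works) makes the gap between the two solution concepts concrete: with a ridge-regularized squared loss and a Gaussian location map, the retraining fixed point and the minimizer of the performative risk differ:

```python
# Hypothetical toy: loss l(z; theta) = (z - theta)^2 + lam*theta^2 with
# Gaussian location map D(theta) = N(a + eps*theta, 1).
# Stable point:  theta solves theta = (a + eps*theta) / (1 + lam).
# Optimal point: theta minimizes PR(theta) = (a + (eps-1)*theta)^2 + 1 + lam*theta^2.
a, eps, lam = 1.0, 0.8, 0.1

def pr(theta):
    return (a + (eps - 1) * theta) ** 2 + 1 + lam * theta**2

theta_ps = a / (1 + lam - eps)                      # fixed point of retraining
theta_po = a * (1 - eps) / ((1 - eps) ** 2 + lam)   # argmin of PR (closed form)

print(round(theta_ps, 3), round(theta_po, 3))       # 3.333 vs. 1.429
print(pr(theta_po) < pr(theta_ps))                  # the stable point is suboptimal
```

Retraining settles where the induced distribution's mean is self-consistent, while the optimum additionally anticipates how moving $\theta$ moves the distribution; here the two points and their risks clearly differ.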
A different approach has recently been taken by Jagadeesan et al. (2022), who leveraged tools from the bandit literature for minimizing regret under performativity without making structural assumptions on the performative risk apart from weak regularity assumptions. Technically, they use data collected from experiments to build performative confidence bounds that characterize the regret of unexplored parameters. Their method achieves sublinear regret and is the first to find optimal points for non-convex performative risk functions. However, a significant drawback of such a bandit approach is the need for global exploration, i.e., one needs to explore the entire parameter space to find the minimizer, which is often not feasible in practical applications. Therefore, the focus of this work is on iterative descent methods that gradually approach optimal points.
2.2 Assumptions on the loss function
The loss function in performative prediction plays the same role as the loss in supervised learning. We discuss classical assumptions from the optimization literature, such as strong convexity and smoothness, on the loss function.
Assumption 1 (Smoothness)
The loss $\ell(z;\theta)$ is $\beta$-smooth if for all $\theta, \theta' \in \Theta$ and all $z$,
$$\|\nabla_\theta \ell(z; \theta) - \nabla_\theta \ell(z; \theta')\| \le \beta \|\theta - \theta'\|.$$
Assumption 2 (Strong convexity)
The loss $\ell(z;\theta)$ is $\gamma$-strongly convex in $\theta$ if for all $\theta, \theta' \in \Theta$ and all $z$,
$$\ell(z; \theta) \ge \ell(z; \theta') + \nabla_\theta \ell(z; \theta')^\top (\theta - \theta') + \frac{\gamma}{2} \|\theta - \theta'\|^2.$$
Note that this condition reduces to convexity when $\gamma = 0$.
These two assumptions characterize the dependence of the loss function on the model parameters $\theta$, and they are assumed to hold for any realization of $z$. We will adopt the smoothness assumption in this work, and we will discuss the strong-convexity assumption as well as weaker notions of convexity.
In addition, we will need a regularity assumption relating changes in the data distribution to changes in the loss function. Therefore, we adopt the Lipschitzness condition w.r.t. $z$, which is common when dealing with performative distribution shifts (Perdomo et al., 2020; Miller et al., 2021), namely
Assumption 3 (Lipschitzness)
The loss $\ell(z;\theta)$ is $L_z$-Lipschitz in $z$ if for all $z, z'$ and all $\theta \in \Theta$,
$$|\ell(z; \theta) - \ell(z'; \theta)| \le L_z \|z - z'\|.$$
2.3 Characterizing the performative shift
Since performative effects are not under the control of the learner and are typically unknown, we need assumptions on these distribution shifts for studying the problem and establishing convergence guarantees of algorithmic approaches. In contrast to classical supervised learning where optimization techniques are commonly adopted under conditions on the loss function, we need additional assumptions for performative optimization. This observation is at the core of the main difficulty in performative prediction and the art is to couple conditions on the loss function with conditions on the distribution shift to obtain favorable properties of the resulting performative risk minimization problem.
We shall first introduce the concept of Wasserstein distance, then we proceed to the key sensitivity assumption that relates changes in the model parameters to changes in the distribution.
Definition 2.1 (Wasserstein Distance).
The Wasserstein-1 distance, also called the Earth-Mover (EM) distance, is defined as
$$W_1(P, Q) = \inf_{\pi \in \Pi(P, Q)} \; \mathbb{E}_{(x, y) \sim \pi}\, \|x - y\|, \qquad (3)$$
where $\Pi(P, Q)$ denotes the set of all joint distributions $\pi(x, y)$ whose marginals are respectively $P$ and $Q$.
Intuitively, $W_1(P, Q)$ indicates how much mass must be transported in order to transform the distribution $P$ into the distribution $Q$; the EM distance is then the "cost" of the optimal transport plan. The infimum in Eq. 3 is highly intractable. On the other hand, the Kantorovich-Rubinstein duality tells us that
$$W_1(P, Q) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim P}\, f(x) - \mathbb{E}_{y \sim Q}\, f(y),$$
where the supremum is over all 1-Lipschitz functions $f$. This allows us to bound the difference between the expectations of a Lipschitz function over two different distributions by their Wasserstein distance.
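Both facts can be checked empirically in one dimension, where the optimal transport plan is simply the monotone (sorted) coupling. The sketch below assumes Gaussian samples of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, 20_000)    # samples from P = N(0, 1)
q_samples = rng.normal(0.5, 1.0, 20_000)    # samples from Q = N(0.5, 1)

# In 1-D the optimal transport plan is the monotone (sorted) coupling, so the
# empirical W1 is the mean gap between sorted samples (true value here: 0.5).
w1 = np.mean(np.abs(np.sort(p_samples) - np.sort(q_samples)))

# Kantorovich-Rubinstein: |E_P f - E_Q f| <= W1(P, Q) for any 1-Lipschitz f.
gap = abs(np.abs(p_samples).mean() - np.abs(q_samples).mean())  # f = |.|
print(round(w1, 2), bool(gap <= w1))
```

The duality bound holds with room to spare: the particular 1-Lipschitz function $f(x) = |x|$ separates the two distributions far less than the optimal (identity-like) witness.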
Sensitivity.
One commonly used assumption is a weak regularity condition that posits Lipschitzness of the distribution shift relative to changes in the model parameters. Intuitively, if decisions are made according to similar predictive models, then the resulting distributions should also be close. More precisely, the sensitivity condition introduced by Perdomo et al. (2020) relates changes in the model parameters, measured in Euclidean distance, to changes in the distribution, measured in Wasserstein distance.
Assumption 4 (Sensitivity)
The distribution map $\mathcal{D}(\cdot)$ is $\varepsilon$-sensitive if for all $\theta, \theta' \in \Theta$,
$$W_1(\mathcal{D}(\theta), \mathcal{D}(\theta')) \le \varepsilon \|\theta - \theta'\|.$$
Mixture dominance.
Below we present a stronger, more structural assumption on the distribution shift, introduced by Miller et al. (2021), which guarantees convexity of $\mathrm{DPR}(\theta, \theta')$ w.r.t. the first argument $\theta$.
Assumption 5 (Mixture dominance)
A loss $\ell$ and a distribution map $\mathcal{D}(\cdot)$ satisfy the mixture dominance condition if $\mathrm{DPR}(\theta, \varphi)$ is convex w.r.t. $\theta$; thus, for all $\theta, \theta', \varphi \in \Theta$ and $\alpha \in [0, 1]$,
$$\mathbb{E}_{z \sim \mathcal{D}(\alpha \theta + (1 - \alpha) \theta')}\, \ell(z; \varphi) \le \alpha\, \mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \ell(z; \varphi) + (1 - \alpha)\, \mathbb{E}_{z \sim \mathcal{D}(\theta')}\, \ell(z; \varphi).$$
It has been demonstrated by Miller et al. (2021) that this assumption is satisfied in performative environments by the well-known location-scale distribution family, a family of distributions formed by translation and re-scaling of a standard family member. For example, let the underlying distribution be $\mathcal{D}_0$; under deployment of $\theta$, data points satisfy $z = z_0 + A\theta + \nu$ with $z_0 \sim \mathcal{D}_0$, where the matrix $A$ is an unknown parameter and $\nu$ is a zero-mean random variable. Here the additive term $A\theta$ represents the effects of performativity. Standard examples include Gaussian, Cauchy, and uniform distributions. These families have found wide application in economics (Wong and Ma, 2008) and statistics (Hazra et al., 2017). Examples of location families can also be found in strategic classification (Hardt et al., 2016).
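For intuition, here is a scalar sketch of our own of why location families are benign: if $\mathcal{D}(\theta)$ samples $z = z_0 + a\theta$ with $z_0 \sim \mathcal{N}(0,1)$ and the loss is squared, then $\mathrm{DPR}(\theta, \varphi)$ is a quadratic in $\theta$, so the midpoint form of the mixture dominance inequality holds identically:

```python
import numpy as np

# Scalar location family: D(theta) samples z = z0 + a*theta with z0 ~ N(0,1)
# (a toy analogue of z = z0 + A*theta).  For squared loss,
# DPR(theta, phi) = E_{z ~ D(theta)} (z - phi)^2 = (a*theta - phi)^2 + 1,
# a quadratic (hence convex) function of theta.
a = 0.7

def dpr(theta, phi):
    return (a * theta - phi) ** 2 + 1.0

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):   # midpoint form of the mixture dominance inequality
    t1, t2, phi = rng.uniform(-5, 5, 3)
    ok &= dpr((t1 + t2) / 2, phi) <= (dpr(t1, phi) + dpr(t2, phi)) / 2 + 1e-9
print(bool(ok))  # True
```

The linearity of the map from $\theta$ to the location of the distribution is exactly what keeps $\mathrm{DPR}$ convex in its first argument here.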
3 Beyond convexity of the performative risk
In studying the convergence of gradient-based methods to performative optima, previous works on performative prediction have focused on settings where the performative risk is convex. While some works, such as Izzo et al. (2021), directly start from a-priori hard-to-verify assumptions, the notable work by Miller et al. (2021) provides conditions on the loss function and the distribution map under which global convexity of $\mathrm{PR}$ emerges. More specifically, they show that strong convexity of the loss function, together with Assumptions 4-5 controlling the performative effects on the distribution, is sufficient to establish (strong) convexity of the performative risk. They obtain a zeroth-order algorithm that provably converges to globally performative optimal points $\theta_{PO}$ (Miller et al., 2021), which avoids the generally difficult computation of performative gradients.
In the following, we take a step further by demonstrating that the strong convexity assumption on the loss function can be relaxed while preserving the amenability of the performative risk minimization problem to gradient-based optimization. We build on the existing literature on first-order optimization that showed that convexity can be relaxed to weaker conditions while maintaining the same convergence guarantees of gradient-based optimizers.
3.1 Weaker notions of strong convexity
In the optimization literature, strongly convex optimization objectives play an important role as they can be optimized using gradient-descent methods at a linear rate. In the context of finding performatively stable points, this linear rate can be maintained by retraining methods in the presence of weak performative shifts (Perdomo et al., 2020).
There are various works in the optimization literature that aim to relax the strong-convexity condition on the objective function while maintaining favorable convergence properties. Examples include error bounds (EB) (Luo and Tseng, 1993; Drusvyatskiy and Lewis, 2018), essential strong convexity (ESC) (Strömberg, 2011; Liu et al., 2014), weak strong-convexity (WSC) (Ma et al., 2016; Necoara et al., 2019), restricted secant inequality (RSI) (Zhang and Yin, 2013; Zhang, 2017), restricted strong-convexity (RSC) (Negahban and Wainwright, 2012; Zhang and Yin, 2013), Polyak-Lojasiewicz (PL) inequality (Polyak, 1963; Loizou et al., 2021) and quadratic growth (QG) condition (Anitescu, 2000; Drusvyatskiy and Lewis, 2018). The relations between these conditions have been studied in (Liu and Wright, 2015; Karimi et al., 2016; Bolte et al., 2017; Zhang, 2017). In particular, for a smooth function with Lipschitz-continuous gradients, Karimi et al. (2016) showed the following chain of implications:
$$\mathrm{SC} \;\Rightarrow\; \mathrm{ESC} \;\Rightarrow\; \mathrm{WSC} \;\Rightarrow\; \mathrm{RSI} \;\Rightarrow\; \mathrm{EB} \;\equiv\; \mathrm{PL} \;\Rightarrow\; \mathrm{QG}. \qquad (4)$$
This implication chain is widely used in numerical optimization (Wang et al., 2017; Nouiehed et al., 2019; Vaswani et al., 2019).
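For instance, $f(x) = x^2 + 3\sin^2(x)$, an example discussed by Karimi et al. (2016), is non-convex yet satisfies the PL inequality, and plain gradient descent still reaches its global minimum at $x^* = 0$:

```python
import numpy as np

# f(x) = x^2 + 3*sin(x)^2 is non-convex (f'' < 0 near x = 1) but satisfies
# the PL inequality; gradient descent with a step size below 1/L
# (here L = sup f'' = 8) still converges to the global minimum f(0) = 0.
f = lambda x: x**2 + 3.0 * np.sin(x) ** 2
grad = lambda x: 2.0 * x + 3.0 * np.sin(2.0 * x)

x, eta = 3.0, 0.1
for _ in range(200):
    x -= eta * grad(x)

print(f(x) < 1e-12, abs(x) < 1e-6)   # converged to x* = 0 despite non-convexity
```

This is the behavior that motivates importing such conditions into performative prediction: none of the convergence machinery requires convexity, only the weaker gradient-domination structure.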
We build on this work to analyze performative prediction from an optimization perspective. Previous works assumed that the loss $\ell$ satisfies strong convexity (SC) globally, which is the strongest condition in the implication chain in Eq. 4. Our contribution is to show that weaker conditions on $\ell$ still imply desirable local properties of $\mathrm{PR}$. Specifically, we prove that assuming WSC and RSI of the loss is sufficient to establish weak convexity and RSI of $\mathrm{PR}$, respectively, following similar steps to those taken in Miller et al. (2021). These local conditions only need to hold around the minimizer and do not posit strong structural restrictions on the overall landscape such as SC, thus making the assumptions more realistic in practice.
3.2 Establishing weak convexity of the performative risk
First, we introduce weak strong convexity (WSC) (Karimi et al., 2016; Necoara et al., 2019) in order to relax the strong-convexity condition. Throughout this paper we follow the convention that $x_p$ denotes the projection of $x$ onto the optimal solution set $\mathcal{X}^*$.
Definition 3.1 (WSC).
A function $f$ is $\mu$-WSC if for all $x$ we have
$$f^* \ge f(x) + \langle \nabla f(x),\, x_p - x \rangle + \frac{\mu}{2} \|x_p - x\|^2, \qquad (5)$$
where $f^* = \min_x f(x)$ and $x_p$ is the projection of $x$ onto the solution set $\mathcal{X}^*$.
We translate this definition to the decoupled performative risk as follows:
Assumption 6
For the performative optimum $\theta_{PO}$ and its induced distribution $\mathcal{D}(\theta_{PO})$, suppose the optimal solution for minimizing $\mathrm{DPR}(\theta_{PO}, \cdot)$ is $\bar{\theta}$. We say $\mathrm{DPR}$ satisfies WSC if for any $\theta \in \Theta$ it holds that
$$\mathbb{E}_{z \sim \mathcal{D}(\theta_{PO})}\, \ell(z; \bar{\theta}) \ge \mathbb{E}_{z \sim \mathcal{D}(\theta_{PO})}\, \ell(z; \theta) + \big\langle \nabla_\theta\, \mathbb{E}_{z \sim \mathcal{D}(\theta_{PO})}\, \ell(z; \theta),\, \bar{\theta} - \theta \big\rangle + \frac{\mu}{2} \|\bar{\theta} - \theta\|^2. \qquad (6)$$
This definition requires that the minimizer of the expected loss on the distribution $\mathcal{D}(\theta_{PO})$ be unique. This is a mild assumption used by previous work focusing on repeated optimization in performative environments (Mendler-Dünner et al., 2020; Drusvyatskiy and Xiao, 2022; Wood et al., 2021). Intuitively, we posit a local property on the structure of $\ell$ over the specific distribution induced by $\theta_{PO}$, and the quantity is measured over the domain of model parameters $\Theta$. We note that this condition is already weaker than the strong-convexity Assumption 2, which is assumed to hold for every data point $z$ (Miller et al., 2021) and thus depends heavily on the structure of the loss function.
Equipped with Assumption 6, we are ready to state the following new theorem.
Theorem 3.2.
Let us justify the assumption that $\theta_{PO}$ is also performatively stable. As mentioned, optimal points and stable points do not generally imply each other. Therefore, simply adopting retraining (e.g., (Perdomo et al., 2020)) does not return global optima. Nevertheless, the local properties of $\mathrm{PR}$ under this assumption are helpful in determining the optimality of a performatively stable point. In the next section, we derive further results on what information about $\mathrm{PR}$ can be obtained at a given stable point without this assumption.
Proof 3.3 (Proof sketch).
Technically, our derivation starts from Eq. 6. After expanding the inequality around the minimizer of $\mathrm{PR}$, we split the gradient into two terms: the gradient of the loss function, $\mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \nabla_\theta \ell(z; \theta)$, and the gradient through the varying distribution, $\int \ell(z; \theta)\, \nabla_\theta p_\theta(z)\, dz$, where $p_\theta$ is the probability density function of $\mathcal{D}(\theta)$. We then use the sensitivity Assumption 4 to bound the first term by the quadratic distance and use the mixture dominance Assumption 5 to control the second term.
3.3 Establishing RSI of the performative risk
The second condition we consider is the Restricted Secant Inequality (RSI) (Zhang and Yin, 2013). It is defined as follows:
Definition 3.4 (RSI).
A function $f$ satisfies the $\mu$-Restricted Secant Inequality (RSI) if for all $x$ we have
$$\langle \nabla f(x),\, x - x_p \rangle \ge \mu \|x - x_p\|^2. \qquad (7)$$
Again, $x_p$ is the projection of $x$ onto the optimal solution set $\mathcal{X}^*$.
Remark 3.5.
A function is restricted strongly convex (RSC) if it is convex and also satisfies RSI.
Assumption 7
For the performative optimum $\theta_{PO}$ and its induced distribution $\mathcal{D}(\theta_{PO})$, suppose the optimal solution for minimizing $\mathrm{DPR}(\theta_{PO}, \cdot)$ is unique and is denoted $\bar{\theta}$. We say $\mathrm{DPR}$ satisfies RSI if for any $\theta \in \Theta$ it holds that
$$\big\langle \nabla_\theta\, \mathbb{E}_{z \sim \mathcal{D}(\theta_{PO})}\, \ell(z; \theta),\, \theta - \bar{\theta} \big\rangle \ge \mu \|\theta - \bar{\theta}\|^2. \qquad (8)$$
Theorem 3.6.
Note that, compared to WSC, the RSI condition is weaker and thus generalizes more easily in first-order optimization; see details in Appendix A. We conjecture that the specific implementations discussed by Negahban and Wainwright (2012); Zhang and Yin (2013); Zhang (2017) could be extended to optimize $\mathrm{PR}$ in this setting.
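To see why RSI suffices for first-order methods, recall the standard (unconstrained) argument that gradient descent contracts the distance to the solution set under $\mu$-RSI and $L$-smoothness; we sketch it here for completeness:

```latex
% x_p denotes the projection of x_t onto the solution set X*;
% since \nabla f(x_p) = 0, smoothness gives \|\nabla f(x_t)\| \le L \|x_t - x_p\|.
\begin{aligned}
\|x_{t+1} - x_p\|^2
  &= \|x_t - \eta \nabla f(x_t) - x_p\|^2 \\
  &= \|x_t - x_p\|^2 - 2\eta \langle \nabla f(x_t),\, x_t - x_p \rangle
     + \eta^2 \|\nabla f(x_t)\|^2 \\
  &\le \bigl(1 - 2\eta\mu + \eta^2 L^2\bigr)\, \|x_t - x_p\|^2 .
\end{aligned}
```

With $\eta = \mu / L^2$ the contraction factor is $1 - \mu^2/L^2 < 1$, and since $\operatorname{dist}(x_{t+1}, \mathcal{X}^*) \le \|x_{t+1} - x_p\|$, the distance to the solution set decreases linearly; no convexity of $f$ is used anywhere.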
4 When are local properties sufficient for stability and optimality?
As mentioned above, existing methods in the literature that seek stable points do not come with global optimality guarantees. Meanwhile, current methods aiming at optimizing $\mathrm{PR}$ directly require local structural assumptions around the performative optima that may not hold in real settings.
In this section, we investigate relations between stable points and optimal points. We relate to previous work focusing on the convergence of retraining methods to stable points (Mendler-Dünner et al., 2020; Drusvyatskiy and Xiao, 2022; Perdomo et al., 2020; Wood et al., 2021; Bianchin et al., 2021) and argue that if stable points offer a good approximation for optimal points, then they can serve as good starting points ensuring convergence to optimal points under only local assumptions on the performative risk.
Suppose the Lipschitzness condition for the loss (cf. Assumption 3) holds throughout this section; this is standard and mild in the learning theory and optimization literature. For any $\theta, \theta'$ in the parameter domain, we can quantify the gap between $\mathrm{PR}(\theta)$ and $\mathrm{PR}(\theta')$ as
$$\mathrm{PR}(\theta) - \mathrm{PR}(\theta') = \Big( \mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \ell(z; \theta) - \mathbb{E}_{z \sim \mathcal{D}(\theta')}\, \ell(z; \theta) \Big) + \Big( \mathbb{E}_{z \sim \mathcal{D}(\theta')}\, \ell(z; \theta) - \mathbb{E}_{z \sim \mathcal{D}(\theta')}\, \ell(z; \theta') \Big).$$
The second term on the RHS measures the difference in function value between $\theta$ and $\theta'$ over the distribution $\mathcal{D}(\theta')$.
In the remainder of this section, we pick $\theta'$ to be a stable point (i.e., $\theta' = \theta_{PS}$) so that we can work with the suboptimality gap of the function. First rewrite the above equation as
$$\mathrm{PR}(\theta) - \mathrm{PR}(\theta_{PS}) = \Big( \mathbb{E}_{z \sim \mathcal{D}(\theta)}\, \ell(z; \theta) - \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \theta) \Big) + \Delta(\theta), \qquad (9)$$
where $\Delta(\theta) := \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \theta) - \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \theta_{PS})$ is the suboptimality of $\theta$ measured over $\mathcal{D}(\theta_{PS})$; it is non-negative because $\theta_{PS}$ minimizes the expected loss over its own induced distribution.
Proposition 4.1.
The proof is straightforward by noticing that, when Eq. 10 holds, we have $\mathrm{PR}(\theta) \ge \mathrm{PR}(\theta_{PS})$ for all $\theta \in \Theta$. Thus $\theta_{PS}$ is performatively optimal by definition. Intuitively, Eq. 10 relates changes in the loss to changes in the distribution.
Starting from $\theta_{PS}$.
Suppose we have reached a performative stable point $\theta_{PS}$. To reach the optimum $\theta_{PO}$, we may consider two directions. (1) How good is the performance of $\theta_{PS}$? That is, what is the gap between $\mathrm{PR}(\theta_{PS})$ and the optimal performative risk $\mathrm{PR}(\theta_{PO})$? (2) What geometric information about $\mathrm{PR}$ can we obtain at $\theta_{PS}$? We may expect $\mathcal{D}(\theta_{PS})$ to serve as a good approximation for $\mathcal{D}(\theta_{PO})$ when the Euclidean distance $\|\theta_{PS} - \theta_{PO}\|$ is small. We believe investigating these two directions is promising since it makes use of the large body of work on reaching $\theta_{PS}$.
Below, we give three preliminary examples of how general structural conditions on the loss function and the distribution map can lead to useful results along these two directions.
We first show the suboptimality of $\theta_{PS}$ under Lipschitzness and a boundedness condition.
Example 4.2.
Assume performative shifts are bounded by an absolute value, i.e.,
$$W_1(\mathcal{D}(\theta), \mathcal{D}(\theta')) \le B \quad \text{for all } \theta, \theta' \in \Theta.$$
Then we have the following bound characterizing the suboptimality of $\theta_{PS}$:
$$\mathrm{PR}(\theta_{PS}) - \mathrm{PR}(\theta_{PO}) \le L_z B. \qquad (11)$$
Proof 4.3.
The proof is straightforward by noticing that $\Delta(\theta_{PO}) \ge 0$ since $\theta_{PS}$ is stable. From Eq. 9 and Assumption 3 we have $\mathrm{PR}(\theta_{PO}) - \mathrm{PR}(\theta_{PS}) \ge -L_z W_1(\mathcal{D}(\theta_{PO}), \mathcal{D}(\theta_{PS})) \ge -L_z B$.
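As a brute-force sanity check of Eq. 11 on a toy of our own: for two-point data $z \in \{-1, +1\}$ with $\mathbb{P}(z = +1) = \sigma(1 - \theta)$, the shifts are bounded by $B = 2$ in $W_1$, and the squared loss restricted to $\theta \in [-1, 1]$ is $L_z$-Lipschitz in $z$ with $L_z \le 4$, so the stable point's suboptimality must fall below $L_z B$:

```python
import numpy as np

# Hypothetical toy: z in {-1, +1} with P(z = +1) = p(theta) = sigmoid(1 - theta).
# Shifts are bounded: W1(D(t), D(t')) = 2|p(t) - p(t')| <= 2 =: B, and on
# theta in [-1, 1] the squared loss is L_z-Lipschitz in z with L_z <= 4.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
p = lambda t: sigmoid(1.0 - t)

def pr(theta):
    # E_{z ~ D(theta)} (theta - z)^2 = theta^2 + 1 - 2*theta*(2 p(theta) - 1)
    return theta**2 + 1.0 - 2.0 * theta * (2.0 * p(theta) - 1.0)

grid = np.linspace(-1.0, 1.0, 200_001)
theta_po = grid[np.argmin(pr(grid))]       # performative optimum (brute force)

theta_ps = 0.0                             # retraining: theta <- E_{D(theta)} z
for _ in range(100):
    theta_ps = 2.0 * p(theta_ps) - 1.0     # contraction; converges to theta_PS

gap = pr(theta_ps) - pr(theta_po)
print(0.0 <= gap <= 4 * 2)                 # suboptimality within L_z * B of Eq. 11
```

Here the stable point and the optimum genuinely differ (roughly 0.33 vs. 0.25), yet the suboptimality gap is tiny compared to the worst-case bound, which is loose by design.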
The next two examples show that $\theta_{PS}$ cannot be far from $\theta_{PO}$.
Example 4.4.
Assume (1) performative shifts are bounded by an absolute value $B$ as in Example 4.2, and (2) $\mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \cdot)$ satisfies quadratic growth, i.e., $\Delta(\theta) \ge \frac{\mu}{2} \|\theta - \theta_{PS}\|^2$. Then the performative optimal point satisfies
$$\|\theta_{PO} - \theta_{PS}\| \le \sqrt{2 L_z B / \mu}.$$
Proof 4.5.
We prove the claim by contradiction. Suppose there exists a performative optimal point $\theta_{PO}$ with $\|\theta_{PO} - \theta_{PS}\| > \sqrt{2 L_z B / \mu}$. Quadratic growth then shows
$$\Delta(\theta_{PO}) \ge \frac{\mu}{2} \|\theta_{PO} - \theta_{PS}\|^2 > L_z B.$$
From Eq. 9 we have $\mathrm{PR}(\theta_{PO}) - \mathrm{PR}(\theta_{PS}) > -L_z B + L_z B = 0$, which contradicts the optimality of $\theta_{PO}$.
Example 4.6.
Under Assumption 4, suppose $\mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})}\, \ell(z; \cdot)$ satisfies quadratic growth as in Example 4.4. Then the performative optimal point satisfies
$$\|\theta_{PO} - \theta_{PS}\| \le \frac{2 L_z \varepsilon}{\mu}.$$
Proof 4.7.
Analogous to Proof 4.5: by sensitivity and Assumption 3, the first term in Eq. 9 is bounded below by $-L_z \varepsilon \|\theta_{PO} - \theta_{PS}\|$, while quadratic growth gives $\Delta(\theta_{PO}) \ge \frac{\mu}{2} \|\theta_{PO} - \theta_{PS}\|^2$. Combining these with $\mathrm{PR}(\theta_{PO}) \le \mathrm{PR}(\theta_{PS})$ yields the claim.
To conclude, in this section we attempted to make use of stable points in finding optimal points. Although $\theta_{PS}$ and $\theta_{PO}$ do not imply each other, as discussed above, we aimed at revealing when and how stable points can serve as starting points for finding optimal points. In such scenarios, previous work focusing on the convergence of retraining methods to stable points remains valuable.
Concretely, we have shown several conditions under which stable points offer a good approximation for optimal points. Then, under only local assumptions on $\mathrm{PR}$, one could start an optimization procedure from a stable point and finally converge to optimal points.
Remark 4.8.
An interesting special case where stable points are optimal, without requiring any global structure, is when the Bayes error of every distribution $\mathcal{D}(\theta)$ is the same. This means all minima lie on one level set; hence, no other point can have a smaller loss.
5 Conclusion
This paper studies optimization aspects of the performative risk minimization problem.
First, we relax the standard strong-convexity assumption on the loss function required in prior work to show convergence of iterative optimization methods in performative prediction. In particular, we study two weaker conditions on the loss $\ell$, WSC and RSI; both are sufficient to establish weak convexity of the performative risk under suitable assumptions on the distribution map. Our work takes a first step toward importing advanced assumptions from the classical optimization literature into performative prediction.
Second, we make a contribution by raising interesting questions about the meaning of local regularity assumptions in the context of performative prediction. If stable points can be found heuristically and serve as a good approximation for global optima, then local regularity of $\mathrm{PR}$ can be sufficient to find optimal points. We provide some bounds on the distance between optimal and stable points, but it remains to better understand how they relate in practical settings.
As future work, we note that a very interesting and important topic to study is the PL condition, which is among the weakest assumptions in the implication chain and is ubiquitous in over-parameterized neural networks (Zhou et al., 2021).
Suppose the following $\mu$-PL condition holds:
$$\frac{1}{2} \|\nabla \mathrm{PR}(\theta)\|^2 \ge \mu \big( \mathrm{PR}(\theta) - \mathrm{PR}(\theta_{PO}) \big).$$
Naturally, one can expand the RHS through Eq. 9.
Intuitively, we raise several interesting questions related to finding performative optima.
1. Understanding when and how (e.g., under which structural properties of the loss function or for which natural sets of distributions) the condition above holds with a certain constant. Since the condition characterizes local properties of $\mathrm{PR}$ near performative stable points, it could be more common in practice. Notice that the current performative prediction literature relies heavily on the sensitivity relation (cf. Assumption 4).
2. What is the impact of data pre-processing steps on the implications of performative shifts? Can they influence performativity effects? For example, can data whitening result in distributions with favorable properties, such as membership in a location-scale family?
Acknowledgements
The author gratefully acknowledges Aurelien Lucchi and Celestine Mendler-Dünner for numerous helpful discussions and advice during the internship at ETH Zürich.
References
- Anitescu (2000) Mihai Anitescu. Degenerate nonlinear programming with a quadratic growth condition. SIAM Journal on Optimization, 10(4):1116–1135, 2000.
- Bianchin et al. (2021) Gianluca Bianchin, Miguel Vaquero, Jorge Cortes, and Emiliano Dall’Anese. Online stochastic optimization for unknown linear systems: Data-driven synthesis and controller analysis. arXiv preprint arXiv:2108.13040, 2021.
- Bolte et al. (2017) Jérôme Bolte, Trong Phong Nguyen, Juan Peypouquet, and Bruce W Suter. From error bounds to the complexity of first-order descent methods for convex functions. Mathematical Programming, 165(2):471–507, 2017.
- Brown et al. (2022) Gavin Brown, Shlomi Hod, and Iden Kalemaj. Performative Prediction in a Stateful World. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151, pages 6045–6061. PMLR, 2022.
- Callon et al. (2007) Michel Callon et al. What does it mean to say that economics is performative. Do economists make markets, pages 311–357, 2007.
- Clarke (2012) Chris Clarke. Financial engineering, not economic photography: Popular discourses of finance and the layered performances of the sub-prime crisis. Journal of Cultural Economy, 5(3):261–278, 2012.
- Cochoy et al. (2010) Franck Cochoy, Martin Giraudeau, and Liz McFall. Performativity, economics and politics: An overview. Journal of Cultural Economy, 3(2):139–146, 2010.
- Drusvyatskiy and Lewis (2018) Dmitriy Drusvyatskiy and Adrian S Lewis. Error bounds, quadratic growth, and linear convergence of proximal methods. Mathematics of Operations Research, 43(3):919–948, 2018.
- Drusvyatskiy and Xiao (2022) Dmitriy Drusvyatskiy and Lin Xiao. Stochastic optimization with decision-dependent distributions. Mathematics of Operations Research, 2022.
- Hardt et al. (2016) Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM conference on innovations in theoretical computer science, pages 111–122, 2016.
- Harris et al. (2021) Keegan Harris, Hoda Heidari, and Steven Z Wu. Stateful strategic regression. Advances in Neural Information Processing Systems, 34:28728–28741, 2021.
- Hazra et al. (2017) Nil Kamal Hazra, Mithu Rani Kuiti, Maxim Finkelstein, and Asok K Nanda. On stochastic comparisons of maximum order statistics from the location-scale family of distributions. Journal of Multivariate Analysis, 160:31–41, 2017.
- Izzo et al. (2021) Zachary Izzo, Lexing Ying, and James Zou. How to learn when data reacts to your model: performative gradient descent. In International Conference on Machine Learning, pages 4641–4650. PMLR, 2021.
- Jagadeesan et al. (2021) Meena Jagadeesan, Celestine Mendler-Dünner, and Moritz Hardt. Alternative microfoundations for strategic classification. In International Conference on Machine Learning, pages 4687–4697. PMLR, 2021.
- Jagadeesan et al. (2022) Meena Jagadeesan, Tijana Zrnic, and Celestine Mendler-Dünner. Regret Minimization with Performative Feedback. In International Conference on Machine Learning, pages 9760–9785. PMLR, 2022.
- Karimi et al. (2016) Hamed Karimi, Julie Nutini, and Mark Schmidt. Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 795–811. Springer, 2016.
- Li and Wai (2022) Qiang Li and Hoi-To Wai. State dependent performative prediction with stochastic approximation. In International Conference on Artificial Intelligence and Statistics, pages 3164–3186. PMLR, 2022.
- Liu and Wright (2015) Ji Liu and Stephen J Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351–376, 2015.
- Liu et al. (2014) Ji Liu, Steve Wright, Christopher Ré, Victor Bittorf, and Srikrishna Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. In International Conference on Machine Learning, pages 469–477. PMLR, 2014.
- Loizou et al. (2021) Nicolas Loizou, Sharan Vaswani, Issam Hadj Laradji, and Simon Lacoste-Julien. Stochastic polyak step-size for sgd: An adaptive learning rate for fast convergence. In International Conference on Artificial Intelligence and Statistics, pages 1306–1314. PMLR, 2021.
- Luo and Tseng (1993) Zhi-Quan Luo and Paul Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. Annals of Operations Research, 46(1):157–178, 1993.
- Ma et al. (2016) Chenxin Ma, Rachael Tappenden, and Martin Takáč. Linear convergence of randomized feasible descent methods under the weak strong convexity assumption. The Journal of Machine Learning Research, 17(1):8138–8161, 2016.
- Mendler-Dünner et al. (2020) Celestine Mendler-Dünner, Juan C. Perdomo, Tijana Zrnic, and Moritz Hardt. Stochastic Optimization for Performative Prediction. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, 2020.
- Mendler-Dünner et al. (2022) Celestine Mendler-Dünner, Frances Ding, and Yixin Wang. Predicting from Predictions. arXiv preprint arXiv:2208.07331, 2022.
- Miller et al. (2021) John P Miller, Juan C Perdomo, and Tijana Zrnic. Outside the Echo Chamber: Optimizing the Performative Risk. In International Conference on Machine Learning, pages 7710–7720. PMLR, 2021.
- Necoara et al. (2019) Ion Necoara, Yu Nesterov, and Francois Glineur. Linear convergence of first order methods for non-strongly convex optimization. Mathematical Programming, 175(1):69–107, 2019.
- Negahban and Wainwright (2012) Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665–1697, 2012.
- Nouiehed et al. (2019) Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D Lee, and Meisam Razaviyayn. Solving a class of non-convex min-max games using iterative first order methods. Advances in Neural Information Processing Systems, 32, 2019.
- Perdomo et al. (2020) Juan Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, and Moritz Hardt. Performative prediction. In International Conference on Machine Learning, pages 7599–7609. PMLR, 2020.
- Polyak (1963) Boris Teodorovich Polyak. Gradient methods for minimizing functionals. Zhurnal vychislitel’noi matematiki i matematicheskoi fiziki, 3(4):643–653, 1963.
- Roberts (2009) Brian Roberts. Performative social science: A consideration of skills, purpose and context. Historical Social Research/Historische Sozialforschung, pages 307–353, 2009.
- Strömberg (2011) Thomas Strömberg. Duality between Fréchet differentiability and strong convexity. Positivity, 15(3):527–536, 2011.
- Vaswani et al. (2019) Sharan Vaswani, Francis Bach, and Mark Schmidt. Fast and faster convergence of SGD for over-parameterized models and an accelerated perceptron. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1195–1204. PMLR, 2019.
- Wang et al. (2017) Di Wang, Minwei Ye, and Jinhui Xu. Differentially private empirical risk minimization revisited: Faster and more general. Advances in Neural Information Processing Systems, 30, 2017.
- Wong and Ma (2008) Wing-Keung Wong and Chenghu Ma. Preferences over location-scale family. Economic Theory, 37(1):119–146, 2008.
- Wood et al. (2021) Killian Wood, Gianluca Bianchin, and Emiliano Dall’Anese. Online projected gradient descent for stochastic optimization with decision-dependent distributions. IEEE Control Systems Letters, 6:1646–1651, 2021.
- Zhang (2017) Hui Zhang. The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth. Optimization Letters, 11(4):817–833, 2017.
- Zhang and Yin (2013) Hui Zhang and Wotao Yin. Gradient methods for convex minimization: better rates under weaker conditions. arXiv preprint arXiv:1303.4645, 2013.
- Zhou et al. (2021) Mo Zhou, Rong Ge, and Chi Jin. A local convergence theory for mildly over-parameterized two-layer neural network. In Conference on Learning Theory, pages 4577–4632. PMLR, 2021.
Appendix A Proofs for Section 3
Proofs for Theorem 3.2
Proof A.1.
At $\theta_{PS}$, since the performative optimum is also performatively stable, it is known that $\mathbb{E}_{z\sim\mathcal{D}(\theta_{PS})}\left[\nabla_\theta\,\ell(z;\theta_{PS})\right]=0$.
Using the weak strong convexity (WSC) property of the loss on the fixed distribution $\mathcal{D}(\theta_{PS})$, we have

(12) $\quad \mathbb{E}_{z\sim\mathcal{D}(\theta_{PS})}\left[\ell(z;\theta)\right] - \mathbb{E}_{z\sim\mathcal{D}(\theta_{PS})}\left[\ell(z;\theta_{PS})\right] \;\geq\; \frac{\mu}{2}\,\|\theta-\theta_{PS}\|^{2}.$
Our goal is to characterize the function class of the performative risk $\mathrm{PR}(\theta)$.
Potentially, we expect it to be $\rho$-weakly convex w.r.t. $\theta$, i.e.,

(13) $\quad \mathrm{PR}(\theta') \;\geq\; \mathrm{PR}(\theta) + \left\langle \nabla \mathrm{PR}(\theta),\, \theta'-\theta\right\rangle - \frac{\rho}{2}\,\|\theta'-\theta\|^{2},$

or, using the definition of $\mathrm{PR}$:

(14) $\quad \mathbb{E}_{z\sim\mathcal{D}(\theta')}\left[\ell(z;\theta')\right] \;\geq\; \mathbb{E}_{z\sim\mathcal{D}(\theta)}\left[\ell(z;\theta)\right] + \left\langle \nabla \mathrm{PR}(\theta),\, \theta'-\theta\right\rangle - \frac{\rho}{2}\,\|\theta'-\theta\|^{2}.$
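A standard equivalent way to state $\rho$-weak convexity (a generic fact about weakly convex functions, not specific to this paper) is via quadratic regularization:

```latex
% rho-weak convexity says the curvature of PR is bounded below by -rho;
% adding the quadratic (rho/2)||theta||^2 lifts this lower bound to 0:
\[
  \mathrm{PR} \text{ is } \rho\text{-weakly convex}
  \quad\Longleftrightarrow\quad
  \theta \mapsto \mathrm{PR}(\theta) + \tfrac{\rho}{2}\,\|\theta\|^{2} \text{ is convex.}
\]
```

In particular, every $L$-smooth function is $L$-weakly convex, so weak convexity is a much milder requirement than convexity of $\mathrm{PR}$ itself.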
First, we separate the gradient of $\mathrm{PR}$ into two terms:

(15) $\quad \nabla \mathrm{PR}(\theta) \;=\; \mathbb{E}_{z\sim\mathcal{D}(\theta)}\left[\nabla_\theta\,\ell(z;\theta)\right] \;+\; \nabla_{\theta'}\,\mathbb{E}_{z\sim\mathcal{D}(\theta')}\left[\ell(z;\theta)\right]\Big|_{\theta'=\theta}.$
We bound the resulting terms as follows:
- (i) uses the weak strong convexity of the loss $\ell$;
- (ii) holds by the $\varepsilon$-sensitivity of the distribution map $\mathcal{D}(\cdot)$;
- (iii) holds by mixture dominance.
We conclude that weak strong convexity of $\ell$, local mixture dominance, and local distribution sensitivity suffice to guarantee that $\mathrm{PR}$ is $\rho$-weakly convex in $\theta$; strong convexity everywhere is not required.
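As a sanity check on this weaker requirement, the toy example below (an illustrative construction of our own, not taken from the paper) uses the Gaussian distribution map $\mathcal{D}(\theta)=\mathcal{N}(\sin\theta,\,1)$ with the squared loss $\ell(z;\theta)=(z-\theta)^2$, which gives $\mathrm{PR}(\theta)=1+(\sin\theta-\theta)^2$ in closed form. Numerically, this performative risk is non-convex yet $\rho$-weakly convex on a bounded region:

```python
import numpy as np

# Toy performative setup (illustrative assumptions, not from the paper):
# distribution map D(theta) = N(sin(theta), 1), squared loss l(z; theta) = (z - theta)^2.
# Then PR(theta) = E_{z~D(theta)}[(z - theta)^2] = Var(z) + (E[z] - theta)^2
#               = 1 + (sin(theta) - theta)^2.
def PR(theta):
    return 1.0 + (np.sin(theta) - theta) ** 2

thetas = np.linspace(-5.0, 5.0, 2001)
h = thetas[1] - thetas[0]
vals = PR(thetas)
# central-difference estimate of the second derivative PR''(theta)
second = (vals[2:] - 2 * vals[1:-1] + vals[:-2]) / h ** 2

print("min PR'' on [-5, 5]:", second.min())  # negative -> PR is non-convex
rho = -second.min()                          # weak-convexity modulus on this region
# PR(theta) + (rho/2) theta^2 has nonnegative curvature -> PR is rho-weakly convex here
assert second.min() < 0 and (second + rho).min() >= 0
```

The same loss on a *fixed* distribution would be convex in $\theta$; the non-convexity here is created entirely by the decision-dependent mean $\sin\theta$, which is exactly the phenomenon the relaxed assumptions are meant to accommodate.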
Proofs for Theorem 3.6
Proof A.2.
At $\theta_{PS}$, because the performative optimum is also performatively stable, it is known that $\mathbb{E}_{z\sim\mathcal{D}(\theta_{PS})}\left[\nabla_\theta\,\ell(z;\theta_{PS})\right]=0$.
Using the restricted secant inequality (RSI) property of the loss on the fixed distribution $\mathcal{D}(\theta_{PS})$, we have

$\quad \left\langle\, \mathbb{E}_{z\sim\mathcal{D}(\theta_{PS})}\left[\nabla_\theta\,\ell(z;\theta)\right],\;\theta-\theta_{PS}\,\right\rangle \;\geq\; \mu\,\|\theta-\theta_{PS}\|^{2}.$
We now argue backwards: we want to show that $\mathrm{PR}$ satisfies RSI as well, i.e.,

$\quad \left\langle \nabla \mathrm{PR}(\theta),\;\theta-\theta_{PO}\right\rangle \;\geq\; \mu\,\|\theta-\theta_{PO}\|^{2},$
which holds because:
- (i) we have the $\varepsilon$-sensitivity of the distribution map $\mathcal{D}(\cdot)$;
- (ii) mixture dominance applies;
- (iii) the loss function $\ell$ is Lipschitz w.r.t. the data $z$.
We distinguish two cases. 1) If $\theta$ is close to the optimum, then the bound (16) holds. Arguing backwards shows that, assuming $\ell$ is $\alpha$-RSI, our goal is achieved whenever condition (17) is satisfied.
2) Otherwise, the bound (18) holds, and arguing backwards again shows that, assuming $\ell$ is $\alpha$-RSI, our goal is achieved whenever condition (19) is satisfied. This completes the proof.
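The case analysis above rests on an RSI-type condition, which is strictly weaker than convexity. A small numerical illustration (using $f(\theta)=\theta^{2}+3\sin^{2}\theta$, a standard non-convex example from the PL/RSI literature, e.g. Karimi et al. 2016, with minimizer $\theta^{\star}=0$; the step size and starting point below are our own choices) shows that RSI alone already suffices for gradient descent to converge:

```python
import numpy as np

# Non-convex function satisfying the restricted secant inequality (RSI):
# f(theta) = theta^2 + 3*sin(theta)^2, with unique minimizer theta* = 0.
f = lambda t: t ** 2 + 3 * np.sin(t) ** 2
grad = lambda t: 2 * t + 3 * np.sin(2 * t)

thetas = np.linspace(-10, 10, 4001)
thetas = thetas[np.abs(thetas) > 1e-6]        # exclude the minimizer itself
# RSI ratio: <grad f(theta), theta - theta*> / ||theta - theta*||^2 with theta* = 0
ratio = grad(thetas) * thetas / thetas ** 2
mu = ratio.min()
print("empirical RSI constant mu:", mu)       # positive, although f is non-convex

# f is not convex: f''(theta) = 2 + 6*cos(2*theta) is negative at theta = pi/2
assert mu > 0 and 2 + 6 * np.cos(np.pi) < 0

# RSI is enough for gradient descent to reach theta* = 0
t = 3.0
for _ in range(200):
    t -= 0.1 * grad(t)
assert abs(t) < 1e-6
```

This mirrors the role of the $\alpha$-RSI assumption in the theorem: it controls how strongly the gradient points toward the solution set without demanding any global curvature condition.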