Adaptive Batch Size for Privately Finding Second-Order Stationary Points
Abstract
There is a gap between the known rates for finding a first-order stationary point (FOSP) and a second-order stationary point (SOSP) under differential privacy constraints, and it remains unclear whether privately finding an SOSP is more challenging than finding an FOSP. Specifically, Ganesh et al. (2023) demonstrated that an $\alpha$-SOSP can be found with $\alpha = \tilde{O}\big(\frac{1}{n^{1/3}} + (\frac{\sqrt{d}}{n\epsilon})^{3/7}\big)$, where $n$ is the dataset size, $d$ is the dimension, and $\epsilon$ is the differential privacy parameter. Building on the SpiderBoost algorithm framework, we propose a new approach that uses adaptive batch sizes and incorporates the binary tree mechanism. Our method improves the results for privately finding an SOSP, achieving $\alpha = \tilde{O}\big(\frac{1}{n^{1/3}} + (\frac{\sqrt{d}}{n\epsilon})^{1/2}\big)$. This improved bound matches the state-of-the-art rate for finding an FOSP, suggesting that privately finding an SOSP may be achievable at no additional cost.
1 Introduction
Privacy concerns have gained increasing attention with the rapid development of artificial intelligence and modern machine learning, particularly the widespread success of large language models. Differential privacy (DP) has become the standard notion of privacy in machine learning since it was introduced by Dwork et al. (2006). Given two neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$ differing in a single item, a mechanism $\mathcal{M}$ is said to be $(\epsilon, \delta)$-differentially private if, for any event $\mathcal{O}$, it holds that:
$$\Pr[\mathcal{M}(\mathcal{D}) \in \mathcal{O}] \le e^{\epsilon} \Pr[\mathcal{M}(\mathcal{D}') \in \mathcal{O}] + \delta.$$
In this work, we focus on the stochastic optimization problem under the constraint of DP. The loss function is defined below:
$$F_{\mathcal{P}}(x) := \mathbb{E}_{z \sim \mathcal{P}}[f(x; z)],$$
where the functions $f(\cdot; z)$ may be non-convex, the underlying distribution $\mathcal{P}$ is unknown, and we are given a dataset $\mathcal{D} = \{z_1, \dots, z_n\}$ drawn i.i.d. from $\mathcal{P}$. Notably, our goal is to design a private algorithm with provable utility guarantees under the i.i.d. assumption.
Minimizing non-convex functions is generally challenging and often intractable, but most models used in practice are not guaranteed to be convex. How, then, can we explain the success of optimization methods in practice? One possible explanation is the effectiveness of Stochastic Gradient Descent (SGD), which is well known to find an $\alpha$-first-order stationary point (FOSP) of a non-convex function $F$—that is, a point $x$ such that $\|\nabla F(x)\| \le \alpha$—within $\mathrm{poly}(1/\alpha)$ steps (Nesterov, 1998). However, FOSPs can include saddle points or even local maxima. Thus, we focus on finding second-order stationary points (SOSP) of the non-convex function $F_{\mathcal{P}}$.
Non-convex optimization has been extensively studied in recent years due to its central role in modern machine learning, and we now have a solid understanding of the complexity involved in finding FOSPs and SOSPs (Ghadimi and Lan, 2013; Agarwal et al., 2017; Carmon et al., 2020; Zhang et al., 2020). Variance reduction techniques have been shown to improve the theoretical complexity, leading to the development of several promising algorithms such as Spider (Fang et al., 2018), SARAH (Nguyen et al., 2017), and SpiderBoost (Wang et al., 2019c). More recently, private non-convex optimization has emerged as an active area of research (Wang et al., 2019b; Arora et al., 2023; Gao and Wright, 2023; Ganesh et al., 2023; Lowy et al., 2024; Menart et al., 2024).
1.1 Our Main Result
In this work, we study how to privately find an SOSP of $F_{\mathcal{P}}$. Let us formally define the FOSP and the SOSP. For more on Hessian Lipschitz continuity and related background, see the preliminaries in Section 2.
Definition 1.1 (FOSP).
For $\alpha > 0$, we say a point $x$ is an $\alpha$-first-order stationary point ($\alpha$-FOSP) of a function $g$ if $\|\nabla g(x)\| \le \alpha$.
Definition 1.2 (SOSP, Nesterov and Polyak (2006); Agarwal et al. (2017)).
For a function $g$ which is $\rho$-Hessian Lipschitz, we say a point $x$ is an $\alpha$-second-order stationary point ($\alpha$-SOSP) of $g$ if $\|\nabla g(x)\| \le \alpha$ and $\nabla^2 g(x) \succeq -\sqrt{\rho\alpha}\, I$.
Given a dataset of size $n$, privacy parameters $(\epsilon, \delta)$, and functions defined over $d$-dimensional space, Ganesh et al. (2023) proposed a private algorithm that can find an $\alpha$-SOSP of $F_{\mathcal{P}}$ with
$$\alpha = \tilde{O}\Big(\frac{1}{n^{1/3}} + \Big(\frac{\sqrt{d}}{n\epsilon}\Big)^{3/7}\Big).$$
However, as shown in Arora et al. (2023), the state-of-the-art bound for privately finding an $\alpha$-FOSP is tighter:
$$\alpha = \tilde{O}\Big(\frac{1}{n^{1/3}} + \Big(\frac{\sqrt{d}}{n\epsilon}\Big)^{1/2}\Big).$$
(Footnote: As proposed by Lowy et al. (2024), allowing exponential running time enables the use of the exponential mechanism to find a warm start, which can further improve the bounds for both FOSP and SOSP.)
When the privacy parameter $\epsilon$ is sufficiently small, the SOSP bound of Ganesh et al. (2023) is strictly larger than the FOSP bound above. This raises the question: is finding an SOSP under differential privacy constraints more difficult than finding an FOSP?
This work improves the results of Ganesh et al. (2023). Specifically, we present an $(\epsilon, \delta)$-DP algorithm that finds an $\alpha$-SOSP with
$$\alpha = \tilde{O}\Big(\frac{1}{n^{1/3}} + \Big(\frac{\sqrt{d}}{n\epsilon}\Big)^{1/2}\Big).$$
This improved bound suggests that, when searching for a stationary point privately, we can find an SOSP for free under additional (standard) assumptions.
It is also worth noting that our improvement primarily affects the terms that depend on the privacy parameters. In the non-private setting, as $\epsilon \to \infty$ (i.e., without privacy constraints), all the results discussed above achieve a bound of $\tilde{O}(1/n^{1/3})$, which matches the non-private lower bound established by Arjevani et al. (2023) in high-dimensional settings (where $d$ is sufficiently large). However, to our knowledge, whether this non-private $1/n^{1/3}$ term can be further improved in low dimensions remains an open question.
1.2 Overview of Techniques
In this work, we build on the SpiderBoost algorithm framework, similar to prior approaches (Arora et al., 2023; Ganesh et al., 2023), to find second-order stationary points (SOSP) privately. At a high level, our method leverages two types of gradient oracles: $\mathcal{O}_1$, which estimates the gradient at a point $x$, and $\mathcal{O}_2$, which estimates the gradient difference between two points $x$ and $y$. When performing gradient descent, to compute the gradient estimator at the current point, we can either use $\mathcal{O}_1$ for a direct estimate, or use $\mathcal{O}_2$ to update the previous gradient estimator. In our setting, $\mathcal{O}_1$ is more accurate but incurs higher computational or privacy costs.
The approach in Arora et al. (2023) adopts SpiderBoost by querying $\mathcal{O}_1$ periodically: they call $\mathcal{O}_1$ once and then use $\mathcal{O}_2$ for subsequent queries, with the period controlled by a hyperparameter. Their method ensures the gradient estimators are sufficiently accurate on average, which works well for finding first-order stationary points (FOSP). However, finding an SOSP, where greater precision is required, presents additional challenges when relying on gradient estimators that are only accurate on average.
To address this, Ganesh et al. (2023) introduced a drift quantity that measures how much the iterates have moved since the last iteration at which $\mathcal{O}_1$ was queried. If the drift remains small, the gradient estimator stays accurate enough, allowing further queries of $\mathcal{O}_2$. However, if the drift grows large, the gradient estimator's accuracy deteriorates, signaling the need to query $\mathcal{O}_1$ for a fresh, more accurate estimate. This modification enables the algorithm to maintain the precision necessary for privately finding an SOSP.
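To make this two-oracle logic concrete, the following is a minimal, non-private Python sketch of a SpiderBoost-style loop with a drift-triggered refresh. The oracle callables, the drift threshold `kappa`, and the step size `eta` are illustrative placeholders rather than the paper's exact choices.

```python
import numpy as np

def spiderboost_sketch(oracle1, oracle2, x0, eta=0.1, kappa=1.0, num_steps=1000):
    """Variance-reduced descent: refresh the gradient estimate with oracle1
    when the drift accumulated since the last refresh is large; otherwise
    update it cheaply with the gradient-difference oracle oracle2."""
    x = np.asarray(x0, dtype=float)
    grad_est = oracle1(x)            # fresh (accurate but expensive) estimate
    drift = 0.0                      # movement accumulated since last refresh
    for _ in range(num_steps):
        x_next = x - eta * grad_est
        drift += float(np.linalg.norm(x_next - x) ** 2)
        if drift > kappa:            # estimator may have degraded: refresh it
            grad_est = oracle1(x_next)
            drift = 0.0
        else:                        # cheap update via the difference oracle
            grad_est = grad_est + oracle2(x_next, x)
        x = x_next
    return x
```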
Our improvement introduces two new components: the use of the tree mechanism in place of the Gaussian mechanism used by Ganesh et al. (2023), and the use of adaptive batch sizes for constructing $\mathcal{O}_2$.
In the prior approach using the Gaussian mechanism, a noisy gradient estimator is computed at one iterate, and the estimator at the next iterate is obtained by adding the estimated gradient difference from $\mathcal{O}_2$ together with fresh Gaussian noise to preserve privacy. Over multiple iterations, the accumulated noise can severely degrade the accuracy of the gradient estimator, requiring frequent re-queries of $\mathcal{O}_1$. The tree mechanism, on the other hand, mitigates this issue when frequent queries to $\mathcal{O}_2$ are needed.
However, simply replacing the Gaussian mechanism with the tree mechanism and using a fixed batch size does not yield optimal results. In worst-case scenarios, where the function's gradients are large, the drift grows quickly, necessitating frequent calls to $\mathcal{O}_1$, which diminishes the advantages of the tree mechanism.
To address this, we introduce adaptive batch sizes. In Ganesh et al. (2023), the oracle $\mathcal{O}_2$ is constructed by drawing a fixed-size batch $B$ from the unused portion of the dataset and outputting the averaged gradient difference $\frac{1}{|B|}\sum_{z \in B}\big(\nabla f(x_t; z) - \nabla f(x_{t-1}; z)\big)$. Given an upper bound on the movement $\|x_t - x_{t-1}\|$, they guaranteed that each summand in this average has bounded norm, thereby bounding the sensitivity of $\mathcal{O}_2$.
In contrast, we dynamically adjust the batch size in proportion to the movement $\|x_t - x_{t-1}\|$ and compute the averaged gradient difference over this adaptively sized batch, as sketched below. Fixed batch sizes present two drawbacks: (i) when $\|x_t - x_{t-1}\|$ is large, the gradient estimator has higher sensitivity and variance, leading to worse estimation accuracy; (ii) when $\|x_t - x_{t-1}\|$ is small, progress in terms of function-value decrease is limited. Using a fixed batch size forces us to handle both cases simultaneously: we must add noise and analyze accuracy assuming a worst-case large movement, while for the utility analysis we pretend the movement is small when examining the function-value decrease. The adaptive batch size resolves this tension: it lets us control sensitivity and variance adaptively. When $\|x_t - x_{t-1}\|$ is small, we decrease the batch size while still controlling the variance and sensitivity; when it is large, the function value decreases significantly, aiding in finding an SOSP.
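A minimal sketch of the adaptive-batch gradient-difference oracle described above, assuming a per-sample gradient function `grad_f` and an iterator `data_stream` over unused samples; the proportionality constant `c` and the floor `b_min` are hypothetical knobs, not the exact settings of Algorithm 3.

```python
import numpy as np

def adaptive_diff_oracle(grad_f, data_stream, x_cur, x_prev, c=100.0, b_min=1):
    """Estimate grad F(x_cur) - grad F(x_prev) with a batch whose size is
    proportional to ||x_cur - x_prev||. Small moves need few samples because,
    by smoothness, each per-sample difference already has norm
    O(M * ||x_cur - x_prev||), so sensitivity and variance stay controlled."""
    move = float(np.linalg.norm(x_cur - x_prev))
    batch_size = max(b_min, int(np.ceil(c * move)))
    batch = [next(data_stream) for _ in range(batch_size)]   # fresh, unused samples
    diffs = [grad_f(x_cur, z) - grad_f(x_prev, z) for z in batch]
    return np.mean(diffs, axis=0)
```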
By combining the tree mechanism with adaptive batch sizes, we improve the accuracy of gradient estimation and achieve better results for privately finding an SOSP.
1.3 Other Related Work
A significant body of literature on private optimization focuses on the convex setting, where it is typically assumed that each function $f(\cdot; z)$ is convex for any $z$ in the universe (e.g., Chaudhuri et al., 2011; Bassily et al., 2014, 2019; Feldman et al., 2020; Asi et al., 2021; Kulkarni et al., 2021; Carmon et al., 2023).
The tree mechanism, originally introduced by the differential privacy (DP) community for continual observation (Dwork et al., 2010; Chan et al., 2011), has inspired tree-structured private optimization algorithms such as those of Asi et al. (2021); Bassily et al. (2021); Arora et al. (2023); Zhang et al. (2024). Some prior works have explored adaptive batch size techniques in optimization. For instance, De et al. (2016) introduced adaptive batch sizing for stochastic gradient descent (SGD), while Ji et al. (2020) combined adaptive batch sizing with variance reduction techniques to modify the SVRG and Spider algorithms. However, the motivations behind these works and their approaches to setting adaptive batch sizes differ from ours. To the best of our knowledge, we are the first to propose using adaptive batch sizes in the context of optimization under differential privacy constraints.
2 Preliminaries
Throughout the paper, we use $\|\cdot\|$ to denote both the $\ell_2$ norm of a vector and the operator norm of a matrix when there is no confusion.
Definition 2.1 (Lipschitz, Smoothness and Hessian Lipschitz).
Let $x, y \in \mathbb{R}^d$. Given a twice differentiable function $g: \mathbb{R}^d \to \mathbb{R}$, we say $g$ is $G$-Lipschitz if for all $x, y$, $|g(x) - g(y)| \le G\|x - y\|$; we say $g$ is $M$-smooth if for all $x, y$, $\|\nabla g(x) - \nabla g(y)\| \le M\|x - y\|$; and we say $g$ is $\rho$-Hessian Lipschitz if for all $x, y$, we have $\|\nabla^2 g(x) - \nabla^2 g(y)\| \le \rho\|x - y\|$.
2.1 Other Techniques
As mentioned in the introduction, we use the tree mechanism (Algorithm 1) to privatize the algorithm, whose formal guarantee is stated below:
Theorem 2.2 (Tree Mechanism, Dwork et al. (2010); Chan et al. (2011)).
Let be dataset spaces, and be the state space. Let be a sequence of algorithms for . Let be the algorithm that given a dataset , sequentially computes for , and then outputs .
Suppose for all , and neighboring for all auxiliary inputs . Then setting , Algorithm 1 is -DP. Furthermore, with probability at least , for all .
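As a concrete illustration of the mechanism (not a verbatim transcription of Algorithm 1), here is a minimal Python sketch of a Gaussian-noise binary counting mechanism for releasing vector prefix sums: each released prefix sum aggregates only O(log T) noisy dyadic-block sums, which is the source of the accuracy advantage over adding independent noise at every step. The noise scale `sigma` still has to be calibrated to the sensitivity as in Theorem 2.2.

```python
import numpy as np

def tree_private_prefix_sums(vectors, sigma, rng=None):
    """Release noisy prefix sums of a sequence of d-dimensional vectors.
    Each dyadic block [start, start+length) stores its true partial sum plus
    one Gaussian noise draw; prefix sum t is assembled from O(log T) blocks."""
    rng = np.random.default_rng() if rng is None else rng
    T, d = len(vectors), len(vectors[0])
    noisy_block = {}  # (start, length) -> noisy partial sum, reused across queries

    def block_value(start, length):
        if (start, length) not in noisy_block:
            true_sum = np.sum(vectors[start:start + length], axis=0)
            noisy_block[(start, length)] = true_sum + rng.normal(0.0, sigma, d)
        return noisy_block[(start, length)]

    prefix_sums = []
    for t in range(1, T + 1):
        total, start, remaining = np.zeros(d), 0, t
        while remaining > 0:                       # greedy dyadic decomposition of [0, t)
            length = 1 << (remaining.bit_length() - 1)
            total = total + block_value(start, length)
            start += length
            remaining -= length
        prefix_sums.append(total)
    return prefix_sums
```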
We also need the concentration inequality for norm-subGaussian random vectors.
Definition 2.3 (SubGaussian, and Norm-SubGaussian).
We say a random vector $X \in \mathbb{R}^d$ is SubGaussian ($\mathrm{SG}(\zeta)$) if there exists a positive constant $\zeta$ such that $\mathbb{E}\exp(\langle v, X - \mathbb{E}X \rangle) \le \exp(\|v\|^2\zeta^2/2)$ for all $v \in \mathbb{R}^d$. We say $X$ is norm-SubGaussian ($\mathrm{nSG}(\zeta)$) if there exists $\zeta$ such that $\Pr[\|X - \mathbb{E}X\| \ge t] \le 2\exp\big(-\frac{t^2}{2\zeta^2}\big)$ for all $t \in \mathbb{R}$.
Lemma 2.4 (Hoeffding type inequality for norm-subGaussian, Jin et al. (2019)).
Let $X_1, \dots, X_k \in \mathbb{R}^d$ be random vectors, and for each $i$, suppose $X_i \mid \mathcal{F}_{i-1}$ is zero-mean $\mathrm{nSG}(\zeta_i)$, where $\mathcal{F}_i$ is the corresponding filtration. Then there exists an absolute constant $c$ such that, for any $\delta > 0$, with probability at least $1 - \delta$, $\big\|\sum_{i=1}^{k} X_i\big\| \le c\sqrt{\sum_{i=1}^{k} \zeta_i^2 \cdot \log(2d/\delta)}$, which means $\sum_{i=1}^{k} X_i$ is $\mathrm{nSG}\big(c\sqrt{\log d \cdot \sum_{i=1}^{k} \zeta_i^2}\big)$.
Theorem 2.5 (Matrix Bernstein Inequality, Tropp et al. (2015)).
Consider a finite sequence $\{X_i\}_{i=1}^{k}$ of independent random matrices with common dimensions $d_1 \times d_2$, satisfying $\mathbb{E}X_i = 0$ and $\|X_i\| \le b$ a.s. for each $i$. Let $Z = \sum_{i=1}^{k} X_i$ and $v(Z) = \max\{\|\mathbb{E}(ZZ^*)\|, \|\mathbb{E}(Z^*Z)\|\}$. Then for any $t \ge 0$, we have
$$\Pr[\|Z\| \ge t] \le (d_1 + d_2)\exp\Big(\frac{-t^2/2}{v(Z) + bt/3}\Big).$$
3 SOSP
We make the following assumption for our main result.
Assumption 3.1.
Let $G, M, \rho > 0$. Any function $f(\cdot; z)$ with $z$ drawn from $\mathcal{P}$ is $G$-Lipschitz, $\rho$-Hessian Lipschitz, and $M$-smooth, almost surely. Moreover, we are given a public initial point $x_0$ whose suboptimality $F_{\mathcal{P}}(x_0) - \min_x F_{\mathcal{P}}(x)$ is bounded by a known constant.
We modify the Stochastic SpiderBoost used in Ganesh et al. (2023) and state it in Algorithm 2. The following standard lemma plays a crucial role in finding stationary points of smooth functions:
Lemma 3.2.
Assume is -smooth and let . Let . Then we have Moreover, if and , we have
Proof.
By the smoothness assumption, we know
When and , the conclusion follows from the calculation.
∎
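For completeness, here is a hedged sketch of the standard calculation behind this type of descent lemma, written for an update $x' = x - \eta\Delta$ with an inexact gradient estimator $\Delta$; the notation is ours and the exact constants in Lemma 3.2 may differ.

```latex
% Standard one-step descent under M-smoothness with an inexact gradient Delta
% (our notation; a sketch, not the exact statement of Lemma 3.2).
\begin{align*}
g(x') &\le g(x) + \langle \nabla g(x), x' - x \rangle + \tfrac{M}{2}\|x' - x\|^2
       && \text{($M$-smoothness, } x' = x - \eta\Delta)\\
      &=   g(x) - \eta \langle \nabla g(x), \Delta \rangle + \tfrac{M\eta^2}{2}\|\Delta\|^2\\
      &=   g(x) - \tfrac{\eta}{2}\|\nabla g(x)\|^2
           - \big(\tfrac{\eta}{2} - \tfrac{M\eta^2}{2}\big)\|\Delta\|^2
           + \tfrac{\eta}{2}\|\Delta - \nabla g(x)\|^2
       && \text{(polarization identity)}\\
      &\le g(x) - \tfrac{\eta}{2}\|\nabla g(x)\|^2 + \tfrac{\eta}{2}\|\Delta - \nabla g(x)\|^2
       && \text{(when } \eta \le 1/M).
\end{align*}
```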
One of the main challenges in finding an SOSP, compared with finding an FOSP, is showing that the algorithm can escape from saddle points, which was addressed by previous results (e.g., Jin et al. (2017)) and has been established in the DP literature as well:
Lemma 3.3 (Essentially from Wang et al. (2019a)).
Under Assumption 3.1, run SGD iterations , with step size . Suppose is a saddle point satisfying and , . If where , , and for all , with probability at least , one has where .
Lemma 3.3 suggests that, if we meet a saddle point, then after the following steps, the function value will decrease by at least . This means the function value decreases by on average for each step. Similar to the proof in Ganesh et al. (2023), we have the following guarantee of Stochastic SpiderBoost:
Proposition 3.4.
The intuition behind Proposition 3.4 is that, if we have not yet found an $\alpha$-SOSP, then on average the function value decreases by a fixed amount at each step. Since the total possible decrease is bounded by the initial suboptimality $F_{\mathcal{P}}(x_0) - \min_x F_{\mathcal{P}}(x)$, a sufficiently large number of steps ensures that we find an $\alpha$-SOSP. See the Appendix for more discussion of Proposition 3.4.
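A hedged sketch of this counting argument in our own symbols (with $\Phi$ the per-step average decrease guaranteed while no $\alpha$-SOSP has been found):

```latex
% If each of the first T steps decreases the objective by at least Phi on
% average while no alpha-SOSP is found, the initial gap bounds T (our symbols).
F_{\mathcal{P}}(x_0) - \min_x F_{\mathcal{P}}(x)
  \;\ge\; F_{\mathcal{P}}(x_0) - F_{\mathcal{P}}(x_T)
  \;\ge\; T \cdot \Phi
\quad\Longrightarrow\quad
T \;\le\; \frac{F_{\mathcal{P}}(x_0) - \min_x F_{\mathcal{P}}(x)}{\Phi}.
```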
Algorithm 2 follows the SpiderBoost framework. We either query $\mathcal{O}_1$ to estimate the gradient itself or query $\mathcal{O}_2$ to estimate the gradient difference over the last consecutive points. The drift term controls the estimation error: when the drift is small, we know the current estimator is still good, and when it is large, we draw a fresh estimator from $\mathcal{O}_1$. A bookkeeping term is used for the technical purpose of applying Lemma 3.3: when we meet a potential saddle point, we add Gaussian noise to escape from it and update this term so that we will not add the Gaussian noise again in the following steps.
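The bookkeeping around saddle points can be sketched as follows (a hedged illustration with our own variable names `frozen_until`, `r_perturb`, and `small_grad`; the actual thresholds come from Lemma 3.3):

```python
import numpy as np

def maybe_perturb(x, grad_est, t, frozen_until, r_perturb, small_grad,
                  escape_window=100, rng=None):
    """If the gradient estimate is small (a potential saddle point) and no
    escape phase is currently in progress, add one Gaussian perturbation and
    block further perturbations until the escape window has passed."""
    rng = np.random.default_rng() if rng is None else rng
    if np.linalg.norm(grad_est) <= small_grad and t >= frozen_until:
        x = x + rng.normal(0.0, r_perturb, size=x.shape)
        frozen_until = t + escape_window   # no second perturbation in this window
    return x, frozen_until
```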
3.1 Constructing Private Gradient Oracles
We construct the gradient oracles below in Algorithm 3.
Lemma 3.5 (Gradient oracles with bounded error).
Proof.
From now on, we adopt Algorithm 3 as the gradient oracles for Line 5 and Line 7, respectively, in Algorithm 2, with the parameters set accordingly. We then bound the error between the gradient estimators and the true gradients for Algorithm 2.
Lemma 3.6.
Suppose the dataset is drawn i.i.d. from the distribution . For any and letting be the largest integer such that is set to be 0, with probability at least , for some universal constant , we have
(1)
Proof.
Now, we consider the noise added from the tree mechanism to make the algorithm private.
Lemma 3.7.
Proof.
It suffices to consider the sensitivity of the gradient oracles.
Consider the sensitivity of $\mathcal{O}_1$ first. Comparing its outputs on two neighboring datasets, it is clear that
As for the sensitivity of $\mathcal{O}_2$, we have
The privacy guarantee follows from the tree mechanism (Theorem 2.2). ∎
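To see why the adaptive batch keeps the sensitivity under control, here is a hedged back-of-the-envelope calculation in our own notation ($b_1$, $b_t$, and $c$ are illustrative symbols, not the exact constants of the proof): swapping one sample in an averaged batch changes the output by at most $2/b$ times a per-sample norm bound.

```latex
% Hedged sensitivity sketch (our notation): O_1 averages G-Lipschitz gradients
% over a batch of size b_1, while O_2 averages gradient differences, each of
% norm at most M*||x_t - x_{t-1}|| by smoothness, over an adaptive batch of
% size b_t proportional to ||x_t - x_{t-1}||.
\Delta_{\mathcal{O}_1} \le \frac{2G}{b_1},
\qquad
\Delta_{\mathcal{O}_2} \le \frac{2M\|x_t - x_{t-1}\|}{b_t}
  = O\!\Big(\frac{M}{c}\Big)
  \quad \text{when } b_t = \big\lceil c\,\|x_t - x_{t-1}\| \big\rceil.
```

Under this accounting, the sensitivity of $\mathcal{O}_2$ no longer depends on how far the iterates moved, which is exactly what a fixed batch size could not achieve.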
With the noise added by the tree mechanism in mind, we now derive a high-probability error bound for our gradient estimators.
Lemma 3.8.
Proof.
By our setting of parameters, we know
Then our choice of the noise parameter ensures the privacy guarantee by Lemma 3.7.
By Lemma 3.6 and our parameter settings, we have
Hence we conclude the claimed bound, which completes the proof. ∎
We also need to show that we can find an $\alpha$-SOSP before using up all the functions, for which the following technical lemma is required:
Lemma 3.9.
Consider the implementation of Algorithm 2. Suppose the size of the dataset can be arbitrarily large, with functions drawn i.i.d. from $\mathcal{P}$, and that we run the algorithm until an $\alpha$-SOSP is found. Then, with high probability, the total number of functions we use is bounded by
Proof.
By Proposition 3.4, running sufficiently many iterations suffices to find an $\alpha$-SOSP. Consider the outputs of the algorithm up to the step at which we halt. We show
(2)
Denote the set . As , we know .
Now consider the set of indices of steps at which the norm of the gradient estimator is large. It suffices to bound its size.
By Lemma 3.2, we know when , , and when . Given the bound on the function values, we know
Hence
This completes the proof of Equation (2).
The total number of functions used for $\mathcal{O}_1$ can be bounded directly. The total number of functions used for $\mathcal{O}_2$ is upper bounded as follows:
This completes the proof. ∎
Given the dataset size requirement, we can now derive the final bound for finding an SOSP.
Lemma 3.10.
Proof.
By Lemma 3.9 and our parameter settings, we need
First,
Secondly,
Thirdly,
Finally,
Combining these together, we get the claimed statement.
∎
Theorem 3.11.
Remark 3.12.
If we treat the problem parameters other than $n$, $d$, $\epsilon$, and $\delta$ as constants, then we get $\alpha = \tilde{O}\big(\frac{1}{n^{1/3}} + (\frac{\sqrt{d}}{n\epsilon})^{1/2}\big)$, as claimed in the abstract and introduction.
4 Discussion
We combine adaptive batch sizes with the tree mechanism to improve the previous best results for privately finding an SOSP. Our approach achieves the same bound as the state-of-the-art method for finding an FOSP, suggesting that privately finding an SOSP may incur no additional cost.
Several interesting questions remain. First, what are the tight bounds for privately finding an FOSP and an SOSP? Second, can the adaptive batch size technique be applied in other settings? Could it offer additional advantages, such as reducing runtime in practice? Finally, while we can obtain a generalization error bound using concentration inequalities and the union bound, can we achieve a better generalization error bound for non-convex optimization?
References
- Agarwal et al. [2017] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima faster than gradient descent. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1195–1199, 2017.
- Arjevani et al. [2023] Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199(1):165–214, 2023.
- Arora et al. [2023] Raman Arora, Raef Bassily, Tomás González, Cristóbal A Guzmán, Michael Menart, and Enayat Ullah. Faster rates of convergence to stationary points in differentially private optimization. In International Conference on Machine Learning, pages 1060–1092. PMLR, 2023.
- Asi et al. [2021] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in l1 geometry. In International Conference on Machine Learning, pages 393–403. PMLR, 2021.
- Bassily et al. [2014] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS), pages 464–473, 2014.
- Bassily et al. [2019] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. Advances in neural information processing systems, 32, 2019.
- Bassily et al. [2021] Raef Bassily, Cristóbal Guzmán, and Anupama Nandi. Non-euclidean differentially private stochastic convex optimization. In Conference on Learning Theory, pages 474–499. PMLR, 2021.
- Carmon et al. [2020] Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points i. Mathematical Programming, 184(1):71–120, 2020.
- Carmon et al. [2023] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian. Resqueing parallel and private stochastic convex optimization. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 2031–2058. IEEE, 2023.
- Chan et al. [2011] T-H Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Transactions on Information and System Security (TISSEC), 14(3):1–24, 2011.
- Chaudhuri et al. [2011] Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3), 2011.
- Davis et al. [2022] Damek Davis, Dmitriy Drusvyatskiy, Yin Tat Lee, Swati Padmanabhan, and Guanghao Ye. A gradient sampling method with complexity guarantees for lipschitz functions in high and low dimensions. Advances in neural information processing systems, 35:6692–6703, 2022.
- De et al. [2016] Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein. Big batch sgd: Automated inference using adaptive batch sizes. arXiv preprint arXiv:1610.05792, 2016.
- Dwork et al. [2006] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proc. of the Third Conf. on Theory of Cryptography (TCC), pages 265–284, 2006. URL http://dx.doi.org/10.1007/11681878_14.
- Dwork et al. [2010] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N Rothblum. Differential privacy under continual observation. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 715–724, 2010.
- Fang et al. [2018] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in neural information processing systems, 31, 2018.
- Feldman et al. [2020] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 439–449, 2020.
- Ganesh et al. [2023] Arun Ganesh, Daogao Liu, Sewoong Oh, and Abhradeep Thakurta. Private (stochastic) non-convex optimization revisited: Second-order stationary points and excess risks. Advances in Neural Information Processing Systems, 2023.
- Gao and Wright [2023] Changyu Gao and Stephen J Wright. Differentially private optimization for smooth nonconvex erm. arXiv preprint arXiv:2302.04972, 2023.
- Ghadimi and Lan [2013] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization, 23(4):2341–2368, 2013.
- Ji et al. [2020] Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, and Yingbin Liang. History-gradient aided batch size adaptation for variance reduced algorithms. In International Conference on Machine Learning, pages 4762–4772. PMLR, 2020.
- Jin et al. [2017] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In International conference on machine learning, pages 1724–1732. PMLR, 2017.
- Jin et al. [2019] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I Jordan. A short note on concentration inequalities for random vectors with subgaussian norm. arXiv preprint arXiv:1902.03736, 2019.
- Jordan et al. [2023] Michael Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, and Manolis Zampetakis. Deterministic nonsmooth nonconvex optimization. In The Thirty Sixth Annual Conference on Learning Theory, pages 4570–4597. PMLR, 2023.
- Kornowski and Shamir [2021] Guy Kornowski and Ohad Shamir. Oracle complexity in nonsmooth nonconvex optimization. Advances in Neural Information Processing Systems, 34:324–334, 2021.
- Kulkarni et al. [2021] Janardhan Kulkarni, Yin Tat Lee, and Daogao Liu. Private non-smooth erm and sco in subquadratic steps. Advances in Neural Information Processing Systems, 34:4053–4064, 2021.
- Lowy et al. [2024] Andrew Lowy, Jonathan Ullman, and Stephen Wright. How to make the gradients small privately: Improved rates for differentially private non-convex optimization. In Forty-first International Conference on Machine Learning, 2024.
- Menart et al. [2024] Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, and Cristóbal Guzmán. Differentially private non-convex optimization under the kl condition with optimal rates. In International Conference on Algorithmic Learning Theory, pages 868–906. PMLR, 2024.
- Nesterov [1998] Yurii Nesterov. Introductory lectures on convex programming volume i: Basic course. Lecture notes, 3(4):5, 1998.
- Nesterov and Polyak [2006] Yurii Nesterov and Boris T Polyak. Cubic regularization of newton method and its global performance. Mathematical programming, 108(1):177–205, 2006.
- Nguyen et al. [2017] Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. Sarah: A novel method for machine learning problems using stochastic recursive gradient. In International conference on machine learning, pages 2613–2621. PMLR, 2017.
- Tropp et al. [2015] Joel A Tropp et al. An introduction to matrix concentration inequalities. Foundations and Trends® in Machine Learning, 8(1-2):1–230, 2015.
- Wang et al. [2019a] Di Wang, Changyou Chen, and Jinhui Xu. Differentially private empirical risk minimization with non-convex loss functions. In International Conference on Machine Learning, pages 6526–6535. PMLR, 2019a.
- Wang et al. [2019b] Di Wang, Changyou Chen, and Jinhui Xu. Differentially private empirical risk minimization with non-convex loss functions. In International Conference on Machine Learning, pages 6526–6535. PMLR, 2019b.
- Wang et al. [2019c] Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost and momentum: Faster variance reduction algorithms. Advances in Neural Information Processing Systems, 32, 2019c.
- Zhang et al. [2020] Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Suvrit Sra, and Ali Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. In International Conference on Machine Learning, pages 11173–11182. PMLR, 2020.
- Zhang et al. [2024] Qinzi Zhang, Hoang Tran, and Ashok Cutkosky. Private zeroth-order nonsmooth nonconvex optimization. In The Twelfth International Conference on Learning Representations, 2024.
Appendix A Appendix
A.1 Discussion of Proposition 3.4
Proposition 3.4 has become a standard result in non-convex optimization. Ganesh et al. [2023] assume two kinds of gradient oracles that satisfy the norm-subGaussian conditions below:
Definition A.1 (SubGaussian gradient oracles).
For a $G$-Lipschitz and $M$-smooth function $F$:
We say $\mathcal{O}_1$ is a first kind of norm-subGaussian gradient oracle with parameter $\zeta_1$ if, given $x$, $\mathcal{O}_1(x)$ satisfies $\mathbb{E}[\mathcal{O}_1(x)] = \nabla F(x)$ and $\mathcal{O}_1(x) - \nabla F(x)$ is $\mathrm{nSG}(\zeta_1)$.
We say $\mathcal{O}_2$ is a second kind of norm-subGaussian stochastic gradient oracle with parameter $\zeta_2$ if, given $x$ and $y$, $\mathcal{O}_2(x, y)$ satisfies $\mathbb{E}[\mathcal{O}_2(x, y)] = \nabla F(x) - \nabla F(y)$ and $\mathcal{O}_2(x, y) - (\nabla F(x) - \nabla F(y))$ is $\mathrm{nSG}(\zeta_2 \|x - y\|)$.
In Definition A.1, the oracles are required to be unbiased with norm-subGaussian error. While this condition is sufficient, it is not necessary. These two types of gradient oracles are then used to construct gradient estimators whose errors are tightly controlled with high probability (Lemma 3.3 in Ganesh et al. [2023]). Proposition 3.4 can be established directly using Lemma 3.3 from Ganesh et al. [2023], as demonstrated in the proof of Lemma 3.6 in the same work. We include the proof here for completeness.
Proof of Proposition 3.4.
By Lemma 3.2 and the precondition that , we know that, if , then . Otherwise, . If but is a saddle point, then by Lemma 3.3, we know
Then, if none of the points along the trajectory is an $\alpha$-SOSP, the total function-value decrease would exceed the bound guaranteed by Assumption 3.1, a contradiction. Hence at least one of these points must be an $\alpha$-SOSP, which completes the proof.
∎
A.2 Other results
The first result combines the existing result on finding an SOSP of the empirical function with the following generalization error bound:
Theorem A.2.
Suppose the dataset is drawn i.i.d. from the underlying distribution $\mathcal{P}$ and Assumption 3.1 holds. Additionally, assume the optimization is performed over a constrained domain of bounded diameter. Then, for any point in the domain, with high probability, we have
Proof.
We construct a maximal packing of the domain, such that for any point in the domain there exists a nearby point in the packing.
By the union bound, the Hoeffding-type inequality for norm-subGaussian vectors (Lemma 2.4), and the Matrix Bernstein inequality (Theorem 2.5), we know that, with high probability, for all points in the packing we have
(3)
Condition on the event in Equation (3). Choosing the packing radius small enough, the Lipschitzness and smoothness assumptions imply that for any point in the domain there exists a nearby point in the packing, and
Similarly, we can show
∎
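For reference, the omitted step is the standard triangle-inequality argument over the packing; a hedged sketch in our own notation (with $F_{\mathcal{D}}$ the empirical objective, $y$ the packing point within distance $r$ of $x$, and $M$ the smoothness constant) reads:

```latex
% Hedged sketch of the covering argument (our notation):
\|\nabla F_{\mathcal{D}}(x) - \nabla F_{\mathcal{P}}(x)\|
 \le \underbrace{\|\nabla F_{\mathcal{D}}(x) - \nabla F_{\mathcal{D}}(y)\|}_{\le\, M r}
   + \|\nabla F_{\mathcal{D}}(y) - \nabla F_{\mathcal{P}}(y)\|
   + \underbrace{\|\nabla F_{\mathcal{P}}(y) - \nabla F_{\mathcal{P}}(x)\|}_{\le\, M r},
```

where the middle term is controlled by Equation (3) on the packing and the $Mr$ terms are made lower order by choosing $r$ small; the Hessian version follows in the same way using the $\rho$-Hessian Lipschitz property.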
The current state of the art for privately finding an SOSP of the empirical function is from Ganesh et al. [2023], where they find an $\alpha$-SOSP of the empirical objective. Combining this result with Theorem A.2, we can privately find an SOSP of the population function $F_{\mathcal{P}}$ with
If we allow exponential running time, as Lowy et al. [2024] suggest, we can privately find an initial point that approximately minimizes the empirical function and then use it as a warm start to further improve the final bound.