Adaptive Batch Size for Privately Finding Second-Order Stationary Points
Abstract
There is a gap between finding a first-order stationary point (FOSP) and a second-order stationary point (SOSP) under differential privacy constraints, and it remains unclear whether privately finding an SOSP is more challenging than finding an FOSP. Specifically, Ganesh et al. (2023) claimed that an $\alpha$-SOSP can be found privately, with the rate $\alpha$ depending on the dataset size $n$, the dimension $d$, and the differential privacy parameter $\varepsilon$. However, a recent analysis revealed an issue in their saddle-point escape procedure, leading to weaker guarantees. Building on the SpiderBoost algorithm framework, we propose a new approach that uses adaptive batch sizes and incorporates the binary tree mechanism. Our method not only corrects this issue but also improves the rate for privately finding an SOSP. The improved bound matches the state-of-the-art bound for finding an FOSP, suggesting that privately finding an SOSP may be achievable at no additional cost.
1 Introduction
Privacy concerns have gained increasing attention with the rapid development of artificial intelligence and modern machine learning, particularly the widespread success of large language models. Differential privacy (DP) has become the standard notion of privacy in machine learning since it was introduced by Dwork et al. (2006). Given two neighboring datasets $\mathcal{D}$ and $\mathcal{D}'$ differing in a single item, a mechanism $\mathcal{M}$ is said to be $(\varepsilon, \delta)$-differentially private if, for any event $\mathcal{O}$, it holds that
$$\Pr[\mathcal{M}(\mathcal{D}) \in \mathcal{O}] \le e^{\varepsilon} \Pr[\mathcal{M}(\mathcal{D}') \in \mathcal{O}] + \delta.$$
In this work, we focus on the stochastic optimization problem under the constraint of DP. The population loss function is defined below:
$$F_{\mathcal{P}}(x) := \mathbb{E}_{z \sim \mathcal{P}}[f(x; z)],$$
where the functions $f(\cdot; z)$ may be non-convex, the underlying distribution $\mathcal{P}$ is unknown, and we are given a dataset $\mathcal{D} = \{z_1, \ldots, z_n\}$ drawn i.i.d. from $\mathcal{P}$. Notably, our goal is to design a private algorithm with provable utility guarantees under the i.i.d. assumption.
Minimizing non-convex functions is generally challenging and often intractable, but most models used in practice are not guaranteed to be convex. How, then, can we explain the success of optimization methods in practice? One possible explanation is the effectiveness of Stochastic Gradient Descent (SGD), which is well known to find an $\alpha$-first-order stationary point (FOSP) of a non-convex function $F$, that is, a point $x$ such that $\|\nabla F(x)\| \le \alpha$, within $\mathrm{poly}(1/\alpha)$ steps (Nesterov, 1998). However, FOSPs can include saddle points or even local maxima. Thus, we focus on finding second-order stationary points (SOSPs) of the non-convex function $F_{\mathcal{P}}$.
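For concreteness, the following minimal sketch (Python with NumPy; the toy objective, noise level, and all thresholds are illustrative assumptions, not taken from this paper) runs plain SGD and halts once the true gradient norm falls below a target $\alpha$, i.e., once an $\alpha$-FOSP is reached.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_F(x):
    # Gradient of the toy non-convex objective F(x) = ||x||^2 + cos(sum(x)).
    return 2.0 * x - np.sin(np.sum(x))

def stochastic_grad(x):
    # Unbiased stochastic gradient: the true gradient plus zero-mean noise.
    return grad_F(x) + 0.01 * rng.standard_normal(x.shape)

alpha, eta = 0.1, 0.05          # target FOSP accuracy and step size
x = rng.standard_normal(5)
for step in range(100_000):
    if np.linalg.norm(grad_F(x)) <= alpha:   # an alpha-FOSP has been reached
        break
    x -= eta * stochastic_grad(x)
print(step, np.linalg.norm(grad_F(x)))
```

Note that such a stopping point may still be a saddle point of the objective, which is precisely the gap that second-order stationarity addresses.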
Non-convex optimization has been extensively studied in recent years due to its central role in modern machine learning, and we now have a solid understanding of the complexity involved in finding FOSPs and SOSPs (Ghadimi and Lan, 2013; Agarwal et al., 2017; Carmon et al., 2020; Zhang et al., 2020). Variance reduction techniques have been shown to improve the theoretical complexity, leading to the development of several promising algorithms such as Spider (Fang et al., 2018), SARAH (Nguyen et al., 2017), and SpiderBoost (Wang et al., 2019b). More recently, private non-convex optimization has emerged as an active area of research (Wang et al., 2019a; Tran and Cutkosky, 2022; Arora et al., 2023; Gao and Wright, 2023; Ganesh et al., 2023; Wang et al., 2023; Lowy et al., 2024; Kornowski et al., 2024; Menart et al., 2024).
1.1 Our Main Result
In this work, we study how to find an SOSP of $F_{\mathcal{P}}$ privately. Let us formally define the FOSP and the SOSP. For more on Hessian Lipschitz continuity and related background, see the preliminaries in Section 2.
Definition 1.1 (FOSP).
For $\alpha > 0$, we say a point $x$ is an $\alpha$-first-order stationary point ($\alpha$-FOSP) of a function $g$ if $\|\nabla g(x)\| \le \alpha$.
Definition 1.2 (SOSP, Nesterov and Polyak (2006); Agarwal et al. (2017)).
For a function $g$ that is $\rho$-Hessian Lipschitz, we say a point $x$ is an $\alpha$-second-order stationary point ($\alpha$-SOSP) of $g$ if $\|\nabla g(x)\| \le \alpha$ and $\lambda_{\min}(\nabla^2 g(x)) \ge -\sqrt{\rho\alpha}$.
Given a dataset of size $n$, privacy parameters $(\varepsilon, \delta)$, and functions defined over a $d$-dimensional space, Ganesh et al. (2023) proposed a private algorithm claimed to find an $\alpha$-SOSP of $F_{\mathcal{P}}$ with a rate that we denote by $\alpha_{\mathrm{SOSP}}$, depending on $n$, $d$, and $\varepsilon$.
However, as shown in Arora et al. (2023), the state-of-the-art rate for privately finding an FOSP, which we denote by $\alpha_{\mathrm{FOSP}}$, is tighter. (As proposed by Lowy et al. (2024), allowing exponential running time enables the use of the exponential mechanism to find a warm start, which can further improve the bounds for both FOSP and SOSP.)
When the privacy parameter $\varepsilon$ is sufficiently small, so that the error term depending on it dominates the non-private term, we observe that $\alpha_{\mathrm{SOSP}} > \alpha_{\mathrm{FOSP}}$. This raises the question: is finding an SOSP under differential privacy constraints more difficult than finding an FOSP?
Moreover, as pointed out by Tao et al. (2025), the results in Ganesh et al. (2023) may be overly optimistic. In particular, the saddle-point escape subprocedure used in Ganesh et al. (2023) was designed for the full-batch setting. However, to mitigate dependence issues, the algorithm performs only a single pass over the functions and relies on minibatches instead. A direct fix for this issue leads to a weaker guarantee on the minimum eigenvalue of the Hessian, one that fails to satisfy Definition 1.2. To address this, Tao et al. (2025) proposed an alternative correction, but their method yields weaker theoretical guarantees than those originally claimed by Ganesh et al. (2023).
This work improves upon the results of Ganesh et al. (2023). In addition, we introduce a new saddle-point escape subprocedure that correctly addresses the issue in Ganesh et al. (2023) while fully recovering, and even strengthening, the originally claimed theoretical guarantees.
Specifically, we present an $(\varepsilon, \delta)$-DP algorithm that finds an $\alpha$-SOSP with $\alpha$ matching the state-of-the-art FOSP rate $\alpha_{\mathrm{FOSP}}$.
This improved bound suggests that when we aim to find a stationary point privately, we can obtain an SOSP for free under additional (standard) assumptions.
It is also worth noting that our improvement primarily affects the terms that depend on the privacy parameters. In the non-private setting, as $\varepsilon \to \infty$ (i.e., without privacy constraints), all the results discussed above achieve the same non-private bound, which matches the non-private lower bound established by Arjevani et al. (2023) in high-dimensional settings. However, to our knowledge, whether this non-private term can be further improved in the low-dimensional regime remains an open question.
1.2 Overview of Techniques
In this work, we build on the SpiderBoost algorithm framework, similar to prior approaches (Arora et al., 2023; Ganesh et al., 2023), to find second-order stationary points (SOSPs) privately. At a high level, our method leverages two types of gradient oracles: $\mathcal{O}_1$, which estimates the gradient at a point $x$, and $\mathcal{O}_2$, which estimates the gradient difference between two points $x$ and $y$. When performing gradient descent, to compute the gradient estimator at the current point, we can either call $\mathcal{O}_1$ for a direct estimate or call $\mathcal{O}_2$ to update the previous gradient estimate. In our setting, $\mathcal{O}_1$ is more accurate but incurs higher computational or privacy costs.
The approach in Arora et al. (2023) adopts SpiderBoost by querying $\mathcal{O}_1$ periodically: they call $\mathcal{O}_1$ once and then use $\mathcal{O}_2$ for a fixed number of subsequent queries, controlled by a period hyperparameter. Their method ensures the gradient estimators are sufficiently accurate on average, which works well for finding first-order stationary points (FOSPs). However, finding an SOSP, where greater precision is required, presents additional challenges when relying on average-accurate gradient estimators.
To address this, Ganesh et al. (2023) introduced a drift variable that accumulates the movement of the iterates $\|x_i - x_{i-1}\|$ since the last iteration at which $\mathcal{O}_1$ was queried. If the drift remains small, the gradient estimator stays accurate enough, allowing further queries of $\mathcal{O}_2$. However, if the drift grows large, the gradient estimator's accuracy deteriorates, signaling the need to query $\mathcal{O}_1$ for a fresh, more accurate estimate. This modification enables the algorithm to maintain the precision necessary for privately finding an SOSP.
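The following sketch illustrates this drift-controlled switching rule in a plain, non-private SpiderBoost-style loop; the oracle names `O1`/`O2`, the toy per-sample gradient, and all parameter values are illustrative assumptions rather than the paper's Algorithm 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x, z):
    # Toy per-sample gradient (placeholder for the real loss).
    return x - z

def O1(x, batch):
    # Fresh minibatch gradient estimate at x.
    return np.mean([grad_f(x, z) for z in batch], axis=0)

def O2(x, x_prev, batch):
    # Minibatch estimate of the gradient difference between x and x_prev.
    return np.mean([grad_f(x, z) - grad_f(x_prev, z) for z in batch], axis=0)

def spider_boost(data, x0, eta=0.1, kappa=1.0, b1=256, b2=32, T=200):
    x = x0.copy()
    drift = np.inf                       # forces an O1 call at the first step
    Delta = np.zeros_like(x0)
    for _ in range(T):
        if drift > kappa:                # estimator too stale: refresh via O1
            Delta = O1(x, rng.choice(data, size=b1))
            drift = 0.0
        x_prev, x = x, x - eta * Delta   # descent step with current estimator
        Delta = Delta + O2(x, x_prev, rng.choice(data, size=b2))
        drift += np.linalg.norm(x - x_prev) ** 2   # movement since last O1 call
    return x

data = rng.standard_normal((10_000, 5))
print(spider_boost(data, x0=np.ones(5)))   # approaches the sample mean (about 0)
```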
Our improvement introduces two new components: the use of the tree mechanism in place of the Gaussian mechanism used in Ganesh et al. (2023), and the implementation of adaptive batch sizes for constructing $\mathcal{O}_2$.
In the prior approach using the Gaussian mechanism, a noisy gradient estimator is computed at each step, and the next estimator is obtained by adding the output of $\mathcal{O}_2$ to the previous one, together with fresh Gaussian noise to preserve privacy. Over multiple iterations, the accumulation of noise can severely degrade the accuracy of the gradient estimator, requiring frequent re-queries of $\mathcal{O}_1$. The tree mechanism, on the other hand, mitigates this issue when frequent queries to $\mathcal{O}_2$ are needed.
However, simply replacing the Gaussian mechanism with the tree mechanism and using a fixed batch size does not yield optimal results. In worst-case scenarios, where the function's gradients are large, the drift grows quickly, necessitating frequent calls to $\mathcal{O}_1$, which diminishes the advantages of the tree mechanism.
To address this, we introduce adaptive batch sizes. In Ganesh et al. (2023), the oracle $\mathcal{O}_2$ is constructed by drawing a fixed-size batch from the unused portion of the dataset and averaging the per-sample gradient differences over it. Given an upper bound on the per-step movement of the iterates, they guaranteed that this movement stays below a fixed parameter, thereby bounding the sensitivity of $\mathcal{O}_2$.
In contrast, we dynamically adjust the batch size in proportion to the movement $\|x_t - x_{t-1}\|$ and average the per-sample gradient differences over this adaptively sized batch. Fixed batch sizes present two drawbacks: (i) when $\|x_t - x_{t-1}\|$ is large, the gradient estimator has higher sensitivity and variance, leading to worse accuracy; (ii) when $\|x_t - x_{t-1}\|$ is small, progress in terms of function value decrease is limited. Using a fixed batch size forces us to handle both cases simultaneously: we must add noise and analyze accuracy assuming a worst-case large movement, but for the utility analysis we pretend the movement is small when examining the function value decrease. The adaptive batch size resolves this tension by letting us control sensitivity and variance adaptively: when $\|x_t - x_{t-1}\|$ is small, we decrease the batch size while still controlling the variance and sensitivity; when it is large, the function value decreases significantly, aiding in finding an SOSP.
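The next sketch illustrates the adaptive-batch-size idea for the gradient-difference oracle (the helper names and constants are hypothetical): by $M$-smoothness, each per-sample gradient difference has norm at most $M\|x_t - x_{t-1}\|$, so choosing the batch size proportional to $\|x_t - x_{t-1}\|$ keeps the sensitivity of the averaged difference roughly constant regardless of how far the iterate moved.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_f(x, z):
    # Toy per-sample gradient (placeholder for the real loss).
    return x - z

def adaptive_O2(x, x_prev, data_stream, M=1.0, b_min=16, c=512.0):
    move = np.linalg.norm(x - x_prev)
    b_t = max(b_min, int(np.ceil(c * move)))    # batch size scales with movement
    batch = [next(data_stream) for _ in range(b_t)]
    diffs = [grad_f(x, z) - grad_f(x_prev, z) for z in batch]
    # By M-smoothness each per-sample difference has norm at most M * move, so
    # swapping one sample changes the average by at most 2*M*move/b_t <= 2*M/c:
    # the sensitivity no longer grows with the movement of the iterate.
    sensitivity = 2.0 * M * move / b_t
    return np.mean(diffs, axis=0), sensitivity

stream = iter(rng.standard_normal((100_000, 5)))    # fresh i.i.d. samples
x_prev = np.zeros(5)
x = x_prev + 0.1 * rng.standard_normal(5)
estimate, sens = adaptive_O2(x, x_prev, stream)
print(estimate, sens)
```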
By combining the tree mechanism with adaptive batch sizes, we improve the accuracy of gradient estimation and achieve better results for privately finding an SOSP.
Fixing the Error:
Building upon our discussion of obtaining more accurate gradient estimates through adaptive batch sizes, we extend this approach to Hessian estimates, achieving sufficient accuracy in terms of the operator norm. Upon encountering a potential saddle point, characterized by a small gradient norm, we utilize the Hessian estimate to inform the gradient estimation over several iterations. This process is analogous to performing stochastic power iteration on the Hessian to identify the direction corresponding to its smallest eigenvalue. If the Hessian has a sufficiently negative eigenvalue (say, smaller than $-\sqrt{\rho\alpha}$), we demonstrate that this approach produces a significant drift, effectively enabling a successful escape from the saddle point. This idea can also be applied to Ganesh et al. (2023) to recover their claimed rate.
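To see why this behaves like power iteration, consider the following heuristic calculation, written with assumed notation: $H = \nabla^2 F_{\mathcal{P}}(x_0)$ for the saddle point $x_0$, step size $\eta$, and the gradient frozen at $x_0$ and corrected by $H$ during the escape phase:
\begin{align*}
x_{t+1} &= x_t - \eta\big(\nabla F_{\mathcal{P}}(x_0) + H\,(x_t - x_0)\big), \\
x_{t+1} - x_t &= (I - \eta H)\,(x_t - x_{t-1}) = (I - \eta H)^{t}\,(x_1 - x_0).
\end{align*}
The injected perturbation $x_1 - x_0$ is therefore repeatedly multiplied by $(I - \eta H)$. If $\lambda_{\min}(H) \le -\sqrt{\rho\alpha}$, the component of the perturbation along the corresponding eigenvector grows by a factor of at least $1 + \eta\sqrt{\rho\alpha}$ per step, so after roughly $\tilde{O}\big(1/(\eta\sqrt{\rho\alpha})\big)$ steps the iterate has moved far from $x_0$, which certifies a successful escape.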
1.3 Other Related Work
A significant body of literature on private optimization focuses on the convex setting, where it is typically assumed that each function is convex for any in the universe (e.g., (Chaudhuri et al., 2011; Bassily et al., 2014, 2019; Feldman et al., 2020; Asi et al., 2021; Kulkarni et al., 2021; Carmon et al., 2023; Gopi et al., 2023)).
The tree mechanism, originally introduced by the differential privacy (DP) community (Dwork et al., 2010; Chan et al., 2011) for continual observation, has inspired tree-structured private optimization algorithms such as those of Asi et al. (2021); Bassily et al. (2021); Arora et al. (2023); Zhang et al. (2024). One can also use the matrix mechanism (Fichtenberger et al., 2023), which improves on the tree mechanism by a constant factor. Some prior works have explored adaptive batch size techniques in optimization. For instance, De et al. (2016) introduced adaptive batch sizing for stochastic gradient descent (SGD), while Ji et al. (2020) combined adaptive batch sizing with variance reduction techniques to modify the SVRG and Spider algorithms. However, the motivations and approaches to setting adaptive batch sizes in these works differ from ours. To the best of our knowledge, we are the first to propose using adaptive batch sizes in the context of optimization under differential privacy constraints.
2 Preliminaries
Throughout the paper, we use $\|\cdot\|$ to denote both the $\ell_2$ norm of a vector and the operator norm of a matrix when there is no risk of confusion.
Definition 2.1 (Lipschitz, Smoothness and Hessian Lipschitz).
Let $G, M, \rho > 0$. Given a twice differentiable function $g$, we say $g$ is $G$-Lipschitz if for all $x, y$, $|g(x) - g(y)| \le G\|x - y\|$; we say $g$ is $M$-smooth if for all $x, y$, $\|\nabla g(x) - \nabla g(y)\| \le M\|x - y\|$; and we say the function is $\rho$-Hessian Lipschitz if for all $x, y$, we have $\|\nabla^2 g(x) - \nabla^2 g(y)\| \le \rho\|x - y\|$.
2.1 Other Techniques
As mentioned in the introduction, we use the tree mechanism (Algorithm 1) to privatize the algorithm, whose formal guarantee is stated below:
Theorem 2.2 (Tree Mechanism, Dwork et al. (2010); Chan et al. (2011)).
Let $\mathcal{X}_1, \ldots, \mathcal{X}_T$ be dataset spaces, and let $\mathcal{Y}$ be the state space. Let $\mathcal{M}_t$ for $t \in [T]$ be a sequence of algorithms, where $\mathcal{M}_t$ maps a dataset in $\mathcal{X}_t$ together with the previous states to a new state in $\mathcal{Y}$. Let $\mathcal{M}$ be the algorithm that, given a dataset $\mathcal{D} = (\mathcal{D}_1, \ldots, \mathcal{D}_T)$, sequentially computes the states $s_t = \mathcal{M}_t(\mathcal{D}_t, s_1, \ldots, s_{t-1})$ for $t \in [T]$, and then outputs privatized versions of the partial sums of these states.
Suppose each $\mathcal{M}_t$ has bounded sensitivity with respect to neighboring datasets, uniformly over all auxiliary inputs. Then, with the noise scale set accordingly, Algorithm 1 is $(\varepsilon, \delta)$-DP. Furthermore, with probability at least $1 - \zeta$, the additive noise in the output is bounded for all $t \in [T]$, with the bound growing only polylogarithmically in $T$.
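As a concrete illustration of how such a mechanism operates, the following sketch implements the standard binary tree (continual counting) mechanism for releasing noisy prefix sums of a stream of vectors; the class name, interface, and noise calibration are illustrative assumptions, not the paper's Algorithm 1.

```python
import numpy as np

class TreeMechanism:
    def __init__(self, T, d, sigma, seed=0):
        # In an actual deployment sigma is calibrated to the per-step
        # sensitivity, the horizon T, and the privacy parameters (eps, delta);
        # here it is simply a free parameter.
        self.T, self.d, self.sigma = T, d, sigma
        self.rng = np.random.default_rng(seed)
        self.vals = []            # the stream s_1, ..., s_t observed so far
        self.noisy_nodes = {}     # dyadic interval (start, length) -> noisy sum

    def _node(self, start, length):
        # Noisy sum over the dyadic interval [start, start + length); the noise
        # is sampled once and cached, so every stream element contributes to at
        # most O(log T) distinct nodes.
        if (start, length) not in self.noisy_nodes:
            true_sum = np.sum(self.vals[start:start + length], axis=0)
            noise = self.sigma * self.rng.standard_normal(self.d)
            self.noisy_nodes[(start, length)] = true_sum + noise
        return self.noisy_nodes[(start, length)]

    def release_prefix(self, s_t):
        """Append s_t and return a noisy version of s_1 + ... + s_t."""
        self.vals.append(np.asarray(s_t, dtype=float))
        remaining, start = len(self.vals), 0
        out = np.zeros(self.d)
        # Greedily cover [0, t) by at most O(log T) aligned dyadic intervals.
        while remaining > 0:
            length = 1 << (remaining.bit_length() - 1)
            out += self._node(start, length)
            start += length
            remaining -= length
        return out

tree = TreeMechanism(T=16, d=3, sigma=0.1)
for _ in range(16):
    noisy_prefix = tree.release_prefix(np.ones(3))
print(noisy_prefix)   # near (16, 16, 16); only O(log T) noise terms are summed
```

Because each released prefix aggregates at most $O(\log T)$ noisy nodes and each element contributes to at most $O(\log T)$ nodes, the error overhead is only polylogarithmic in $T$, which is what makes frequent $\mathcal{O}_2$ queries affordable.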
We also need the concentration inequality for norm-subGaussian random vectors.
Definition 2.3 (SubGaussian, and Norm-SubGaussian).
We say a random vector $X \in \mathbb{R}^d$ is SubGaussian ($\mathrm{SG}(\sigma)$) if there exists a positive constant $\sigma$ such that $\mathbb{E}[\exp(\langle v, X - \mathbb{E}[X]\rangle)] \le \exp(\|v\|^2 \sigma^2/2)$ for all $v \in \mathbb{R}^d$. We say $X$ is norm-SubGaussian ($\mathrm{nSG}(\sigma)$) if there exists $\sigma$ such that $\Pr[\|X - \mathbb{E}[X]\| \ge t] \le 2\exp(-t^2/(2\sigma^2))$ for all $t \ge 0$.
Lemma 2.4 (Hoeffding type inequality for norm-subGaussian, Jin et al. (2019)).
Let $X_1, \ldots, X_k \in \mathbb{R}^d$ be random vectors, and suppose that for each $i \in [k]$, $X_i \mid \mathcal{F}_{i-1}$ is zero-mean $\mathrm{nSG}(\sigma_i)$, where $\mathcal{F}_{i-1}$ is the corresponding filtration. Then there exists an absolute constant $c$ such that for any $\zeta > 0$, with probability at least $1 - \zeta$, $\big\|\sum_{i=1}^k X_i\big\| \le c\sqrt{\sum_{i=1}^k \sigma_i^2 \cdot \log(2d/\zeta)}$, which means $\sum_{i=1}^k X_i$ is itself norm-SubGaussian up to the absolute constant $c$.
Theorem 2.5 (Matrix Azuma Inequality, Tropp (2012)).
Consider a finite adapted sequence $\{X_k\}$ of random self-adjoint matrices with common dimension $d$, and a fixed sequence $\{A_k\}$ of self-adjoint matrices satisfying $\mathbb{E}_{k-1}[X_k] = 0$ and $X_k^2 \preceq A_k^2$ almost surely. Let $\sigma^2 := \big\|\sum_k A_k^2\big\|$. Then for any $t \ge 0$, we have
$$\Pr\Big[\lambda_{\max}\Big(\sum_k X_k\Big) \ge t\Big] \le d \cdot e^{-t^2/(8\sigma^2)}.$$
3 SOSP
We make the following assumption for our main result.
Assumption 3.1.
Let $G, M, \rho > 0$. Any function $f(\cdot; z)$ with $z$ drawn from $\mathcal{P}$ is $G$-Lipschitz, $\rho$-Hessian Lipschitz, and $M$-smooth, almost surely. Moreover, we are given a public initial point $x_0$ such that $F_{\mathcal{P}}(x_0) - \min_x F_{\mathcal{P}}(x) \le B$ for a known bound $B$.
We modify the Stochastic SpiderBoost used in Ganesh et al. (2023) and state it in Algorithm 3. The following standard lemma plays a crucial role in finding stationary points of smooth functions:
Lemma 3.2.
Assume $F$ is $M$-smooth and let $x_{t+1} = x_t - \eta \Delta_t$ for a gradient estimate $\Delta_t$. Let $\eta \le 1/M$. Then we have
$$F(x_{t+1}) \le F(x_t) - \frac{\eta}{2}\|\nabla F(x_t)\|^2 + \frac{\eta}{2}\|\Delta_t - \nabla F(x_t)\|^2.$$
Moreover, if $\eta \le 1/M$ and $\|\Delta_t - \nabla F(x_t)\| \le \frac{1}{2}\|\nabla F(x_t)\|$, we have
$$F(x_{t+1}) \le F(x_t) - \frac{3\eta}{8}\|\nabla F(x_t)\|^2.$$
Proof.
By the smoothness assumption, we know
$$F(x_{t+1}) \le F(x_t) + \langle \nabla F(x_t), x_{t+1} - x_t \rangle + \frac{M}{2}\|x_{t+1} - x_t\|^2 = F(x_t) - \eta\langle \nabla F(x_t), \Delta_t \rangle + \frac{M\eta^2}{2}\|\Delta_t\|^2,$$
and expanding the inner product via $\langle a, b \rangle = \frac{1}{2}(\|a\|^2 + \|b\|^2 - \|a - b\|^2)$ together with $\eta \le 1/M$ gives the first claim.
When $\eta \le 1/M$ and $\|\Delta_t - \nabla F(x_t)\| \le \frac{1}{2}\|\nabla F(x_t)\|$, the conclusion follows from the calculation. ∎
Lemma 3.2 shows that one can find an FOSP using inexact gradient estimates as long as the estimation error is small enough. A key challenge in finding an SOSP, compared to an FOSP, is ensuring that the algorithm can effectively escape saddle points. Existing analyses of saddle point escape using inexact gradients are insufficient for our purposes. To address this, we propose a new approach that leverages both inexact gradients and Hessians for escaping saddle points. The details are presented in the following subsection.
3.1 Escape Saddle Point with Hessian
We present the subprocedure for escaping saddle points in this section, with its pseudocode provided in Algorithm 2. The core idea is straightforward: given a sufficiently accurate estimate of the function’s Hessian, we can apply the power method to escape the saddle point. If the Hessian’s smallest eigenvalue is sufficiently negative, then after a certain number of steps, the iterate will have moved a significant distance, resulting in a substantial decrease in function value—indicating a successful escape.
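The following sketch illustrates such an escape step; the noise scale, step size, escape radius, and iteration count below are placeholders rather than the settings used in Algorithm 2.

```python
import numpy as np

def escape_saddle(x0, g0, H, eta=0.1, r=1.0, sigma=0.01, T=500, seed=0):
    # Starting from a suspected saddle point x0 with gradient estimate g0 and
    # Hessian estimate H, perturb once with Gaussian noise and then iterate
    # x <- x - eta * (g0 + H (x - x0)), which acts as power iteration with the
    # matrix (I - eta * H).  Halt as soon as the iterate has moved distance r
    # away from x0, which certifies a successful escape.
    rng = np.random.default_rng(seed)
    x = x0 + sigma * rng.standard_normal(x0.shape)   # injected perturbation
    for _ in range(T):
        if np.linalg.norm(x - x0) >= r:              # moved far enough: escaped
            return x, True
        x = x - eta * (g0 + H @ (x - x0))            # Hessian-informed step
    return x, False                                  # no certified escape

# Toy saddle: the Hessian has a negative eigenvalue, so the escape succeeds.
d = 5
H = np.diag([1.0, 1.0, 1.0, 1.0, -0.5])
x_out, escaped = escape_saddle(np.zeros(d), np.zeros(d), H)
print(escaped, np.round(x_out, 3))
```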
Definition 3.3.
We say an estimate of the gradient and Hessian pair at a point $x$ is accurate if the gradient estimate is within the prescribed tolerance of $\nabla F(x)$ in the $\ell_2$ norm and the Hessian estimate is within the prescribed tolerance of $\nabla^2 F(x)$ in the operator norm.
The Gaussian noise added in Line 5 is for privacy purposes. We have the following guarantee for Algorithm 2:
Theorem 3.4.
Suppose the function satisfies Assumption 3.1, and the initial point $x_0$ is a saddle point such that $\|\nabla F(x_0)\| \le \alpha$ and $\lambda_{\min}(\nabla^2 F(x_0)) \le -\sqrt{\rho\alpha}$. When the step size, the number of iterations, and the noise scale are chosen appropriately, then with probability at least $1 - \zeta$, the output satisfies
where .
We have the following guarantee on :
Lemma 3.5.
With probability at least , for all , we have
Let the estimation error denote the difference between the true gradient and our estimate. We use the following standard technical claims to relate the distance traveled by the trajectory to the change in function value.
Claim 3.6 (Lemma 10 in Wang et al. (2019a)).
We have
Claim 3.7 (Lemma 11 in Wang et al. (2019a)).
For any , one has
In Algorithm 2, we halt the algorithm early whenever we find a point whose distance from the initial point exceeds the prescribed radius. We use the following lemma to demonstrate that a large distance from the initial point implies a large decrease in function value, that is, a successful escape from the saddle point:
Lemma 3.8.
Suppose Algorithm 2 halts early because the distance condition is satisfied; then we have
Similar to previous works (Jin et al., 2017; Wang et al., 2019a), we use a coupling argument to prove that we can escape the saddle point with high probability. Fix the sequence of Gaussian noise vectors and condition on the event that their norms are bounded for all iterations.
Let the output of Algorithm 2 be viewed as a function of the first iterate. Define the stuck region around the saddle point as the set of first iterates from which the algorithm fails to escape:
(1)
where the radius of the region is specified in the analysis. Suppose $x_0$ is the saddle point, and let $e_1$ denote the eigenvector corresponding to the minimum eigenvalue of $\nabla^2 F(x_0)$. We have the following lemma:
Lemma 3.9.
Suppose $\lambda_{\min}(\nabla^2 F(x_0)) \le -\sqrt{\rho\alpha}$. For any two candidate first iterates $w$ and $w'$, if $w - w' = r\,e_1$ with $r$ sufficiently large (as specified in the proof), then at least one of $w, w'$ lies outside the stuck region.
3.2 Main Algorithm
Algorithm 3 follows the SpiderBoost framework. We primarily focus on the gradient oracles and estimators for simplicity, as the Hessian case follows the same principle. Moreover, we allow a larger error tolerance for the Hessian estimate, on the order of $\sqrt{\rho\alpha}$, compared to the gradient estimate, whose tolerance is on the order of $\alpha$. We now discuss some of its key variables and parameters.
We either query $\mathcal{O}_1$ to estimate the gradient directly or query $\mathcal{O}_2$ to estimate the gradient difference between consecutive points. The drift term controls the estimation error: when the drift is small, the running estimator remains reliable; when the drift exceeds the threshold determined by the corresponding parameter, we obtain a fresh estimate from $\mathcal{O}_1$.
An additional term serves a technical role in the application of Theorem 3.4: when a potential saddle point is detected, we invoke the subprocedure described in Algorithm 2.
A counter variable tracks the number of saddle-point escapes since the last fresh gradient and Hessian estimates. Once this counter exceeds its threshold, we force fresh queries to the gradient and Hessian oracles to ensure privacy, following the Gaussian mechanism.
We have the following guarantee for Algorithm 3.
Proposition 3.10.
The intuition behind the proof of Proposition 3.10 is that, if we have not yet found an $\alpha$-SOSP, then on average the function value decreases by a fixed amount per step. Since the initial suboptimality is bounded by Assumption 3.1, a bounded number of steps suffices to ensure that we find an $\alpha$-SOSP. See the proof of Proposition 3.10 in the Appendix.
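To spell out the counting, let $B$ denote the initial suboptimality bound from Assumption 3.1 and let $\Phi$ be a placeholder for the guaranteed average per-step decrease (both symbols are used here only for illustration). If none of the first $T$ iterates is an $\alpha$-SOSP, then
$$\min_x F_{\mathcal{P}}(x) \le F_{\mathcal{P}}(x_T) \le F_{\mathcal{P}}(x_0) - T\,\Phi \le \min_x F_{\mathcal{P}}(x) + B - T\,\Phi,$$
which forces $T \le B/\Phi$; hence running for more than $B/\Phi$ iterations guarantees that at least one iterate is an $\alpha$-SOSP.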
It suffices to build the private oracles with provable utility guarantees. We construct the gradient oracles below in Algorithm 4.
Lemma 3.11 (Oracles with bounded error).
From now on, we adopt Algorithm 4 as the gradient oracles for Line 6 and Line 10 of Algorithm 3, respectively, with the parameters set as in the analysis. We then bound the error between the gradient estimator and the true gradient in Algorithm 3.
Lemma 3.12.
Suppose the dataset is drawn i.i.d. from the distribution $\mathcal{P}$. For any iteration $t$, letting $\tau \le t$ be the largest integer such that the drift is reset to 0 at step $\tau$, with probability at least $1 - \zeta$, for some universal constant $c$, we have
(2) | |||
(3) |
Now, we consider the noise added from the tree mechanism and the noise added in the stochastic power method to make the algorithm private.
Lemma 3.13.
Taking the added noise into account, we obtain high-probability error bounds for the gradient and Hessian estimators.
Lemma 3.14.
We need to show that we can find an $\alpha$-SOSP before we use up all the functions in the dataset. We need the following technical lemma:
Lemma 3.15.
Consider the implementation of Algorithm 3. Suppose the dataset can be arbitrarily large, with functions drawn i.i.d. from $\mathcal{P}$, and we run the algorithm until it finds an $\alpha$-SOSP. Then, with probability at least $1 - \zeta$, the total number of functions used is bounded by
Given the dataset size requirement, we can derive the final bound for finding an SOSP.
Lemma 3.16.
Theorem 3.17.
Remark 3.18.
If we treat the problem parameters (such as the Lipschitz, smoothness, and Hessian Lipschitz constants) as constants, then we obtain the bound claimed in the abstract and introduction.
4 Discussion
We combine the concepts of adaptive batch sizes and the tree mechanism to improve the previous best results for privately finding an SOSP. Our approach achieves the same bound as the state-of-the-art method for finding an FOSP, suggesting that privately finding an SOSP may incur no additional cost.
Several interesting questions remain. First, what are the tight bounds for privately finding an FOSP and an SOSP? Second, can the adaptive batch size technique be applied in other settings? Could it offer additional advantages, such as reducing runtime in practice? Finally, while we can obtain a generalization error bound using concentration inequalities and the union bound, can we achieve a better generalization error bound for non-convex optimization?
References
- Agarwal et al. [2017] Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, and Tengyu Ma. Finding approximate local minima faster than gradient descent. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1195–1199, 2017.
- Arjevani et al. [2023] Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization. Mathematical Programming, 199(1):165–214, 2023.
- Arora et al. [2023] Raman Arora, Raef Bassily, Tomás González, Cristóbal A Guzmán, Michael Menart, and Enayat Ullah. Faster rates of convergence to stationary points in differentially private optimization. In International Conference on Machine Learning, pages 1060–1092. PMLR, 2023.
- Asi et al. [2021] Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: Optimal rates in l1 geometry. In International Conference on Machine Learning, pages 393–403. PMLR, 2021.
- Bassily et al. [2014] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proc. of the 2014 IEEE 55th Annual Symp. on Foundations of Computer Science (FOCS), pages 464–473, 2014.
- Bassily et al. [2019] Raef Bassily, Vitaly Feldman, Kunal Talwar, and Abhradeep Guha Thakurta. Private stochastic convex optimization with optimal rates. Advances in neural information processing systems, 32, 2019.
- Bassily et al. [2021] Raef Bassily, Cristóbal Guzmán, and Anupama Nandi. Non-euclidean differentially private stochastic convex optimization. In Conference on Learning Theory, pages 474–499. PMLR, 2021.
- Carmon et al. [2020] Yair Carmon, John C Duchi, Oliver Hinder, and Aaron Sidford. Lower bounds for finding stationary points i. Mathematical Programming, 184(1):71–120, 2020.
- Carmon et al. [2023] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, and Kevin Tian. Resqueing parallel and private stochastic convex optimization. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 2031–2058. IEEE, 2023.
- Chan et al. [2011] T-H Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Transactions on Information and System Security (TISSEC), 14(3):1–24, 2011.
- Chaudhuri et al. [2011] Kamalika Chaudhuri, Claire Monteleoni, and Anand D Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12(3), 2011.
- Davis et al. [2022] Damek Davis, Dmitriy Drusvyatskiy, Yin Tat Lee, Swati Padmanabhan, and Guanghao Ye. A gradient sampling method with complexity guarantees for lipschitz functions in high and low dimensions. Advances in neural information processing systems, 35:6692–6703, 2022.
- De et al. [2016] Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein. Big batch sgd: Automated inference using adaptive batch sizes. arXiv preprint arXiv:1610.05792, 2016.
- Dwork et al. [2006] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Proc. of the Third Conf. on Theory of Cryptography (TCC), pages 265–284, 2006. URL http://dx.doi.org/10.1007/11681878_14.
- Dwork et al. [2010] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N Rothblum. Differential privacy under continual observation. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 715–724, 2010.
- Fang et al. [2018] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in neural information processing systems, 31, 2018.
- Feldman et al. [2020] Vitaly Feldman, Tomer Koren, and Kunal Talwar. Private stochastic convex optimization: optimal rates in linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 439–449, 2020.
- Fichtenberger et al. [2023] Hendrik Fichtenberger, Monika Henzinger, and Jalaj Upadhyay. Constant matters: Fine-grained error bound on differentially private continual observation. In International Conference on Machine Learning, pages 10072–10092. PMLR, 2023.
- Ganesh et al. [2023] Arun Ganesh, Daogao Liu, Sewoong Oh, and Abhradeep Thakurta. Private (stochastic) non-convex optimization revisited: Second-order stationary points and excess risks. Advances in Neural Information Processing Systems, 2023.
- Gao and Wright [2023] Changyu Gao and Stephen J Wright. Differentially private optimization for smooth nonconvex erm. arXiv preprint arXiv:2302.04972, 2023.
- Ghadimi and Lan [2013] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization, 23(4):2341–2368, 2013.
- Gopi et al. [2023] Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, and Kevin Tian. Private convex optimization in general norms. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 5068–5089. SIAM, 2023.
- Ji et al. [2020] Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, and Yingbin Liang. History-gradient aided batch size adaptation for variance reduced algorithms. In International Conference on Machine Learning, pages 4762–4772. PMLR, 2020.
- Jin et al. [2017] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In International conference on machine learning, pages 1724–1732. PMLR, 2017.
- Jin et al. [2019] Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I Jordan. A short note on concentration inequalities for random vectors with subgaussian norm. arXiv preprint arXiv:1902.03736, 2019.
- Jordan et al. [2023] Michael Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, and Manolis Zampetakis. Deterministic nonsmooth nonconvex optimization. In The Thirty Sixth Annual Conference on Learning Theory, pages 4570–4597. PMLR, 2023.
- Kornowski and Shamir [2021] Guy Kornowski and Ohad Shamir. Oracle complexity in nonsmooth nonconvex optimization. Advances in Neural Information Processing Systems, 34:324–334, 2021.
- Kornowski et al. [2024] Guy Kornowski, Daogao Liu, and Kunal Talwar. Improved sample complexity for private nonsmooth nonconvex optimization. arXiv preprint arXiv:2410.05880, 2024.
- Kulkarni et al. [2021] Janardhan Kulkarni, Yin Tat Lee, and Daogao Liu. Private non-smooth erm and sco in subquadratic steps. Advances in Neural Information Processing Systems, 34:4053–4064, 2021.
- Lowy et al. [2024] Andrew Lowy, Jonathan Ullman, and Stephen Wright. How to make the gradients small privately: Improved rates for differentially private non-convex optimization. In Forty-first International Conference on Machine Learning, 2024.
- Menart et al. [2024] Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, and Cristóbal Guzmán. Differentially private non-convex optimization under the kl condition with optimal rates. In International Conference on Algorithmic Learning Theory, pages 868–906. PMLR, 2024.
- Nesterov [1998] Yurii Nesterov. Introductory lectures on convex programming volume i: Basic course. Lecture notes, 3(4):5, 1998.
- Nesterov and Polyak [2006] Yurii Nesterov and Boris T Polyak. Cubic regularization of newton method and its global performance. Mathematical programming, 108(1):177–205, 2006.
- Nguyen et al. [2017] Lam M Nguyen, Jie Liu, Katya Scheinberg, and Martin Takáč. Sarah: A novel method for machine learning problems using stochastic recursive gradient. In International conference on machine learning, pages 2613–2621. PMLR, 2017.
- Tao et al. [2025] Youming Tao, Dongxiao Yu, Xiuzhen Cheng, Falko Dressler, and Di Wang. Private stochastic optimization for achieving second-order stationary points, 2025. URL https://openreview.net/forum?id=UVaLZMv0uk. OpenReview preprint, ICLR submission.
- Tran and Cutkosky [2022] Hoang Tran and Ashok Cutkosky. Momentum aggregation for private non-convex erm. Advances in Neural Information Processing Systems, 35:10996–11008, 2022.
- Tropp [2012] Joel A Tropp. User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12:389–434, 2012.
- Wang et al. [2019a] Di Wang, Changyou Chen, and Jinhui Xu. Differentially private empirical risk minimization with non-convex loss functions. In International Conference on Machine Learning, pages 6526–6535. PMLR, 2019a.
- Wang et al. [2023] Lingxiao Wang, Bargav Jayaraman, David Evans, and Quanquan Gu. Efficient privacy-preserving stochastic nonconvex optimization. In Uncertainty in Artificial Intelligence, pages 2203–2213. PMLR, 2023.
- Wang et al. [2019b] Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, and Vahid Tarokh. Spiderboost and momentum: Faster variance reduction algorithms. Advances in Neural Information Processing Systems, 32, 2019b.
- Zhang et al. [2020] Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Suvrit Sra, and Ali Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. In International Conference on Machine Learning, pages 11173–11182. PMLR, 2020.
- Zhang et al. [2024] Qinzi Zhang, Hoang Tran, and Ashok Cutkosky. Private zeroth-order nonsmooth nonconvex optimization. In The Twelfth International Conference on Learning Representations, 2024.
Appendix A Omitted Proof of Subsection 3.1
A.1 Proof of Lemma 3.5
Proof.
By the concentration of the Gaussian noise, we know that with probability at least $1 - \zeta$, the norms of the added noise vectors are bounded for all iterations. The following proof is conditioned on this event.
By Assumption 3.1, for each iteration, we know
where the intermediate point lies on the segment between the two points. Note that, by the algorithm design, the distance between the iterates is bounded; hence, we have
This completes the proof.
∎
A.2 Proof of Lemma 3.8
A.3 Proof of Lemma 3.9
Proof.
For proof purposes, let and be the two trajectories with and .
It suffices to show that
(5) |
Let be the difference. Hence we have
Recall that , and . Then we know that
which means
by our choices of parameters. This establishes Equation (5) and completes the proof.
∎
A.4 Proof of Proposition 3.10
Proof of Proposition 3.10.
By Lemma 3.2 and the precondition that , we know that, if , then . Otherwise, . If but is a saddle point, then by Theorem 3.4, we know with probability at least ,
where the per-escape decrease is as specified in Theorem 3.4. Then, if none of the points is an $\alpha$-SOSP, the total decrease in function value would exceed the initial suboptimality bound, which contradicts Assumption 3.1. Hence at least one of the iterates must be an $\alpha$-SOSP, which completes the proof.
∎
A.5 Proof of Theorem 3.4
Now we complete the proof of Theorem 3.4.
Proof of Theorem 3.4.
It suffices to upper bound . Let . Recall the Gaussian we add is . Suppose we add two independent Gaussians and get two independent first iterates and . By the property of Gaussians, we have
Then
Appendix B Proof of Subsection 3.2
B.1 Proof of Lemma 3.11
B.2 Proof of Lemma 3.12
Proof.
When , then for each such that , we know conditional on , we have
That is, the error term is zero-mean and norm-subGaussian by Lemma 3.11. Then Equation (2) follows from applying Lemma 2.4.
The case for Hessian estimation involves some truncation. By Lemma 3.11, we know with probability at least , we have
with . The similar concentration holds for . We truncate the distribution of around , that is . It is straightforward to see that
B.3 Proof of Lemma 3.13
Proof.
We use the tree mechanism to privatize the gradients when no potential saddle point is encountered, and we use the Gaussian mechanism while escaping from a saddle point.
We first show the indistinguishability of the gradients. It suffices to consider the sensitivity of the gradient oracles.
Consider the sensitivity of first. Let denote the output with the neighboring dataset. Then it is obvious that
As for the sensitivity of , we have
The privacy guarantee of gradients follows from the tree mechanism (Theorem 2.2), which means .
As for the Gaussian mechanism, note that for neighboring datasets with Hessian estimates and respectively, we know that
Note that we force a fresh estimate after the number of saddle-point escapes reaches its threshold, and each time we ensure that the gradient and Hessian estimates are reused for at most a bounded number of steps, which means the difference is
Then the property of Gaussian Mechanism and composition finishes the privacy guarantee proof. ∎
B.4 Proof of Lemma 3.14
Proof.
By our setting of parameters, we know and hence
Then our choice of and ensures the privacy guarantee by Lemma 3.13.
B.5 Proof of Lemma 3.15
Proof.
By Proposition 3.10, the prescribed number of iterations suffices to find an $\alpha$-SOSP. Let the iterates produced up to the halting step be the outputs of the algorithm. We first show
(9) |
Denote the set . As , we know .
Now consider the set of indices of steps at which the norm of the gradient estimator is large. It suffices to bound the size of this set.
By Lemma 3.2, we know when , , and when . Given the bound on the function values, we know
Hence
This completes the proof of Equation (9). Moreover, we know the total number of times we escape from a saddle point is bounded as above. Hence, the total number of functions we use for the corresponding estimates is upper bounded by
The total number of functions we used for is upper bounded as follows:
This completes the proof. ∎
B.6 Proof of Lemma 3.16
Proof.
By Lemma 3.15 and our parameter settings, we need
First,
Secondly,
Thirdly,
Fourthly,
Finally,
Combining these together, we get the claimed statement.
∎
Appendix C Other results
The first result combines the current guarantees for finding an SOSP of the empirical function with the following generalization error bound:
Theorem C.1.
Suppose the dataset is drawn i.i.d. from the underlying distribution $\mathcal{P}$ and Assumption 3.1 holds. Additionally, assume the optimization is restricted to a constrained domain of bounded diameter. Then, for any point in the domain, with probability at least $1 - \zeta$, we have
Proof.
We construct a maximal packing of points for the domain, such that for any point in the domain, there exists a point of the packing within the prescribed radius.
By the union bound, the Hoeffding-type inequality for norm-subGaussian vectors (Lemma 2.4), and the matrix concentration inequality (Theorem 2.5), we know that with probability at least $1 - \zeta$, for all points in the packing, we have
(10)
Condition on the event in Equation (10). Choosing the packing radius appropriately, then by the Lipschitz and smoothness assumptions, for any point in the domain there exists a nearby point of the packing, and
Similarly, we can show
∎
The current state of the art for privately finding an SOSP of the empirical function is due to Ganesh et al. [2023], which finds an $\alpha$-SOSP. Combining this with Theorem C.1, we obtain a bound for privately finding an SOSP of the population function $F_{\mathcal{P}}$ as well.
If we allow exponential running time, as Lowy et al. [2024] suggest, we can privately find an initial point that approximately minimizes the empirical function and then use it as a warm start to further improve the final bound.