Smoothed Robust Phase Retrieval
Abstract
The phase retrieval problem in the presence of noise aims to recover the signal vector of interest from a set of quadratic measurements with infrequent but arbitrary corruptions, and it plays an important role in many scientific applications. However, the essential geometric structure of the nonconvex robust phase retrieval based on the $\ell_1$-loss remains largely unknown, leaving the study of spurious local solutions open even under the ideal noiseless setting, and its intrinsic nonsmooth nature also impacts the efficiency of optimization algorithms. This paper introduces the smoothed robust phase retrieval (SRPR) based on a family of convolution-type smoothed loss functions. Theoretically, we prove that the SRPR enjoys a benign geometric structure with high probability: (1) under the noiseless situation, the SRPR has no spurious local solutions, and the target signals are global solutions, and (2) under the infrequent but arbitrary corruptions, we characterize the stationary points of the SRPR and prove its benign landscape, which is the first landscape analysis of phase retrieval with corruption in the literature. Moreover, we prove the local linear convergence rate of gradient descent for solving the SRPR under the noiseless situation. Experiments on both simulated datasets and image recovery are provided to demonstrate the numerical performance of the SRPR.
Keywords— Convolution smoothing; Landscape analysis; Nonconvex optimization.
1 Introduction
Investigating the recovery of a signal vector of interest from a set of sensing vectors and their linear transformations in the presence of noise is a critical research question in statistics and data science. There has been considerable interest in the estimation of this signal vector utilizing magnitude-based measurements in the presence of potential errors and corruptions. This problem, commonly referred to as phase retrieval, plays a pivotal role in addressing the phase problem [34, 30]. It finds applications across a wide spectrum of scientific fields, including X-ray crystallography [25], optical physics [27], coherent diffractive imaging using arrays and high-power sources [6], astronomy [15], and microscopy [26].
Over the last decade, several pioneering papers have established the theoretical foundation for the noiseless phase retrieval problem, which seeks to recover the signal vector $x^\star$ (or $-x^\star$) in $\mathbb{R}^n$ from a collection of magnitude-based measurements without any errors or corruptions:
$$y_i = (a_i^\top x^\star)^2 \qquad (1)$$
for each $i = 1, \dots, m$. Here, $y_1, \dots, y_m$ represent non-negative measurements, and $a_1, \dots, a_m \in \mathbb{R}^n$ are the sensing vectors. It is important to note that (1) is a nonconvex problem recognized as being NP-hard [14]. To address this challenge, [5] introduced the PhaseLift method, employing a convex relaxation strategy based on semidefinite programming (SDP) to solve a trace-norm minimization problem. However, as the dimension of the signal vector increases, SDP-based methods tend to become computationally infeasible [4]. In response to the limitations of SDP-based methods, the literature has seen alternative approaches based on non-convex optimization, including the Wirtinger flow [4], truncated Wirtinger flow [7], thresholded Wirtinger flow [3], truncated amplitude flow [35], and others. These non-convex methods have utilized spectral initialization [4] to facilitate global convergence. Recently, the seminal paper [32] demonstrated that the least-squares formulation for recovering the signal from (1) exhibits a benign global geometry (also known as the function landscape), ensuring that, with high probability, the non-convex problem has no spurious local minimizers and can be globally optimized through efficient iterative methods, even with random initialization. Further, [8] showed that vanilla gradient descent is effective and provided a comprehensive analysis of the optimization trajectory, and [2] illustrated that the smoothed amplitude flow model shares similar properties.
In real-world applications, it is crucial not to overlook the influence of noise on phase retrieval. In particular, we should consider the presence of additive errors and infrequent corruptions affecting magnitude-based measurements, which may arise due to equipment limitations, measurement errors, extreme occurrences, and so on. The noisy magnitude-based measurements can be formalized as follows: for each $i = 1, \dots, m$,
$$y_i = \begin{cases} (a_i^\top x^\star)^2 + \eta_i, & i \notin S, \\ \xi_i, & i \in S. \end{cases} \qquad (2)$$
Here, $\eta_i$ represents the bounded additive error, and $\xi_i$ signifies the infrequent yet arbitrary corruption in $[0, \infty)$. The set $S \subseteq \{1, \dots, m\}$ denotes the indices of outliers, and $s = |S|$ is the cardinality of $S$. We define $\epsilon = s/m$ as the proportion of corrupted measurements. It is essential to note that $\epsilon$ remains a relatively small quantity due to the infrequency of corruption. However, the corruptions $\xi_i$'s can be unbounded and may behave adversarially. Furthermore, independence between $\xi_i$ and $a_i$ may not hold, introducing additional challenges to signal recovery.
Model (2) provides a general framework for signal recovery from noisy magnitude-based measurements and includes various specialized scenarios as special examples. Specifically, the noiseless phase retrieval problem, as formulated in (1), can be perceived as a particular instance of (2), characterized by $S = \emptyset$ and $\eta_i = 0$ for all $i$. Moreover, phase retrieval in the presence of bounded noise, as investigated by [37], emerges as another special case of (2) when $S = \emptyset$, indicating the absence of outliers. Conversely, the robust phase retrieval problem [11] aligns with (2) under the situation that $\eta_i = 0$ for all $i$, focusing on the robustness against corruptions. Additionally, the approach for non-convex phase retrieval with outliers, as proposed by [36], resonates with (2) and accommodates both dense bounded noise and sparse arbitrary outliers.
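To make the setup concrete, the measurement model above can be simulated as follows. This is a minimal sketch: the dimensions, the noise bound, and the corruption distribution are illustrative choices of ours, not specifications from the paper.

```python
import numpy as np

def generate_measurements(n=20, m=200, eps=0.05, noise_level=0.0, seed=0):
    """Simulate Model (2): quadratic measurements of a unit signal x_star with
    bounded additive errors and a fraction eps of arbitrary corruptions.
    The corruption distribution below is an arbitrary illustrative choice."""
    rng = np.random.default_rng(seed)
    x_star = rng.standard_normal(n)
    x_star /= np.linalg.norm(x_star)            # normalize the true phase
    A = rng.standard_normal((m, n))             # sensing vectors a_i as rows
    y = (A @ x_star) ** 2                       # clean quadratic measurements
    y += noise_level * rng.uniform(-1.0, 1.0, m)         # bounded errors eta_i
    S = rng.choice(m, size=int(eps * m), replace=False)  # outlier index set
    y[S] = rng.uniform(0.0, 10.0, size=len(S))  # arbitrary nonnegative corruptions
    return A, np.maximum(y, 0.0), x_star, S
```

The outlier indices are returned only for bookkeeping in simulations; the recovery methods discussed below never use them.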
To solve the problem of phase retrieval with corruptions in Model (2), the objective remains to recover the signal vectors $x^\star$ or $-x^\star$ using the observations $\{(a_i, y_i)\}_{i=1}^m$. In this vein, [36] extended the methodologies of [7] to propose the median truncated Wirtinger flow. [11] explored robust phase retrieval employing the $\ell_1$-loss, formally defined as
$$\min_{x \in \mathbb{R}^n} \; f(x) := \frac{1}{m} \sum_{i=1}^m \bigl| (a_i^\top x)^2 - y_i \bigr|. \qquad (3)$$
[11] introduced the proximal linear (PL) algorithm to tackle this nonconvex optimization challenge, demonstrating that the $\ell_1$-loss enhances success rates in comparison to the median truncated Wirtinger flow [36]. Furthermore, they established the properties of sharpness and weak convexity for $f$, alongside the local convergence of the PL algorithm. Subsequent research, targeting the optimization problem (3), has expanded upon these findings. [9, 10] investigated the local convergence of subgradient algorithms. Recently, [38] proposed the inexact proximal linear (IPL) algorithm and studied its local convergence.
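For reference, the $\ell_1$-loss objective in (3) is straightforward to evaluate; a minimal vectorized sketch (the variable names are ours):

```python
import numpy as np

def robust_loss(x, A, y):
    """The l1-loss of [11]: mean absolute deviation between the quadratic
    model (a_i^T x)^2 and the observed measurement y_i."""
    return np.mean(np.abs((A @ x) ** 2 - y))

# In the noiseless case the loss vanishes exactly at both true signal vectors:
rng = np.random.default_rng(1)
x_star = rng.standard_normal(8)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((100, 8))
y = (A @ x_star) ** 2
print(robust_loss(x_star, A, y), robust_loss(-x_star, A, y))  # prints 0.0 0.0
```

Note that the sign ambiguity is intrinsic: the loss cannot distinguish $x^\star$ from $-x^\star$.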
Despite these advancements, the essential geometric structure of robust phase retrieval remains largely unknown, leaving the study of spurious local solutions open. This contrasts with the works of [32, 8, 2], which provided insights into the landscape of smooth objective functions in phase retrieval under noiseless conditions. Notably, even under the ideal noiseless setting, [11] did not conduct landscape analysis to understand the stationary points of the nonconvex and nonsmooth optimization. [10] only defined and analyzed stationary points in the context of subgradients, and whether they are local minima is still unknown. Moreover, in the situation with corruption, we cannot find any existing work that studies the landscape of any objective function in the literature. As a result, there is no theoretical guarantee for robust phase retrieval to avoid spurious local minima using random initialization.
Another concern arises from the intrinsic nonsmooth nature of (3), which significantly impacts the efficiency of optimization algorithms, including the PL method [11] and the IPL method [38]. Both approaches necessitate iteratively solving nonsmooth subproblems using the proximal operator graph splitting (POGS) algorithm [28] or the accelerated proximal gradient algorithm, which invariably leads to diminished optimization efficiency. Moreover, as pointed out by [9, 10], the subgradient algorithms for solving (3) are sensitive to the step size selection, and there are no practically reliable methods for choosing the step size.
All these challenges stem from the non-differentiability of the $\ell_1$-loss employed in (3), which limits the technical tools for analyzing function landscapes and complicates the optimization process. In our paper, we propose the smoothed robust phase retrieval (SRPR) method by minimizing a smoothed surrogate $f_h$ based on convolution-type smoothing, which involves the convolution of the $\ell_1$-loss with a kernel function $K$ at bandwidth $h$. This smoothing idea was first introduced by [13] and then investigated by [16], [33], [18], and [23] to overcome the lack of smoothness in quantile regression and improve optimization efficiency while achieving the desired convergence rate and oracle property.
We demonstrate that, with high probability, $f_h$ not only satisfies a weak convexity property similar to that of $f$ but also satisfies generalized sharpness in the noiseless scenario. Furthermore, this approach facilitates the utilization of gradient and Hessian information, providing a convenient mathematical structure. These properties are crucial for determining the convergence rates of gradient-based optimization algorithms and for conducting landscape analyses. By directly minimizing $f_h$ in the absence of noise, exact signal recovery is achievable. Additionally, our methodology extends to scenarios involving corrupted measurements, illustrating its versatility and applicability. The SRPR approach makes at least the following contributions to the field, enhancing both the theoretical and practical aspects of phase retrieval when compared to existing methodologies.
Firstly, the smoothness of our approach enables the application of gradient-based algorithms to minimize the nonconvex function $f_h$. We demonstrate that under the noiseless situation, the minimization of $f_h$ through general first-order methods allows SRPR to achieve local linear convergence. This marks a significant improvement over the sublinear convergence rates observed with the PL [11] and IPL [38] algorithms. The attainment of a linear rate is facilitated by the Polyak-Łojasiewicz condition [21] that holds locally, circumventing the need for local convexity. Additionally, our method benefits from the potential for acceleration and line search strategies, offering clear advantages over subgradient algorithms [9, 10], which encounter challenges in step size tuning. In numerical experiments, SRPR demonstrates superior efficiency relative to competing algorithms.
Secondly, in the context of noiseless phase retrieval, we conduct landscape analysis to show that $f_h$, with high probability, exhibits a benign landscape and has no spurious local minima. This analysis underscores the efficacy of random initialization in achieving exact recovery through the minimization of $f_h$. To the best of our knowledge, this is the first instance of a benign landscape being identified for a robust objective function in the domain of noiseless phase retrieval, as the previously discussed works [11] and [10] have only considered the local behaviors of optimization algorithms and the locations of stationary points.
Thirdly, in the presence of corrupted measurements, we prove that when $\epsilon$ is small and bounded noises are absent, with high probability, the landscape of $f_h$ remains benign outside a vicinity around $\pm x^\star$ with a radius proportional to $\epsilon$. Similar results hold for the general Model (2). To the best of our knowledge, this represents the first result on benign function landscapes in the context of phase retrieval with corruptions.
The rest of this paper is organized as follows. Section 2 introduces the methodology of SRPR. Section 3 focuses on the noiseless landscape analysis. Section 4 focuses on the empirical landscapes with corruption. Section 5 presents the numerical experiments. Section 6 includes a few concluding remarks. The complete proofs are presented in the supplement.
2 Methodology
This section first presents the general framework for smoothed robust phase retrieval in Subsection 2.1 and then discusses the convolution-type smoothed loss functions in Subsection 2.2. Subsection 2.3 presents the properties of the smoothed objective function. Subsection 2.4 discusses the optimization algorithms.
Before proceeding, we introduce some useful notations. Denote $\mathbb{1}\{\cdot\}$ as the indicator function. Let $[m] = \{1, \dots, m\}$. Define $\|\cdot\|$ as the Euclidean norm, and $I_n$ denotes the $n \times n$ identity matrix. Let $e_j$ represent a vector whose $j$th element is 1 with all other elements taking the value 0. In addition, throughout the paper, we use $c$ and $C$ to denote generic positive constants, though the actual values may vary on different occasions.
2.1 A General Framework
In the robust phase retrieval problem (2), we observe vectors $a_i$ and non-negative scalars $y_i$, $i \in [m]$, and aim to recover the true phases $\pm x^\star$ (due to the quadratic relationship in (2), both $x^\star$ and $-x^\star$ are true signal vectors).
Assumption 1.
Let $a_1, \dots, a_m$ be random vectors in $\mathbb{R}^n$ that are independently and identically distributed with mean zero and covariance matrix $I_n$. Here, $s$ is a non-negative integer with $s = \epsilon m$, and $m$ is a positive integer. The true phase $x^\star \in \mathbb{R}^n$, and
$$y_i = \begin{cases} (a_i^\top x^\star)^2 + \eta_i, & i \notin S, \\ \xi_i, & i \in S. \end{cases} \qquad (4)$$
The $\eta_i$'s are bounded random variables. The $\xi_i$'s are non-negative random variables without further assumptions.
In Assumption 1, $m$ represents the total number of measurements, and $\epsilon = s/m$ denotes the proportion of corrupted measurements. The sets $S^c$ and $S$ correspond to the inliers and outliers among the measurements, respectively. In the special case where $S = \emptyset$ and $\eta_i = 0$ for all $i$, the model reverts to the noiseless phase retrieval scenario as depicted in (1). Our goal is to recover the true signal vectors $x^\star$ or $-x^\star$ from the observed $\{(a_i, y_i)\}_{i=1}^m$, without prior knowledge of which measurements are corrupted. This setup ensures the mutual independence of the $a_i$'s. We will introduce stronger assumptions on the distribution of $a_i$ later. The bounded noise variables $\eta_i$'s require no additional distributional assumptions, whereas no assumptions are made regarding the distribution of $\xi_i$ or its independence with respect to $a_i$, allowing for potentially heavy-tailed and adversarially distributed corruption. This assumption is more general than the model assumption of [11], which requires independence between $\xi_i$ and $a_i$. For simplicity, in subsequent discussions, we will refer to a random vector $a$ to denote the generic distribution of the $a_i$'s.
As shown in [11], when $\epsilon$ is small, exact recovery is achievable in the optimization problem (3) employing the $\ell_1$-loss, indicating that the global minimizers of $f$ coincide with $\pm x^\star$, with high probability, under certain assumptions. To refine this approach, we employ convolution-type smoothing, focusing on the problem formulated as:
$$\min_{x \in \mathbb{R}^n} \; f_h(x) := \frac{1}{m} \sum_{i=1}^m \ell_h\bigl( (a_i^\top x)^2 - y_i \bigr), \qquad (5)$$
where
$$\ell_h(u) = (\ell * K_h)(u) = \int_{-\infty}^{\infty} |u - v| \, K_h(v) \, dv. \qquad (6)$$
Here, $K$ represents a kernel function, $h > 0$ is the bandwidth, $*$ denotes the convolution operation, and $K_h(v) = K(v/h)/h$. In this context, $\ell_h$ serves as a smoothed loss function, substituting for the $\ell_1$-loss $\ell(u) = |u|$. The adoption of convolution-type smoothing is motivated by gradient approximations: $\ell_h'$ closely mirrors the subgradient of $\ell$ because $\ell_h' = \mathrm{sign} * K_h$, thereby suggesting that the statistical behavior of $f_h$ will approximate that of $f$. Moreover, the smoothness of $\ell_h$ significantly benefits gradient-based optimization and landscape analysis. Provided some mild conditions on the kernel function $K$, both $\ell_h'$ and $\ell_h''$ are well-defined, ensuring that $f_h$ is smooth and equipped with gradients and Hessians for both optimization and landscape analysis purposes.
2.2 Convolution-type Smoothing
In this subsection, we elaborate on the convolution-type smoothed loss function $\ell_h$ as defined in (6). Additionally, we introduce the notations $\bar K(t) = \int_{-\infty}^{t} K(v)\, dv$ and $\bar K_h(t) = \bar K(t/h)$ for the integrated kernel function and its scaled version, respectively. To facilitate our analysis, we adhere to the subsequent assumption regarding the kernel function throughout this paper:
Assumption 2.
The mapping $K : \mathbb{R} \to [0, \infty)$ satisfies the following conditions:
- (a) $K$ is a kernel, namely, $K(v) \ge 0$ for all $v$ and $\int_{-\infty}^{\infty} K(v)\, dv = 1$.
- (b) $K$ is bounded, Lipschitz continuous, and symmetric: $K(v) = K(-v)$.
- (c) Let $\bar K(t) = \int_{-\infty}^{t} K(v)\, dv$. $\bar K$ is bounded, Lipschitz continuous, and $\int_{-\infty}^{\infty} |v| K(v)\, dv < \infty$.
Next, we give some examples of the kernel $K$ that satisfies Assumption 2.
- Gaussian kernel $K(v) = \phi(v)$, which generates $\ell_h(u) = u\{2\Phi(u/h) - 1\} + 2h\,\phi(u/h)$, with $\phi$ and $\Phi$ being the PDF and CDF of $N(0, 1)$.
- Logistic kernel $K(v) = e^{-v}/(1 + e^{-v})^2$, which generates $\ell_h(u) = u + 2h \log(1 + e^{-u/h})$.
- Epanechnikov kernel $K(v) = \frac{3}{4}(1 - v^2)\,\mathbb{1}\{|v| \le 1\}$, which generates $\ell_h(u) = \frac{3u^2}{4h} - \frac{u^4}{8h^3} + \frac{3h}{8}$ for $|u| \le h$ and $\ell_h(u) = |u|$ otherwise.
- Triangular kernel $K(v) = (1 - |v|)\,\mathbb{1}\{|v| \le 1\}$, which generates $\ell_h(u) = \frac{u^2}{h} - \frac{|u|^3}{3h^2} + \frac{h}{3}$ for $|u| \le h$ and $\ell_h(u) = |u|$ otherwise.
- Kernel $K(v) = \frac{1}{2}(1 + v^2)^{-3/2}$, which generates the Pseudo-Huber loss $\ell_h(u) = \sqrt{u^2 + h^2}$.
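As a quick sanity check on such formulas, the Gaussian-kernel case can be verified by comparing the closed form $\ell_h(u) = u\{2\Phi(u/h) - 1\} + 2h\phi(u/h)$ against a direct quadrature of the convolution integral in (6). This is a sketch; the integration width and step count are arbitrary choices.

```python
import math

def smoothed_abs_gauss(u, h):
    """Closed form of (|.| * K_h)(u) for the Gaussian kernel:
    l_h(u) = u * (2*Phi(u/h) - 1) + 2*h*phi(u/h)."""
    phi = math.exp(-0.5 * (u / h) ** 2) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(u / (h * math.sqrt(2.0))))
    return u * (2.0 * Phi - 1.0) + 2.0 * h * phi

def smoothed_abs_numeric(u, h, width=10.0, steps=20001):
    """Direct Riemann-sum quadrature of the convolution integral (6),
    truncated to [-width*h, width*h] where the Gaussian mass lives."""
    dv = 2.0 * width * h / (steps - 1)
    total = 0.0
    for k in range(steps):
        v = -width * h + k * dv
        K_h = math.exp(-0.5 * (v / h) ** 2) / (h * math.sqrt(2.0 * math.pi))
        total += abs(u - v) * K_h * dv
    return total

h = 0.5
for u in (0.0, 0.3, -1.2, 5.0):
    assert abs(smoothed_abs_gauss(u, h) - smoothed_abs_numeric(u, h)) < 1e-4
# l_h(0) = h*sqrt(2/pi) > 0, and l_h(u) approaches |u| once |u| >> h
assert abs(smoothed_abs_gauss(0.0, h) - h * math.sqrt(2.0 / math.pi)) < 1e-12
```

The same comparison applies to the other kernels after swapping in their densities.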
In practical implementations, $h$ ought to be a small constant so that $\ell_h$ approximates the $\ell_1$-loss, following the ideas introduced in smoothed quantile regression [16]. Following Assumption 2, we can derive several properties of $\ell_h$, which are briefly discussed here for space consideration. More details are summarized in Lemma LABEL:lemma_property_huber of Appendix LABEL:appendix:property_hubers in the supplement. Similar to [16], we prove that $\ell_h$ is convex and has continuous first-order and second-order derivatives, namely, $\ell_h'(u) = 2\bar K_h(u) - 1$ and $\ell_h''(u) = 2K_h(u)$. This ensures the existence of the gradient and Hessian for $f_h$. Moreover, note that $c_1 \min\{u^2/h, |u|\} \le \ell_h(u) - \ell_h(0) \le c_2 \min\{u^2/h, |u|\}$ for some positive constants $c_1$ and $c_2$. This relation suggests that $\ell_h$ exhibits characteristics akin to the $\ell_2$-loss for values of $u$ near $0$, and resembles the $\ell_1$-loss for values of $u$ distant from $0$. This bound acts as the foundation for the generalized sharpness condition for $f_h$ in the absence of noise, a topic to be elaborated upon in the subsequent subsection.
2.3 Properties of the Smoothed Objective Function
In this subsection, we discuss the properties of $f_h$, which will indicate the similarities between $f_h$ and $f$ and how minimizing $f_h$ works for robust phase retrieval. We first introduce two parallel assumptions given in [11]. Assumption 3 claims that the distribution of $a$ is non-degenerate in all directions, and Assumption 4 describes the tail behavior of the distribution of $a$.
Assumption 3.
There exist two positive constants $c_0$ and $p_0$ such that, for any $u$ and $v$ such that $\|u\| = \|v\| = 1$, we have $\mathbb{P}\bigl( |a^\top u| \cdot |a^\top v| \ge c_0 \bigr) \ge p_0$.
Assumption 4.
The vectors $a_i$'s are sub-Gaussian with parameter $\sigma > 0$, which means that $\mathbb{E} \exp\bigl\{ (a^\top u)^2 / \sigma^2 \bigr\} \le 2$ for any unit vector $u$.
With the same assumptions, we can prove two properties for $f_h$ which are comparable to properties for $f$, as follows. The first one is generalized sharpness.
Lemma 1.
In the literature, the sharpness of a function bounds the difference between the function value and its global minimum from below by the distance between the argument and its nearest global minimizer. This concept is prevalent in optimization problems characterized by equation-solving aspects and nonsmooth objective functions. For more details, please see Section 1 of [29]. The generalized sharpness condition extends the concept of sharpness for $f$, as outlined in Corollary 3.1 of [11], which is the special example of (7) in the limit $h \to 0$:
$$f(x) - \min_{z} f(z) \ge \lambda \, \|x - x^\star\| \cdot \|x + x^\star\| \quad \text{for some } \lambda > 0. \qquad (8)$$
[11] and [12] proved that (8) holds with high probability for the noiseless robust phase retrieval. [11], [9], [10], and [38] leveraged (8) to show the convergence rate for their algorithms.
The second one is weak convexity, which relies on Assumption 4.
Lemma 2.
This relationship also holds when substituting $f_h$ with $f$ and replacing the gradient with the subgradient. Weak convexity, a notion extensively recognized in the optimization literature, serves as a generalization of convexity. It necessitates that $f_h(x) + \frac{\rho}{2}\|x\|^2$ be convex for a certain positive constant $\rho$. Various definitions for weak convexity are discussed throughout the literature. The formulation presented in Lemma 2 aligns with that in [10]. Furthermore, Corollary 3.2 in [11] defines weak convexity by bounding the deviation between $f$ and its linear approximation, requiring the existence of a positive constant $\rho$ such that $f(y) \ge f(x) + \langle g, y - x \rangle - \frac{\rho}{2}\|y - x\|^2$ for any $x, y$ and any subgradient $g$ of $f$ at $x$.
Next, under the case $\epsilon = 0$, we discuss how generalized sharpness and weak convexity lead to the robustness of using $f_h$. As $\ell_h$ is an approximation of the $\ell_1$-loss, we can expect an approximation in function values, namely $\sup_x |f_h(x) - f(x)| \le C h$ for some positive constant $C$. Together with the sharpness of $f$ given in (8), we can conclude that, for some positive constant $C'$, every global minimizer of $f_h$ lies within distance $C' h$ of the true signal vectors. This shows the closeness of the minimizers of $f_h$ and $f$. Moreover, our local landscape analysis, which aims at identifying and categorizing stationary points near the true signal vectors, allows for a finer characterization of local regions potentially containing a stationary point. When $\epsilon = 0$, and under the conditions specified in (7) and (9), it is deduced that, when $h$ is sufficiently small, there exist positive constants $c_1, c_2$ such that
$$\langle \nabla f_h(x), x - x^\star \rangle > 0 \qquad \text{for all } x \text{ with } c_1 h \le \|x - x^\star\| \le c_2. \qquad (10)$$
Similar conclusions hold for the symmetrical area where $\|x + x^\star\| \le c_2$. So, within these vicinities, we can claim that the true signal vectors are stationary points and local minima.
In scenarios involving corruptions with $\epsilon > 0$, we employ the decomposition
$$f_h(x) = (1 - \epsilon)\, f_h^{(1)}(x) + \epsilon\, f_h^{(2)}(x),$$
where $f_h^{(j)}$, for $j = 1, 2$, represents the sample mean of $\ell_h((a_i^\top x)^2 - y_i)$ over the noiseless and corrupted measurements, respectively. This allows us to express the gradient as
$$\nabla f_h(x) = (1 - \epsilon)\, \nabla f_h^{(1)}(x) + \epsilon\, \nabla f_h^{(2)}(x).$$
The first term on the right-hand side can be bounded in a manner akin to that in (10). For the second term, we will show in Section 4 that, with high probability, its norm can be uniformly bounded within the local region by a constant multiple of $\epsilon$, where the constant is independent of $h$ and $\epsilon$. This serves as an indication of the impact corrupted measurements have on the gradient. When $h$ is sufficiently small, these bounds suggest that the vicinity around the true signal vectors is defined as
$$\mathcal{B} = \bigl\{ x : \min(\|x - x^\star\|, \|x + x^\star\|) \le C \epsilon \bigr\}$$
for a positive constant $C$, and no stationary points are found in the local region outside $\mathcal{B}$. Therefore, if an algorithm minimizing $f_h$ converges to a local minimum within the local region, it must be located in $\mathcal{B}$. This illustrates the robustness of employing $f_h$, as the algorithm converges to a point near the true signal vectors. In a noiseless context, this point coincides with one of the true signal vectors. In cases with corruption, and when $\epsilon$ is small, the convergence point is close to either $x^\star$ or $-x^\star$, serving as an effective warm-start for the IPL algorithm that minimizes $f$ with notable local efficiency, thereby exactly finding one of the true signal vectors.
Addressing robustness alone is not sufficient given the nonconvex nature of $f_h$. We also need to demonstrate the absence of spurious local minima outside this vicinity of the true signal vectors to ensure the efficacy of algorithms with random initialization. In Sections 3 and 4, we conduct landscape analysis to identify such regions and to illustrate the benign landscape characteristics beyond them.
Remark: Inner product quantities akin to $\langle \nabla f_h(x), x - x^\star \rangle$, which involve the interaction between the (sub)gradient and the deviation from $x$ to the true vector, are pivotal in the landscape analysis across various nonconvex formulations [10, 24, 32, 8]. Specifically, [24] leveraged such quantities in the global landscape analysis of single-index models under the essential assumption of a strictly monotone link function. Given that the squared link function in phase retrieval does not satisfy this monotonicity requirement, the usage of inner product quantities in our paper, [32], and [8] is confined to local vicinities. In these works, nonlocal landscape analyses are conducted through the examination of Hessian-matrix-related quantities. A distinctive aspect of our study is the incorporation of corruption into the analysis, setting it apart from the existing literature. The comprehensive details of this approach will be elaborated upon in Section 4.
Assumption 5.
For any $i \in [m]$, $a_i \sim N(0, I_n)$.
2.4 Algorithms and Rate of Convergence
In this subsection, we discuss the algorithms for minimizing $f_h$. We will show that under the noiseless situation, with high probability, gradient descent on $f_h$ enjoys local linear convergence. Before we start, we first give a lemma that finds a Lipschitz constant for $\nabla f_h$ with high probability.
Lemma 3.
Given the above lemma, the local linear convergence for gradient descent can be shown in Theorem 1.
Theorem 1.
In (12), the contraction factor can be written as $1 - \mu/L$, where $L$ is the Lipschitz constant of $\nabla f_h$ and $\mu$ is the Polyak-Łojasiewicz constant; thus, we can define $L/\mu$ as the condition number. It is worth pointing out that the condition number is not related to $n$ or $m$, so the convergence rate is statistically stable.
Next, we comment on the technical details we use for reaching this result. Linear convergence usually appears in a strongly convex situation. When investigating the Hessian matrix $\nabla^2 f_h$, the Lipschitz constant of $\nabla f_h$ provides an upper bound for its largest eigenvalue. However, it is hard to find a positive lower bound for the smallest eigenvalue of the Hessian matrix even within the local vicinity of the true signal vectors. So, we turn to the Polyak-Łojasiewicz condition introduced in [21], which proves linear convergence of the gradient descent algorithm for minimizing a smooth function $F$ by assuming $\frac{1}{2}\|\nabla F(x)\|^2 \ge \mu \{F(x) - \min_z F(z)\}$ with $\mu > 0$. We can prove that when (7), (9) and (11) hold, for any $h$ that is small enough, the Polyak-Łojasiewicz condition holds locally around the true signal vectors, in which the associated constants are positive and not related to the selection of $h$. With these conditions, we can get the convergence rate.
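For completeness, here is the standard argument (following [21]) showing how the Polyak-Łojasiewicz inequality yields a linear rate for gradient descent on a generic $L$-smooth function $F$ with minimum value $F^\star$:

```latex
% Gradient descent x_{k+1} = x_k - (1/L) grad F(x_k) on an L-smooth F
% satisfies the descent inequality; combining with the PL condition:
\begin{align*}
F(x_{k+1}) &\le F(x_k) - \frac{1}{2L}\,\|\nabla F(x_k)\|^2
  && \text{($L$-smoothness)} \\
F(x_{k+1}) - F^\star &\le \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(F(x_k) - F^\star\bigr)
  && \text{(PL: } \tfrac{1}{2}\|\nabla F(x_k)\|^2 \ge \mu\,\{F(x_k) - F^\star\}\text{)}
\end{align*}
```

Iterating gives $F(x_k) - F^\star \le (1 - \mu/L)^k \{F(x_0) - F^\star\}$, a linear rate whose contraction factor depends only on the condition number $L/\mu$, with no convexity of $F$ required.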
Linear convergence guarantees that minimizing the smoothed objective function is not only sufficient for phase retrieval in the noiseless scenario but also offers enhanced efficiency in comparison to PL [11] and IPL [38]. This improvement stems from the methodologies adopted by [11], which solves the subproblem utilizing the POGS algorithm [28], and IPL, which solves the same subproblem through accelerated proximal gradient descent tailored for the dual problem; both of these algorithms are characterized by their sub-linear convergence rates. Furthermore, a local linear convergence rate has been established across various phase retrieval algorithms, including the Wirtinger flow [4], truncated Wirtinger flow [7], truncated amplitude flow [35], and subgradient methods [9, 10]. Smoothed robust phase retrieval thus joins this list of methodologies. In addition, Theorem 1 and (7) indicate that, for any $\delta > 0$, vanilla gradient descent on the smoothed robust objective function can find an $x$ such that $\min(\|x - x^\star\|, \|x + x^\star\|) \le \delta$ with at most $O(\log(1/\delta))$ iterations. This compares favorably with the subgradient algorithms [9, 10] using the robust loss. In our numerical experiments in Section 5, we employ monotone accelerated gradient descent (as outlined in Algorithm 1 of [22]) combined with line search techniques to achieve better overall efficiency. These experiments demonstrate that smoothed robust phase retrieval exhibits superior numerical efficiency on noiseless datasets when compared to both PL and IPL.
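As an illustrative numerical sketch of this noiseless procedure (not the paper's implementation): vanilla gradient descent with Armijo backtracking on the Gaussian-kernel smoothed objective, initialized near the true signal to stay in the local-convergence regime. All problem sizes and constants below are arbitrary choices of ours.

```python
import math
import numpy as np

erf = np.vectorize(math.erf)  # elementwise erf without scipy

def f_smoothed(x, A, y, h):
    """Smoothed objective f_h with the Gaussian-kernel smoothed l1-loss."""
    r = (A @ x) ** 2 - y
    phi = np.exp(-0.5 * (r / h) ** 2) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + erf(r / (h * math.sqrt(2.0))))
    return np.mean(r * (2.0 * Phi - 1.0) + 2.0 * h * phi)

def grad_smoothed(x, A, y, h):
    """Gradient of f_h, using l_h'(u) = erf(u / (h*sqrt(2)))."""
    z = A @ x
    w = erf((z ** 2 - y) / (h * math.sqrt(2.0)))
    return (2.0 / len(y)) * (A.T @ (w * z))

def srpr_gd(A, y, x0, h=0.1, iters=1000):
    """Vanilla gradient descent with Armijo backtracking line search."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_smoothed(x, A, y, h)
        fx, gg, t = f_smoothed(x, A, y, h), g @ g, 1.0
        while t > 1e-12 and f_smoothed(x - t * g, A, y, h) > fx - 0.5 * t * gg:
            t *= 0.5   # shrink until sufficient decrease holds
        x = x - t * g
    return x

rng = np.random.default_rng(0)
n, m = 5, 200
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n))
y = (A @ x_star) ** 2                       # noiseless measurements
x0 = x_star + 0.1 * rng.standard_normal(n)  # initialization in the local regime
x_hat = srpr_gd(A, y, x0)
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
# err should be tiny: exact recovery up to the sign ambiguity and numerics
```

The backtracking line search sidesteps the step-size tuning issue of subgradient methods noted above, at the cost of extra function evaluations per iteration.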
2.5 Overview of Benign Landscape
In this subsection, we provide an overview of the benign landscape for $f_h$. Its gradient and Hessian matrix help us find and classify the stationary points, and we will discuss different kinds of benign landscapes in this paper.
- (a) Noiseless population landscape. In this case, $\epsilon = 0$, and we focus on the stationary points of the population objective $\mathbb{E} f_h$. Under some assumptions, we exactly locate all the stationary points and prove that only $\pm x^\star$ are local minima.
- (b) Noiseless empirical landscape. In this case, $\epsilon = 0$, and we care about the stationary points of $f_h$. Under some assumptions, we show that with high probability, the stationary points should be close to those for the noiseless population landscape, and only $\pm x^\star$ are local minima.
- (c) Empirical landscape with infrequent corruptions only. In this case, $\epsilon > 0$ with $\eta_i = 0$ for all $i$, and we study the stationary points of $f_h$. Under some assumptions, we show that the stationary points are close to those of the noiseless population landscape and all the local minima are close to $\pm x^\star$ with high probability. Specifically, all the local minima lie in the set $\{x : \min(\|x - x^\star\|, \|x + x^\star\|) \le C(\epsilon + h)\}$, in which $C$ is a positive constant. Thus, if an optimization algorithm finds a local minimum of $f_h$, its gap to $\pm x^\star$ is small when both $\epsilon$ and $h$ are small.
- (d) Empirical landscape with infrequent corruptions and bounded noises. When the bounded noises are present, similar results can be obtained when $\epsilon$ and $h$ are small enough: with high probability, all the local minima lie in a set of the same form whose radius also accounts for the noise level, where the leading constant is positive.
Now, we describe the procedure of smoothed robust phase retrieval. We first apply some gradient-based algorithm to minimize $f_h$ and find a local minimum $\hat x$. Due to the benign landscapes described above, for the optimization, we expect that both random initialization and modified spectral initialization work well such that $\hat x$ is close to $x^\star$ or $-x^\star$. Second, for the noiseless situation, we can claim that $\hat x$ is already a true signal vector. For the situation with noise, the benign landscape guarantees that $\hat x$ is close to one of the true signal vectors, and it is treated as a warm-start for IPL, which minimizes $f$ for exact recovery.
3 Landscape Analysis in the Noiseless Case
In this section, we focus on the noiseless case with $\epsilon = 0$ and analyze both population and empirical landscapes. Without loss of generality, we always assume that $\|x^\star\| = 1$ throughout this section.
3.1 Notations
Denote $\ell_{h,i}(x) = \ell_h\bigl((a_i^\top x)^2 - (a_i^\top x^\star)^2\bigr)$. The gradient vector is $\nabla \ell_{h,i}(x) = 2\,\ell_h'\bigl((a_i^\top x)^2 - (a_i^\top x^\star)^2\bigr)(a_i^\top x)\, a_i$, and the Hessian matrix follows by differentiating once more. The empirical and population versions of the objective functions can be written as $f_h(x) = \frac{1}{m}\sum_{i=1}^m \ell_{h,i}(x)$ and $\bar f_h(x) = \mathbb{E} f_h(x)$. They have continuous gradient vectors and Hessian matrices $\nabla f_h$, $\nabla^2 f_h$, $\nabla \bar f_h$, and $\nabla^2 \bar f_h$. We will use the Hessian matrix to classify the stationary points and find the benign noiseless population landscapes for $\bar f_h$ and empirical landscapes for $f_h$.
Next, we introduce three quantities that are used to characterize the landscape: the directional derivatives of $\bar f_h$ along $x^\star$ and along the component of $x$ orthogonal to $x^\star$, together with the corresponding second-order quantity. Based on Assumption 5, by the rotational invariance of the Gaussian distribution, for any given $x$, the directional derivatives of $\bar f_h$ at $x$ vanish along all directions orthogonal to $\mathrm{span}\{x, x^\star\}$, and we only need to consider the remaining two directions. To be specific, it is easy to find that a negative value of the second-order quantity is sufficient for concluding that the Hessian matrix of $\bar f_h$ at $x$ contains at least one negative eigenvalue, which indicates that $x$ is not a local minimum. Similar properties hold for their empirical counterparts.
3.2 Population Landscape
In this subsection, we will give the conclusions for the stationary points of $\bar f_h$. Denote
$$\mathcal{D} = \{ \pm x^\star, \; 0 \} \cup \{ x : x^\top x^\star = 0, \; \|x\| = \beta_h \}$$
for some positive constant $\beta_h$, whose rigorous definition will be given in Theorem 2 later. To help understanding, we also use graphical explanations in Figure 1. We express vectors in the phase space in a two-dimensional coordinate system. For each vector $x$, we decompose it into the direction of $x^\star$ and an orthogonal direction as $x = \alpha x^\star + \beta x^\perp$ with $x^\perp \perp x^\star$, $\|x^\perp\| = 1$, and $\beta \ge 0$. This decomposition exists for any $x$, and for a given $x$, $\alpha$ and $\beta$ are unique. We let a point with coordinate $(\alpha, \beta)$ denote the subset of $\mathbb{R}^n$ in which elements admit a decomposition with coefficients $\alpha$ and $\beta$. So, in Figure 1, the points $(\pm 1, 0)$ represent the true signal vectors, $(0, \beta_h)$ represents the points orthogonal to $x^\star$ with norm $\beta_h$, and the origin represents $x = 0$. For the noiseless population landscape, we can show that when $h \le c$ for some positive constant $c$, there exists a unique positive constant $\beta_h$ such that the stationary point set is exactly $\mathcal{D}$, among which only the global minimizers $\pm x^\star$ are local minima. This conclusion corresponds to Figure 1, in which $(\pm 1, 0)$, $(0, 0)$, and $(0, \beta_h)$ are stationary points and only $(\pm 1, 0)$ are local minima. The following results describe the conclusion rigorously.

Theorem 2.
The conclusion of Theorem 2 is similar to that of [10]. Next, we comment on some key ideas behind the proof. It is easy to find that $\pm x^\star$ and $0$ are stationary points. For $x$ orthogonal to $x^\star$, the gradient of $\bar f_h$ points along $x$ itself, so we only need to solve a one-dimensional equation in $\beta = \|x\|$. Based on the isotropy property of the normal distribution, we can carry out the calculation by assuming $x^\star = e_1$ and $x = \beta e_2$. So, $\beta_h$ is the solution to the resulting equation about $\beta$. For the complete proof, we also need to explain why $x$ is not a stationary point when $\alpha \neq 0$ and $(\alpha, \beta) \neq (\pm 1, 0)$. The proof is a direct application of [10]'s work on spectral functions. Please refer to Appendix LABEL:appendix:proof_positions for details.
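The two-coordinate reduction behind this argument can also be checked numerically: under the Gaussian design, the population loss at $x = \alpha x^\star + \beta x^\perp$ depends only on $(\alpha, \beta)$, so a Monte Carlo grid search should locate the minimizers at $(\pm 1, 0)$. The sketch below uses the Epanechnikov-kernel smoothed loss for speed (its closed form is stated in the comment); the bandwidth, grid, and sample size are arbitrary choices.

```python
import numpy as np

# Population landscape of the smoothed loss in the (alpha, beta) coordinates,
# using the Epanechnikov-kernel smoothed l1-loss, whose closed form is
# l_h(u) = 3u^2/(4h) - u^4/(8h^3) + 3h/8 for |u| <= h and |u| otherwise.
rng = np.random.default_rng(0)
h, N = 0.3, 40000
t = rng.standard_normal(N)  # a^T x_star, taking x_star = e_1
u = rng.standard_normal(N)  # a^T x_perp, independent of t by isotropy

def ell_h(r):
    inside = 3.0 * r**2 / (4.0 * h) - r**4 / (8.0 * h**3) + 3.0 * h / 8.0
    return np.where(np.abs(r) <= h, inside, np.abs(r))

def pop_loss(alpha, beta):
    """Monte Carlo estimate of E l_h((a^T x)^2 - (a^T x_star)^2)
    at x = alpha * x_star + beta * x_perp."""
    s = alpha * t + beta * u
    return float(np.mean(ell_h(s**2 - t**2)))

alphas = [round(float(a), 2) for a in np.arange(-1.5, 1.51, 0.1)]
betas = [round(float(b), 2) for b in np.arange(0.0, 1.51, 0.1)]
vals = {(a, b): pop_loss(a, b) for a in alphas for b in betas}
best_a, best_b = min(vals, key=vals.get)
# the grid minimum sits at a true signal, (alpha, beta) = (+-1, 0), where
# the residual vanishes for every sample and the loss equals l_h(0) = 3h/8
```

Because the residual is identically zero at $(\pm 1, 0)$, the loss there is deterministic, so the grid minimum is found reliably despite the Monte Carlo noise elsewhere.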
With access to the Hessian matrix, we can also classify those stationary points. The conclusion is in Theorem 3.
Theorem 3.
Theorem 3 classifies all the stationary points for $\bar f_h$ and indicates the benign noiseless population landscape: there is no spurious local minimum for $\bar f_h$. Another interesting result found in the proof of Theorem 3 is that the limits of the characterizing quantities as $h \to 0$ do not depend on the selection of the kernel $K$; in particular, the limiting second-order quantity at the orthogonal stationary points is negative. This implies that these quantities are properties of the $\ell_1$-loss itself.
Finally, we give a lemma about the positiveness of the gradient when $\|x\|$ is very large.
Lemma 4 further strengthens the non-zero nature of the gradient when $\|x\|$ is large and will be useful in reaching conclusions about the empirical landscape.
3.3 Empirical Landscape
In this subsection, we first show the conclusions for the benign empirical landscape and then provide a sketch of the proof.
We still use visualizations to help explain the landscape. Under some conditions, we can show that, with high probability, all the stationary points are either the true phases $\pm x^\star$ or lie within distance $\delta$ of the population stationary points $\{0\} \cup \{x : x^\top x^\star = 0, \|x\| = \beta_h\}$, for any small positive constant $\delta$, among which only the global minimizers $\pm x^\star$ are local minima. In other words, a stationary point is either a true phase or lies in the vicinity of a population saddle point. This conclusion corresponds to Figure 2, in which the possible stationary points correspond to $(\pm 1, 0)$ and the inner parts of the three circles, and only $(\pm 1, 0)$ are local minima.

Before giving the formal conclusion, we define the following events.
Formally, the conclusion is as follows.
Theorem 4.
Theorem 4 shows that the opposite event, which corresponds to the benign noiseless empirical landscape, happens with high probability. So, if we use a gradient-based optimization algorithm to minimize with random initialization and it finds a local minimum, the iterative sequence will converge to one of . In addition, although, because of the existence of saddle points, global convergence under the benign landscape is theoretically guaranteed only for first-order algorithms with additional tricks [19, 20], the numerical performance of robust phase retrieval indicates that those tricks are not needed for good performance. Readers interested in saddle points can refer to [20] and the references therein. The global convergence guarantee ensures that our discussion of the local linear convergence rate in Section 2.4 is meaningful.
For the rest of this subsection, we provide a sketch of the proof. In the first step, we focus on a large bounded area and prove that all of uniformly concentrate to their population versions with high probability. These quantities are related to the gradient and Hessian matrix for . If we do not consider the high-dimensional situation for , it is easy to apply the central limit theorem to each element of the gradient and Hessian and claim convergence. However, if we consider the high-dimensional case when we can only assume that , we can only focus on some selected key elements, which are and . Fortunately, they are sufficient to support our claim about the empirical landscape. Next, we state our conclusions.
Lemma 5.
Lemma 5 guarantees the uniform concentration of the selected quantities within a large bounded subset of when is large. This indicates that, within this area, stationary points and local minimums for should be close to those for . It is also worth mentioning that, inferred from the proof, using smaller will require larger for these quantities to get close to their population version.
In the second step, we can go beyond closeness within a local area that contains the true signal vectors. Local landscape analysis strengthens the conclusion by showing that, within a local area that contains the true signal vectors, is non-zero when , which means that there is no other stationary point in the local area. The two key properties for this part are the generalized sharpness condition given in Lemma 1 and the weak convexity condition given in Lemma 2. With them, we can claim the following conclusion for the lower bound of the norm of the gradient and claim the benign local landscape.
Lemma 6.
Finally, having handled , we will prove that the unbounded area does not contain a stationary point by showing that is positive on the boundary and that is monotone increasing with regard to for any . The monotonicity is formally given in Lemma 7.
Lemma 7.
For any constant the function is a monotone increasing function on .
Lemma 7 is obvious because is monotone increasing. It directly indicates the following result about the unbounded area.
Lemma 8.
If there exists a constant such that then we have
The positiveness of means that there is no stationary point in this area.
4 Empirical Landscape in the Presence of Noise
In this section, we discuss the corrupted situation and the empirical landscape of . Without loss of generality, we still assume that throughout this section. We will follow the statistical assumptions in Section 2 with the general corruption setting . Subsection 4.1 will first provide the results under the situation that in Assumption 1 and then extend the result to the case that . Subsection 4.2 will provide a sketch of the proof.
4.1 Benign Landscape When Sparse Corruptions Exist
In this subsection, we first discuss the situation that in Assumption 1. We will show that, under some conditions, with high probability, all the stationary points of lie in the set
for some positive constant that is related to and a positive constant , among which, local minimums lie in
In other words, a stationary point either lies in the vicinity of whose radius is proportional to or lies in the vicinity of , and all local minimums lie in the vicinity of the true phase. This conclusion corresponds to Figure 3, in which possible stationary points correspond to the inner parts of the five circles and only the inner parts of the circles and contain local minimums. The rigorous statement is given in Theorem 5.
Theorem 5.
In Theorem 5, we claim that the local minimums for only exist in the vicinity of the true phase . The radius of the vicinity is proportional to the bandwidth and monotone increasing with regard to the proportion of the corrupted measurements. In practice, we need to choose a . Although smaller leads to closer local minimums, the non-local optimization efficiency is still unclear for different choices of , and decreasing will also require larger due to the dependency of the constants in Theorem 5 on . Next, we provide some guidance for picking under the situation that we do not know the covariance of and is not necessarily 1. Based on homogeneity considerations, we can let , in which is a small but not extremely small positive constant. Finding or roughly estimating is easy in practice, and a rough estimation of can be done via prior knowledge or modified spectral initialization (Algorithm 3 in [11]). We will provide an example of image recovery in Section 5. For , which should match the theoretical results based on the noiseless population landscape given in Theorem 3, we will conduct a sensitivity analysis in Section 5.

Next, in Theorem 6, we extend the benign landscape result to the general situation that .
Theorem 6.
Suppose that . Under Assumptions 1, 2 and 5, there exist positive constants , and , such that when , for any , such that and , the probability that a local minimum of exists and does not belong to the set
(17)
is at most . Here, are positive constants that depend on and , and and are positive constants that do not depend on , or .
4.2 Sketch of Proof
We first provide a sketch of proof for Theorem 5. Before we start, we define and the definition of in (5) can also be written as It is easy to find that , and Similar to Section 3, we also define
and
Now we provide the proof sketch. First, we focus on a large bounded area . The sample mean notations satisfy . Here, takes the sample mean with regard to indices from 1 to , and focuses on the indices from to . When is small, we can expect that, in , concentrations for the quantities defined above with hold as usual and those quantities with are bounded by small positive constants. The rigorous statement is as follows.
Lemma 9.
Under Assumptions 1, 2 and 5 with , , the following results hold.
(a) For any given , and , when , we have,
(18)
Here, are positive constants that depend on .
(b) For any and , when , we have, ,
(19)
Here, are all positive constants. is affected by but is not affected by or . are affected by but are not affected by .
(c) For any and , when , we have
(20)
Here, are all positive constants. is affected by but is not affected by , or . are affected by but are not affected by .
All the above conclusions also hold for the situation of replacing with .
Lemma 9(a) is a direct conclusion from the noiseless case. For others, we can observe that, when is large, can be very small so that the corruptions affect each of and by some constant multiplying . This indicates that as long as is smaller than some constant, the corruption will not overcome the power of normal measurements for that is not in the vicinity of . The last inequality will be used in the local landscape analysis to quantify the vicinity.
Second, we try to quantify the vicinity. Currently, within , we know that all the local minimums are close to the true signal vectors. Local landscape analysis strengthens the conclusion by showing that, within a local area that contains the true signal vectors, is non-zero when , which quantifies the distance of the stationary points in the local area to the true signal vectors. The key to the analysis is the conclusion for the local landscape of the noiseless situation given in Lemma 6, which means that, with high probability,
(21)
Together with the third conclusion in Lemma 9, we can find the following result for the local landscape. Lemma 10 quantifies the radius of the vicinity as .
Lemma 10.
When Assumptions 1, 2 and 5 hold, , there exist positive constants and such that when , and , the following relationship holds with probability at least .
(22)
Here, , and are positive constants that are not related to the selection of and . Positive constants are related to and . This conclusion also holds for the situation of replacing with .
Finally, we prove that the unbounded area does not contain a stationary point by showing that is positive on the boundary and that is monotone increasing with regard to for any . The following two lemmas discuss monotonicity and positiveness similarly to Lemmas 7 and 8.
Lemma 11.
For any constant the function is a monotone increasing function on .
Lemma 12.
If there exists a constant such that then we have
Finally, we discuss some key points for generalizing the proof of Theorem 5 for to Theorem 6 for . First, requiring to be close to , , we need that is small and . Next, similar to the previous discussions on the sketch of proof, we need to let dominate in some bounded area excluding the vicinity of . The estimation that with high probability remains unchanged. For the , due to , the relationship only holds when . So, we can get the set given in (17) by comparing the bounds.
5 Numerical Experiments
This section compares the numerical performance of PL [11], IPL [38], and SRPR. Subsection 5.1 presents the three algorithms. Subsections 5.2 and 5.3 present the numerical results from simulation and an image recovery task, respectively.
5.1 Description of Different Algorithms
To illustrate the efficacy of the benign landscape for smoothed robust phase retrieval, we evaluate both random initialization (RI) and modified spectral initialization (SI) as described in Algorithm 3 of [11]. The SI method applies spectral initialization to the subset of samples with small values, aiming to circumvent the influence of heavy-tailed corrupted measurements within . [11] established that SI can, with high probability, generate an initial point satisfying , provided the ratio is sufficiently large and is sufficiently small.
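As a rough, self-contained illustration of this idea, consider the following sketch (our own, not the authors' Algorithm 3; the retention fraction `keep_frac` and the scale estimate via the mean of `y` are illustrative assumptions). It keeps the samples with the smallest magnitudes, whose sensing vectors are then nearly orthogonal to the true signal, and returns the smallest-eigenvalue direction of their sample covariance:

```python
import numpy as np

def modified_spectral_init(A, y, keep_frac=0.25):
    """Sketch of a spectral initialization restricted to samples with
    small y_i: if y_i = (a_i^T x*)^2 is small, a_i is nearly orthogonal
    to x*, so x* aligns with the smallest-eigenvalue direction of the
    covariance of the retained sensing vectors."""
    m, n = A.shape
    keep = np.argsort(np.abs(y))[: max(int(keep_frac * m), n)]
    cov = A[keep].T @ A[keep] / len(keep)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    direction = eigvecs[:, 0]
    # crude scale estimate: E[y] = ||x*||^2 under Gaussian sensing
    # (a robust alternative would be needed under heavy corruption)
    return direction * np.sqrt(max(np.mean(y), 0.0))
```

Only the direction matters for the landscape discussion; under heavy-tailed corruption, a trimmed or quantile-based scale estimate would be preferable to the plain mean.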
Next, we discuss the proximal linear (PL) algorithm presented in [11] and the inexact PL (IPL) algorithm proposed in [38]. These algorithms are selected for comparison because of PL's pioneering role in minimizing and IPL's advancements, which improve the numerical efficiency over PL and the subgradient methods detailed in [9]. They thus set a benchmark for robust phase retrieval [38]. Both PL and IPL proceed by solving a sequence of convex subproblems to minimize .
We then introduce the smoothed robust phase retrieval (SRPR) methodology. In the noiseless scenario, SRPR involves minimizing , utilizing either RI or SI for initialization. In cases involving corruption, SRPR entails an initial minimization of using RI or SI, followed by employing the resulting minimizer as a warm start for IPL. This two-step approach is necessary because direct minimization of does not yield an exact recovery of the true signal vectors. For kernel selection across all comparisons with proximal linear algorithms, we employ the kernel , which generates the Pseudo-Huber loss function . The choice of will be elaborated upon subsequently. The monotone accelerated gradient descent algorithm (MAPG) with line search, as outlined in Algorithm 1 of [22], is utilized for optimization. This algorithm, specifically designed for nonconvex optimization, incorporates a reference update mechanism based on direct gradient descent steps to ensure sufficient decrease. It effectively guarantees performance parity with, or superiority over, simple gradient descent. Subsequent numerical results will illustrate the linear convergence achievable through this algorithm in the noiseless context.
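To make the smoothed objective concrete, here is a minimal sketch (ours, not the authors' implementation) of the Pseudo-Huber-smoothed loss, taken here in the form `sqrt(u^2 + h^2) - h` applied to the residuals `(a_i^T x)^2 - y_i`, and of its minimization by gradient descent with a simple backtracking line search standing in for the MAPG scheme of [22]:

```python
import numpy as np

def srpr_loss_grad(x, A, y, h):
    """Smoothed robust PR objective with the Pseudo-Huber loss
    ell_h(u) = sqrt(u^2 + h^2) - h on residuals r_i = (a_i^T x)^2 - y_i,
    together with its gradient."""
    Ax = A @ x
    r = Ax ** 2 - y
    root = np.sqrt(r ** 2 + h ** 2)
    loss = np.mean(root - h)
    # chain rule: ell_h'(r_i) = r_i / root_i and d r_i / dx = 2 (a_i^T x) a_i
    grad = A.T @ ((r / root) * 2.0 * Ax) / len(y)
    return loss, grad

def minimize_srpr(x0, A, y, h, iters=300):
    """Plain gradient descent with backtracking line search; a stand-in
    for the monotone accelerated scheme (MAPG) used in the paper."""
    x = x0.copy()
    f, g = srpr_loss_grad(x, A, y, h)
    t = 1.0
    for _ in range(iters):
        while t > 1e-12:
            x_new = x - t * g
            f_new, _ = srpr_loss_grad(x_new, A, y, h)
            if f_new <= f - 0.5 * t * (g @ g):  # Armijo-type decrease
                break
            t *= 0.5
        x = x_new
        f, g = srpr_loss_grad(x, A, y, h)
        t = min(2.0 * t, 1.0)
    return x
```

In the noiseless setting the objective is zero exactly at the true signal, so convergence of the iterates to it (up to global sign) can be checked directly.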
We consider the candidate algorithms in the comparison: proximal linear algorithm with SI or RI (PL-SI and PL-RI), inexact proximal linear algorithm with SI or RI (IPL-SI and IPL-RI), and smoothed robust phase retrieval with SI or RI (SRPR-SI and SRPR-RI).
In what follows, we examine their performance on both simulated datasets and image recovery. For an experiment with a specific dataset and initialization, let the final solution obtained by an algorithm be denoted as . For simulated datasets, we consider the phase retrieval successful if the relative error, defined as , does not exceed for both the IPL and SRPR algorithms. For PL, a less stringent threshold of is adopted. For image recovery, the success threshold for PL is adjusted to , whereas for IPL and SRPR, it remains unchanged. This difference in accuracy thresholds is attributed to the comparatively lower optimization efficiency of PL relative to IPL and SRPR. The CPU time reported for each algorithm encompasses the aggregate duration spent by all of its components. For instance, the CPU time for SRPR-SI in the presence of corruptions includes the time required for the modified spectral initialization, the minimization of through MAPG, and the execution of IPL for exact recovery. These experiments are conducted on a server equipped with an Intel Xeon E5-2650v4 (2.2GHz) processor. To ensure robustness in our findings, each experimental setup, defined by a unique set of hyper-parameters related to the dataset, is replicated 50 times with varied random samples for all components involved. This allows us to report both the success rate and the median CPU time for each experimental scenario. Consistent with the observations made by [11], we note that an increase in and a decrease in generally lead to higher success rates. However, the impact on CPU time does not lend itself to straightforward conclusions. The subsequent subsections present numerical experiments that furnish further insights.
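Because and generate identical measurements, the relative error must account for the global sign ambiguity of phase retrieval. A helper of the following form can be assumed (the paper's exact definition was stripped in extraction, so this is a standard reconstruction, not a verbatim copy):

```python
import numpy as np

def relative_error(x_hat, x_star):
    """Relative error up to the global sign ambiguity of phase retrieval:
    both x* and -x* produce the same quadratic measurements."""
    dist = min(np.linalg.norm(x_hat - x_star),
               np.linalg.norm(x_hat + x_star))
    return dist / np.linalg.norm(x_star)
```

With this definition, both the true signal and its negation have zero error, matching the notion of successful recovery used in the experiments.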
5.2 Simulated Dataset
This subsection follows [11] to conduct experiments on simulated datasets. We generate simulated datasets in which independently follows and , with each element independently following a discrete uniform distribution so that . The corruptions for these measurements are independent, and the corruption type is either zeroing (i.e., setting ) or drawing from a Cauchy distribution. A comprehensive description of the related hyper-parameters is as follows.
• Signal size for recovery: .
• Oversampling ratio .
• Corruption type . for . The corrupted response satisfies that for , we have and are independent for different measurements. When , the corrupted values are all equal to zero. This condition diminishes the effectiveness of the modified spectral initialization, which relies on s with small magnitudes. Conversely, when , the corrupted values are drawn from a positive Cauchy distribution, characterized by its infinite expected value. This distribution is chosen for its heavy-tailed nature, since the capacity to handle heavy-tailed corruption is one of the robust model's benefits.
• .
• Random initialization , in which and are independent random variables.
• for SRPR when we make comparisons with PL and IPL, while can be selected easily here because we already know .
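A dataset generator matching this setup might be sketched as follows (our own code; the law of the true signal entries and the unit Cauchy scale are illustrative assumptions, since the paper's exact symbols were stripped above):

```python
import numpy as np

def make_dataset(n, m, p_fail, corruption="cauchy", seed=None):
    """Gaussian sensing vectors, squared measurements, and a p_fail
    fraction of corruptions, either by zeroing or by positive Cauchy draws."""
    rng = np.random.default_rng(seed)
    x_star = rng.standard_normal(n)          # illustrative signal law
    A = rng.standard_normal((m, n))
    y = (A @ x_star) ** 2
    corrupted = rng.random(m) < p_fail
    if corruption == "zero":
        y[corrupted] = 0.0                   # defeats small-|y| spectral init
    else:
        # heavy-tailed positive Cauchy corruption (infinite expected value)
        y[corrupted] = np.abs(rng.standard_cauchy(corrupted.sum()))
    return A, y, x_star
```

The zeroing mode reproduces the adversarial case discussed above, in which the modified spectral initialization loses its advantage because corrupted measurements masquerade as the small-magnitude samples it relies on.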
Next, we present the performance of the candidate algorithms. Beginning with an evaluation under noiseless conditions, Figure 4 illustrates the comparison between CPU time and success rate for these algorithms. The figure is divided into two panels: the left panel displays the success rate, while the right panel provides the median CPU time (after logarithmic transformation) for successful trials. An absence of data points indicates a success rate of zero. It is observed that algorithms incorporating SI exhibit superior success rates compared to those employing random initialization. Notably, SRPR-SI emerges as the most effective among them. For algorithms utilizing random initialization, SRPR achieves a success rate for large ratios, benefiting from the benign landscape. In contrast, PL and IPL fail to achieve similar success rates under these conditions. Regarding CPU time efficiency, SRPR stands out due to its local linear convergence property. This efficiency is further evidenced in Figure 5, which delineates the decline of over iterations for a trial with random initialization (where and ) using MAPG. The figure reveals a pronounced linear trend, with no discernible sub-linear or non-local stages.


Second, we focus on simulated datasets with corruptions. The conclusion of the comparison is consistent across all combinations of hyper-parameters. We pick the combination and compare the success rate and CPU time for different selections of .

Figure 6 shows the performance under these hyper-parameters and gives the same conclusion as Figure 4 in terms of success rate. In terms of CPU time, SRPR has only a slight advantage over IPL-SI, since SRPR eventually turns back to minimizing for exact recovery.
In the sensitivity analysis concerning the parameter , our focus centers on the relative error following the minimization of across various settings of and . Employing hyper-parameters , , and , along with SI, Figure 7 delineates the outcomes from two perspectives. The first perspective plots on the -axis against the relative error on the -axis, while the second uses as the -axis. Each point on the graph represents the median relative error derived from 50 replicates. The analysis distinctly illustrates that lower values of and are associated with reduced relative errors. This trend aligns with the theoretical results outlined in Theorem 5. Notably, the precision achieved by surpasses that of SI, which exhibits a relative error ranging from 0.2 to 1.2.
Next, we provide numerical results that show the robustness of the model when bounded noise exists. We let , in which is independent of and independently follows . We set the threshold for success at 0.05. Figure 8 shows the experimental results, which exhibit patterns similar to Figure 6. This means that SRPR still enjoys a benign landscape and retains its advantages over PL and IPL.


5.3 Image Recovery
Now, we study the task of image recovery within the field of computational biology, leveraging the prevalent application of phase retrieval techniques [17, 31]. Consider an image array , where each element takes a value within the interval . Here, represents the image dimensions, and signifies the RGB color channels. We define the signal as , where , and . The Hadamard matrix is denoted by , and we let , for , be random diagonal matrices with diagonal elements following an independent discrete uniform distribution. The measurement matrix is defined as . Here, the semicolons mean that the matrix blocks are stacked vertically. This configuration benefits from an efficient computational complexity of for the transformation for , paralleling the complexity of the discrete Fourier transform commonly encountered in phase retrieval scenarios with complex transformation matrices. We use a real RNA nanoparticle image (https://visualsonline.cancer.gov/details.cfm?imageid=11167), which is also used in [11], with an original image size of . We follow [11]'s code (https://stanford.edu/~jduchi/projects/phase-retrieval-code.tgz) to conduct experiments on a colored sub-image and pick or . A description of the related hyper-parameters is as follows. We consider the signal size for recovery , oversampling ratio , corruption type , and . The corrupted response satisfies that for , we have and are independent for different measurements. The random initialization is , in which . Next, we provide the choice of . We can find that Based on the property of an RGB image, we assume that and is usually small when there is a large proportion of black pixels. So, can be selected as for some small positive constant . As a result, we choose , since .
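The cost of applying the stacked Hadamard blocks comes from the fast Walsh-Hadamard transform. A sketch of the measurement operator (ours; assuming the random diagonal entries are plus/minus one and the signal length is a power of two, consistent with the Sylvester construction of ):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, Sylvester ordering),
    O(n log n) operations; the length of x must be a power of two."""
    v = np.asarray(x, dtype=float).copy()
    n = len(v)
    h = 1
    while h < n:
        # butterfly pass: combine blocks of size h pairwise
        for i in range(0, n, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def hadamard_measurements(x, signs):
    """Stacked blocks [H S_1 x; ...; H S_k x], with each S_j a diagonal
    +-1 matrix given by a row of `signs`; each block costs O(n log n)."""
    return np.concatenate([fwht(s * x) for s in signs])
```

The quadratic measurements used in the experiments are then the squares of these linear measurements, exactly as in the Gaussian case but with a structured, fast-to-apply sensing matrix.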
In our experiments on image recovery, we observe consistent results across different settings of . For illustration purposes, we select . The success rates are reported in Table 1, highlighting that SRPR is the only method that does not depend on modified spectral initialization for large values of . Regarding the median CPU time when , the performances of SRPR-RI, SRPR-SI, and IPL-SI are comparable. The CPU time spans from 200 to 600 seconds for , and from 6000 to 15000 seconds for . The computational times for the other algorithms are significantly longer. Interestingly, in scenarios where , IPL-SI demonstrates markedly improved efficiency over its performance under corruption, surpassing both SRPR-RI and SRPR-SI in CPU time.
Success Rate | ||||
---|---|---|---|---|
PL-RI | 0 | 0 | 0 | 0 |
IPL-RI | 0 | 0 | 0 | 0 |
SRPR-RI | 0 | 1 | 0 | 1 |
PL-SI | 1 | 1 | 0 | 1 |
IPL-SI | 1 | 1 | 0 | 1 |
SRPR-SI | 1 | 1 | 0 | 1 |
6 Conclusion
In this paper, we aim to fill an important gap in the literature by analyzing the essential geometric structure of the nonconvex robust phase retrieval. We propose the SRPR based on a family of convolution-type smoothed loss functions as a promising alternative to the RPR based on the -loss. We prove the benign geometric structure of the SRPR with high probability. Our result under the noiseless setting matches the best available result using the least-squares formulations, and our result under the setting with infrequent but arbitrary corruptions provides the first landscape analysis of phase retrieval with corruption in the literature. We also prove the local linear convergence rate of gradient descent for solving the SRPR under the noiseless setting. Numerical experiments on both simulation and a task of image recovery are presented to demonstrate the performance of the SRPR.
References
- Cai et al. [2022] Cai, J.-F., Huang, M., Li, D. and Wang, Y. [2022], ‘Solving phase retrieval with random initial guess is nearly as good as by spectral initialization’, Applied and Computational Harmonic Analysis 58, 60–84.
- Cai et al. [2016] Cai, T. T., Li, X. and Ma, Z. [2016], ‘Optimal rates of convergence for noisy sparse phase retrieval via thresholded wirtinger flow’, The Annals of Statistics 44(5), 2221–2251.
- Candes et al. [2015] Candes, E. J., Li, X. and Soltanolkotabi, M. [2015], ‘Phase retrieval via wirtinger flow: Theory and algorithms’, IEEE Transactions on Information Theory 61(4), 1985–2007.
- Candes et al. [2013] Candes, E. J., Strohmer, T. and Voroninski, V. [2013], ‘Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming’, Communications on Pure and Applied Mathematics 66(8), 1241–1274.
- Chai et al. [2010] Chai, A., Moscoso, M. and Papanicolaou, G. [2010], ‘Array imaging using intensity-only measurements’, Inverse Problems 27(1), 015005.
- Chen and Candès [2017] Chen, Y. and Candès, E. J. [2017], ‘Solving random quadratic systems of equations is nearly as easy as solving linear systems’, Communications on Pure and Applied Mathematics 70(5), 822–883.
- Chen et al. [2019] Chen, Y., Chi, Y., Fan, J. and Ma, C. [2019], ‘Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval’, Mathematical Programming 176(1), 5–37.
- Davis et al. [2018] Davis, D., Drusvyatskiy, D., MacPhee, K. J. and Paquette, C. [2018], ‘Subgradient methods for sharp weakly convex functions’, Journal of Optimization Theory and Applications 179(3), 962–982.
- Davis et al. [2020] Davis, D., Drusvyatskiy, D. and Paquette, C. [2020], ‘The nonsmooth landscape of phase retrieval’, IMA Journal of Numerical Analysis 40(4), 2652–2695.
- Duchi and Ruan [2019] Duchi, J. C. and Ruan, F. [2019], ‘Solving (most) of a set of quadratic equalities: Composite optimization for robust phase retrieval’, Information and Inference: A Journal of the IMA 8(3), 471–529.
- Eldar and Mendelson [2014] Eldar, Y. C. and Mendelson, S. [2014], ‘Phase retrieval: Stability and recovery guarantees’, Applied and Computational Harmonic Analysis 36(3), 473–494.
- Fernandes et al. [2021] Fernandes, M., Guerre, E. and Horta, E. [2021], ‘Smoothing quantile regressions’, Journal of Business & Economic Statistics 39(1), 338–357.
- Fickus et al. [2014] Fickus, M., Mixon, D. G., Nelson, A. A. and Wang, Y. [2014], ‘Phase retrieval from very few measurements’, Linear Algebra and its Applications 449, 475–499.
- Fienup and Dainty [1987] Fienup, C. and Dainty, J. [1987], ‘Phase retrieval and image reconstruction for astronomy’, Image Recovery: Theory and Application 231, 275.
- He et al. [2023] He, X., Pan, X., Tan, K. M. and Zhou, W.-X. [2023], ‘Smoothed quantile regression with large-scale inference’, Journal of Econometrics 232(2), 367–388.
- Huang et al. [2021] Huang, L., Liu, T., Yang, X., Luo, Y., Rivenson, Y. and Ozcan, A. [2021], ‘Holographic image reconstruction with phase recovery and autofocusing using recurrent neural networks’, ACS Photonics 8(6), 1763–1774.
- Jiang and Yu [2021] Jiang, R. and Yu, K. [2021], ‘Smoothing quantile regression for a distributed system’, Neurocomputing 466, 311–326.
- Jin et al. [2017] Jin, C., Ge, R., Netrapalli, P., Kakade, S. M. and Jordan, M. I. [2017], How to escape saddle points efficiently, in ‘International Conference on Machine Learning’, PMLR, pp. 1724–1732.
- Jin et al. [2018] Jin, C., Netrapalli, P. and Jordan, M. I. [2018], Accelerated gradient descent escapes saddle points faster than gradient descent, in ‘Conference On Learning Theory’, PMLR, pp. 1042–1085.
- Karimi et al. [2016] Karimi, H., Nutini, J. and Schmidt, M. [2016], Linear convergence of gradient and proximal-gradient methods under the polyak-łojasiewicz condition, in ‘Joint European Conference on Machine Learning and Knowledge Discovery in Databases’, Springer, pp. 795–811.
- Li and Lin [2015] Li, H. and Lin, Z. [2015], Accelerated proximal gradient methods for nonconvex programming, in ‘Advances in Neural Information Processing Systems’, Vol. 28, Curran Associates, Inc. https://proceedings.neurips.cc/paper/2015/file/f7664060cc52bc6f3d620bcedc94a4b6-Paper.pdf
- Man et al. [2024] Man, R., Pan, X., Tan, K. M. and Zhou, W.-X. [2024], ‘A unified algorithm for penalized convolution smoothed quantile regression’, Journal of Computational and Graphical Statistics 33(2), 625–637.
- Mei et al. [2018] Mei, S., Bai, Y. and Montanari, A. [2018], ‘The landscape of empirical risk for nonconvex losses’, The Annals of Statistics 46(6A), 2747–2774.
- Miao et al. [1999] Miao, J., Charalambous, P., Kirz, J. and Sayre, D. [1999], ‘Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens’, Nature 400(6742), 342–344.
- Miao et al. [2008] Miao, J., Ishikawa, T., Shen, Q. and Earnest, T. [2008], ‘Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes’, Annu. Rev. Phys. Chem. 59, 387–410.
- Millane [1990] Millane, R. P. [1990], ‘Phase retrieval in crystallography and optics’, JOSA A 7(3), 394–411.
- Parikh and Boyd [2014] Parikh, N. and Boyd, S. [2014], ‘Block splitting for distributed optimization’, Mathematical Programming Computation 6(1), 77–102.
- Roulet and d'Aspremont [2017] Roulet, V. and d'Aspremont, A. [2017], Sharpness, restart and acceleration, in ‘Advances in Neural Information Processing Systems’, Vol. 30, Curran Associates, Inc.
- Shechtman et al. [2015] Shechtman, Y., Eldar, Y. C., Cohen, O., Chapman, H. N., Miao, J. and Segev, M. [2015], ‘Phase retrieval with application to optical imaging: a contemporary overview’, IEEE Signal Processing Magazine 32(3), 87–109.
- Stefik [1978] Stefik, M. [1978], ‘Inferring dna structures from segmentation data’, Artificial Intelligence 11(1-2), 85–114.
- Sun et al. [2018] Sun, J., Qu, Q. and Wright, J. [2018], ‘A geometric analysis of phase retrieval’, Foundations of Computational Mathematics 18(5), 1131–1198.
- Tan et al. [2022] Tan, K. M., Wang, L. and Zhou, W.-X. [2022], ‘High-dimensional quantile regression: Convolution smoothing and concave regularization’, Journal of the Royal Statistical Society Series B: Statistical Methodology 84(1), 205–233.
- Taylor [2003] Taylor, G. [2003], ‘The phase problem’, Acta Crystallographica Section D 59(11), 1881–1890.
- Wang et al. [2017] Wang, G., Giannakis, G. B. and Eldar, Y. C. [2017], ‘Solving systems of random quadratic equations via truncated amplitude flow’, IEEE Transactions on Information Theory 64(2), 773–794.
- Zhang et al. [2016] Zhang, H., Chi, Y. and Liang, Y. [2016], Provable non-convex phase retrieval with outliers: Median truncated wirtinger flow, in ‘International Conference on Machine Learning’, PMLR, pp. 1022–1031.
- Zhang et al. [2017] Zhang, H., You, S., Lin, Z. and Xu, C. [2017], Fast compressive phase retrieval under bounded noise, in ‘Proceedings of the AAAI Conference on Artificial Intelligence’, Vol. 31.
- Zheng et al. [2024] Zheng, Z., Ma, S. and Xue, L. [2024], ‘A new inexact proximal linear algorithm with adaptive stopping criteria for robust phase retrieval’, IEEE Transactions on Signal Processing 72, 1081–1093.