Phase retrieval for affine groups over prime fields
Abstract
We study phase retrieval for group frames arising from permutation representations, focusing on the action of the affine group of a finite field. We investigate various versions of the phase retrieval problem, including conjugate phase retrieval, sign retrieval, and matrix recovery. Our main result establishes that the canonical irreducible representation of the affine group (with prime), acting on the vectors in with zero-sum, has the strongest retrieval property, allowing one to reconstruct matrices from scalar products with a group orbit consisting of rank-one projections. We explicitly characterize the generating vectors that ensure this property, provide a linear matrix recovery algorithm, and give explicit examples of vectors that allow matrix recovery. We also comment on more general permutation representations.
Keywords: phase retrieval; conjugate phase retrieval; sign retrieval; matrix recovery; Pauli pairs; group frame; group Fourier transform
AMS Subject Classification: 42C15; 42A38; 94A12; 20B20
1 Introduction
The phase retrieval problem for frames was introduced in [3], with motivation coming from applications such as speech recognition and imaging. Since its introduction to the field of mathematical signal processing, phase retrieval has developed into an autonomous area of mathematical research with rich connections to diverse mathematical fields such as complex analysis, optimization, numerical analysis, algebraic geometry, harmonic analysis and representation theory, as witnessed by the review papers [16, 12] and their extensive lists of references.
The phase retrieval problem can be briefly described as follows: Given a system , we aim to reconstruct , up to a scalar factor , from the family . Here denotes the multiplicative group of complex numbers with modulus one. Clearly, a necessary condition for the system is that , i.e. the linear coefficient map is necessarily injective. This observation motivates the name “phase retrieval”: Solving the phase retrieval problem for a system spanning amounts to reconstructing the phases of the expansion coefficients from their moduli (which is a nonlinear problem), and then inverting the coefficient map (which is a standard, linear procedure).
It is therefore clear that phase retrieval requires some redundancy of : A basis does not admit phase retrieval, since each expansion coefficient with respect to a basis (in particular its phase) can be chosen independently of the remaining ones. The necessary degree of redundancy has been quantified fairly sharply: Generic systems consisting of vectors allow phase retrieval, and for certain dimensions , phase retrieval for systems with fewer than vectors is impossible [9].
A problem closely related to phase retrieval is matrix recovery [6, 2], asking whether any matrix can be recovered from the family
of scalar products. This injectivity property was discussed in [2] under the name of maximal span property, picking up on earlier investigations of the problem under the name of quantum state tomography in physics.
Any system doing matrix recovery does phase retrieval. To see this, one only needs to realize that
where is given by . Hence if does matrix recovery, can be recovered from , and can be recovered up to a phase factor from . An interesting byproduct of this observation is that stability of linear inversion is well-understood and controllable, unlike the non-linear problem of phase recovery. However, there is also a price to pay in terms of redundancy: For dimension reasons, matrix recovery requires at least measurements.
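A minimal Python sketch of this lifting argument, with a hypothetical random frame (all names and dimensions are chosen only for illustration): the squared moduli of the frame coefficients are linear measurements of the rank-one matrix built from the signal, and once that matrix is recovered by linear inversion, the signal is obtained up to a phase factor from the leading eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 20                                   # dimension and number of (random) frame vectors
Phi = rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))
x = rng.normal(size=d) + 1j * rng.normal(size=d)

# Phaseless measurements |<x, phi_i>|^2 ...
meas = np.abs(Phi.conj() @ x) ** 2

# ... coincide with the Hilbert-Schmidt products of x x^* with the rank-one projections.
X = np.outer(x, x.conj())
lifted = np.array([np.vdot(np.outer(v, v.conj()), X).real for v in Phi])
assert np.allclose(meas, lifted)

# If the rank-one matrices phi_i phi_i^* span the space of d x d matrices (matrix recovery),
# then X = x x^* is determined by a linear system; x is its leading eigenvector, up to phase.
A = np.array([np.outer(v, v.conj()).conj().flatten() for v in Phi])
X_rec = np.linalg.lstsq(A, meas.astype(complex), rcond=None)[0].reshape(d, d)
w, V = np.linalg.eigh((X_rec + X_rec.conj().T) / 2)
x_rec = np.sqrt(w[-1]) * V[:, -1]
print(np.abs(np.vdot(x_rec, x)) / np.linalg.norm(x) ** 2)   # ~1: equal up to a phase factor
```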
Phase retrieval and matrix recovery are often studied under additional assumptions on the analyzed signal or matrix, such as sparsity or low rank; see e.g. [10, 18]. Here one does not aim for recovery guarantees for all possible signals, thereby allowing a reduction in the number of measurements. The constructions in this context, as well as the recovery guarantees, are typically of a probabilistic nature.
Overview of the paper
In this paper we are interested in the deterministic construction of structured systems guaranteeing phase retrieval and matrix recovery, more precisely in the construction of group frames with phase retrieval. Group frames are characterized by the fact that they are generated as orbits of a (usually unitary) representation of a group acting on a suitably chosen vector ; see [24] for group frames over finite groups, and [14] for an extensive discussion of group frames over locally compact groups.
The chief example of group frames allowing phase retrieval is provided by the case where is a Heisenberg group, and is a Schrödinger representation. This setting provided the earliest results concerning the use of group actions for quantum state tomography [22]. In the setting of Schrödinger representations, the phase retrieval properties of the system were seen to be closely related to properties of the ambiguity function (see [16]). It was then observed in [5] (or rather, rediscovered, in the light of [22]) that for finite Heisenberg groups, proper assumptions on even allow one to solve the matrix recovery problem. A further investigation of this property for more general group frames was then provided in [19, 8], with emphasis on irreducible projective representations of finite abelian groups. A quite far-reaching extension of these results to irreducible representations of general nilpotent groups was recently obtained in [15]. These developments notwithstanding, the problem of phase retrieval for group frames is far from being fully understood.
In this paper we focus on permutation representations, and in particular on actions of affine groups of finite fields (with prime). Our main results, Theorem 6.5, Corollary 6.6 and Theorem 6.7, establish criteria for generating vectors guaranteeing matrix recovery, provide an explicit recovery formula, and give simple concrete examples of generating vectors fulfilling these criteria. We also present a general recipe for tackling the problem of matrix recovery for group frames (see Remark 6.8), and explain how the known instances (finite Heisenberg groups and affine groups) fit in this general framework (Remark 6.9). Remarks 6.11 and 6.12 exhibit some connections of our results to intriguing questions in finite Fourier analysis.
To put these results in perspective, we first point out that the retrieval problem for finite affine groups can be understood as a finite analog of the wavelet phase retrieval problem previously studied by Mallat and Waldspurger [21]. At this point in time it is safe to say that explicit constructions of systems for matrix recovery, with linear reconstruction formulae, are only known for very specific systems of vectors, such as equiangular tight frames, or unions of mutually unbiased bases in -dimensional Hilbert spaces, see [25, 2]. It is considered highly doubtful that every dimension accommodates mutually unbiased bases. Known explicit constructions of such systems exist only in very restricted dimensions, e.g. for equal to a prime power. By contrast, our representation-theoretic construction of a system doing matrix recovery via the affine group of the field ( prime) works in a space of dimension , yielding a system of vectors. Hence, while our argument does not rely on the notion of mutually unbiased bases, it is intriguing to note that the cardinalities of the resulting group frames obey the same dimension-dependent relationship.
Throughout the paper, we also discuss variants of phase retrieval, such as sign and conjugate phase retrieval, and extensions to more general permutation representations. We present several results and examples of independent interest, mostly for the purpose of highlighting the challenges that the representation-theoretic understanding of phase retrieval properties faces.
2 Phase retrieval for group frames
Throughout the paper, we let . Given finite sets , we let denote the space of matrices , with entries in . We identify matrices with the induced linear maps if needed. We endow the matrix space with the usual scalar product
We will use the convention for . Special vectors in are given by and , for and elsewhere.
We now formally introduce a variety of retrieval properties. Sign and phase retrieval for frames were introduced by Balan et al. [3]. The intermediate notion of conjugate phase retrieval is due to Evans and Lai [11]. Matrix recovery and associated linear reconstruction methods were introduced to this discussion in [2]; see also [6].
Definition 2.1
Let denote a finite-dimensional real or complex Hilbert space. Given a system , we define the associated coefficient transform as
Recall that is injective iff is total. Given a complex Hilbert space , we say that the system does phase retrieval if the map
is one-to-one. Here denotes the orbit space under the canonical action of on , and is obtained by pointwise application of the modulus function to .
Given a real Hilbert space , and , we define the sign retrieval property by requiring that
is one-to-one.
We define the intermediate property of conjugate phase retrieval as follows: We assume that is a complex subspace, where is finite, and is endowed with the usual scalar product. Furthermore, we assume . Then does conjugate phase retrieval if the equation is equivalent to the existence of such that either or holds, where complex conjugation is understood to be applied entrywise.
Finally, we say that the system does matrix recovery if the map
is one-to-one. Here denotes the space of linear mappings .
Remark 2.2
Consider the following statements, for a system :
1.) does matrix recovery.
2.) does phase retrieval.
3.) does conjugate phase retrieval.
4.) does sign retrieval.
5.) spans .
Then one has the implications , as well as . Observe that conditions 3.), 4.) refer to real-valued systems, which never do phase retrieval, due to the equation holding for real-valued .
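A minimal numerical illustration of this last point (Python, with a hypothetical random real system): a vector and its entrywise conjugate always produce identical moduli when measured against real vectors, so complex phase retrieval must fail.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(10, 4))                       # real-valued measurement vectors
x = rng.normal(size=4) + 1j * rng.normal(size=4)

# |<x, phi_i>| = |<conj(x), phi_i>| for every real phi_i, yet x and conj(x)
# generally differ by more than a unimodular factor.
print(np.allclose(np.abs(Phi @ x), np.abs(Phi @ x.conj())))   # True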
Definition 2.3
Let denote a representation of a finite group on the finite-dimensional Hilbert space over . Given , the associated group frame is given by . We call the generating vector of the system .
Given , we write
In case we want to emphasize the dependence on the representation , we will also use .
We say that does phase/conjugate phase/sign retrieval if there exists such that does phase/conjugate phase/sign retrieval.
In the case of conjugate phase retrieval, we additionally assume that for suitable , and that holds.
Similarly, does matrix recovery if there exists such that does matrix recovery.
The following observation reformulates matrix recovery for group frames. Recall that a vector is called cyclic for if is one-to-one. With this definition in mind, the proposition is seen to be a mere reformulation.
Proposition 2.4
Let denote a representation of a group acting on .
Let by , i.e. the representation of acting on by
Then does matrix recovery iff is a cyclic vector for .
We next prove a basic feature of the sets of vectors that guarantee one of the retrieval properties. In the following lemma, a set is called real Zariski open (by slight abuse of terminology) if there exist finitely many functions on that are polynomials in the real and imaginary parts of the coordinates such that
The terminology naturally extends to subsets of .
Lemma 2.5
Assume that does sign/conjugate phase/phase/matrix recovery. Then the set of vectors that guarantee this property is real Zariski open.
Proof 1
We identify a system of vectors in with the matrix . Then the set of all that perform phase or sign retrieval is real Zariski open in , by [9]. The analogous observation has been made for conjugate phase retrieval in [11].
In order to apply these observations to group frames, it is enough to note that the map is conjugate-linear, and that the preimage of a real Zariski open set under such a map is real Zariski open.
In the case of matrix recovery, the map is polynomial, and by definition, does matrix recovery iff the linear map has rank . However, the set of maximal rank matrices is real Zariski open, being defined by the nonvanishing of at least one -subdeterminant. Hence the set of vectors guaranteeing matrix recovery is the polynomial preimage of a real Zariski open set, hence real Zariski open.
Note that real Zariski open sets are open in the standard topology, and either empty or of full Lebesgue measure. This shows that once a retrieval property is established for , almost every vector guarantees it. However, verifying either that the real Zariski open set is nonempty, or that a given concrete vector is contained in it, remains a considerable challenge, and the results in the present paper bear witness to this fact.
Remark 2.6
So far, phase retrieval for group frames has been studied almost exclusively for Schrödinger representations of (finite or infinite) Heisenberg groups. An extension of these results to projective representations of abelian groups was performed in [19]. Phase retrieval for the continuous wavelet transform in dimension one was studied in [21, 1]. Here the underlying representation is the quasiregular representation of the affine group , and it is fair to say that the phase retrieval problem even for this basic representation is far from being completely understood. Our paper provides an analysis for the finite analogs of .
The implications from Remark 2.2 immediately yield the implications , and between the following statements.
1.) does matrix recovery.
2.) does phase retrieval.
3.) does conjugate phase retrieval.
4.) does sign retrieval.
5.) is cyclic.
Unlike in the general case considered in Remark 2.2, the implication does make sense as soon as , and holds. Note that here asks for the existence of a particular complex-valued generating vector, whereas requires a suitable real-valued generating vector. The relationship between conditions and is currently open.
At this point it is also worth noting that not all properties are invariant under unitary equivalence. While it is easily verified that matrix and phase retrieval of a representation are preserved under unitary equivalence, the property that depends on the realization of the representation. This follows from the fact that most intertwining operators do not commute with taking pointwise conjugates or real parts. A concrete example will be given by the representation studied more closely in Section 6, which is equivalent to the permutation representation . The permutation representation satisfies , but its unitary equivalent does not.
We finally point out that not all cyclic representations do phase retrieval. As an extreme case in point, we can take the regular representation of a finite group , which is cyclic. Any cyclic vector for necessarily fulfills (for dimension reasons), and the map is clearly not injective on .
Remark 2.7
Recent research activity has extended the original setup to include projective representations of finite groups , see e.g. [19]. Recall that by definition, a (unitary) projective representation of a finite group is a map such that
This extension can be useful for the treatment of special cases, such as the Schrödinger representation of a finite Heisenberg group , which can be alternatively viewed as a projective representation of the quotient group , where denotes the center of .
However, it is well-known that every projective representation of a finite group can be lifted to a standard unitary representation of a finite central extension of , and the system differs from only by the inclusion of a fixed number of scalar multiples of , for every vector . None of the retrieval properties discussed here is affected by this change. Hence the extension to projective representations does not substantially enlarge the realm of available constructions.
3 Setup: Affine groups and permutation representations
Let denote a prime, and let , the associated finite field of order . We typically denote , and understand arithmetic operations on this set to be defined modulo . then denotes the multiplicative group of . The affine group of is given by the semidirect product , using the canonical multiplicative action of on . I.e. as a set, , with group law . Unless otherwise specified, the symbol will always refer to this semidirect product.
Given , we let denote the symmetric group of . Note that since acts faithfully on via , we may regard as a subgroup of . In particular, one has .
The quasiregular representation of acts on by
(3.1)
In the case of a prime number, the restriction of to the subgroup will be denoted by , and it is given explicitly by
(3.2)
Note that our notation does not explicitly distinguish the real and complex cases. In fact, we have , a fact which allows us to discuss conjugate phase and sign retrieval for and its restrictions.
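As a concrete illustration, the following Python sketch builds the permutation matrices of the affine action and the resulting group frame for a small prime. The action convention [pi(a,b) f](y) = f(a^{-1}(y - b)) and the generating vector are assumptions made purely for illustration.

```python
import numpy as np

p = 5  # a small prime

def affine_perm_matrix(a, b, p):
    """Permutation matrix sending the basis vector e_x to e_{a*x + b mod p};
    equivalently [pi(a,b) f](y) = f(a^{-1}(y - b)). Convention assumed for illustration."""
    P = np.zeros((p, p))
    for x in range(p):
        P[(a * x + b) % p, x] = 1.0
    return P

# Group frame: the orbit of a generating vector under all (a, b) with a invertible mod p.
phi = np.arange(1, p + 1, dtype=float)               # hypothetical generating vector
frame = [affine_perm_matrix(a, b, p) @ phi for a in range(1, p) for b in range(p)]
print(len(frame))                                     # p(p-1) = 20 vectors
```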
Remark 3.1
We let . We make the following observation:
Lemma 3.2
The subspaces and are invariant under , and therefore also under . The associated restrictions of acting on respectively are irreducible.
In this paper we wish to address the various retrieval properties for and/or . Ideally, we would like to have explicit and sharp criteria for generating vectors guaranteeing any of the properties, as well as concrete examples. We first note a simple observation regarding .
Corollary 3.3
The representation does not do matrix recovery.
Proof 2
Matrix recovery is equivalent to stating that . For dimension reasons, this requires , which is obviously false.
Note that this argument does not apply to the codimension one subspace , and we will indeed show that does matrix recovery.
4 Sign retrieval for
In this section we consider the sign retrieval properties of the permutation representation acting on the real vector space .
Definition 4.1
1. Let denote a finite-dimensional Hilbert space, and . Then has the complement property iff for all , either , or .
2. The family has full spark if , and for every subset of cardinality , the system spans .
The following results are contained in [4]; see Theorems 3 and 7.
Theorem 4.2
Let denote a system of vectors contained in the finite-dimensional Hilbert space .
(a) If does phase or sign retrieval, then has the complement property.
(b) If is a real Hilbert space, then does sign retrieval iff it has the complement property.
The following theorem is implied by Theorem 10 in [20].
Theorem 4.3
The quasiregular representation on has full spark, which means that there exists a real-valued vector such that has full spark. Any such vector does sign retrieval.
Proof 3
The first statement is Theorem 10 in [20]; a closer inspection of its proof shows that a vector whose orbit is a full spark frame can indeed be chosen to be real-valued. Finally, since has elements, every full spark frame indexed by necessarily has the complement property. Hence, Theorem 4.2 yields sign retrieval.
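A brute-force check of the full spark property for a small prime might look as follows (Python). The affine action convention is again an assumption, and by Theorem 4.3 combined with Lemma 2.5 the test succeeds for almost every real generating vector, so a randomly drawn generator typically passes.

```python
import numpy as np
from itertools import combinations

p = 5
rng = np.random.default_rng(1)
phi = rng.normal(size=p)                               # generic real generating vector

# Orbit of phi under the affine group, realized by permuting its entries:
# (pi(a,b) phi)(x) = phi(a^{-1}(x - b)) mod p (convention assumed).
orbit = np.array([[phi[(pow(a, -1, p) * (x - b)) % p] for x in range(p)]
                  for a in range(1, p) for b in range(p)])

# Full spark: every p of the p(p-1) orbit vectors form a basis of R^p.
full_spark = all(abs(np.linalg.det(orbit[list(S)])) > 1e-9
                 for S in combinations(range(len(orbit)), p))
print(full_spark)    # True for almost every real phi (Theorem 4.3 and Lemma 2.5)
```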
5 Conjugate phase retrieval for doubly transitive permutation groups
From now on we let . Throughout this section, we may assume that is a subgroup of the symmetric group. We also assume that to avoid trivialities. Recall next that is called -fold transitive if for every pair of -tuples , where both and have pairwise distinct entries, there exists such that
2-fold transitive groups are also called doubly transitive. An important class of doubly transitive groups is given by , for .
We intend to prove that doubly transitive groups do conjugate phase retrieval. The argument will be based on a particular choice of generating vector, and the following well-known property of the complex plane.
Remark 5.1
Let , satisfy
(5.1)
Then there exists an isometry with , for all .
Furthermore, the group of isometries of the Euclidean space coincides with the Euclidean motion group, i.e. every isometry is the composition of a translation and an orthogonal map. In addition, any orthogonal mapping on is given by (complex) multiplication with an element of , possibly followed by conjugation.
We then have the following observation, which implies conjugate phase retrieval for .
Theorem 5.2
Let denote a doubly transitive subgroup. Pick with , and let . Then does conjugate phase retrieval on the invariant subspace .
Proof 4
Let denote any distinct pair of elements, and let such that . Then and this shows for all :
An application of Remark 5.1 yields that either or holds for all , with and .
To complete the proof, it remains to show , but this follows (for the case without conjugation) from via
The second case, involving complex conjugation, is treated entirely analogously. This establishes conjugate phase retrieval.
Remark 5.3
We now turn to the question of conjugate phase retrieval for on , using the vector
Note that the projection of onto the subspace does conjugate phase retrieval for the restriction of to that subspace, and the same can be said (for trivial reasons) of the orthogonal complement of , the one-dimensional space . Yet these properties are generally not enough to guarantee conjugate phase retrieval for the full space , as the following example for shows:
Let , and define by letting
We then have , and one easily verifies, for all ,
On the other hand, the vectors and are clearly distinct, even orthogonal.
Note that this example does not yet establish that does not do conjugate phase retrieval on , as it only applies to a particular choice of generating vector . But it does illustrate that the problem of conjugate phase retrieval on all of cannot be reduced to solving it on each invariant subspace separately, somewhat contrary to common (linear) representation-theoretic wisdom. This phenomenon emphasizes the nonlinear nature of the conjugate phase retrieval problem.
We do not know whether analogs of the counterexample constructed here are available in dimensions .
As a further application of the basic proof principle exploited in this section, we have the following observation:
Proposition 5.4
Let , and let denote a doubly transitive subgroup. Pick with , and let . Then does sign retrieval on .
Proof 5
Assume , with and , such that
Since is doubly transitive, this is equivalent to
(5.2)
Assuming that is constant then yields either , or that is constant. The latter implies , and , as desired (since ). If , we get
However, there do not exist equidistant points on the real line, hence this case can be excluded.
Hence it remains to address the case that is not constant. Assuming then requires that . If , we obtain
and thus by Theorem 5.2. If , it follows that is constant, and thus
and since is not constant, follows. As above, the fact that there are no equidistant points in excludes this possibility.
The final case to consider is therefore that is not constant and . This implies that
Plugging this into the first equation of (5.2) and simplifying yields
which contradicts our assumptions on and .
6 Matrix recovery for
In this section we establish the matrix recovery property for . Recall that we are interested in finding specific cyclic vectors for the representation obtained by conjugating with . The basic strategy of the following is to first decompose into suitable subrepresentations, with the help of an explicitly derived intertwining operator; see Proposition 6.4. This decomposition will provide fairly explicit access to criteria for cyclic vectors. The remaining task is to apply the intertwining operator from Proposition 6.4 to matrices of the type , and determine vectors that fulfill the criteria for cyclic vectors. As we will see, the quadratic dependence of on makes this final step a further nontrivial obstacle.
The starting point in this approach is to switch to a unitarily equivalent representation , obtained by conjugating with the Fourier transform on . In the following, we will denote elements of as . With this notation, the Fourier transform is given by
(6.1)
We will also use the notation . With the normalization used here, the Plancherel theorem becomes , whereas the convolution theorem for the convolution product
reads as
The chief advantage of the Fourier transform is that it diagonalizes the translation action of the affine group. The effect of conjugation with is spelled out in the following lemma.
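Before turning to the lemma, here is a small numerical sketch (Python; the sign convention of the transform is an assumption made for illustration) of the unitary normalization and of the fact that translations become diagonal operators after conjugation with the Fourier transform.

```python
import numpy as np

p = 7
F = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
assert np.allclose(F @ F.conj().T, np.eye(p))        # unitary normalization

# Translation f(t) -> f(t - b) becomes a diagonal modulation operator in the Fourier domain.
b = 3
T = np.roll(np.eye(p), b, axis=0)                    # cyclic shift by b
D = F @ T @ F.conj().T
assert np.allclose(D, np.diag(np.diag(D)))           # conjugated translation is diagonal
print(np.allclose(np.diag(D), np.exp(-2j * np.pi * b * np.arange(p) / p)))   # True
```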
Lemma 6.1
Define ,
(a) , with , .
(b) Let denote the conjugation action of on . Then one has, for all
Proof 6
We include the computations proving the lemma for completeness. For part (a), we have for all that
The statements concerning are obvious.
For the proof of (b), we fix as well as , and get by (a)
It follows that
which establishes (b).
Remark 6.2
Our explicit matrix recovery algorithm in Corollary 6.6 makes use of the group Fourier transform of . In the following, we will briefly sketch the fundamentals of this transform, for a general finite group , and a concrete choice of unitary dual of . By definition, the unitary dual is a system of representatives of the unitary equivalence classes of irreducible representations of . Such a system exists for every finite group , and it is necessarily finite. For ease of notation, we assume , i.e., .
Given , one can now define its group Fourier transform as the matrix-valued tuple
given by
(6.2)
Observe that the Fourier transform (6.1) for the cyclic case is a special case of this construction, since the characters of the additive group are precisely given by
Note however that in equation (6.1) is normalized by the factor , in order to obtain a unitary map, whereas we refrained from introducing a normalization to the group Fourier transform. In order to avoid confusion in the following, we will mostly stick to the notation instead of , when we refer to the group Fourier transform.
The Fourier transform fulfills the Plancherel formula
where the norm on the right hand side is the Frobenius norm. Its inverse is given explicitly by
(6.3)
for all . For all these general facts, we refer the reader to [23, Chapter 15].
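For orientation, in one common unnormalized convention (the precise constants in (6.2) and (6.3) may differ), the group Fourier transform, the Plancherel formula, and the inversion formula read
\[
\widehat{f}(\sigma) \;=\; \sum_{g \in G} f(g)\,\sigma(g),
\qquad
\sum_{g \in G} |f(g)|^{2} \;=\; \frac{1}{|G|} \sum_{\sigma \in \widehat{G}} d_{\sigma}\, \bigl\|\widehat{f}(\sigma)\bigr\|_{F}^{2},
\qquad
f(g) \;=\; \frac{1}{|G|} \sum_{\sigma \in \widehat{G}} d_{\sigma}\, \operatorname{tr}\!\bigl(\sigma(g)^{*}\, \widehat{f}(\sigma)\bigr),
\]
where \(d_{\sigma}\) denotes the dimension of the representation \(\sigma\).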
Returning to the special case of the affine group , we have the following description of its unitary dual. We refer to [23, Chapter 17] for a proof.
Lemma 6.3
The unitary dual of is given by the following set of representatives: The representation , as well as the set of characters of the multiplicative group, lifted to the full group by letting
We thus obtain the following group Fourier transform for the affine group : Given , its group Fourier transform is the tuple
consisting of the scalars
and the matrix
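The following Python sketch assembles this unitary dual for a small prime and verifies the Plancherel identity numerically. The permutation action, the choice of primitive root, and the unnormalized transform convention are assumptions made only for illustration and may differ from the conventions of the text.

```python
import numpy as np

p = 5
G = [(a, b) for a in range(1, p) for b in range(p)]        # affine group, |G| = p(p-1)

def perm(a, b):
    """Permutation matrix of x -> a*x + b mod p on C^p (convention assumed)."""
    P = np.zeros((p, p), dtype=complex)
    for x in range(p):
        P[(a * x + b) % p, x] = 1.0
    return P

# (p-1)-dimensional irreducible: restriction of the permutation action to the
# zero-sum subspace, realized via an orthonormal basis B of that subspace.
F = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
B = F.conj().T[:, 1:]                                      # orthonormal basis of the zero-sum space
sigma = {g: B.conj().T @ perm(*g) @ B for g in G}

# Characters of the multiplicative group, lifted to G (they ignore the translation part).
root = 2                                                   # 2 is a primitive root mod 5
dlog = {pow(root, k, p): k for k in range(p - 1)}
chars = [{(a, b): np.exp(2j * np.pi * m * dlog[a] / (p - 1)) for (a, b) in G}
         for m in range(p - 1)]

# Unnormalized group Fourier transform and the Plancherel identity.
f = np.random.default_rng(2).normal(size=len(G)) + 0j
hat_sigma = sum(f[i] * sigma[g] for i, g in enumerate(G))
hat_chars = [sum(f[i] * chi[g] for i, g in enumerate(G)) for chi in chars]
lhs = np.sum(np.abs(f) ** 2)
rhs = (sum(np.abs(c) ** 2 for c in hat_chars)
       + (p - 1) * np.linalg.norm(hat_sigma, 'fro') ** 2) / len(G)
print(np.allclose(lhs, rhs))                               # True
```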
The next result provides a decomposition of into suitable subrepresentations, which is an important step towards establishing matrix recovery. We remind the reader that all algebraic operations involving numbers are understood modulo .
Proposition 6.4
Define the unitary operator
by
Define by . Then
(6.4)
Define as the subspace of matrices with for all , and let . These spaces are invariant under , and we have
where is the regular representation of , lifted to .
Proof 7
It is clearly enough to prove (6.4), and we do this by explicit calculation. First, one quickly verifies that the map
is a bijection on , with inverse given by
Hence is indeed unitary, with
Fixing , we let and . We then get
Accordingly, for ,
For we fix , and get
which allows us to compute
and (6.4) is established.
We next state criteria for matrix recovery.
Theorem 6.5
Let . Then does matrix recovery iff fulfills the following conditions:
(i) All entries of , defined by
are nonzero.
(ii) The matrix defined by
has a left inverse .
The first and most involved step of the proof (given below) consists in explicitly inverting the linear operator , assuming that fulfills conditions (i) and (ii). We give a detailed description of this procedure in the following corollary to the proof:
Corollary 6.6
Assume that fulfills conditions (i) and (ii) from the previous theorem. An explicit algorithm for the inversion of the linear map is described as follows: Let denote the matrix describing the linear map
and let be the matrix describing the linear map
Let and let be defined as
Then can be recovered from by the following steps:
1. Define by
2. Define .
3. With the unitary operator from Proposition 6.4, we have
In particular, can be recovered up to a phase factor by computing from using steps (1)-(3), and then determining as the eigenvector associated to the nonzero eigenvalue of , normalized by .
Proof 8
Throughout the proof, the symbol will denote zero matrices of varying sizes. Since these sizes can be derived from the context, we will refrain from using more specific notation.
The intertwining property of yields
We write with , and get
with a vector , .
Similarly, we have
where is obtained by plugging
into the definition of , which leads to
(6.5)
In particular,
The overall proof strategy for the inversion formula is to recover both and from via group Fourier transform, which allows us to reconstruct . It then only remains to apply to the result.
By definition of the various matrices involved, we have
Now equation (6.4) yields
(6.6)
whereas
(6.7)
This reveals as a convolution product over the factor group , whereas is seen to be a special instance of the Fourier inversion formula (6.3). In particular, we have
(6.8)
We now can use (6.8) to get for all that
By assumption, , which allows us to solve this equation for and to plug the result into the Fourier inversion on , yielding
which is the formula from the first step.
For the second step, we recall that (6.7) allows us to view as an inverse Fourier transform, which together with (6.8) gives rise to
(6.9)
We next show
(6.10)
For this purpose we introduce the change of variables , . is a well-defined bijection, with inverse map given by . Thus the map is a well-defined unitary map with inverse . We apply this change of variables to the row vectors of , and the formulas
show that
But then left multiplication by , effecting a sign flip in , yields
as desired. Hence (6.10) is established, which implies . In combination with (6.9) this yields
by choice of .
Hence the recovery of from via steps (1) and (2) is shown, and application of yields the desired result.
This establishes the inversion algorithm, as well as sufficiency of the conditions (i), (ii). It remains to show necessity of the conditions. First assume that condition (i) is violated, i.e. there exists a character such that
Fix , and let , as well as . Using the above notations and calculations, we then have
for all . On the other hand, , since is unitary.
Assuming condition (ii) to be violated, we obtain the existence of a nonzero vector such that . We pick an arbitrary nonzero and define , and . Then is nonzero, but the above computations yield for all :
by choice of . This concludes the proof.
The remaining open question is whether the conditions for matrix recovery established in the last theorem can actually be fulfilled by a suitably chosen vector . The positive answer is contained in the next theorem. Note that Lemma 2.5 then implies that almost every vector does matrix recovery.
Theorem 6.7
Let be given by
in the case , and by
for . Then does matrix recovery.
Proof 9
In the case , the matrix has dimensions , and it fulfills the rank condition (ii) iff . Condition (i) is fulfilled as soon as , and both conditions are fulfilled by .
Now assume and let . Then property is easy to verify: For the constant character of , we find . Noting that , we find for all remaining characters that
Therefore it remains to verify condition (ii), which is clearly equivalent to the equation . By definition of we get for all , with
Letting denote the matrix obtained by
we have , and
with value for the remaining entries.
The next step in the proof consists in successive applications of Gaussian elimination steps to the rows of : We first subtract the second row from the first, then the third row from the second, and so on, finally the th row from the th. This yields
Removal of the first row (and an index shift) results in the square matrix , given by
(6.11)
i.e.
(6.12)
Since is a submatrix of , the steps taken so far ensure that the proof is finished once we have shown that is invertible. For this purpose, we make one final Gaussian elimination step to take care of the last row of , by letting
(6.13)
for , and for .
For , we then get
(6.14)
Now fix . In case , there is only a single contribution on the right-hand side, leading to
For , we distinguish between even and odd cases. In the case where , the only nonzero contributions on the right-hand side of (6.13) are provided by and , leading to
In the case , the nonzero contributions also come from and , with corresponding entries resp. , resulting again in .
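A quick numerical sanity check of the matrix recovery property is possible as well (Python sketch below). By Lemma 2.5 and Theorem 6.7, almost every zero-sum generating vector works, so the rank test typically succeeds for a randomly drawn vector; the affine action convention and the choice of basis for the zero-sum subspace are assumptions made for illustration.

```python
import numpy as np

p = 5
rng = np.random.default_rng(3)

# Random zero-sum generating vector phi in the (p-1)-dimensional invariant subspace.
phi = rng.normal(size=p) + 1j * rng.normal(size=p)
phi -= phi.mean()

# Orbit of phi under the affine action x -> a*x + b mod p (convention assumed).
orbit = np.array([[phi[(pow(a, -1, p) * (x - b)) % p] for x in range(p)]
                  for a in range(1, p) for b in range(p)])

# Coordinates with respect to an orthonormal basis B of the zero-sum subspace.
F = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
B = F.conj().T[:, 1:]
coords = orbit @ B.conj()                      # row g: coordinates of pi(g) phi

# Matrix recovery on the zero-sum subspace means the p(p-1) rank-one projections
# span the full (p-1)^2-dimensional matrix space.
A = np.array([np.outer(c, c.conj()).flatten() for c in coords])
print(np.linalg.matrix_rank(A), (p - 1) ** 2)  # both 16 for almost every phi
```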
The appearance of the group Fourier transform in the matrix recovery algorithm is no coincidence, but rather a byproduct of the approach taken in this paper. The following remark explains these connections in some more detail, and sketches how Corollary 6.6 can possibly be generalized to other settings.
Remark 6.8
Recall from Proposition 2.4 that matrix recovery for group frames requires finding certain cyclic vectors for the conjugation representation
acting on . The natural representation-theoretic approach to this problem relies on the decomposition of into irreducibles, of the type
where denotes the multiplicity with which occurs in .
Slightly rewriting this decomposition allows to introduce the group Fourier transform in a rather natural fashion. As a starting point, we make the observation that any cyclic vector for yields an injective operator intertwining with the left regular representation acting on . A necessary and sufficient condition for the existence of such an intertwining operator is that is less than or equal to the multiplicity of in , and the latter coincides with .
Hence, we have obtained for all , as a necessary condition for matrix recovery. In this case, one can in effect realize by left action of on a suitable subspace , acting by left multiplication. This results in an intertwining operator
such that
(6.15)
In this realization, we get for arbitrary with and , that the matrix coefficient can be computed as
which shows that the coefficient transform is very closely related to the Fourier inversion formula (6.3). As a consequence, we can make the following observations, using similar reasoning as in the proof of Theorem 6.5:
(a) Invertibility of is equivalent to injectivity of on each .
(b) Recovery of from can be carried out as follows: (i) Compute ; (ii) for each , recover from the result of the previous step (as guaranteed by the condition in (a)); and (iii) invert .
(c) The additional challenge posed by matrix retrieval from measurements of the type
shows itself in applying the criteria and inversion methods from (a) and (b) to specific matrices , which depends quadratically on the entries in . Note that the explicit formulation of these criteria also hinges on the concrete choice of the intertwining operator .
We emphasize that these observations apply to any finite group, and they provide a unified representation-theoretic perspective on matrix recovery via group frames. Remark 6.9 below explains how the well-understood case of Heisenberg groups fits in this framework. Note that this point of view is quite similar to the treatment of admissible vectors (i.e., special continuous group frames arising from unitary actions of locally compact groups, and their relation to the Plancherel formula of such groups), as developed in [14].
The criteria and inversion method in Theorem 6.5 are closely related to the general program sketched in (a)-(c) above, but they do not constitute a faithful implementation. Proposition 6.4 decomposes the representation space into two orthogonal subspaces, and . On , is essentially the regular representation of the quotient group , whereas the action on is a multiple of the irreducible representation .
Now the decomposition of the regular representation of is effected by its characters, which can also be considered as (lifted) characters of the whole group . Thus the Fourier coefficients occurring in condition (i) from Theorem 6.5 and in Step (1) of the recovery algorithm could also be expressed in terms of the group Fourier transform of , which would increase the similarity to steps (a) and (b).
In the case of condition (ii) and recovery step (2) from Theorem 6.5, the group Fourier transform is more explicitly employed, in a way that corresponds quite closely to steps (a) and (b) above. The main difference is that we opted to formulate the injectivity condition (ii), which corresponds to (a), in terms of the matrix . This matrix has the advantage that its dependence on the vector is rather transparent. This simplifies verification of condition (ii), as can be seen in the proof of Theorem 6.7. The price to pay is the occurrence of the auxiliary matrices and in the inversion step (2).
Remark 6.9
In this remark we study phase retrieval and matrix recovery in the context of finite Gabor systems. The following is largely based on observations made in [5]. The main purpose of this remark, and the slightly novel contribution to the discussion of matrix recovery for the Schrödinger representation, is the explicit reference to the program outlined in Remark 6.8.
Fix , , and define the associated finite Heisenberg group by
with group law
The center of this group is given by , and .
The Schrödinger representation of acts on via
with understood modulo . The Schrödinger representation is irreducible, and the restriction of to the center is a character. Since we are interested in matrix retrieval, we will systematically omit the central variable in the following: All scalar actions cancel out in the conjugation action, which means that is necessarily in the kernel of . In particular, we will write .
In order to establish criteria for matrix recovery, we make the following two observations:
(6.16)
which means that (for dimension reasons) the matrices constitute an orthonormal basis of . The second observation is
(6.17)
We are interested in the conjugation action , and we therefore immediately rewrite (6.17) to obtain
(6.18)
Taking (6.16), (6.18) together, we obtain the unitary equivalence
with pairwise distinct homomorphisms defined by
Hence we have decomposed into irreducible and pairwise inequivalent representations of , with the intertwining operator explicitly given by
For rank-one operators of the form this specializes to
which is the so-called ambiguity function of .
In order to see how this function enters into the computation of , for an arbitrary matrix , we use the intertwining property of to compute
Since is just the set of characters of the abelian group , this formula for is recognized as a Fourier transform over . Fourier inversion then yields
(6.19)
This can be solved for the expansion coefficients if (and only if) is nowhere vanishing. The remaining step is then to invert .
These observations are summarized as follows:
(a) does matrix recovery iff the ambiguity function of is nowhere vanishing.
(b) Assume that fulfills the condition from part (a). Then an arbitrary matrix can be recovered from , by the following steps:
(i) Compute .
(ii) is given by
We do not claim originality for these observations. The fact that a nonvanishing ambiguity function guarantees phase retrieval for the short-time Fourier transform is well-known. Sufficiency of criterion (a) for matrix recovery can be found in [16, Theorem 2.2], whereas necessity is implicit in [16, Theorem 2.3]. We are not aware of a previous formulation of the recovery procedure from Part (b). However, it is implicit in the proof of [16, Theorem 3.16] (although only for ).
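To illustrate criterion (a) numerically, the following Python sketch computes the ambiguity function of a generic window and performs the equivalent rank test on the associated Gabor orbit of rank-one projections. The time-frequency shift convention is an assumption made for illustration; for a generic window both tests succeed.

```python
import numpy as np

d = 5
rng = np.random.default_rng(4)
phi = rng.normal(size=d) + 1j * rng.normal(size=d)   # generic window

def tf_shift(k, l, f):
    """Time-frequency shift: translate by k, then modulate with frequency l (one common convention)."""
    return np.exp(2j * np.pi * l * np.arange(d) / d) * np.roll(f, k)

# Ambiguity function A_phi(k, l) = <phi, pi(k, l) phi>.
amb = np.array([[np.vdot(tf_shift(k, l, phi), phi) for l in range(d)] for k in range(d)])
print(np.all(np.abs(amb) > 1e-12))                   # nowhere vanishing for generic phi

# Equivalent rank test: the d^2 rank-one projections of the Gabor orbit span all d x d matrices.
A = np.array([np.outer(g, g.conj()).flatten()
              for k in range(d) for l in range(d)
              for g in [tf_shift(k, l, phi)]])
print(np.linalg.matrix_rank(A) == d * d)             # True iff matrix recovery holds
```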
The parallels between items (a) and (b) in the previous paragraph and the corresponding steps in the program outlined in Remark 6.8 should now become obvious. Possibly the biggest obstacle to the proper understanding of this correspondence comes from the fact that the representation is trivial on the center of , i.e. it can be viewed as a representation of the abelian quotient group . As a consequence, the decomposition of only involves characters, and all irreducible subspaces are one-dimensional, which greatly simplifies both the criterion for matrix recovery, and the inversion formula. At the same time, these simplifications somewhat obscure the role of the group Fourier transform of the (nonabelian) Heisenberg group itself in steps (a) and (b).
In fact, it can be argued with some justification that this role can be safely ignored, since all computations can be carried out over the abelian factor group. However, the slightly more involved point of view developed here allows us to appreciate the wider representation-theoretic context, and permits a more direct comparison to the case of affine groups.
It is worthwhile to explicitly compute the generating vector for the unitarily equivalent permutation representation , using Fourier inversion.
Corollary 6.10
Let be a prime, and define by
in the case , and
for . Then does matrix recovery.
Remark 6.11
Condition (i) from Theorem 6.5 is equivalent to requiring that all diagonal operators can be recovered from their scalar products with .
This property provides an interesting interpretation of the phase retrieval problem for the permutation representation , acting on , by the observation that the diagonal operators in Fourier domain are just the convolution operators in time domain.
Hence condition (i) imposed on is equivalent to the fact that all convolution operators on can be recovered from their scalar products .
Now assume we are given vectors with . Using that , with , and the analogous formula for , together with the Plancherel formula and the convolution theorem for , we can compute for all that
Both ends of this equation are convolution products over the multiplicative group. Now condition (i) from Theorem 6.5 implies that convolution with is injective; compare the argument following equation (6.8). This implies
With this equality established, the convolution theorem yields
This means that if we compare the fixed lines , the assumption is equivalent to the condition
I.e., we get a family of so-called Pauli pairs. The study of Pauli pairs and Pauli uniqueness, i.e. the question whether can be recovered from and is a phase retrieval problem in its own right, with origins in quantum physics; see e.g. [17, 16]. Note that the whole family of Pauli pairs is generated from a single choice of and .
Remark 6.12
Corollary 6.10 allows us to derive an intriguing observation regarding phase retrieval for finite Fourier transforms. Given , let . Define for each the projection operator onto . In other words, is computed from by omitting the th frequency from the Fourier series of . Then using
and allows to write
whence
Thus the fact that does phase retrieval is equivalent to the following fact concerning Fourier phase retrieval:
For any pair : holds for all iff .
We are not aware of a previous source making this observation, nor of a more elementary proof than via Theorem 6.5. It is also unclear to us whether this observation holds for the Fourier transform over general cyclic groups, or only over those of prime order.
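A small numerical illustration of this measurement map (Python; the transform's sign convention and the concrete form of the projections are assumptions made for illustration): the moduli obtained by omitting one frequency at a time are invariant under a global phase, while a generic second vector produces different data.

```python
import numpy as np

p = 5
F = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)

def omit_frequency_moduli(x):
    """For each k, drop the k-th frequency of x and return the moduli of the result."""
    xhat = F @ x
    out = np.zeros((p, p))
    for k in range(p):
        yhat = xhat.copy()
        yhat[k] = 0                              # omit the k-th frequency
        out[k] = np.abs(F.conj().T @ yhat)
    return out

rng = np.random.default_rng(5)
x = rng.normal(size=p) + 1j * rng.normal(size=p)
c = np.exp(1j * 0.7)                             # unimodular phase factor
print(np.allclose(omit_frequency_moduli(x), omit_frequency_moduli(c * x)))   # True
print(np.allclose(omit_frequency_moduli(x),
                  omit_frequency_moduli(rng.normal(size=p) + 1j * rng.normal(size=p))))  # False
```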
7 Phase retrieval for 3-fold transitive permutation actions
Recall that as a slight reformulation of the results from the previous section, we have that , acting on , does phase retrieval. We will now use this observation to prove that every 3-fold transitive subgroup does phase retrieval when acting on .
We start by noting an auxiliary result which requires additional notation. Given , we denote
and let denote the orthogonal projection. The following is a phase propagation result, in a spirit similar to results in [7].
Lemma 7.1
Let , and assume that are such that for all three-element subsets there exists with . Then there exists with .
Proof 10
We proceed by induction, first noting that the case is obvious. For the induction step, assume the desired statement has been established for , and let be given, with for all 3-element subsets .
W.l.o.g. is nonzero, and since the union of all corresponding to three-element subsets spans , this implies that is nonzero, too. Furthermore, implies that the support of contains at least two elements .
In the case where , the fact that holds for all 3-element sets of the type implies for all , and thus follows.
We can therefore assume for the remainder of the proof that contains at least three elements, and we let and .
Now the induction hypothesis, applied with and all three-element subsets and , respectively, provides us with such that
The fact that implies that . But then we get on the one hand
and on the other
leading to
with nonzero . This finally entails . Now the fact that spans all of implies
Theorem 7.2
Let denote a 3-fold transitive permutation group. Pick doing phase retrieval for under the action of , and define as the trivial extension of . Then does phase retrieval on .
Proof 11
Fix any three-element subset , and consider the set
By a suitable reindexing of by , this set corresponds to the system which was chosen to do phase retrieval on the subspace perpendicular to the constant signal. It follows for all satisfying , and for all three-element subsets , that holds, with . Hence the assumptions of the previous lemma are fulfilled, and follows.
References
- [1] R. Alaifari, F. Bartolucci, and M. Wellershoff. Phase retrieval of bandlimited functions for the wavelet transform, 2020.
- [2] R. Balan, B. G. Bodmann, P. G. Casazza, and D. Edidin. Painless reconstruction from magnitudes of frame coefficients. J. Fourier Anal. Appl., 15(4):488–501, 2009.
- [3] R. Balan, P. Casazza, and D. Edidin. On signal reconstruction without phase. Appl. Comput. Harmon. Anal., 20(3):345–356, 2006.
- [4] A. S. Bandeira, J. Cahill, D. G. Mixon, and A. A. Nelson. Saving phase: injectivity and stability for phase retrieval. Appl. Comput. Harmon. Anal., 37(1):106–125, 2014.
- [5] I. Bojarovska and A. Flinth. Phase retrieval from Gabor measurements. J. Fourier Anal. Appl., 22(3):542–567, 2016.
- [6] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski. Phase retrieval via matrix completion. SIAM J. Imaging Sci., 6(1):199–225, 2013.
- [7] C. Cheng, I. Daubechies, N. Dym, and J. Lu. Stable phase retrieval from locally stable and conditionally connected measurements, 2020.
- [8] C. Cheng and D. Han. On twisted group frames. Linear Algebra Appl., 569:285–310, 2019.
- [9] A. Conca, D. Edidin, M. Hering, and C. Vinzant. An algebraic characterization of injectivity in phase retrieval. Appl. Comput. Harmon. Anal., 38(2):346–356, 2015.
- [10] Y. C. Eldar and S. Mendelson. Phase retrieval: stability and recovery guarantees. Appl. Comput. Harmon. Anal., 36(3):473–494, 2014.
- [11] L. Evans and C.-K. Lai. Conjugate phase retrieval on by real vectors. Linear Algebra Appl., 587:45–69, 2020.
- [12] A. Fannjiang and T. Strohmer. The numerics of phase retrieval. Acta Numer., 29:125–228, 2020.
- [13] K. Flornes, A. Grossmann, M. Holschneider, and B. Torrésani. Wavelets on discrete fields. Appl. Comput. Harmon. Anal., 1(2):137–146, 1994.
- [14] H. Führ. Abstract harmonic analysis of continuous wavelet transforms, volume 1863 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2005.
- [15] H. Führ and V. Oussa. Phase retrieval for nilpotent groups, 2022.
- [16] P. Grohs, S. Koppensteiner, and M. Rathmair. Phase retrieval: uniqueness and stability. SIAM Rev., 62(2):301–350, 2020.
- [17] P. Jaming. Phase retrieval techniques for radar ambiguity problems. J. Fourier Anal. Appl., 5(4):309–329, 1999.
- [18] R. Kueng, H. Rauhut, and U. Terstiege. Low rank matrix recovery from rank one measurements. Appl. Comput. Harmon. Anal., 42(1):88–116, 2017.
- [19] L. Li, T. Juste, J. Brennan, C. Cheng, and D. Han. Phase retrievable projective representation frames for finite abelian groups. J. Fourier Anal. Appl., 25(1):86–100, 2019.
- [20] R. D. Malikiosis and V. Oussa. Full spark frames in the orbit of a representation. Appl. Comput. Harmon. Anal., 49(3):791–814, 2020.
- [21] S. Mallat and I. Waldspurger. Phase retrieval for the Cauchy wavelet transform. J. Fourier Anal. Appl., 21(6):1251–1309, 2015.
- [22] J. Schwinger. Unitary operator bases. Proc. Nat. Acad. Sci. U.S.A., 46:570–579, 1960.
- [23] A. Terras. Fourier analysis on finite groups and applications, volume 43 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 1999.
- [24] S. Waldron. Group frames. In Finite frames, Appl. Numer. Harmon. Anal., pages 171–191. Birkhäuser/Springer, New York, 2013.
- [25] W. K. Wootters and B. D. Fields. Optimal state-determination by mutually unbiased measurements. Ann. Physics, 191(2):363–381, 1989.