Learning the structure of any Hamiltonian
from minimal assumptions
Abstract
We study the problem of learning an unknown quantum many-body Hamiltonian from black-box queries to its time evolution . Prior proposals for solving this task either impose some assumptions on , such as its interaction structure or locality, or otherwise use an exponential amount of computational postprocessing. In this paper, we present efficient algorithms to learn any -qubit Hamiltonian, assuming only a bound on the number of Hamiltonian terms, . Our algorithms do not need to know the terms in advance, nor are they restricted to local interactions. We consider two models of control over the time evolution: the first has access to time reversal (), enabling an algorithm that outputs an -accurate classical description of after querying its dynamics for a total of evolution time. The second access model is more conventional, allowing only forward-time evolutions; our algorithm requires evolution time in this setting. Central to our results is the recently introduced concept of a pseudo-Choi state of . We extend the utility of this learning resource by showing how to use it to learn the Fourier spectrum of , how to achieve nearly Heisenberg-limited scaling with it, and how to prepare it even under our more restricted access models.
1 Introduction
Identifying an unknown Hamiltonian is a central task in quantum physics. As the Hamiltonian fully characterizes the interactions within any closed system, learning it provides a tractable description of a potentially highly complex quantum system. This is in contrast to the more general task of quantum process tomography, which learns the dynamical map itself but is exponentially expensive in the size of the system, even when the system is closed [BKD14, BMQ22, HKOT23]. Applications of Hamiltonian learning include quantum metrology and sensing [GLM11, DRC17], quantum device engineering [Shu+14, IBFBP20], and certifying quantum computations [WGFC14]. By developing more accurate and efficient methods for understanding quantum systems, we can enable the design and control of such systems with finer precision.
Historically, Hamiltonian learning draws from the subject of quantum metrology [Ram50, Cav81, BIWH96], extending to heuristic protocols for learning many-body models with familiar types of interactions [GFWC12, WGFC14a, HBCP15, Wan+17, BAL19]. Recently, there has been a surge of interest in learning Hamiltonians of large many-body systems with more complicated interactions and rigorous theoretical guarantees [AAKS21, HKT24, GCC24, HTFS23, Car24, DOS23, BLMT24, BLMT24a]. This body of work substantially broadens the applicability of this task, especially as experimental capabilities in engineering large-scale quantum devices continue to improve. From a theoretical perspective, such works explore fundamental questions in the learnability of quantum systems, such as: what classes of Hamiltonians can be learned? How efficiently and accurately can this task be carried out? And what level of control or access to the system suffices? This paper aims to shed some light on these foundational questions.
Broadly speaking, an unknown Hamiltonian is accessed either through its static or dynamical behavior. In the former, we are tasked with deducing the Hamiltonian when the system is at equilibrium [Alh23]. This is typically given through its Gibbs state at some temperature [AAKS21, HKT24, GCC24, BLMT24], although learning from eigenstates has also been considered [QR19]. In the latter model, we are given black-box access to the time evolution operator of the unknown Hamiltonian , for some controllable amount of time . Again, we are tasked to output a classical description of , this time given the ability to prepare some initial probe states, evolve them under the dynamics of , and measure in some bases. Both versions of Hamiltonian learning have different merits and challenges, and are appropriate depending on the experimental scenario at hand. For example, the static model is more experimentally friendly, arising in many natural settings where one has limited control of the system. On the other hand, dynamical access requires finer experimental control but can serve as a more powerful learning resource [HKT24, HTFS23]. Learning from the time evolution operator also has close connections to applications in quantum simulation and algorithms. For the remainder of this paper, we will focus exclusively on the dynamical paradigm.
In this context, the primary cost of a Hamiltonian learning algorithm is the total evolution time, . This is the total amount of time under which we evolve by . We can understand this cost in relation to the fractional- or continuous-query model [BCCKS14], wherein queries to the time evolution operator cost less for smaller . Note that this is part of, but not the same as, the total time complexity of the algorithm, which includes the cost of additional quantum and classical computation (before, during, and after querying the system). In this work, we will not closely inspect the details of the computational complexity; we will only be concerned whether an algorithm is computationally efficient (polynomially scaling in all parameters) or not. Other figures of merit include the time resolution , which dictates how finely an experimenter must be able to control the time evolution; , the total number of prepare–evolve–measure experiments to run; and , the number of ancilla qubits used either to perform supporting quantum computation/control or to coherently store queries in quantum memory.
Most Hamiltonian learning results operate under the assumption that the structure of the unknown Hamiltonian is fully or at least partially known. The structure of a Hamiltonian is the set of terms for which . For some algorithms, the learning guarantees can be lost in the worst case when has even a single unidentified term [HKT24, HTFS23]. For others, the algorithm may succeed at learning the parameters of a partially known structure, but there is no guarantee for efficiently learning the unidentified terms [SLP11, BAL19, ZYLB21]. To address this deficiency, Bakshi, Liu, Moitra, and Tang [BLMT24a] formalized the problem of Hamiltonian structure learning, and they provided an efficient algorithm to solve it. Specifically, when is a constant-local -qubit Hamiltonian (with no restriction on the range of its local interactions) they showed that it is possible to identify all local terms with , and furthermore estimate the values of all such to additive error . In terms of evolution time, their algorithm costs , where is an effective sparsity parameter of the qubits’ interaction graph.
With nearly optimal scaling, this breakthrough result essentially completely solves the problem for local spin Hamiltonians. However, it remained an open question as to how one performs structure learning for even broader classes of Hamiltonians. For example, suppose that we assume is -local, but there are in fact a few -local terms, where . This can occur, for instance, by (undesired) spin-squeezing effects [KCM22]. In this setting, the algorithm of [BLMT24a] can learn all the -local terms, but not the -local terms. Furthermore, even if one were given the value of , if say then the complexity of their algorithm would be superpolynomial.
More generally, we pose the scenario where is completely arbitrary, with no locality restriction on its terms. Even if , it is unclear how to identify these terms from a pool of candidate terms within polynomial time. [BLMT24a] refers to this problem as the “sparse, nonlocal” setting and suggests that their algorithm could be applied here as well; however, it is unclear how to avoid an exponential amount of computation using either their Goldreich–Levin-like algorithm [GL89] or general shadow tomography ideas [Aar20, BO21]. To our knowledge, there are only a handful of proposals in the literature which can operate in this setting while at least avoiding an exponential number of experiments [YSHY23, Car24, CW23]. However, the algorithms of Caro [Car24] and Castaneda and Wiebe [CW23] would use an exponential amount of classical computation to solve this task, essentially brute-force checking over all possible terms. Meanwhile, the analysis by Yu, Sun, Han, and Yuan [YSHY23] assumes that the terms are distributed uniformly among the set of Pauli operators, and therefore only guarantees that their algorithm succeeds with high probability (when is sufficiently small) over the draw of from this ensemble of Hamiltonians. Their numerics suggest good heuristic performance but do not guarantee the ability to structure-learn any given sparse, nonlocal Hamiltonian in the worst case.
We therefore ask: can efficient structure learning be accomplished in this setting, and if so, what is the minimal amount of prior information required? Because we require an efficient algorithm, we only consider Hamiltonians for which , as otherwise merely writing down the description of is intractable. As an example of prior information, consider again the algorithm of [BLMT24]. Implicit in their problem input is a bound on the effective sparsity and a “local one-norm” of (i.e., the sum of strengths of all terms involving any given qubit). Bounds on these quantities are necessary in order to choose appropriate values for and . On the other hand, this aspect is trivial in non-structure-learning settings because knowing the structure is already more than sufficient to establish such bounds. We answer both questions by describing structure learning algorithms which essentially only require a bound on to run correctly. As long as , our algorithms are efficient in all metrics.
1.1 Main results
In this paper, we propose two algorithms for learning the structure of any unknown -qubit Hamiltonian. Let us begin by providing a technical definition for what we mean by the structure of a Hamiltonian.
Definition 1.1.
An -qubit Hamiltonian is a Hermitian matrix
(1) |
where are the unique Pauli terms of , and are the parameters of . The labeling is arbitrary; defining the Pauli structure of as the set , we take convention that the labels correspond to the elements of . As a slight abuse of notation, we refer to as the vector of nonzero parameters, which we can pad with zeros as necessary.
For normalization, we adopt units of energy such that .
Our definition of structure is understood with respect to an operator basis. In this paper we consider the Pauli basis , as it is arguably the most natural choice for studying many-qubit systems. Other choices of basis are possible in principle, as long as they admit an efficient classical description; such a choice should be thought of as part of the definition of the learning problem. In this paper, we will exclusively consider the Pauli structure, which is a quantum analogue to the Fourier spectrum of a classical Boolean function. We will be primarily concerned with sparse (), nonlocal () Hamiltonians in this basis.
Note that our definition of explicitly excludes the identity component, as it merely constitutes an energy shift. This shift contributes to the time evolution as a global phase, and is therefore fundamentally unobservable within our black-box access models. However, let us point out that we will not be restricted to traceless Hamiltonians as inputs; our techniques will essentially extract the traceless part of any given Hamiltonian “for free.”
With these considerations in mind, let us define the learning problem as follows.
Problem 1.2.
Let be an arbitrary, unknown -qubit Hamiltonian, its Pauli structure, and a desired accuracy parameter. In the Hamiltonian structure learning problem, we are given as input: query access to applications of for tunable and an integer . We are tasked with outputting a description such that each , with probability at least .
The success probability is conventional; by a standard median-boosting argument, any algorithm that solves this problem can be repeated times to amplify the probability above .
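As a concrete illustration of this boosting step, the following Python sketch (purely a toy: the base routine is a stand-in noise model, not our actual learning algorithm) shows the coordinate-wise median of independent repetitions driving the failure probability down:

```python
import numpy as np

rng = np.random.default_rng(0)
true_params = rng.normal(size=8)     # stand-in for the unknown Hamiltonian parameters
eps = 0.1                            # target accuracy

def base_run():
    """Stand-in base algorithm: each run is eps-accurate with probability ~3/4."""
    noise = rng.normal(scale=eps / 3, size=true_params.shape)
    bad = rng.random(size=true_params.shape) < 0.25          # occasional gross failures
    return true_params + noise + bad * rng.normal(scale=10 * eps, size=true_params.shape)

def boosted(repetitions):
    """Coordinate-wise median over independent repetitions of the base algorithm."""
    return np.median([base_run() for _ in range(repetitions)], axis=0)

for reps in [1, 5, 21, 51]:
    fails = sum(np.max(np.abs(boosted(reps) - true_params)) > eps for _ in range(2000))
    print(f"{reps:3d} repetitions: empirical failure probability ~ {fails / 2000:.3f}")
```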
The particular type of access to that we are given can significantly affect the learning guarantees for solving 1.2. At a minimum, we will suppose discrete control over the Hamiltonian in the sense of [DOS23]. This is the standard type of quantum control assumed in most settings.
Definition 1.3.
Discrete quantum control is the ability to run the circuit
(2)  $V_K \,(e^{-iHt_K} \otimes I_{\mathrm{anc}})\, \cdots\, V_2 \,(e^{-iHt_2} \otimes I_{\mathrm{anc}})\, V_1 \,(e^{-iHt_1} \otimes I_{\mathrm{anc}})\, V_0,$
where $V_0, \dots, V_K$ are discrete control operations (quantum gates) acting jointly on the $n$-qubit system and some ancilla qubits, and the $t_1, \dots, t_K$ are some sequence of tunable evolution times.
Without some form of quantum control, [DOS23] showed that any Hamiltonian learning algorithm cannot learn with precision beyond the standard quantum limit (up to some technical consideration). Along with discrete control, one of our algorithms uses an additional form of control to beat the standard limit: the ability to implement time reversal.
Definition 1.4.
Access to time-reversal dynamics is the ability to apply for both and .
For example, systems possessing a chiral symmetry obey , which immediately gives rise to time-reversal dynamics. Alternatively, in the context of algorithmic certification, one typically has direct control over as a tunable, real parameter. The power of time reversal in learning was previously considered in [Sch+23, CSM23, SHH24], where exponential separations for certain unitary learning tasks were established between protocols with access to time reversal and those without. However, these results do not necessarily imply such a separation in our setting (i.e., learning Hamiltonians with sparse structures).
With access to discrete quantum control and time-reversal dynamics, we prove the following theorem.
Theorem 1.5 (Structure learning with time reversal).
There exists a quantum algorithm that solves 1.2 under the discrete control and time-reversal access model. The algorithm queries for a total evolution time of , requires a time resolution of , runs experiments, and uses quantum and classical computation. The discrete control operations act on an ancillary computational register of qubits.
This algorithm nearly achieves the Heisenberg limit of quantum metrology, up to a factor of hidden in the notation.
In more restricted experimental settings, it is unrealistic to assume the ability to perform time reversal. Therefore, we consider a second access model that is more conventional: only having discrete quantum control and forward-time evolutions.
Theorem 1.6 (Structure learning without time reversal).
There exists a quantum algorithm that solves 1.2 under the discrete control access model alone. The algorithm queries for a total evolution time of , requires a time resolution of , runs experiments, and uses quantum and classical computation. The discrete control operations act on an ancillary computational register of qubits.
Above, we have phrased the complexity in terms of , mirroring the statements of related results in the literature [Car24, CW23]. Without knowledge of , we simply apply the bound (which holds due to our normalization convention ). We remark that it is not possible to make the reverse replacement in Theorem 1.5 with our current analysis.
The proof of Theorem 1.6 can in fact be strengthened to almost recover the conventional scaling in the precision. Although it is a strict asymptotic improvement in every metric, the resulting algorithm is “galactic”: for example, to get a scaling of , constants on the order of appear. (On the other hand, smaller speedups feature more reasonable constants; for instance, achieving only incurs a factor of relative to the hidden constants of Theorem 1.6.) We therefore treat this result as a curious observation, and we defer any further discussion and analysis to Appendix B.
Corollary 1.7 (Galactic structure learning without time reversal).
There exists a quantum algorithm that solves 1.2 under the discrete control access model alone. For any real constant , the algorithm queries for a total evolution time of , requires a time resolution of , runs experiments, and uses quantum and classical computation. The discrete control operations act on an ancillary computational register of qubits.
1.2 Related work
In this section we give an overview of prior works on Hamiltonian learning from time evolution, as well as some related topics. The literature on Hamiltonian learning is vast, so this discussion will not be comprehensive. Table 1 catalogs various results in Hamiltonian learning and shows how they compare to our algorithms. In the table, whether we say an algorithm can achieve structure learning (SL) depends on the class of Hamiltonians it assumes. This is a subtle but important point: for example, applying any algorithm under the “sparse, nonlocal” category to constant-local Hamiltonians can achieve SL within that smaller class (by simply searching over the entire -sized pool of candidate terms).
Reference | Hamiltonian class | SL? | Control?
[ZYLB21]∗ | Geometrically local | Yes | None
[SMDWB24] | —— | No | None
[HKT24] | Low intersection | No | None
[HTFS23] | —— | No | Discrete
[DOS23] | —— | No | Continuous
[BLMT24a] | Effectively sparse, bounded strength | Yes | Discrete
[YSHY23] | Sparse, nonlocal | Yes† | Discrete
[Car24] | —— | No‡ | Discrete
[OKSM24] | —— | No | Discrete
[CW23]§ | —— | No‡ |
[CW23]§ | —— | No |
Here | Sparse, nonlocal | Yes |
Here | —— | Yes | Discrete
Here¶ | —— | Yes | Discrete
∗The structure-learning analysis was demonstrated post hoc in [BLMT24a].
†Their analysis assumes that the unknown terms are distributed uniformly among all Pauli operators.
‡SL can be achieved if given exponential classical resources (also incurs a factor of in and ).
§Their results were originally stated with -error, which we have converted to -error here.
¶Can be improved to and , with potentially enormous constants.
Algorithms capable of (local) structure learning. While [BLMT24a] first put forth the explicit problem of Hamiltonian structure learning, it was recognized that prior results could be reanalyzed in this context as well. At the core of many of these results, including [BLMT24a] itself, is the so-called time-derivative estimator [SMLKR11, SLP11, ZYLB21, SMDWB24]. This technique is essentially based on the relation for small , where is a mixed state defined by any Pauli operator that anticommutes with the term to be learned. This relation does not depend on the locality of the terms, and therefore even a naive term-by-term approach can estimate any sparse, nonlocal Hamiltonian in polynomial time. However, achieving structure learning by this approach requires that the size of the pool of candidate terms be polynomially bounded, which we do not assume in our results. Furthermore, parallelizing the estimation of many ’s per experiment relies on a classical shadows-like argument that only applies to local terms [HKP20].
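To make the time-derivative idea concrete, here is a small numpy sketch of the standard version of this estimator as we understand it (our own illustration, not the cited algorithms verbatim): choosing a probe state and a measured Pauli whose product is proportional to the targeted term makes the short-time slope of the expectation value proportional to that term's coefficient.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def evolve(H, rho, t):
    w, V = np.linalg.eigh(H)                       # H is Hermitian
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ rho @ U.conj().T

# Unknown 2-qubit Hamiltonian; we target the X⊗Z coefficient (0.7 here).
H = 0.7 * kron(X, Z) + 0.4 * kron(Z, Y) - 0.3 * kron(Y, I2)

# Probe rho = (I + Y⊗Z)/4 and observable A = Z⊗I satisfy A·(Y⊗Z) ∝ X⊗Z, so
#   d/dt tr(A rho(t)) |_{t=0} = 2 * lambda_{XZ}, with no contribution from other terms.
rho = (kron(I2, I2) + kron(Y, Z)) / 4
A = kron(Z, I2)

dt = 1e-4
slope = (np.trace(A @ evolve(H, rho, dt)).real - np.trace(A @ rho).real) / dt
print("estimated coefficient:", slope / 2)          # ≈ 0.7
```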
The algorithms of Caro [Car24] and Castaneda and Wiebe [CW23] also experience similar advantages and limitations. However, one particular advantage they have over the time-derivative approaches is their use of shadow tomography variants (those of [CCHL22] and [HKP20], respectively), which improves the parallel estimation of many potentially nonlocal observables. These algorithms require more sophisticated quantum resources however, needing at least additional qubits to prepare the (pseudo-)Choi states central to their strategies. Notably, [Car24] requires a quantum memory of size on top of the Choi qubits, which is inherited from the shadow tomography protocol of [CCHL22]. [CW23] proposes two approaches, the second of which is based on the gradient-estimation-based algorithm of [Hug+22], achieving but using a large ancilla register of size . Also, both algorithms of [CW23] require the ability to apply .
Other algorithms of note include that of Yu, Sun, Han, and Yuan [YSHY23], which is based on sparse Pauli channel learning [FW20]. To our knowledge, this is the only prior algorithm which can specifically target sparse, nonlocal Hamiltonians while maintaining efficient scaling. However, their rigorous analysis requires them to assume that the Pauli terms are uniformly distributed over all possible combinations. Their results therefore only hold with high probability over the draw of . The algorithm of Odake, Kristjánsson, Soeda, and Murao [OKSM24] applies a linear transformation to the Hamiltonian to isolate each term , thereby allowing them to learn the parameter via robust phase estimation [KLY15]. Iterating this procedure over all known terms furnishes the Heisenberg-limited estimation of . One particular downside to their approach is that their , where one factor is incurred to approximate the linear transformation, and then another factor is taken to get appropriate guarantees on the robust phase estimation.
Finally, Gutiérrez [Gut24] describes an algorithm to learn a -local Hamiltonian with some unique properties. First, it is strongly limited to because of the use of a noncommutative Bohnenblust–Hille inequality [HCP23, VZ24]. However, this inequality is also what allows it to achieve -error without explicit dependence on . In contrast, all other attempts to get -error pay the standard factor of for converting from the -norm. Assuming , his algorithm has query complexity . Unfortunately this features rather large constants in the exponents: unraveling the analysis for unnormalized shows that and . The learning algorithm also requires applications of , inherited from the use of the quantum Goldreich–Levin algorithm of [MO10] as a subroutine.
Learning Pauli spectra. Prior works have also studied the task of learning an operator’s Pauli structure, sometimes referred to as its Pauli or Fourier spectrum, given query access to that operator [AS07, MO10, CNY23, Ang23, Gut24, BLMT24a]. These can be understood as quantum analogues to the Goldreich–Levin algorithm, which allows one to efficiently estimate any Fourier coefficient of a Boolean function from queries to that function [GL89]. Most works study the Pauli spectrum of a unitary operator, developing algorithms to test and learn unitaries acting on only a constant subsystem of qubits (or more generally, possessing a constant amount of influence over the system) [AS07, MO10, CNY23, Ang23].
Most relevant to the problem of Hamiltonian learning are the algorithms of Bakshi, Liu, Moitra, and Tang [BLMT24a] and Gutiérrez [Gut24], which approximate the spectrum of from that of using series approximations at small . Both results are limited to local Hamiltonians, featuring explicit exponential scalings with respect to the locality of the Pauli terms being learned. In contrast, our algorithm works with an object called the pseudo-Choi state of [CW23], which gives direct access to the Pauli spectrum of rather than deducing it from its time evolution unitary. As such, our approach is not hindered by any locality considerations. At its core, our algorithm is efficient as long as , and in fact can be applied to learn the Pauli spectrum of any operator, not necessarily Hermitian or unitary, as long as one can efficiently block encode it.
Time reversal in learning problems. Time reversal is a tool that has been utilized with success in various modern quantum experiments [Gär+17, Mi+21, Bra+22, Col+22, Li+23]. In learning tasks, the intuitive power of time reversal is its ability to “refocus” the evolution of a physical system, enhancing the signal of important many-body correlations that might only manifest at late time scales [Sch+23]. Theoretically, superpolynomial separations have been established between learning protocols with and without access to time-reversal dynamics. Cotler, Schuster, and Mohseni [CSM23] considered the task of distinguishing whether a Haar-random unitary was sampled from or . Meanwhile, Schuster, Haferkamp, and Huang [SHH24] gave the task of distinguishing between two different classes of pseudorandom quantum circuits. In both cases, it was proven that no time-forward experiment can solve these tasks in polynomial time, but protocols using time-reversed dynamics to probe out-of-time-ordered correlators (OTOCs) can solve them very efficiently.
In our work, we do not use time reversal to probe OTOCs, but rather to implement quantum simulation techniques like the quantum singular value transformation (QSVT) [GSLW19]. In a similar spirit to the works discussed above, we observe that our time-reversed protocol can nearly reach the Heisenberg limit, while our time-forward algorithm does not surpass the standard quantum limit. However, as we do not consider lower bounds in this paper, we do not establish analogous separations for the sparse Hamiltonian learning problem.
Indistinguishable particles. We have restricted attention to many-qubit systems, although we note that there has been some progress on learning bosonic [LTGNY24] and fermionic [NLY24, MH24] Hamiltonians. However, these algorithms only consider Hubbard models, and they cannot perform structure learning due to their use of the Hamiltonian reshaping strategy of [HTFS23]. Conversely, it is interesting to point out that our results can be applied to these systems under an appropriate qubit mapping. For instance, we immediately get an efficient learning algorithm for any fermionic Hamiltonian with at most polynomially many interaction terms. This is true even if one uses the nonlocal Jordan–Wigner transformation [JW28] to represent the fermions as qubits.
1.3 Technical overview
In this section, we will describe the high-level techniques used to build our algorithms and prove our results. As per 1.2, we are given an unknown -qubit Hamiltonian and an integer with the promise that is supported on at most Pauli operators. Therefore, we write , with the understanding that may potentially be padded with zeros. At times, it is natural to consider the size of the Hamiltonian in terms of its operator norm . Whenever the learner does not have access to this quantity, we keep in mind that the bound can always be used in its stead.
Learning with pseudo-Choi states
The key technique enabling our learning algorithms is the pseudo-Choi state, a concept introduced by Castaneda and Wiebe [CW23]. They define a version with a reference control qubit, but it is straightforward to consider a referenceless version as well. We will use both variants in our algorithms.
Definition 1.8.
Let $\lvert \Omega \rangle = 2^{-n/2} \sum_{x \in \{0,1\}^n} \lvert x \rangle \lvert x \rangle$. The pseudo-Choi states of an $n$-qubit operator $A$, with and without reference, are
(3)  $\lvert \psi(A) \rangle = \dfrac{(A \otimes I)\lvert \Omega \rangle \lvert 0 \rangle + \lvert \Omega \rangle \lvert 1 \rangle}{\sqrt{\mathcal{N}}}$,
(4)  $\lvert \phi(A) \rangle = \dfrac{(A \otimes I)\lvert \Omega \rangle}{\sqrt{\mathcal{N}'}}$,
respectively. $\mathcal{N}$ and $\mathcal{N}'$ are normalization constants.
This state draws analogies to the usual Choi state of a quantum channel . However, since does not necessarily induce a quantum channel, pseudo-Choi states do not exhibit channel–state duality in the sense of the Choi–Jamiołkowski isomorphism. Nonetheless, they serve as a powerful learning resource: [CW23] shows that Clifford classical shadows [HKP20] can be used to estimate all parameters of with copies of . Here, is a normalization factor necessary for the preparation of this state. This measurement scheme is therefore able to simultaneously learn a large number of observables with no restriction on their locality. We describe this protocol in more detail in Section 3.
To prepare such states, [CW23] calls upon a subroutine to block encode the Hamiltonian from its time evolution [LC17, GSLW19]. This involves block encoding the operator as a linear combination of unitaries (LCU) [CW12], followed by invoking the QSVT [GSLW19] to apply a polynomial approximation of the function onto that block. The normalization ensures that the spectrum being transformed lies in the appropriate domain of the approximation. This technique is very accurate: the polynomial only needs degree to get spectral error , and choosing suffices for the learning problem. Then from queries to the controlled time evolution with resolution , one can construct the block encoding of . Finally, the pseudo-Choi state is prepared probabilistically from the block encoding by measuring some ancilla flag qubits; this succeeds with probability at least , so a constant number of repetitions per copy suffices.
The limitations of this approach are two-fold. First, it does not supply an efficient way to learn the terms if they are a priori unknown. Second, the state-preparation procedure requires controlled forms of the time evolution, and . Access to these resources is unnatural in most learning frameworks, and there are no-go conditions for controlling black-box unitaries [AFCB14, GST24]. Furthermore, in the time-forward access model one cannot directly query the (controlled) inverses either. In order to make the pseudo-Choi state method applicable to more conventional learning scenarios, we seek a way to lift these limitations.
Preparing pseudo-Choi states without controlled evolutions
To get our main results Theorems 1.5 and 1.6, we show how to prepare pseudo-Choi states without either controlled time evolutions or (in the second access model) time reversal. The guarantees on our state-preparation algorithms are as follows.
Theorem 1.9 (Pseudo-Choi state preparation with uncontrolled time-reversal queries).
Let , . From access to time-reversal dynamics, we can prepare -close states to and with success probabilities at least and , respectively. For either state, each preparation attempt uses evolution time to with time resolution . The number of ancilla qubits required is at most .
Theorem 1.10 (Pseudo-Choi state preparation with uncontrolled time-forward queries).
Let , . From access to only time-forward dynamics, we can prepare -close states to and with success probabilities at least and , respectively. The parameter is known analytically and obeys . For either state, each preparation attempt uses evolution time to with time resolution . The number of ancilla qubits required is .
Observe that the success probability for is always at least . Meanwhile for , the dependence of will cancel nicely with its use case (which we describe later). The demand for accuracy is a consequence of the fact that encodes the normalized parameters . In order to get the desired accuracy in learning , we need the state-preparation error to be for a small constant . Below, we describe the techniques that make these constructions possible.
Controlization to construct block encodings
We remove the need for controlled time evolutions through a technique called controlization. While various approaches for controlization have been proposed [Jan02, Zho+11, FDDB14, NSM15, DNSM19, CBW24], we employ the one described by Odake, Kristjánsson, Taranto, and Murao [OKTM23] due to its simplicity and universal applicability. Let be the traceless part of . The subroutine generates a mixed-unitary channel which approximates to any target diamond-norm distance . In terms of evolution time, controlization has the nice feature that it exhibits zero overhead: to approximate , we query for a total of times. The downside is that each query requires a small time resolution of , so our main task is to bound what time resolution suffices to achieve the total target error.
The controlization technique of [OKTM23] is based on Pauli twirling and randomized product formulas. It is well known that averaging over the Pauli group gives the identity
(5)  $\dfrac{1}{4^n} \sum_{P \in \mathcal{P}_n} P H P = \dfrac{\operatorname{tr}(H)}{2^n}\, I.$
This can be extended by appending a control qubit and replacing each Pauli gate by . In this case the identity becomes
(6)  $\dfrac{1}{4^n} \sum_{P \in \mathcal{P}_n} \big(\lvert 0 \rangle\langle 0 \rvert \otimes P + \lvert 1 \rangle\langle 1 \rvert \otimes I\big)(I \otimes H)\big(\lvert 0 \rangle\langle 0 \rvert \otimes P + \lvert 1 \rangle\langle 1 \rvert \otimes I\big) = \lvert 0 \rangle\langle 0 \rvert \otimes \dfrac{\operatorname{tr}(H)}{2^n} I + \lvert 1 \rangle\langle 1 \rvert \otimes H.$
Exponentiating this twirled Hamiltonian gives, up to some global phase, the unitary . We then show that this idea can be easily extended to synthesize multi-controlled unitaries , where is any -qubit control string.
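A quick numerical check of the twirling identity and its controlled variant for a random two-qubit Hamiltonian (the control convention below, with the Pauli applied on the |0⟩ branch, is our illustrative choice):

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

n = 2
paulis = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=n)]   # all 4^n Paulis

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                           # random Hermitian H

# Eq. (5): 4^{-n} sum_P P H P = (tr H / 2^n) * I
twirl = sum(P @ H @ P for P in paulis) / 4**n
print(np.allclose(twirl, np.trace(H) / 2**n * np.eye(4)))              # True

# Controlled variant: replace each P by ctrl-P (here, P applied on the |0> branch).
proj0, proj1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
ctwirl = np.zeros((8, 8), dtype=complex)
for P in paulis:
    cP = np.kron(proj0, P) + np.kron(proj1, np.eye(4))
    ctwirl += cP @ np.kron(I2, H) @ cP / 4**n
target = np.kron(proj0, np.trace(H) / 2**n * np.eye(4)) + np.kron(proj1, H)
print(np.allclose(ctwirl, target))                                     # True
```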
In order to evolve by the twirled Hamiltonian, Trotter’s product formula tells us that . It is enticing to simply concatenate the unitaries , but clearly this is inefficient; even ignoring the Trotter error, there are exponentially many terms. Fortunately, Campbell’s qDRIFT protocol [Cam19] was designed with precisely this concern in mind. Rather than include each term in the simulation, qDRIFT constructs random product formulas wherein terms are selected with probability proportional to their norm. In our context, all terms have the same weight, so we just sample from uniformly. By treating this ensemble of random product formulas as a mixed-unitary channel (wherein we draw a new formula for each instance), the analysis of [Cam19] (later tightened in [CHKT21]) shows that qDRIFT sequences of length suffice to get diamond-norm distance at most . Since we only need applications running for time , this simplifies to .
To determine what we need to maintain our learning guarantees, note that our approximation prepares a mixed state from which we probabilistically prepare the pseudo-Choi states. A careful error analysis shows that error in trace distance of the mixed states (hence in diamond-norm distance of the channels) suffices. Each block-encoding circuit makes queries to , so choosing nets the claim of Theorem 1.9. (For the success probabilities, observe that always holds because of the reference term, while is proportional to the norm of the block-encoded matrix.)
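The sketch below puts the pieces together for a single system qubit: each segment conjugates a short uncontrolled evolution by a uniformly random controlled Pauli, the exactly averaged segment channel is composed N times, and the result converges to the controlled evolution of the traceless part of H as N grows. The Frobenius distance between superoperator matrices is used as a crude stand-in for the diamond norm, and the control convention matches the illustration above; the total evolution time is t independently of N.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)

def expmH(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def superop(kraus):
    """Matrix of the channel rho -> sum_K K rho K^dag acting on vectorized rho."""
    return sum(np.kron(K, K.conj()) for K in kraus)

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2
H0 = H - np.trace(H) / 2 * I2                      # traceless part of H
t = 1.0

# Target channel: conjugation by the controlled evolution (evolution on the |1> branch).
target = superop([np.kron(P0, I2) + np.kron(P1, expmH(H0, t))])

for N in [1, 4, 16, 64, 256]:
    step = expmH(np.kron(I2, H), t / N)            # plain e^{-iH t/N} on the system
    kraus = []
    for P in [I2, X, Y, Z]:
        cP = np.kron(P0, P) + np.kron(P1, I2)      # random controlled Pauli, weight 1/4
        kraus.append(cP @ step @ cP / 2.0)
    segment = superop(kraus)                       # exactly averaged one-segment channel
    total = np.linalg.matrix_power(segment, N)
    print(f"N = {N:4d}: distance to controlled evolution ~ "
          f"{np.linalg.norm(total - target):.2e}")
```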
Block encoding the matrix logarithm without time reversal
The construction above removes the need for , but still requires negative-time queries in order to implement the QSVT (and even the LCU for ). To develop a completely time-reversal-free approach, we construct an entirely different quantum circuit for block encoding . The basic idea is simple: we appeal to the well-known power series for the (matrix) logarithm [Hal15].
For any square complex matrix $M$ such that $\lVert I - M \rVert < 1$, the following identity holds:
(7)  $\log(M) = -\displaystyle\sum_{k=1}^{\infty} \dfrac{(I - M)^k}{k}.$
Letting , this formula gives us a relationship between the Hamiltonian and a linear combination of its time evolution operators which only run forward in time. Block encoding the LCU still requires controlled unitaries, but by our prior discussion we now have a technique to approximate them. Note that our normalization requirement is slightly relaxed compared to the QSVT approach, but for the sake of clarity let us impose as before. (We can actually relax the QSVT normalization further: one can allow for any by increasing the polynomial degree to [GSLW19, Lemma 70].)
Technically, Eq. 7 is not written in the form of an LCU. It is also a series with an infinite number of terms. Our analysis must therefore determine where we should truncate the series to maintain sufficient accuracy, and how to express the truncated sum as a proper LCU. The former is straightforward: since , we have . Let be the partial sum of Eq. 7 up to terms. An application of the triangle inequality gives , which means that ensures a truncation error of at most .
To express as an LCU, we expand the terms using the binomial theorem. With the help of combinatorial identities, we can derive analytical expressions for coefficients such that . Techniques for implementing LCUs are well established [CW12, GSLW19], requiring controlled instances of the unitaries on only control qubits. The bottleneck of any LCU implementation is the size of its coefficient -norm, . Using the analytical form of the ’s, we can compute an essentially tight bound of . Recalling that we need to take error below , we immediately get and Theorem 1.10 follows. For learning the Hamiltonian parameters, our LCU now prepares the state (still with at least probability), which requires us to refine the learning precision by . In turn, this raises the copy complexity to . Each copy costs evolution time to produce, leading to the claimed result in Theorem 1.6.
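The following numpy sketch illustrates both steps on a random 3-qubit Hamiltonian (normalized by hand so that ‖Ht‖ is well inside the branch of the logarithm): the truncation error decays geometrically, the binomially expanded LCU reproduces the partial sum exactly, and the 1-norm of the LCU coefficients grows with the truncation order, which is the bottleneck discussed above. The signs and normalization are our reading of the standard series, not a verbatim transcription of the construction in the text.

```python
import numpy as np
from math import comb

def expmH(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2
H *= 0.4 / np.linalg.norm(H, ord=2)                # normalize so ||H|| = 0.4
t = 1.0
M = expmH(H, t)                                    # forward-time evolution e^{-iHt}
Id = np.eye(8)

# From log(M) = -sum_{k>=1} (I - M)^k / k and log(e^{-iHt}) = -iHt:
#   H = -(i/t) * sum_{k>=1} (I - M)^k / k.
for K in [2, 5, 10, 20]:
    partial = sum(np.linalg.matrix_power(Id - M, k) / k for k in range(1, K + 1))
    H_K = -1j / t * partial
    # Binomial expansion turns the partial sum into an LCU of forward evolutions M^j:
    #   H_K = sum_{j=0}^{K} c_j M^j,  c_j = -(i/t) sum_{k >= max(j,1)} (-1)^j C(k,j)/k.
    c = np.zeros(K + 1, dtype=complex)
    for k in range(1, K + 1):
        for j in range(k + 1):
            c[j] += -1j / t * (-1) ** j * comb(k, j) / k
    lcu = sum(c[j] * np.linalg.matrix_power(M, j) for j in range(K + 1))
    print(f"K = {K:2d}: truncation error {np.linalg.norm(H - H_K, 2):.1e}, "
          f"LCU consistency {np.linalg.norm(H_K - lcu, 2):.1e}, "
          f"coefficient 1-norm {np.abs(c).sum():.1f}")
```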
Identifying Pauli spectra using pseudo-Choi states
So far we have discussed the utility of the pseudo-Choi state with reference, as introduced by [CW23]. Let us now turn to the referenceless version as in Eq. 4. We invoke it in order to accomplish the structure identification task. The reference was essential to learn values of the Hamiltonian coefficients , but it serves no purpose when we only care about detecting which coefficients are large relative to the others. By performing Bell measurements [Mon17] on copies of , we can learn which terms in have weight greater than .
Theorem 1.11 (Bell sampling for structure learning).
With high probability, we can generate a set of Pauli labels containing . This process consumes copies of , measuring each in the Bell basis.
Observe that while we need copies, the probability of producing a single copy is (time-reversal) or (time-forward) from evolution time. It is a standard fact that Bernoulli trials are sufficient to get successes. Hence, (time-reversal) or (time-forward) total time evolution is used to learn the structure of . Unlike prior approaches to estimating Pauli spectra of unitary operators [MO10, Ang23, Gut24, BLMT24a], our algorithm can in principle be applied to any -qubit operator , as long as one can consistently generate copies of its pseudo-Choi state . We are also not restricted by any locality considerations, and the total number of queries to essentially only depends on (i.e., the target error in “natural” units of the Hamiltonian’s norm).
Algorithm 1 describes the core idea of our structure-identifying algorithm. For each copy of , we measure in the -qubit Bell basis, which can be described as . In turn, this induces a distribution governed by . Sampling from this distribution allows us to learn the Pauli terms in that have relatively high weight.
To analyze the cost of this algorithm, we need to bound the number of measurements such that we observe at least all the terms with . This is a variant of the coupon collector problem, which is well studied [Shi07]. Our setup has two modifications from the classic version of the problem. First, we have a nonuniform distribution, which bottlenecks the sample complexity based on the least likely coupon. Second, we only care to collect low-rarity coupons (i.e., those such that ). Hence the rarest coupon we need to collect has at least . This condition allows us to bound the tail of the cumulative distribution where, after i.i.d. draws of the coupons (Pauli terms), we still have not observed at least one of each type. We show that this probability is at most whenever .
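Below is a numpy illustration of this identification step under our reading of the definitions (referenceless pseudo-Choi vector proportional to (H ⊗ I)|Ω⟩): the Bell-basis outcome probabilities are exactly the normalized squared coefficients, and a modest number of samples collects every term above the weight threshold, while a very light term may be missed, consistent with Theorem 1.11. The specific Hamiltonian and sample count are arbitrary choices for the demo.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
labels, ops = ["I", "X", "Y", "Z"], [I2, X, Y, Z]

def pauli(s):
    out = np.array([[1.0 + 0j]])
    for ch in s:
        out = np.kron(out, ops[labels.index(ch)])
    return out

n, d = 3, 8
pauli_strings = ["".join(s) for s in product(labels, repeat=n)]

# A sparse, nonlocal Hamiltonian with one term well below the others.
coeffs = {"XZY": 0.9, "YYX": -0.6, "ZXI": 0.3, "XXX": 0.05}
H = sum(c * pauli(s) for s, c in coeffs.items())

# Referenceless pseudo-Choi vector: (H ⊗ I)|Ω>, with |Ω> the maximally entangled state.
omega = np.eye(d).reshape(d * d) / np.sqrt(d)
psi = np.kron(H, np.eye(d)) @ omega
psi /= np.linalg.norm(psi)

# Bell-basis outcome "P" has probability |<Ω|(P ⊗ I)|ψ>|^2 = coeff_P^2 / sum_Q coeff_Q^2.
probs = np.array([abs(omega.conj() @ (np.kron(pauli(s), np.eye(d)) @ psi)) ** 2
                  for s in pauli_strings])
probs /= probs.sum()
for s, c in coeffs.items():
    print(s, f"Bell probability {probs[pauli_strings.index(s)]:.4f}",
          f"(expected {c**2 / sum(v**2 for v in coeffs.values()):.4f})")

# Sampling: all heavy terms show up quickly; the 0.05 term may well be missed.
rng = np.random.default_rng(4)
draws = rng.choice(pauli_strings, size=200, p=probs)
print("terms observed in 200 samples:", sorted(set(draws)))
```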
Besides this main argument, there are some smaller technical details to take care of. First is the fact that we do not prepare exactly, but rather an approximation . Suppose that for some small , which we can tune by constructing a more precise block encoding. Then, we show that . While we may pick up terms with (this is possible even if sampling from the exact pseudo-Choi state), it is fine to learn a superset. This is because the next stage predicts all using classical shadows, which only has an explicit logarithmic cost in the number of parameters. Any parameters deemed too small can simply be discarded. Clearly since we only take a polynomial number of samples, this superset has bounded size. But we can prove something even stronger: . That is, even if the approximation has support that leaks onto other Pauli terms, its -support never contains those erroneous terms.
Finally, we remark that the controlization approximation affects this algorithm more severely than the parameter estimation stage. We find that the Hamiltonian coefficients become perturbed on the order of , where is the number of queries to the time evolution for one block encoding and is the controlization error per query. In order to ensure the correctness of the Bell sampling procedure, we need to adjust the threshold of Theorem 1.11 and choose a sufficiently small . The upshot of this analysis is a time resolution of under the time-reversal access model, and under the conventional access model, for this stage of the algorithm. As is inherent to controlization, and are unaffected.
Bootstrapping to (nearly) Heisenberg-limited scaling
Having described an algorithm that learns at the standard quantum limit, we consider how to enhance the procedure to reach the Heisenberg limit. Using the pseudo-Choi state approach, this is ultimately achieved by a technique called uniform spectral amplification [LC17], a block-encoding generalization of amplitude amplification. As such, we will need access to time evolution inverses, so the following discussion is only viable under the time-reversal model.
To achieve Heisenberg-limited scaling, we use the precision bootstrapping framework, an idea that has enjoyed many fruitful successes in the dynamical learning paradigm [KLY15, HKOT23, DOS23, Zha+23, BLMT24a]. The first use of the strategy for many-body Hamiltonian learning is attributed to Dutkiewicz, O’Brien, and Schuster [DOS23], inspired by the feedback control scheme of Kura and Ueda [KU18] for qudit learning. Its broader applicability was subsequently appreciated by [BLMT24a]. We briefly review the idea here.
Let be some base learning algorithm with the guarantee that . The idea is to bootstrap applications of this base algorithm, with constant precision , to the Heisenberg limit by adaptively adjusting the Hamiltonian being learned via quantum control. Consider a previously learned estimate and form a “residual Hamiltonian” . Assuming that the current estimate is only -good, we can amplify by a factor of to boost the residual errors up to a constant. Then if we learn using with merely constant precision , we can improve our estimate to error. Recursing on this process for multiple rounds for some , it is clear that each round produces an estimate with error . Hence suffices to get the target accuracy . Taking some care with the acceptable amount of failure probability establishes the correctness of the algorithm. This overall strategy is outlined in Algorithm 2.
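A toy scalar simulation of the bootstrap (the base learner below is just a bounded-noise stand-in, not a quantum subroutine) makes the recursion explicit: each round hands the base learner the residual amplified by 2^k, so a constant-precision answer improves the running estimate by a factor of two per round.

```python
import numpy as np

rng = np.random.default_rng(5)
true_lambda = 0.4173            # stand-in for one Hamiltonian coefficient
eps0 = 1 / 6                    # constant precision of the base algorithm

def base_estimate(value):
    """Stand-in base learner: returns `value` to within eps0."""
    return value + rng.uniform(-eps0, eps0)

estimate = 0.0
for k in range(12):
    amplification = 2**k
    # Learn the amplified residual 2^k (lambda - estimate) to constant precision ...
    residual_est = base_estimate(amplification * (true_lambda - estimate))
    # ... and fold it back in; the error of `estimate` is now at most eps0 / 2^k.
    estimate += residual_est / amplification
    print(f"round {k:2d}: error {abs(estimate - true_lambda):.2e} "
          f"(guarantee {eps0 / amplification:.2e})")
```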
To achieve Heisenberg-limited scaling, the process of boosting the residual by needs to take time, otherwise the advantage is lost. In the conventional time-evolution setting, this holds because applying takes time by definition. In [DOS23], they consider continuous quantum control, wherein the experimenter can implement directly, while [BLMT24a] assumes only discrete control, requiring them to approximate the residual evolution via Trotterization. In our algorithm, we prepare pseudo-Choi states from a block encoding of the Hamiltonian. Thus the bootstrapping strategy requires us to block encode the residual Hamiltonian, which comes with unique advantages and challenges. The main advantage is that we can block encode exactly as an LCU [CW12], thereby avoiding the complications of Trotter error in the discrete control model.
On the other hand, boosting the block-encoded residual by is more challenging. To do this, we invoke Low and Chuang’s uniform spectral amplification technique [LC17] (a generalization for amplifying the singular values of a not-necessarily-Hermitian matrix [GSLW19, Theorem 30] also accomplishes this task), which essentially allows us to map , up to some controllable error , as long as . This costs queries to the block encoding of . Meanwhile, recall that block encoding itself costs queries to . A careful error analysis shows that we need to take (on the other hand, suffices). Thus in terms of , the cost of each round of the bootstrap scales as , giving us nearly Heisenberg-limited scaling up to the factor of .
Note that this argument applies to the structure identification stage of our algorithm as well. For intuition here, consider the first round , wherein we identify all the terms which have and estimate them, giving us . In the next round , we synthesize and amplify by . This splits the Hamiltonian terms into two sets: those in , and those in the complement . We only care to identify the elements of ; after amplification, all such terms larger than become larger than , which again can be identified with just constant precision. Iterating in this way, we can deduce the identity of all terms larger than with only rounds. Combining this with the parameter estimation procedure gives us the claim in Theorem 1.5.
One final detail to point out is the normalization of the residual Hamiltonian. To guarantee that , we make the particular choice . This is because our error metric is with respect to the vector-norm of the coefficients, which typically has a loose conversion from the operator norm:
(8) |
That said, there exist Hamiltonians which saturate this inequality, so in the worst case we cannot expect to tighten it. While we use the -norm as our metric, converting to different norms simply incurs factors of in the sample complexity.
1.4 Discussion
In this paper, we have described algorithms to efficiently learn sparse, nonlocal Hamiltonians from minimal prior information. This addresses an open direction posed in [BLMT24a]. At the center of our results is the pseudo-Choi state introduced by [CW23], which enables accurate and parallelized learning of many Hamiltonian terms without locality restrictions or a large quantum memory. The original proposal required controlled time evolutions and time reversal to prepare these states; we have shown how to lift these resource requirements, thereby bringing the scope of the problem down to conventional black-box query scenarios.
For the time-reversal access model, our algorithm can achieve nearly Heisenberg-limited scaling with the pseudo-Choi method. For the time-forward access model, we prepare pseudo-Choi states using a strategy for block encoding the logarithm of a unitary operator without the QSVT. These constructions may be of independent interest more broadly.
To learn the structure of a sparse, nonlocal Hamiltonian, we established a protocol to estimate the Pauli spectrum of any Hamiltonian by sampling its (referenceless) pseudo-Choi state. Parallel to other works which establish quantum analogues of the Goldreich–Levin algorithm for unitaries [MO10, Ang23] and local Hamiltonians [Gut24, BLMT24a], our result establishes a version for nonlocal Hamiltonians. In fact, this algorithm is applicable not only to Hamiltonians but to any matrix that we can efficiently block encode. It would be interesting to find applications of this result in other contexts.
While our work affirms that it is possible to efficiently learn any Pauli-sparse Hamiltonian from queries to its time evolution, many important questions remain open. We list a few below.
-
1.
Is there an information-theoretic separation between Hamiltonian learning protocols with and without time reversal? While exponential separations have been established in unitary learning problems [CSM23, SHH24], these results do not immediately translate to our sparse Hamiltonian learning problem. Since we have established that this problem can be solved in polynomial time, any separation can only be polynomially large. Investigating the (im)possibility of nearly Heisenberg-limited scaling without time reversal would be an important question to address in this context.
-
2.
Can the time resolution of our algorithms be improved? Because the small resolution arises from controlization, an access model that directly provides controlled evolutions immediately bypasses this issue. Within the controlization perspective, an analysis of randomized block encodings (e.g., as in [MR24]) might be useful.
-
3.
Do nontrivial -dependent lower bounds exist for 1.2? To our knowledge, the current best lower bound is [HTFS23]. This bound is demonstrated by reducing the -term Hamiltonian problem to distinguishing between two single-term Hamiltonians. Meanwhile, exponential lower bounds have been shown for protocols with () and without () ancillas and adaptivity, where now the distinguishing task corresponds to a net of Hamiltonians generated by Haar-random unitary transformations [BCO24]. Related is the problem of learning unitary channels, which has the tight lower bound of queries [HKOT23]. These results can be understood as either the case of constant or exponentially large ; a lower bound for the intermediate regime is of clear interest. Note added: During the final preparation stage of this manuscript, a lower bound of with respect to -error was claimed by Ma, Flammia, Preskill, and Tong [MFPT24].
-
4.
To what extent is our algorithm robust to state-preparation and measurement errors?
2 Background
2.1 Notation
We denote the imaginary unit by . For any integer , define the set . The natural logarithm is denoted by , and other bases will be specified when necessary. Asymptotic upper and lower bounds are denoted by and , respectively. We write if and . Tildes on big-O notation suppress polylogarithmic factors, e.g., . When we only care that a quantity is polynomially bounded, we write . Generally speaking, for an object , we will use to denote an estimate of it and to denote an approximation to it.
2.2 Linear algebra
We collect some standard facts that we will make regular use of.
Proposition 2.1 (Vector norm conversion).
Let $\lVert v \rVert_p$ be the $\ell_p$-norm of any $v \in \mathbb{C}^d$. For all $1 \le p \le q \le \infty$, these vector norms obey
(9)  $\lVert v \rVert_q \le \lVert v \rVert_p \le d^{1/p - 1/q} \lVert v \rVert_q,$
with the understanding that $1/\infty = 0$.
The -norm is particularly important, and when the context is clear, we may simply write it as .
Definition 2.2.
For any matrix , the Schatten -norm is the -norm of its singular values.
As such, Schatten norms also obey the bounds of Proposition 2.1. Important matrix norms are the operator norm and the Frobenius norm , which are related by . Schatten norms also appear in a matrix version of Hölder’s inequality.
Proposition 2.3 (Hölder’s inequality).
Let $A, B \in \mathbb{C}^{d \times d}$. For any $p, q \in [1, \infty]$ such that $1/p + 1/q = 1$,
(10)  $\lvert \operatorname{tr}(A^\dagger B) \rvert \le \lVert A \rVert_p \lVert B \rVert_q.$
Next we consider a Hilbert space of qubits. Let be the -qubit identity operator. We will write when the dimension is clear from context.
Definition 2.4.
The Pauli matrices are defined as
(11)  $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$
The set of $n$-qubit Pauli operators $\mathcal{P}_n = \{I, X, Y, Z\}^{\otimes n}$ is a complete orthogonal operator basis for the space of $2^n \times 2^n$ matrices, obeying the trace-orthogonality condition $\operatorname{tr}(PQ) = 2^n \delta_{PQ}$ for all $P, Q \in \mathcal{P}_n$.
As such, any $n$-qubit operator $A$ can be written as
(12)  $A = \displaystyle\sum_{P \in \mathcal{P}_n} x_P P, \qquad x_P = \dfrac{1}{2^n} \operatorname{tr}(P A),$
where the $x_P$ are sometimes referred to as the Fourier coefficients or Pauli spectrum of $A$.
where are sometimes referred to as the Fourier coefficients or Pauli spectrum of . There is a close relation between the -norm of these coefficients, the Frobenius norm, and the norm of Choi-like vectors of .
Proposition 2.5 (Fourier coefficient identities).
Let $A$ be any complex matrix. Parseval's identity states that
(13)  $\dfrac{1}{2^n} \operatorname{tr}(A^\dagger A) = \displaystyle\sum_{P \in \mathcal{P}_n} \lvert x_P \rvert^2.$
An important related identity is
(14)  $\big\lVert (A \otimes I) \lvert \Omega \rangle \big\rVert_2^2 = \displaystyle\sum_{P \in \mathcal{P}_n} \lvert x_P \rvert^2,$
where $\lvert \Omega \rangle = 2^{-n/2} \sum_{x \in \{0,1\}^n} \lvert x \rangle \lvert x \rangle$. This is a consequence of a more general result,
(15)  $\langle \Omega \rvert (A \otimes I)^\dagger (B \otimes I) \lvert \Omega \rangle = \dfrac{1}{2^n} \operatorname{tr}(A^\dagger B).$
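These identities are easy to verify numerically; the snippet below does so for a random (non-Hermitian) two-qubit matrix, with the coefficients defined as in Eq. (12):

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

n, d = 2, 4
paulis = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=n)]

rng = np.random.default_rng(6)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))      # arbitrary matrix

coeffs = np.array([np.trace(P @ A) / d for P in paulis])         # x_P = tr(P A) / 2^n
weight = np.sum(np.abs(coeffs) ** 2)

# Eq. (13), Parseval: (1/2^n) tr(A^dag A) = sum_P |x_P|^2
print(np.isclose(np.trace(A.conj().T @ A).real / d, weight))     # True

# Eq. (14): ||(A ⊗ I)|Ω>||^2 = sum_P |x_P|^2
omega = np.eye(d).reshape(d * d) / np.sqrt(d)                    # |Ω>
choi_vec = np.kron(A, np.eye(d)) @ omega
print(np.isclose(np.linalg.norm(choi_vec) ** 2, weight))         # True
```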
We also standardize our notation for controlled unitaries.
Definition 2.6.
For any bit string $b \in \{0,1\}^k$, the $k$-qubit-controlled variant of an $n$-qubit unitary operator $U$ is
(16)  $\mathrm{ctrl}_b\text{-}U = \lvert b \rangle\langle b \rvert \otimes U + \big(I - \lvert b \rangle\langle b \rvert\big) \otimes I.$
When it is unimportant to specify the precise control bits , or the context is otherwise clear, we will simply write .
Finally, we will also work with quantum channels: completely positive, trace-preserving maps on the space of operators. The natural norm to equip quantum channels with is the diamond norm, also known as the completely bounded trace norm.
Definition 2.7.
Let $\mathcal{N}$ be a quantum channel (or, more generally, a linear map) on $n$-qubit operators. The diamond norm of $\mathcal{N}$ is
(17)  $\lVert \mathcal{N} \rVert_\diamond = \displaystyle\sup_{\lVert X \rVert_1 \le 1} \big\lVert (\mathcal{N} \otimes \mathcal{I}_n)(X) \big\rVert_1,$
where the supremum is over operators $X$ on $2n$ qubits and $\mathcal{I}_n$ denotes the identity channel on $n$ qubits.
2.3 Block encodings
Here we collect some standard results about block encodings, namely the ability to take linear combinations of unitaries. Our presentation follows that of [GSLW19] for convenience.
Definition 2.8 ([GSLW19, Definition 43]).
Let $A$ be an $n$-qubit operator, $\alpha, \epsilon \ge 0$, and $a \in \mathbb{N}$. An $(n+a)$-qubit unitary operator $U$ is called an $(\alpha, a, \epsilon)$-block encoding of $A$ if
(18)  $\big\lVert A - \alpha\, (\langle 0 \rvert^{\otimes a} \otimes I)\, U\, (\lvert 0 \rangle^{\otimes a} \otimes I) \big\rVert \le \epsilon.$
For brevity, we will say that $U$ is an $(\alpha, a, \epsilon)$-BE. One may write an $(\alpha, a, 0)$-BE as
(19)  $U = \begin{pmatrix} A/\alpha & \cdot \\ \cdot & \cdot \end{pmatrix}$
when the unspecified blocks are unimportant.
Definition 2.9 ([GSLW19, Definition 51]).
Let $y \in \mathbb{C}^m$ with $\lVert y \rVert_1 \le \beta$, and let $\epsilon \ge 0$. The pair of $b$-qubit unitaries $(P_L, P_R)$ is called a $(\beta, b, \epsilon)$-state-preparation pair for $y$ if
(20)  $\displaystyle\sum_{j=1}^{m} \big\lvert \beta\, c_j^* d_j - y_j \big\rvert \le \epsilon,$
where $P_L \lvert 0 \rangle^{\otimes b} = \sum_{j=1}^{2^b} c_j \lvert j \rangle$ and $P_R \lvert 0 \rangle^{\otimes b} = \sum_{j=1}^{2^b} d_j \lvert j \rangle$, and for all $j \in \{m+1, \dots, 2^b\}$, $c_j^* d_j = 0$.
Proposition 2.10 (Linear combination of block encodings [GSLW19, Lemma 52]).
Let be an -qubit operator, expressed as a linear combination of operators with . Let . For each , suppose that there is an -block encoding . Also, let be a -state-preparation pair for . Define the -qubit unitary
(21)  $W = (P_L^\dagger \otimes I)\Big(\displaystyle\sum_{j} \lvert j \rangle\langle j \rvert \otimes U_j + \big(I - \textstyle\sum_{j} \lvert j \rangle\langle j \rvert\big) \otimes I\Big)(P_R \otimes I)$
Then is an -block encoding of . (Note that a typo in [GSLW19] mistakenly carried an extra factor in the error term.)
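As a sanity check of Proposition 2.10, here is a minimal dense-matrix construction for a two-term combination of single-qubit Paulis, with P_L = P_R chosen as a real rotation preparing the coefficient amplitudes; the target operator and the normalization α = ‖y‖_1 are our choices for the example.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Target A = y_0 U_0 + y_1 U_1, with the U_j unitary (here Paulis, so exact (1,0,0)-BEs).
y = np.array([0.8, 0.35])
U = [X, Z]
A = y[0] * U[0] + y[1] * U[1]
alpha = np.sum(np.abs(y))                              # LCU normalization ||y||_1

# State-preparation pair (P_L = P_R): |0> -> sum_j sqrt(y_j / alpha) |j>.
amps = np.sqrt(y / alpha)
P = np.array([[amps[0], -amps[1]], [amps[1], amps[0]]], dtype=complex)

# SELECT = sum_j |j><j| ⊗ U_j, then W = (P^dag ⊗ I) SELECT (P ⊗ I).
select = np.kron(np.diag([1.0, 0.0]), U[0]) + np.kron(np.diag([0.0, 1.0]), U[1])
W = np.kron(P.conj().T, I2) @ select @ np.kron(P, I2)

# Projecting the ancilla onto |0> (top-left block) recovers A / alpha.
print("W is unitary:", np.allclose(W.conj().T @ W, np.eye(4)))
print("block encodes A/alpha:", np.allclose(W[:2, :2], A / alpha))
```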
In this paper, we assume that we can produce state-preparation pairs with infinite precision. While some finite-precision machine error will be incurred in practice, this is negligible compared to all other errors in our algorithm. Furthermore, by the Solovay–Kitaev theorem, we only pay a polylogarithmic overhead for synthesizing the state-preparation pair (as well as any other quantum gates that we may need) from a finite gate set. If necessary, we trust the diligent reader to carry forth the machine precision throughout our exposition.
3 Parameter estimation from pseudo-Choi states
As the base learning algorithm, we will adapt the algorithm of [CW23], which learns the Hamiltonian from a unique resource called its pseudo-Choi state. Given access to such a resource state, their algorithm can efficiently learn any Hamiltonian supported on a polynomial number of terms, regardless of any constraints such as locality that typically hinder other approaches. In this section, we review and streamline the key components of this algorithm. Readers familiar with these results may skip ahead to Section 4.
In order to prepare the pseudo-Choi state from queries to the time evolution operator, [CW23] assumes the ability to perform controlled evolutions . Controlizing black-box unitaries is in general an impossible task [AFCB14, GST24]. However, because we have fractional-query access to through the real parameter , we can approximate to arbitrary accuracy with no overhead in the evolution time [OKSM24]. Instead, controlization only affects the time resolution and number of additional quantum gates; we carry out this analysis in Section 7. For simplicity of the present exposition, we will work with the exact controlled unitary.
3.1 State preparation from time-evolution queries
[CW23] defines the pseudo-Choi state with a reference control qubit as follows.
Definition 3.1.
Let $H$ be an $n$-qubit Hamiltonian. The pseudo-Choi state of $H$ with reference (later we will introduce a referenceless version, which is a $2n$-qubit state) is the $(2n+1)$-qubit state
(22)  $\lvert \psi(H) \rangle = \dfrac{(H \otimes I)\lvert \Omega \rangle \lvert 0 \rangle + \lvert \Omega \rangle \lvert 1 \rangle}{\sqrt{\mathcal{N}}},$
where $\lvert \Omega \rangle = 2^{-n/2} \sum_{x \in \{0,1\}^n} \lvert x \rangle \lvert x \rangle$ and $\mathcal{N}$ is a normalization factor.
The reference here will be vital for deducing the Hamiltonian parameters from measurements on the state. Observe that since , it follows that .
The actual pseudo-Choi state that we consider will be
(23) |
where is a normalization on the Hamiltonian. This normalization is invoked because the pseudo-Choi states will be prepared from a block encoding of the Hamiltonian. Block encoded matrices must have norm at most to ensure unitarity of the encoding operator; the further factor of ensures that the QSVT used to construct this block encoding is also properly normalized. [LC17, GSLW19] show how to block encode from forward- and reverse-time queries to its time evolution operator. We opt to restate the formulation from [GSLW19] here.
Proposition 3.2 ([GSLW19, Corollary 71]).
Let . Suppose that for some . There exists a -BE of , constructed from queries to and , and additional quantum gates.
Proof sketch.
We outline the proof idea for completeness and to clarify some details. We begin with the observation that
(24)  $\lvert 0 \rangle\langle 0 \rvert \otimes e^{iHt} + \lvert 1 \rangle\langle 1 \rvert \otimes e^{-iHt} = \begin{pmatrix} \cos(Ht) & i\sin(Ht) \\ i\sin(Ht) & \cos(Ht) \end{pmatrix},$
where the matrix representation on the right-hand side is in the $\{\lvert + \rangle, \lvert - \rangle\}$ basis of the control qubit. This is a -BE of . Then, [GSLW19, Lemma 70] shows that there exists a polynomial which approximates the function on the domain . Specifically, for any , is a polynomial of odd degree obeying
(25) |
and . Thus, quantum signal processing or the quantum singular value transformation (on one ancilla qubit) can be applied to encode . A polynomial transformation of degree requires queries to and and additional quantum gates. This produces a -BE of ; letting furnishes the claim. ∎
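The two ingredients of this sketch can be checked directly with dense matrices: the controlled forward and backward evolutions block-encode sin(Ht) up to a phase, and applying arcsin to that block (done exactly below via eigenvalues; the QSVT instead applies an odd polynomial approximation) recovers Ht. The matrix conventions here are our own reading of the construction.

```python
import numpy as np

def expmH(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
H *= 0.5 / np.linalg.norm(H, ord=2)              # ensure ||Ht|| <= 1/2 < pi/2
t = 1.0
U, Udag = expmH(H, t), expmH(H, -t)

# V = |0><0| ⊗ e^{-iHt} + |1><1| ⊗ e^{+iHt}; its (<-| ⊗ I) V (|+> ⊗ I) block is
# (U - U^dag)/2 = -i sin(Ht).
V = np.kron(np.diag([1.0, 0.0]), U) + np.kron(np.diag([0.0, 1.0]), Udag)
plus = np.array([[1], [1]]) / np.sqrt(2)
minus = np.array([[1], [-1]]) / np.sqrt(2)
block = np.kron(minus.T, np.eye(4)) @ V @ np.kron(plus, np.eye(4))

w, Vecs = np.linalg.eigh(H)
sin_Ht = Vecs @ np.diag(np.sin(w * t)) @ Vecs.conj().T
print("block = -i sin(Ht):", np.allclose(1j * block, sin_Ht))

# Recover H by applying arcsin to the eigenvalues of the (Hermitian) block.
s_w, s_V = np.linalg.eigh(1j * block)
H_rec = s_V @ np.diag(np.arcsin(s_w)) @ s_V.conj().T / t
print("arcsin recovers H:", np.allclose(H_rec, H))
```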
From this block encoding, [CW23] shows that a pseudo-Choi state of an approximation to can be prepared with probability greater than .
Proposition 3.3 ([CW23, Lemma 19]).
Let be an -qubit Hamiltonian and . Let . With success probability at least , we can prepare a copy of , where is a Hermitian matrix obeying
(26) |
This procedure uses queries to , its inverse, and their multi-controlled forms.
To produce copies of the state, queries to the block encoding suffice. This is a standard result derived from bounding the tail of a sequence of i.i.d. Bernoulli trials. Following the argument given in [CW23, Appendix B.6], we prove the statement below in Appendix A for completeness.
Proposition 3.4.
Let be integers. Let be a state which is probabilistically prepared from a block encoding , with success probability . With high probability, we can prepare copies of from
(27) |
queries to .
3.2 Estimating with decoding operators
Recall the Pauli decomposition of . In parallel, we define
(28) |
which are the parameters in the block-encoded approximation. For each , [CW23] defines a set of so-called “decoding operators” on the qubits:
(29) |
These operators are defined such that their mean values with respect to recover , up to normalizations:
(30) |
While is chosen by the learner, is unknown. To access this quantity, is used:
(31) |
Note that if we had not prepared the pseudo-Choi state with the reference control qubit, this normalization would have been more difficult to infer.
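The role of the reference can be seen in a direct numpy computation under our reading of Definition 3.1 (pseudo-Choi vector proportional to (H ⊗ I)|Ω⟩|0⟩ + |Ω⟩|1⟩, with the control qubit written as the leading register below): the Bell-type overlaps on the Choi branch yield the parameters only up to the unknown normalization, and the reference overlap supplies exactly that normalization. The decoding operators of [CW23] package these overlaps as observables suitable for classical shadows; here we simply evaluate them exactly.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
labels, ops = ["I", "X", "Y", "Z"], [I2, X, Y, Z]

def pauli(s):
    out = np.array([[1.0 + 0j]])
    for ch in s:
        out = np.kron(out, ops[labels.index(ch)])
    return out

n, d = 2, 4
coeffs = {"XY": 0.55, "ZZ": -0.3, "XI": 0.2}
H = sum(c * pauli(s) for s, c in coeffs.items())

omega = np.eye(d).reshape(d * d) / np.sqrt(d)          # |Ω>
choi = np.kron(H, np.eye(d)) @ omega                   # (H ⊗ I)|Ω>

# Pseudo-Choi state with reference: |0> branch carries the Choi part, |1> the reference.
psi = np.concatenate([choi, omega])
psi /= np.linalg.norm(psi)

ref = omega.conj() @ psi[d * d:]                       # reference overlap = 1 / sqrt(N)
for s, c in coeffs.items():
    bell = np.kron(pauli(s), np.eye(d)) @ omega        # (P_a ⊗ I)|Ω>
    overlap = bell.conj() @ psi[:d * d]                # = lambda_a / sqrt(N)
    print(s, "recovered:", round((overlap / ref).real, 6), "true:", c)
```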
The classical shadows protocol [HKP20] over the Clifford group can be used to estimate all of these mean values with very efficient copy complexity. In brief, the shadow tomography protocol involves applying a random Clifford circuit to each copy of the state and then measuring in the standard basis. After collecting a sufficient number of samples, the measurement data is then classically postprocessed by a linear-inversion estimator which converges to the mean with strong theoretical guarantees. A major selling point of the technique is that, in terms of the explicit dependence on the number of observables, the copy complexity only scales logarithmically.
Let be the classical-shadows estimator for . [CW23] showed that their variances under Clifford shadow tomography are bounded by a constant:
(32) |
However, the estimate of is a ratio of two point estimators,
(33) |
which requires more than just bounding the variance to control its error. The analysis of [CW23] shows that learning each up to precision suffices to achieve precision in . Plugging this into the sample complexity for classical shadows [HKP20], they find that
(34) |
copies of the pseudo-Choi state suffice to produce an estimate such that
(35) |
Note that the original analysis [CW23, Proposition 15] uses a propagation of uncertainty argument under a linear approximation; a more rigorous analysis by the Taylor expansion of the random variable leads to the same conclusion (up to constant factors).
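As a quick numerical sanity check of this error-propagation step, the following simulation (with made-up values standing in for the numerator and denominator means) shows that when the denominator estimate is bounded away from zero by a constant, an additive error of ε′ in both point estimators yields an O(ε′) additive error in the ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "true" values: the denominator plays the role of the normalization
# estimated via the reference and is Theta(1); the numerator is proportional
# to a Hamiltonian coefficient.  Both values are made up for illustration.
den_true = 0.6
num_true = 0.25 * den_true
ratio_true = num_true / den_true

for eps_prime in [1e-1, 1e-2, 1e-3]:
    trials = 10_000
    num_hat = num_true + rng.normal(scale=eps_prime, size=trials)
    den_hat = den_true + rng.normal(scale=eps_prime, size=trials)
    err = np.abs(num_hat / den_hat - ratio_true)
    # The error of the ratio tracks eps_prime up to a constant factor.
    print(f"eps' = {eps_prime:.0e}: mean ratio error = {err.mean():.2e}")
```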
Finally, one must translate this estimate of to an estimate of . By Hölder’s inequality, each parameter has systematic error at most
(36) |
Recall from Proposition 3.3 that . Meanwhile, the statistical error is controlled by the shadow tomography analysis, which guarantees with a sufficient number of measurements. Altogether, by a triangle inequality the total error is
(37) |
To bound this by , we take queries to the controlled time evolution to generate a sufficiently accurate copy of the pseudo-Choi state. Because the probability of successful preparation is , only a constant number of repetitions are required to generate a copy from the block encoding (Proposition 3.4). This bounds the first term of Eq. 37 by, say, . Consuming such copies via Clifford shadows bounds the second term, also by . The total number of queries is therefore , and since each query runs for time , the total evolution time is . This is one of the main results of [CW23].
Proposition 3.5 ([CW23, Theorem 26, modified for -error]).
Let be an unknown Hamiltonian and let . Assuming access to , , and their multi-controlled forms, there exists an algorithm which can learn each to additive error . The total number of queries to the time evolution operator is , the total evolution time is , and the minimum time resolution is . The amount of classical and quantum computation scales at most polynomially in all parameters.
Remark 3.6.
This algorithm can perform structure learning of arbitrary Hamiltonians, with some important caveats. First, suppose the pool of possible Hamiltonian terms is restricted to some polynomially sized set. In that case, shadow tomography gives sufficient samples to estimate all the possible terms, and the structure can be learned by selecting only those which have large magnitude. For example, the set of -local terms has size , and this is the setting originally considered in [CW23]. Second, if one allows an exponential amount of classical computation, then the structure can be learned by brute-force processing the shadows over all Pauli operators. Although computationally inefficient, the query complexity is only increased by a factor of for the shadow tomography step. [Car24] uses a similar idea that is also computationally inefficient in this setting for the same reasons.
4 Identifying Pauli spectra with pseudo-Choi states
In this section, we show how to identify the structure of , using a total evolution time of . In the literature, this problem is typically encountered as learning where the Pauli spectrum of a unitary operator has large mass [MO10, Ang23]. Here, we instead want to identify the Pauli spectrum of a non-unitary operator, . The pseudo-Choi state provides us a convenient resource to accomplish this task.
To deduce the identity of the unknown terms, the key idea is to sample from the referenceless pseudo-Choi state of . This state can be prepared by essentially the same techniques as for the pseudo-Choi state with reference, described in Section 3. The only changes are that we do not need to control the block encoding, and that we have different guarantees on the success probability.
First, let us define these states.
Definition 4.1.
Let be an -qubit Hamiltonian. The referenceless pseudo-Choi state of is the -qubit state
(38) |
where and .
Unlike the pseudo-Choi state with reference, this referenceless variant is invariant under rescaling for any . Let us briefly outline a method for preparing this state if one has access to both positive- and negative-time evolutions. Without negative-time dynamics, a different approach must be taken to block encode the Hamiltonian. This results in an entirely different query complexity; see Section 6 for that setting.
Lemma 4.2.
Let be an -qubit Hamiltonian, , and . With success probability , we can prepare a copy of , where is a Hermitian matrix obeying
(39) |
and . This procedure uses queries to , its inverse, and their controlled forms.
Proof.
The preparation of this state is similar to that of the variant with reference, see Proposition 3.3. The main difference here is that we do not need a controlled form of the block encoding. Let be the -BE of produced from queries, where . Applying to and measuring the first two qubits, if we obtain in that register then we have successfully prepared in the remaining qubits. This occurs with probability
(40) |
∎
Let us remark briefly about the applicability of this result. In the context of the bootstrap framework, we will apply Lemma 4.2 in the first step where . Once we have a nontrivial estimate , we synthesize the residual Hamiltonian and learn from that instead. In that case, the proof argument is the same but the constant becomes .
Returning to the present setting, note that the block-encoded approximation may have potentially superfluous error terms. For some , define the set
(41) |
This labels the terms in that have weight more than . It suffices only to learn the terms in because, if , then is a -accurate estimate of . For small enough , such terms can safely be ignored.
In order to learn this set, we will appeal to the technique of Bell sampling. In our case, we sample from the pseudo-Choi state (instead of two copies of an -qubit state, as was introduced in [Mon17]).
Lemma 4.3.
Bell measurements across the bipartition of the -qubit state sample from the distribution
(42) |
where .
Proof.
Bell measurements, implemented by a depth- Clifford circuit, sample from the basis . On the state , this induces the probability distribution
(43) |
We have if , and otherwise. ∎
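Lemma 4.3 is easy to verify directly on small instances. The numpy sketch below uses an arbitrary two-qubit example Hamiltonian (and the exact Hamiltonian rather than the block-encoded approximation): it forms the normalized vector obtained by applying the Hamiltonian to one half of the maximally entangled state and checks that the Bell-basis probabilities are proportional to the squared Pauli coefficients.

```python
import numpy as np
from itertools import product

# Single-qubit Paulis and the two-qubit Pauli basis.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

n, d = 2, 4
labels = ["".join(s) for s in product("IXYZ", repeat=n)]
P = {lab: np.kron(paulis[lab[0]], paulis[lab[1]]) for lab in labels}

# Example Hamiltonian with a few terms (coefficients chosen arbitrarily).
coeffs = {"XI": 0.7, "ZZ": -0.4, "YX": 0.2}
H = sum(c * P[lab] for lab, c in coeffs.items())

# Unnormalized maximally entangled vector and the referenceless pseudo-Choi vector.
omega = np.eye(d).reshape(d * d)                 # sum_i |i>|i>
psi = np.kron(H, np.eye(d)) @ omega
psi = psi / np.linalg.norm(psi)

# Bell-basis outcome probabilities |<P|psi>|^2, with |P> = (P x I)|omega>/sqrt(d).
lam_sq = sum(c**2 for c in coeffs.values())
for lab, c in coeffs.items():
    bell_vec = np.kron(P[lab], np.eye(d)) @ omega / np.sqrt(d)
    prob = abs(np.vdot(bell_vec, psi))**2
    print(lab, round(prob, 6), round(c**2 / lam_sq, 6))   # the two columns agree
```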
Our goal then is to draw samples from until we see all elements of . The following lemma bounds the number of Bell measurements sufficient to accomplish this task with high probability.
Lemma 4.4.
Let . Let be a Hamiltonian and suppose we have access to copies of its referenceless pseudo-Choi state. With probability at least , taking Bell measurements of suffices to observe all elements of .
Proof.
This problem is a variant of the classic coupon collector problem: given a distribution over “coupons,” how many draws does it take until we have collected at least one of each coupon? Our bound follows from a modification of [Shi07, Lemma 12]. In particular, we relax the task by not aiming to collect the rare coupons (i.e., those outside of ).
Let be the random variable defined as the number of draws until all coupons from are obtained. For each integer , define as the event that, after drawing coupons, none of them are . The total probability that after draws, we have yet to obtain all coupons (from ), can be estimated by a union bound:
(44) |
Let . For each , we have that . Applying the inequality then gives
(45) |
Setting this bound as the maximum failure probability and solving for proves the claim. ∎
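A short Monte Carlo check of this bound, with an arbitrary toy distribution over coupons, is given below. Drawing on the order of log(|S|/δ) divided by the smallest heavy-coupon probability, the form suggested by the union-bound argument above, suffices to observe every heavy coupon except with the targeted failure probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy distribution: a few "heavy" coupons (the set S) plus rare background coupons.
heavy = np.array([0.20, 0.15, 0.10, 0.08])
light = np.full(47, (1 - heavy.sum()) / 47)
p = np.concatenate([heavy, light])
m, p_min, delta = len(heavy), heavy.min(), 0.05

N = int(np.ceil(np.log(m / delta) / p_min))   # draws suggested by the union-bound argument
failures = 0
trials = 2000
for _ in range(trials):
    draws = rng.choice(len(p), size=N, p=p)
    if not set(range(m)).issubset(draws):
        failures += 1
print(f"N = {N} draws, empirical failure rate = {failures / trials:.3f} (target {delta})")
```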
Remark 4.5.
The Bell sampling algorithm can also discover elements . At this current stage in the algorithm, we do not have enough information to determine inclusion or exclusion, so we store all elements observed. As long as we have at least a superset of , we say that the structure learning algorithm has succeeded. This is fine because in the next stage, shadow tomography can easily estimate an overcomplete set of terms with only logarithmic cost to the sample complexity, per Eq. 34. Any terms that are too small can then be discarded. And because we only ever make a polynomial number of queries, we can never pick up more than a polynomial number of superfluous terms.
The last piece we need is a conversion from learning to learning , given that .
Lemma 4.6.
Let . Let be two Hamiltonians such that
(46) |
(47) |
where . Suppose . Then , and furthermore, for all .
Proof.
For all we have . By a triangle inequality and Hölder’s inequality (Eq. 36), it holds that . Therefore all such also correspond to , which characterizes the set . This demonstrates the first claim.
For the second claim, consider any . Because has no support on such terms, and so . But since , this magnitude is strictly less than . Hence all such are too small to lie in . ∎
Combining Lemmas 4.2, 4.4 and 4.6, we arrive at the main result of this section.
Theorem 4.7.
Let be an unknown -qubit Hamiltonian, , and . Define the set . There exists an algorithm which learns, with high probability, at least all the elements of . The algorithm makes
(48) |
queries to and uses quantum and classical computation.
Proof.
Lemma 4.4 tells us that, with high probability,
(49) |
copies of suffice to learn via Bell measurements. Meanwhile, Lemma 4.2 tells us that the probability of successfully generating a single copy is
(50) |
By Proposition 3.4, we need to take trials to produce the copies. Therefore from
(51) |
queries to the block encoding of , or equivalently queries to the time evolution operator, we learn the set with high probability. Finally by Lemma 4.6, setting ensures that the set that we have learned contains as a subset. Lemma 4.6 also assures us that since it cannot contain any superfluous terms. Taking sufficiently small concludes the proof. ∎
Comparing this result to Proposition 3.5, we find remarkably that the query and evolution-time complexity of both parameter learning and structure learning from pseudo-Choi states are identical. Therefore even without invoking the precision bootstrap, we have demonstrated an algorithm that can perform structure learning on a completely arbitrary -qubit Hamiltonian, using total evolution time.
To conclude this section, we make a remark regarding a potential constant shift in .
Remark 4.8 (On traceful Hamiltonians).
If is not traceless, then the identity coefficient (which we are typically not interested in learning) may wash out the distribution. Asymptotically, this only slows down the learning procedure by a constant factor. However, we have a nicer guarantee using the approximate controlization described in Remark 7.2. Those techniques, which we use to prepare the pseudo-Choi states, will automatically extract the traceless part of . Hence, moving forward, we may suppose without loss of generality.
5 Learning with the residual Hamiltonian
Let be the current best estimate of , satisfying . The key to the precision bootstrap is the ability to learn from , where
(52) |
is the “residual” Hamiltonian. By scaling up the residual by a factor of , it suffices to learn with merely constant precision . In this section, we will show how to block encode and prepare its pseudo-Choi states. This will give us the majority of the proof of Theorem 1.5.
5.1 Block encoding by spectral amplification
Recall that Proposition 3.2 gave us a -BE of from queries to its time evolution, provided that . To make a concrete choice, since we shall fix
(53) |
here. This will guarantee that all the operators we aim to block encode are sufficiently normalized.
Next, since our description of is already an LCU, we can synthesize its block encoding with standard techniques (Proposition 2.10). However, a technicality we need to address is matching the normalizations of the two block encodings before we take their difference. (Alternatively, we could make the appropriate choice of nonuniform coefficients in the linear combination; we opt for the approach as presented to simplify the exposition.)
Lemma 5.1.
Let and . There exists a -BE of .
Proof.
Let be the -BE of as given by Proposition 2.10. We aim to convert the subnormalization on the block-encoded matrix according to
(54) |
Define the single-qubit rotation
(55) |
Set , which is a real-valued angle since . Then
(56) |
has the desired subnormalization in the top-left block, using one more ancilla qubit. ∎
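A minimal numpy check of this rescaling trick: starting from a toy block encoding of a Hermitian matrix A with subnormalization α (built here by an explicit unitary dilation, an illustrative stand-in for the encodings used in the text), tensoring in one ancilla prepared by a rotation whose |0⟩ amplitude is α/β produces a block encoding with the larger subnormalization β.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)

d = 4
A0 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (A0 + A0.conj().T) / 2
alpha = 2 * np.linalg.norm(A, 2)                  # original subnormalization, ||A/alpha|| = 1/2

# Toy block encoding of A via a unitary dilation of B = A/alpha.
B = A / alpha
S = sqrtm(np.eye(d) - B @ B)
U_A = np.block([[B, S], [S, -B]])
assert np.allclose(U_A.conj().T @ U_A, np.eye(2 * d))
assert np.allclose(U_A[:d, :d], A / alpha)

beta = 3 * alpha                                  # desired (larger) subnormalization
theta = np.arccos(alpha / beta)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # <0|R|0> = alpha/beta

V = np.kron(R, U_A)                               # one additional ancilla qubit
assert np.allclose(V[:d, :d], A / beta)           # now a block encoding of A/beta
print("rescaled block encoding verified")
```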
The block encoding for the residual Hamiltonian immediately follows.
Lemma 5.2.
Define where . There exists a -BE of , using queries to .
Proof.
Here, let us interpret as a -BE of given by Proposition 3.2. Let be the -BE of given by Lemma 5.1, and let be a -state-preparation pair for . Then by Proposition 2.10, we can produce a -BE for using a single query to and each. In terms of the query complexity, a single call to costs queries to , as stated in Proposition 3.2. ∎
Observe that this synthesized residual matrix has small operator norm, because
(57) |
We need to boost this norm to in order to learn with sufficient precision. This can be accomplished by uniform spectral amplification [LC17], a matrix generalization of amplitude amplification. Note that a further generalization called uniform singular value amplification exists for applications beyond Hermitian matrices [GSLW19]; however, we will not need the strength of that extension here.
Proposition 5.3 ([LC17, Theorem 2]).
Let be an -BE for a Hermitian matrix . Choose some . For any , we can construct a -BE of , using queries to and additional quantum gates.
With this technique, we can block encode the amplified residual Hamiltonian.
Theorem 5.4.
Let be an unknown -qubit Hamiltonian and an estimate such that for some . Let , , and fix some error parameters . There exists a -BE of
(58) |
which costs queries to multi-controlled variants of .
Proof.
Let
(59) |
be the “target” operator, which is what we want to amplify up to. Note that by Eq. 57 we have
(60) |
which is at most as long as . Let be the -BE of , given by Lemma 5.2. Equivalently, we interpret this as a -BE for . Choose , which is guaranteed to lie within the interval . Also note that . Then, we apply uniform spectral amplification to produce , which by Proposition 5.3 is a -BE of . The circuit for this transformation costs queries to , or equivalently
(61) |
queries to . To get the block-encoding parameters stated in the theorem, observe that is -close to the amplified matrix. Meanwhile, . Rescaling error parameters and concludes the proof. ∎
5.2 Parameter estimation
Now we consider the error parameters sufficient for learning the residual Hamiltonian, within the context of the bootstrap algorithm. Recall that we only need to learn up to constant error of .
Let be the matrix encoded by . By Theorem 5.4, it has error
(62) |
Define
(63) |
(64) |
By Hölder’s inequality, we get
(65) |
We want to learn to within constant additive error , so it suffices to bound Eq. 65 by a small constant as well. Choosing for some sufficiently small constant , we get . By Theorem 5.4, the query complexity for producing the block encoding of such a is
(66) |
where we have suppressed the constants . Then given copies of , we can run the shadow tomography protocol of [CW23] to learn an estimate of . By Eq. 34, with high probability we get an -accurate estimate by measuring
(67) |
copies of . Note that, using the decoding operators and as defined in Eq. 29, our estimate using has slightly modified constants in its definition:
(68) |
Now we consider the preparation of . Conditioned on its successful preparation from a query to , the total number of queries required is . Meanwhile, parallel to the claim in Proposition 3.3, the success probability is above , since
(69) |
Therefore by Proposition 3.4, only a constant number of queries suffices to prepare a copy of the pseudo-Choi state with reference, for a total of queries and evolution time. The theorem below follows.
Theorem 5.5.
Let be an unknown -qubit Hamiltonian and an estimate such that for some . There exists a learning algorithm which, with high probability, produces an estimate of up to constant additive error (say, ) for each . The algorithm makes queries to and uses quantum and classical computation.
5.3 Structure learning
To perform structure learning with (nearly) Heisenberg-limited scaling, the algorithm of Section 4 must also be placed into the bootstrap framework. We describe how that procedure works in this subsection.
Suppose we have copies of the referenceless pseudo-Choi state , prepared from queries to . (We analyze the complexity of their preparation later.) We run the algorithm of Lemma 4.4 to acquire a set containing ; this consumes
(70) |
copies of . To relate this set to , we have the bound , which implies
(71) |
for all . Thus if we choose , we get .
At each step in the bootstrapping loop, we only need to consider terms in that we had not observed yet. Suppose the worst case, wherein at the current step we only have the minimally guaranteed set of terms . In other words, the estimate is supported only on , which means that
(72) |
We focus on the terms in the second sum, which we have yet to identify. Upon learning , the new terms we have acquired are at least those which obey . Appending these newly identified terms with gives us , as desired. Setting and recursing this procedure over allows us to identify with nearly Heisenberg-limited precision because the query complexity scales nearly linearly with ; we show this in the sequel.
The argument is parallel to the analysis given in Theorem 4.7. Take for a small constant , so that is also constant. The number of copies required is then . On the other hand, the success probability of preparing a copy from is . Thus by Proposition 3.4, queries to suffice. Finally, recall from Theorem 5.4 that we make queries to in order to produce the amplified block encoding. In sum, we get the following result about structure learning from residuals.
Theorem 5.6.
Let be an unknown -qubit Hamiltonian and an estimate such that for some . There exists a quantum algorithm which, with high probability, identifies all such that . The algorithm makes queries to and uses quantum and classical computation.
5.4 Heisenberg-limited bootstrap analysis
Here, we finish the complexity analysis by placing these results into the outer loop described in Algorithm 2. For completeness, let us state and quickly prove why the bootstrap framework achieves Heisenberg-limited scaling.
Proposition 5.7.
Suppose that the base algorithm uses evolution time under , for some and . Furthermore, suppose that has the property that uses evolution time , for any and any (known) Hermitian operator . Then, Algorithm 2 returns an -accurate estimate of the Hamiltonian coefficients with probability at least , using total evolution time .
Proof.
First we analyze the query complexity of the algorithm. At each step in the loop, the base algorithm only requires a fixed error of , so the dependence on precision is reduced to a constant. Meanwhile, we scale by . Thus we query the black box for time . With the choices , , and , the total evolution time is
(73) |
Next we check the correctness of the algorithm. By a union bound, the overall failure probability is at most
(74) |
Conditioned on the success at each step, each estimate of the residual is within of . Equivalently, this means that
(75) |
Thus at the final step, we are left with an estimate with error at most . ∎
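The geometric sum behind this proof can be illustrated with a short toy calculation (all constants are placeholders): learning the residual rescaled by 2^j to constant precision at round j costs evolution time proportional to 2^j, and summing over the O(log(1/ε)) rounds gives a total evolution time proportional to 1/ε.

```python
import numpy as np

# Toy bookkeeping for the bootstrap loop: at round j the residual is rescaled by
# 2^j (i.e., eps_j = 2^{-j}), and the base algorithm learns it to constant
# precision at a time cost of C * 2^j.  All constants are placeholders.
C = 1.0
for eps_target in [1e-1, 1e-2, 1e-3, 1e-4]:
    rounds = int(np.ceil(np.log2(1 / eps_target)))
    total_time = sum(C * 2**j for j in range(rounds + 1))
    print(f"eps = {eps_target:.0e}: total time ~ {total_time:.0f} "
          f"(compare 2C/eps = {2 * C / eps_target:.0f})")
```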
Now we piece together the claim of Theorem 1.5, up to the time resolution which we address in Section 7.
Theorem 5.8.
Let be an unknown -qubit Hamiltonian, with only the number of terms given to us. Let . There exists a structure learning algorithm that outputs , along with classical representations of corresponding , such that . The algorithm uses a total evolution time of to time-forward and time-reverse queries, and uses quantum and classical computation.
Proof.
We combine Theorems 5.5 and 5.6 to construct the base algorithm . (For the step, we can use Proposition 3.5 and Theorem 4.7 without using uniform spectral amplification, at the cost of slightly worse constant factors.) Let , , and . We first identify the terms in which are at least in magnitude, . By Theorem 5.6, this costs queries evolving for time . Then we estimate all corresponding to the Pauli terms in , which by Theorem 5.5 has the same query complexity. Repeating these procedures times and taking the median for each estimate suppresses the failure probability to at most .
Thus, uses evolution time
(76) |
where the notation here suppresses logarithmic factors not involving , , , or . Inserting this into the result of Proposition 5.7 with yields the claim. ∎
6 Structure learning without time reversal
To prepare Choi-like states of , we required a block encoding of for some appropriate normalization . The approach described up until now relied heavily on the ability to perform both the time evolution operator and its inverse, in order to make full use of the QSVT toolbox. (We also relied on multi-controlled time evolutions; however, the universal controlization scheme described in Section 7 allows us to circumvent that concern.) In the access model for this section, we no longer assume access to time reversal. While some time-reversal techniques exist, they either require knowledge of the Hamiltonian structure [OKSM24] or scale exponentially in [QDSSM19]. Absent that information at the start of the learning algorithm, it is unclear how to utilize those techniques in our setting. Moreover, it has been shown that there exist unitary learning tasks for which time reversal enables exponential speedups in query complexity [Sch+23, CSM23, SHH24]. Such results establish that, in general, time reversal is a powerful resource in learning.
The goal of this section is to prove Theorem 1.6, up to the time resolution which will be addressed in Section 7.
6.1 Block encoding the matrix logarithm
In this section we develop an alternative avenue for block encoding the logarithm of , using only forward-time queries. For exposition, we assume throughout that we know the value of and so can choose ; without that knowledge, all appearances of the norm can be replaced by . Our construction is based on the well-known power series for the matrix logarithm.
Proposition 6.1 ([Hal15]).
For any complex matrix , define
(77) |
This series converges whenever and satisfies
(78) |
Additionally, let obey . Then and
(79) |
For an integer , we define the th partial sum of the series,
(80) |
The error of this truncation is easily bounded as follows.
Lemma 6.2.
Let such that . Then
(81) |
Proof.
Compute:
(82) |
∎
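This truncation behavior is easy to observe numerically. The sketch below takes an arbitrary small Hamiltonian with t‖H‖ well inside the radius of convergence, sums the first K terms of the logarithm series applied to I − e^{iHt}, and recovers iHt up to an error decaying geometrically in K.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

d = 4
A0 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A0 + A0.conj().T) / 2
H = H / np.linalg.norm(H, 2)       # normalize so ||H|| = 1
t = 0.1                            # small enough that ||I - e^{iHt}|| < 1
U = expm(1j * t * H)

M = np.eye(d) - U                  # log(U) = log(I - M) = -sum_{k>=1} M^k / k
for K in [1, 2, 4, 8, 16]:
    S_K = np.zeros((d, d), dtype=complex)
    Mk = np.eye(d, dtype=complex)
    for k in range(1, K + 1):
        Mk = Mk @ M
        S_K -= Mk / k
    err = np.linalg.norm(S_K - 1j * t * H, 2)
    print(f"K = {K:2d}: ||S_K - iHt|| = {err:.2e}")
```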
It remains to block encode , where is such that . It turns out we will need a slightly stronger subnormalization of , as was the case with the polynomial approximation. Let us fix a as usual. To simplify notation, we shall denote throughout this section.
The following lemma expresses the truncated power series for as a linear combination of powers of .
Lemma 6.3.
The operator can be represented as
(83) |
where the coefficients are given by
(84) |
The -norm of the coefficients, , obeys the following bounds:
(85) |
Proof.
Since all terms in Eq. 80 are just powers of , we can treat as a commutative polynomial of degree . By binomial expansion of , we get
(86) |
where the rearrangement in the final line is valid because whenever . Thus the coefficients of the polynomial are
(87) |
The value at is clear. For , first we rewrite each summand as:
(88) |
To evaluate the sum over , we use the hockey-stick identity, which states that
(89) |
Inserting the appropriate values of for our context, we get
(90) |
from which Eq. 84 follows. To get bounds on the scaling of , the upper bound can be seen by
(91) |
Meanwhile for the lower bound, we have
(92) |
which finishes the claim. ∎
Remark 6.4.
Ideally, we would prefer to have constructed the LCU for using block encodings of , wherein the coefficients satisfy . However, each such block encoding is subnormalized by a factor of , which again would require inverses to be boosted via singular value amplification [GSLW19].
From this result, we immediately get an -BE of for some of our choosing, provided that .
Theorem 6.5.
Let be an -qubit Hamiltonian and . Fix some integer and denote . From access to for , we can construct a -BE of , where . This construction costs evolution time to the Hamiltonian.
Proof.
Lemma 6.3 describes the LCU representation for involving only for , with -norm . Proposition 2.10 gives the prescription for realizing the LCU circuit, using control qubits. Finally, observe that if , then
(93) |
Therefore if we take in Lemma 6.2, we get the error bound
(94) |
Note that the term is merely the identity; the evolution time from the remaining queries is . ∎
This block encoding of comes with a small subnormalization of . Unfortunately, without access to it is difficult to perform spectral amplification to boost this up to . Instead, we will resort to a more rudimentary approach, which is simply to prepare the (referenceless) pseudo-Choi states with low probability.
6.2 Structure learning
The following lemma bounds how much evolution time suffices to prepare copies of the referenceless pseudo-Choi state, .
Lemma 6.6.
Let be an -qubit Hamiltonian, , and . For any integers , with high probability we can prepare copies of from
(95) |
queries and evolution time to (and no queries to its inverse).
Proof.
Let
(96) |
be the block encoding described in Theorem 6.5. Recall that each instance of this unitary requires evolution time to construct. As is standard, we apply to the state with and measure the -qubit register. If we observe , then the remaining -qubit register contains as desired. This occurs with probability
(97) |
By Proposition 3.4, we need queries to to prepare copies with high probability. Each query to costs queries and evolution time to . Use the bound from Eq. 91 to conclude the statement. ∎
With this result, we are equipped to determine the complexity of structure learning arbitrary Hamiltonians, using our algorithm with only forward-time queries.
Theorem 6.7.
Fix . Let be the set of terms in which have weight greater than . Let . From only positive-time queries to , , we can learn all the elements of using
(98) |
with high probability.
Proof.
For notation, write , which has the Pauli decomposition for some . We remark that because is not guaranteed to be Hermitian, although this will not affect our analysis. Let , to be determined later, and define the set
(99) |
We apply Lemma 4.4 to the state , allowing us to learn all the elements of with high probability. This requires
(100) |
copies of the referenceless pseudo-Choi state. Combined with Lemma 6.6, this costs queries and evolution time.
Next, we need to determine how close should be to to guarantee . Recall that if , then the error in each coefficient is also bounded by . Then as long as , we are ensured that . Set , which implies the choices and
(101) |
Thus and so the claim follows. ∎
6.3 Parameter estimation
Once we have identified the set , any learning algorithm appropriate for arbitrary Pauli terms can be deployed to learn the parameters to precision . The most straightforward algorithm to use is that of [Car24], which uses shadow tomography on copies of the proper Choi state . After a total evolution time of , the algorithm uses a polynomial interpolation postprocessing step to output -accurate estimates to for each .
However, the shadow tomography step in [Car24] requires a large quantum memory of qubits. This quantum memory is used to perform a gentle measurement on multiple coherently stored copies of the Choi states. The size of this register can be reduced to by recent advances in shadow tomography [KGKB24], namely the concept of a so-called mimicking state. This mimicking state serves the same effective purpose as the many coherent copies, but would only be the size of the system it is mimicking. However, constructing such a mimicking state takes quantum (and classical) gates, so this approach would not be computationally efficient.
Instead, let us apply the pseudo-Choi learning algorithm of [CW23] directly to our construction. The preparation of the pseudo-Choi state with reference, from a query to , still succeeds with probability at least . However, the state we produce has small norm in the encoded Hamiltonian:
(102) |
In this case, analogous to Eq. 33 we define the estimator from classical shadows as
(103) |
Then Eq. 34 gets the modification , leaving us with a new shadow tomography copy complexity of
(104) |
This learns to error . Since is -close to , as usual we take for sufficiently small and improve our sampling error to as well. This gets us a query complexity
(105) |
queries to , for . Combined with structure identification, which has the same costs (Theorem 6.7), we get Theorem 1.6, up to the time resolution claim which we establish in Section 7.
Theorem 6.8.
Let be an unknown -qubit Hamiltonian. For any and , there exists a structure learning algorithm which, with high probability, learns all to additive error . The algorithm requires no prior knowledge about the identity of the terms , makes queries to (and no queries to any form of ), and uses quantum and classical computation.
7 Approximate controlization of time evolutions
In this section we describe how to approximately implement multi-controlled versions of , given only the ability to apply it for arbitrary . While there are no-go results for this task in general [AFCB14, GST24], our fractional-query access model (since we can tune ) makes this task possible. A number of approaches have been developed, suitable for different scenarios [Jan02, Zho+11, FDDB14, NSM15, DNSM19, CBW24]. We take the one described in [OKTM23], which is universally applicable to any time evolution operator. We also describe how to generalize their technique to multi-controlled operations, at no additional query cost.
For any -bit string and -qubit operator , define
(106) |
Our goal is to approximately implement for any , where is the traceless part of . This is sufficient for our purposes, since we do not care about learning the identity component of the Hamiltonian. First, we will review the case. Afterwards, we build up the generalization to control qubits. Finally, we show how our learning algorithms should be modified to account for controlization.
7.1 A single control qubit
The main idea of controlization from [OKTM23] relies on the fact that twirling by the Pauli operators yields
(107) |
For , let . Appending a control qubit and replacing each Pauli gate with its controlled form (conditioned on ), we get
(108) |
where is the traceless part of . Denote this -qubit operator by . It generates the unitary
(109) |
where . Thus, up to this global phase, we have a formalism for the desired controlled unitary. This can be realized by a product formula over the terms of , each of which only requires the uncontrolled time evolution:
(110) |
Taking the product over all yields a product-formula approximation to . However, this naive implementation clearly requires an exponential amount of resources, as well as an exponentially small time resolution.
To circumvent this issue, one can instead appeal to the randomized product formula qDRIFT [Cam19]. In this approach the gate sequence is constructed by randomly sampling terms, with each term drawn with probability proportional to its operator norm. In this case, the distribution reduces to uniformly drawing elements from . Consider a qDRIFT sequence constructed from such random draws. That is, we have the circuit where each is of the form
(111) |
This random construction can be viewed as the mixed-unitary channel , where . Also, let be the quantum channel corresponding to the target unitary . The improved analysis of [CHKT21] guarantees that the qDRIFT channel converges to the desired controlled unitary as
(112) |
where is the diamond norm. Equivalently, building qDRIFT sequences of length suffices to achieve error .
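The convergence of this randomized construction can be checked exactly on a small example by iterating the single-segment mixed-unitary channel (the uniform average over Pauli twirls) and comparing against the target controlled evolution of the traceless part. The sketch below uses an arbitrary single-qubit Hamiltonian and placeholder segment counts; the control convention (Pauli applied on the |0⟩ branch) is one consistent choice for illustration, not necessarily the one used in [OKTM23].

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)

# Single-qubit Paulis.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I2, X, Y, Z]

# Arbitrary single-qubit Hamiltonian, deliberately not traceless.
H = 0.8 * X + 0.5 * Z + 0.3 * I2
t = 1.0

# Controlled-Pauli with the Pauli acting on the |0> branch of the control qubit.
def ctrl0(P):
    return np.block([[P, np.zeros((2, 2))], [np.zeros((2, 2)), I2]])

# Target: controlled-e^{iH0 t} on the |1> branch, with H0 the traceless part of H.
H0 = H - (np.trace(H) / 2) * I2
target = np.block([[I2, np.zeros((2, 2))],
                   [np.zeros((2, 2)), expm(1j * H0 * t)]])

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = psi / np.linalg.norm(psi)
rho_target = target @ np.outer(psi, psi.conj()) @ target.conj().T

for N in [100, 400]:
    # One qDRIFT segment for a uniformly sampled Pauli P: C_P (I x e^{iHt/N}) C_P.
    step = np.kron(I2, expm(1j * H * t / N))
    segment = [ctrl0(P) @ step @ ctrl0(P) for P in paulis]
    rho = np.outer(psi, psi.conj())
    for _ in range(N):                        # iterate the exact mixed-unitary channel
        rho = sum(V @ rho @ V.conj().T for V in segment) / len(paulis)
    dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - rho_target)))
    print(f"N = {N}: trace distance to target = {dist:.2e}")   # decreases roughly as 1/N
```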
7.2 Multiple control qubits
To generalize to multiple controls, first note that in the setting, we have
(113) |
This follows because . Now if we let , then for any , the multi-controlled gates that we need to implement are
(114) |
where there are blocks of above. Then, twirling over these gates gives us
(115) |
Simulating this twirled Hamiltonian with the qDRIFT protocol then yields an approximation to with the same performance guarantees as in the single-control setting.
Theorem 7.1.
Let be a Hamiltonian and suppose we have access to . Let for any . For a , there exists a mixed-unitary channel such that
(116) |
implemented from queries to and Pauli and controlled-Pauli gates, where
(117) |
Remark 7.2 (On the trace of the Hamiltonian).
This method of controlization isolates the traceless part of the Hamiltonian, so without further modifications it can only approximate . For our purposes this is actually favorable, because we can now always safely assume that the Hamiltonian to be learned is traceless, removing any potential complications with a nonzero identity component in .
7.3 Error in pseudo-Choi state preparation
All the algorithms presented in this work (even the time-reversal-free approach) require controlled variants of the time evolution operator. Using the controlization scheme of this section, all instances of appearing in this paper can be approximated by the qDRIFT circuit involving applications of . Let us briefly discuss how this approximation affects the learning guarantees.
To begin, observe that each query to runs for time when given access to time reversal, and for without time reversal. Thus to streamline exposition, suppose . According to Theorem 7.1, each query costs
(118) |
queries to the uncontrolled evolution , where the second bound follows from . Clearly, the total evolution time is the same. However, the time resolution is shortened to , so it behooves us to analyze what suffices. The controlization also introduces additional quantum gates, but this overhead is at most so we do not closely analyze it here.
Theorem 7.3.
In order to replace controlled time evolutions with approximate controlizations in any pseudo-Choi state learning algorithm, we can make queries to per instance of . Letting be the target error and the choice of Hamiltonian normalization, it suffices to take for:
• Parameter estimation with time reversal: ;
• Parameter estimation without time reversal: ;
• Structure learning with time reversal: ;
• Structure learning without time reversal: .
The time resolution becomes , and the total evolution time and number of experiments are unaffected.
We will prove this statement in two parts: first the effect of controlization on the parameter estimation stage, and then the effect on the term identification stage.
Lemma 7.4.
Let be an -BE that prepares any pseudo-Choi state with reference, and its corresponding channel. Suppose makes queries to controlled time evolutions. Let be the channel obtained by replacing all such queries with controlized approximations, each with diamond error . Let be any initial -qubit state, and define
(119) |
Analogously, define with respect to . Then for any decoding operator , we have
(120) |
Proof.
Let denote a (multi-)controlled instance of appearing in . Let be the mixed-unitary channel guaranteed by Theorem 7.1 to approximate to within diamond distance . By a telescoping inequality, we have
(121) |
For convenience, we will define and . To incorporate the conditional state preparation into the observable estimation analysis, we use the fact that, in expectation,
(122) |
where . Analogously, we also have
(123) |
To simplify notation, let and . By Hölder’s inequality with , we get
(124) |
Recall that preparing pseudo-Choi states with reference always succeeds with probability . By definition of the diamond norm, the first term is bounded by
(125) |
For the second term, we apply another round of Hölder’s inequality:
(126) |
Next we show how controlization affects the magnitude of the encoded Pauli coefficients of .
Lemma 7.5.
Let be an -BE that prepares the referenceless pseudo-Choi state of any . Fix , and let be defined analogously to Lemma 7.4. For any , define and . These quantities obey
(127) |
Proof.
We start by reviewing the properties of the target state from our analysis in Section 4. Recall the basis states for , which define the distribution of Bell measurements:
(128) |
where and . We also have the probability of preparing from measuring the ancilla register of ,
(129) |
Multiplying these two quantities allows us to express as
(130) |
This observation motivates our definition for as the magnitude of the Pauli coefficients encoded in the controlized form of the block encoding. The error of this approximation is bounded by
(131) |
where we applied the same telescoping argument from Eq. 121 in the final inequality. Then we use for to get
(132) |
which implies the claim. ∎
With these two lemmas in hand, we are ready to prove Theorem 7.3.
Proof (of Theorem 7.3).
Parameter estimation. Recall from Eq. 34 and the surrounding discussion that, in order to get precision in , we need precision in the decoding operators . In the bootstrap framework (time-reversal), we only need a constant precision , so we can tolerate a systematic error of . Therefore by Lemma 7.4, we should choose with a sufficiently small constant factor. Recall that , dominated by the cost of spectral amplification on the residual Hamiltonian (Theorem 5.4). Since , we get .
Without the bootstrap framework (time-forward), our error tolerance is -dependent, . However, since the number of queries in this setting is only , we end up with the same asymptotic time resolution, . Note that this applies to the originally proposed algorithm of [CW23], since it also has a query complexity of .
Structure learning. The Bell sampling protocol collects Pauli terms which have large coefficients, and the result of Lemma 7.5 tells us how to appropriately adjust the threshold for what we mean by “large” under controlization. In the bootstrap framework, recall that we learn the residual Hamiltonian, so we make the notation change from Lemma 7.5. Also, from Theorem 5.4 we have that and . The algorithm only needs to find such that , so we run the protocol with the threshold . This holds with a sufficiently small constant, which implies .
Without the bootstrap framework, we have instead and a desired threshold . We can achieve this by taking , where (Theorem 6.5). Hence it suffices to take , which results in a time resolution of . ∎
Acknowledgments
AZ thanks Andrew Baczewski, Matthias Caro, John Kallaugher, Danial Motlagh, and Ojas Parekh for helpful discussions. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, under the Gil Herrera Fellowship in Quantum Information Science. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525. AZ also acknowledges support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing.
References
- [AAKS21] Anurag Anshu, Srinivasan Arunachalam, Tomotaka Kuwahara and Mehdi Soleimanifar “Sample-efficient learning of interacting quantum systems” In Nature Physics 17.8 Nature Publishing Group UK London, 2021, pp. 931–935 DOI: 10.1038/s41567-021-01232-0
- [Aar20] Scott Aaronson “Shadow tomography of quantum states” In SIAM Journal on Computing 49.5 SIAM, 2020, pp. 368–394 DOI: 10.1137/18M120275X
- [AFCB14] Mateus Araújo, Adrien Feix, Fabio Costa and Časlav Brukner “Quantum circuits cannot control unknown operations” In New Journal of Physics 16.9 IOP Publishing, 2014, pp. 093026 DOI: 10.1088/1367-2630/16/9/093026
- [Alh23] Álvaro M. Alhambra “Quantum Many-Body Systems in Thermal Equilibrium” In PRX Quantum 4 American Physical Society, 2023, pp. 040201 DOI: 10.1103/PRXQuantum.4.040201
- [Ang23] Armando Angrisani “Learning unitaries with quantum statistical queries” In arXiv:2310.02254, 2023 URL: https://arxiv.org/abs/2310.02254
- [AS07] Alp Atıcı and Rocco A Servedio “Quantum algorithms for learning and testing juntas” In Quantum Information Processing 6.5 Springer, 2007, pp. 323–348 DOI: 10.1007/s11128-007-0061-6
- [BAL19] Eyal Bairey, Itai Arad and Netanel H. Lindner “Learning a Local Hamiltonian from Local Measurements” In Physical Review Letters 122 American Physical Society, 2019, pp. 020504 DOI: 10.1103/PhysRevLett.122.020504
- [BCCKS14] Dominic W Berry et al. “Exponential improvement in precision for simulating sparse Hamiltonians” In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, 2014, pp. 283–292 DOI: 10.1145/2591796.2591854
- [BCO24] Andreas Bluhm, Matthias C Caro and Aadil Oufkir “Hamiltonian Property Testing” In arXiv:2403.02968, 2024 URL: https://arxiv.org/abs/2403.02968
- [BIWH96] J. J. Bollinger, Wayne M. Itano, D. J. Wineland and D. J. Heinzen “Optimal frequency measurements with maximally correlated states” In Physical Review A 54 American Physical Society, 1996, pp. R4649–R4652 DOI: 10.1103/PhysRevA.54.R4649
- [BKD14] Charles H. Baldwin, Amir Kalev and Ivan H. Deutsch “Quantum process tomography of unitary and near-unitary maps” In Physical Review A 90 American Physical Society, 2014, pp. 012110 DOI: 10.1103/PhysRevA.90.012110
- [BLMT24] Ainesh Bakshi, Allen Liu, Ankur Moitra and Ewin Tang “Learning quantum Hamiltonians at any temperature in polynomial time” In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, 2024, pp. 1470–1477 DOI: 10.1145/3618260.3649619
- [BLMT24a] Ainesh Bakshi, Allen Liu, Ankur Moitra and Ewin Tang “Structure learning of Hamiltonians from real-time evolution” In arXiv:2405.00082, 2024 URL: https://arxiv.org/abs/2405.00082
- [BMQ22] Jessica Bavaresco, Mio Murao and Marco Túlio Quintino “Unitary channel discrimination beyond group structures: Advantages of sequential and indefinite-causal-order strategies” In Journal of Mathematical Physics 63.4 AIP Publishing, 2022 DOI: 10.1063/5.0075919
- [BO21] Costin Bădescu and Ryan O’Donnell “Improved quantum data analysis” In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, 2021, pp. 1398–1411 DOI: 10.1145/3406325.3451109
- [Bra+22] Jochen Braumüller et al. “Probing quantum information propagation with out-of-time-ordered correlators” In Nature Physics 18.2 Nature Publishing Group UK London, 2022, pp. 172–178 DOI: 10.1038/s41567-021-01430-w
- [Cam19] Earl Campbell “Random Compiler for Fast Hamiltonian Simulation” In Physical Review Letters 123 American Physical Society, 2019, pp. 070503 DOI: 10.1103/PhysRevLett.123.070503
- [Car24] Matthias C Caro “Learning quantum processes and Hamiltonians via the Pauli transfer matrix” In ACM Transactions on Quantum Computing 5.2 ACM New York, NY, 2024, pp. 1–53 DOI: 10.1145/3670418
- [Cav81] Carlton M. Caves “Quantum-mechanical noise in an interferometer” In Physical Review D 23 American Physical Society, 1981, pp. 1693–1708 DOI: 10.1103/PhysRevD.23.1693
- [CBW24] Anirban Chowdhury, Ewout van den Berg and Pawel Wocjan “Controlization Schemes Based on Orthogonal Arrays” In arXiv:2407.09382, 2024 URL: https://arxiv.org/abs/2407.09382
- [CCHL22] Sitan Chen, Jordan Cotler, Hsin-Yuan Huang and Jerry Li “Exponential separations between learning with and without quantum memory” In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), 2022, pp. 574–585 IEEE DOI: 10.1109/FOCS52979.2021.00063
- [CHKT21] Chi-Fang Chen, Hsin-Yuan Huang, Richard Kueng and Joel A. Tropp “Concentration for Random Product Formulas” In PRX Quantum 2 American Physical Society, 2021, pp. 040305 DOI: 10.1103/PRXQuantum.2.040305
- [CNY23] Thomas Chen, Shivam Nadimpalli and Henry Yuen “Testing and learning quantum juntas nearly optimally” In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2023, pp. 1163–1185 SIAM DOI: 10.1137/1.9781611977554.ch43
- [Col+22] Simone Colombo et al. “Time-reversal-based quantum metrology with many-body entangled states” In Nature Physics 18.8 Nature Publishing Group UK London, 2022, pp. 925–930 DOI: 10.1038/s41567-022-01653-5
- [CSM23] Jordan Cotler, Thomas Schuster and Masoud Mohseni “Information-theoretic hardness of out-of-time-order correlators” In Physical Review A 108 American Physical Society, 2023, pp. 062608 DOI: 10.1103/PhysRevA.108.062608
- [CW12] Andrew M Childs and Nathan Wiebe “Hamiltonian simulation using linear combinations of unitary operations” In Quantum Information & Computation 12.11-12 Rinton Press, Incorporated Paramus, NJ, 2012, pp. 901–924 DOI: 10.26421/QIC12.11-12
- [CW23] Juan Castaneda and Nathan Wiebe “Hamiltonian Learning via Shadow Tomography of Pseudo-Choi States” In arXiv:2308.13020, 2023 URL: https://arxiv.org/abs/2308.13020
- [DNSM19] Qingxiuxiong Dong, Shojun Nakayama, Akihito Soeda and Mio Murao “Controlled quantum operations and combs, and their applications to universal controllization of divisible unitary operations” In arXiv:1911.01645, 2019 URL: https://arxiv.org/abs/1911.01645
- [DOS23] Alicja Dutkiewicz, Thomas E O’Brien and Thomas Schuster “The advantage of quantum control in many-body Hamiltonian learning” In arXiv:2304.07172, 2023 URL: https://arxiv.org/abs/2304.07172
- [DRC17] C. L. Degen, F. Reinhard and P. Cappellaro “Quantum sensing” In Reviews of Modern Physics 89 American Physical Society, 2017, pp. 035002 DOI: 10.1103/RevModPhys.89.035002
- [FDDB14] Nicolai Friis, Vedran Dunjko, Wolfgang Dür and Hans J. Briegel “Implementing quantum control for unknown subroutines” In Physical Review A 89 American Physical Society, 2014, pp. 030303 DOI: 10.1103/PhysRevA.89.030303
- [FW20] Steven T Flammia and Joel J Wallman “Efficient estimation of Pauli channels” In ACM Transactions on Quantum Computing 1.1 ACM New York, NY, USA, 2020, pp. 1–32 DOI: 10.1145/3408039
- [Gär+17] Martin Gärttner et al. “Measuring out-of-time-order correlations and multiple quantum spectra in a trapped-ion quantum magnet” In Nature Physics 13.8 Nature Publishing Group UK London, 2017, pp. 781–786 DOI: 10.1038/nphys4119
- [GCC24] Andi Gu, Lukasz Cincio and Patrick J Coles “Practical Hamiltonian learning with unitary dynamics and Gibbs states” In Nature Communications 15.1 Nature Publishing Group UK London, 2024, pp. 312 DOI: 10.1038/s41467-023-44008-1
- [GFWC12] Christopher E Granade, Christopher Ferrie, Nathan Wiebe and David G Cory “Robust online Hamiltonian learning” In New Journal of Physics 14.10 IOP Publishing, 2012, pp. 103013 DOI: 10.1088/1367-2630/14/10/103013
- [GL89] O. Goldreich and L.. Levin “A hard-core predicate for all one-way functions” In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing Association for Computing Machinery, 1989, pp. 25–32 DOI: 10.1145/73007.73010
- [GLM11] Vittorio Giovannetti, Seth Lloyd and Lorenzo Maccone “Advances in quantum metrology” In Nature Photonics 5.4 Nature Publishing Group UK London, 2011, pp. 222–229 DOI: 10.1038/nphoton.2011.35
- [GSLW19] András Gilyén, Yuan Su, Guang Hao Low and Nathan Wiebe “Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics” In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, 2019, pp. 193–204 DOI: 10.1145/3313276.3316366
- [GST24] Zuzana Gavorová, Matan Seidel and Yonathan Touati “Topological obstructions to quantum computation with unitary oracles” In Physical Review A 109 American Physical Society, 2024, pp. 032625 DOI: 10.1103/PhysRevA.109.032625
- [Gut24] Francisco Escudero Gutiérrez “Simple algorithms to test and learn local Hamiltonians” In arXiv:2404.06282, 2024 URL: https://arxiv.org/abs/2404.06282
- [Hal15] Brian C. Hall “Lie Groups, Lie Algebras, and Representations”, Graduate Texts in Mathematics Springer, 2015
- [HBCP15] M. Holzäpfel, T. Baumgratz, M. Cramer and M.. Plenio “Scalable reconstruction of unitary processes and Hamiltonians” In Physical Review A 91 American Physical Society, 2015, pp. 042129 DOI: 10.1103/PhysRevA.91.042129
- [HCP23] Hsin-Yuan Huang, Sitan Chen and John Preskill “Learning to Predict Arbitrary Quantum Processes” In PRX Quantum 4 American Physical Society, 2023, pp. 040337 DOI: 10.1103/PRXQuantum.4.040337
- [HKOT23] Jeongwan Haah, Robin Kothari, Ryan O’Donnell and Ewin Tang “Query-optimal estimation of unitary channels in diamond distance” In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), 2023, pp. 363–390 IEEE DOI: 10.1109/FOCS57990.2023.00028
- [HKP20] Hsin-Yuan Huang, Richard Kueng and John Preskill “Predicting many properties of a quantum system from very few measurements” In Nature Physics 16, 2020, pp. 1050–1057 DOI: 10.1038/s41567-020-0932-7
- [HKT24] Jeongwan Haah, Robin Kothari and Ewin Tang “Learning quantum Hamiltonians from high-temperature Gibbs states and real-time evolutions” In Nature Physics 20 Nature Publishing Group UK London, 2024, pp. 1027–1031 DOI: 10.1038/s41567-023-02376-x
- [HTFS23] Hsin-Yuan Huang, Yu Tong, Di Fang and Yuan Su “Learning Many-Body Hamiltonians with Heisenberg-Limited Scaling” In Physical Review Letters 130 American Physical Society, 2023, pp. 200403 DOI: 10.1103/PhysRevLett.130.200403
- [Hug+22] William J. Huggins et al. “Nearly Optimal Quantum Algorithm for Estimating Multiple Expectation Values” In Physical Review Letters 129 American Physical Society, 2022, pp. 240501 DOI: 10.1103/PhysRevLett.129.240501
- [IBFBP20] Luca Innocenti et al. “Supervised learning of time-independent Hamiltonians for gate design” In New Journal of Physics 22.6 IOP Publishing, 2020, pp. 065001 DOI: 10.1088/1367-2630/ab8aaf
- [Jan02] Dominik Janzing “Quantum algorithm for measuring the energy of qubits with unknown pair-interactions” In Quantum Information & Computation 2.3 Rinton Press, Incorporated Paramus, NJ, 2002, pp. 198–207 DOI: 10.26421/QIC2.3-3
- [JW28] P. Jordan and E. Wigner “Über das Paulische Äquivalenzverbot” In Z. Phys. 47, 1928, pp. 631–651 DOI: 10.1007/BF01331938
- [KCM22] Or Katz, Marko Cetina and Christopher Monroe “-Body Interactions between Trapped Ion Qubits via Spin-Dependent Squeezing” In Physical Review Letters 129 American Physical Society, 2022, pp. 063603 DOI: 10.1103/PhysRevLett.129.063603
- [KGKB24] Robbie King, David Gosset, Robin Kothari and Ryan Babbush “Triply efficient shadow tomography” In arXiv:2404.19211, 2024 URL: https://arxiv.org/abs/2404.19211
- [KLY15] Shelby Kimmel, Guang Hao Low and Theodore J. Yoder “Robust calibration of a universal single-qubit gate set via robust phase estimation” In Physical Review A 92 American Physical Society, 2015, pp. 062315 DOI: 10.1103/PhysRevA.92.062315
- [KU18] Naoto Kura and Masahito Ueda “Finite-error metrological bounds on multiparameter Hamiltonian estimation” In Physical Review A 97 American Physical Society, 2018, pp. 012101 DOI: 10.1103/PhysRevA.97.012101
- [LC17] Guang Hao Low and Isaac L Chuang “Hamiltonian simulation by uniform spectral amplification” In arXiv:1707.05391, 2017 URL: https://arxiv.org/abs/1707.05391
- [Li+23] Zeyang Li et al. “Improving metrology with quantum scrambling” In Science 380.6652 American Association for the Advancement of Science, 2023, pp. 1381–1384 DOI: 10.1126/science.adg9500
- [LTGNY24] Haoya Li et al. “Heisenberg-limited Hamiltonian learning for interacting bosons” In npj Quantum Information 10.1 Nature Publishing Group UK London, 2024, pp. 83 DOI: 10.1038/s41534-024-00881-2
- [MFPT24] Muzhou Ma, Steven T Flammia, John Preskill and Yu Tong “Learning -body Hamiltonians via compressed sensing” In arXiv:2410.18928, 2024 URL: https://arxiv.org/abs/2410.18928
- [MH24] Arjun Mirani and Patrick Hayden “Learning interacting fermionic Hamiltonians at the Heisenberg limit” In arXiv:2403.00069, 2024 URL: https://arxiv.org/abs/2403.00069
- [Mi+21] Xiao Mi et al. “Information scrambling in quantum circuits” In Science 374.6574 American Association for the Advancement of Science, 2021, pp. 1479–1483 DOI: 10.1126/science.abg5029
- [MO10] Ashley Montanaro and Tobias J Osborne “Quantum boolean functions” In Chicago Journal of Theoretical Computer Science 1, 2010, pp. 1–45 DOI: 10.4086/cjtcs.2010.001
- [Mon17] Ashley Montanaro “Learning stabilizer states by Bell sampling” In arXiv:1707.04012, 2017 URL: https://arxiv.org/abs/1707.04012
- [MR24] John M Martyn and Patrick Rall “Halving the Cost of Quantum Algorithms with Randomization” In arXiv:2409.03744, 2024 URL: https://arxiv.org/abs/2409.03744
- [NLY24] Hongkang Ni, Haoya Li and Lexing Ying “Quantum Hamiltonian Learning for the Fermi-Hubbard Model” In Acta Applicandae Mathematicae 191.1 Springer, 2024, pp. 1–16 DOI: 10.1007/s10440-024-00651-4
- [NSM15] Shojun Nakayama, Akihito Soeda and Mio Murao “Quantum Algorithm for Universal Implementation of the Projective Measurement of Energy” In Physical Review Letters 114 American Physical Society, 2015, pp. 190501 DOI: 10.1103/PhysRevLett.114.190501
- [OKSM24] Tatsuki Odake, Hlér Kristjánsson, Akihito Soeda and Mio Murao “Higher-order quantum transformations of Hamiltonian dynamics” In Physical Review Research 6 American Physical Society, 2024, pp. L012063 DOI: 10.1103/PhysRevResearch.6.L012063
- [OKTM23] Tatsuki Odake, Hlér Kristjánsson, Philip Taranto and Mio Murao “Universal algorithm for transforming Hamiltonian eigenvalues” In arXiv:2312.08848, 2023 URL: https://arxiv.org/abs/2312.08848
- [QDSSM19] Marco Túlio Quintino et al. “Reversing Unknown Quantum Transformations: Universal Quantum Circuit for Inverting General Unitary Operations” In Physical Review Letters 123 American Physical Society, 2019, pp. 210502 DOI: 10.1103/PhysRevLett.123.210502
- [QR19] Xiao-Liang Qi and Daniel Ranard “Determining a local Hamiltonian from a single eigenstate” In Quantum 3 Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften, 2019, pp. 159 DOI: 10.22331/q-2019-07-08-159
- [Ram50] Norman F. Ramsey “A Molecular Beam Resonance Method with Separated Oscillating Fields” In Physical Review 78 American Physical Society, 1950, pp. 695–699 DOI: 10.1103/PhysRev.78.695
- [Sch+23] Thomas Schuster et al. “Learning quantum systems via out-of-time-order correlators” In Physical Review Research 5 American Physical Society, 2023, pp. 043284 DOI: 10.1103/PhysRevResearch.5.043284
- [SHH24] Thomas Schuster, Jonas Haferkamp and Hsin-Yuan Huang “Random unitaries in extremely low depth” In arXiv:2407.07754, 2024 URL: https://arxiv.org/abs/2407.07754
- [Shi07] Shigeo Shioda “Some upper and lower bounds on the coupon collector problem” In Journal of Computational and Applied Mathematics 200.1 Elsevier, 2007, pp. 154–167 DOI: 10.1016/j.cam.2005.12.011
- [Shu+14] Michael D Shulman et al. “Suppressing qubit dephasing using real-time Hamiltonian estimation” In Nature Communications 5.1 Nature Publishing Group UK London, 2014, pp. 5156 DOI: 10.1038/ncomms6156
- [SLP11] Marcus P. Silva, Olivier Landon-Cardinal and David Poulin “Practical Characterization of Quantum Devices without Tomography” In Physical Review Letters 107 American Physical Society, 2011, pp. 210404 DOI: 10.1103/PhysRevLett.107.210404
- [SMDWB24] Daniel Stilck França et al. “Efficient and robust estimation of many-qubit Hamiltonians” In Nature Communications 15.1 Nature Publishing Group UK London, 2024, pp. 311 DOI: 10.1038/s41467-023-44012-5
- [SMLKR11] A. Shabani et al. “Estimation of many-body quantum Hamiltonians via compressive sensing” In Physical Review A 84 American Physical Society, 2011, pp. 012107 DOI: 10.1103/PhysRevA.84.012107
- [VZ24] Alexander Volberg and Haonan Zhang “Noncommutative Bohnenblust–Hille inequalities” In Mathematische Annalen 389.2 Springer, 2024, pp. 1657–1676 DOI: 10.1007/s00208-023-02680-0
- [Wan+17] Jianwei Wang et al. “Experimental quantum Hamiltonian learning” In Nature Physics 13.6 Nature Publishing Group UK London, 2017, pp. 551–555 DOI: 10.1038/nphys4074
- [WGFC14] Nathan Wiebe, Christopher Granade, Christopher Ferrie and D. G. Cory “Hamiltonian Learning and Certification Using Quantum Resources” In Physical Review Letters 112 American Physical Society, 2014, pp. 190501 DOI: 10.1103/PhysRevLett.112.190501
- [WGFC14a] Nathan Wiebe, Christopher Granade, Christopher Ferrie and David Cory “Quantum Hamiltonian learning using imperfect quantum resources” In Phys. Rev. A 89 American Physical Society, 2014, pp. 042314 DOI: 10.1103/PhysRevA.89.042314
- [YSHY23] Wenjun Yu, Jinzhao Sun, Zeyao Han and Xiao Yuan “Robust and efficient Hamiltonian learning” In Quantum 7 Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften, 2023, pp. 1045 DOI: 10.22331/q-2023-06-29-1045
- [Zha+23] Haimeng Zhao et al. “Learning quantum states and unitaries of bounded gate complexity” In arXiv:2310.19882, 2023 URL: https://arxiv.org/abs/2310.19882
- [Zho+11] Xiao-Qi Zhou et al. “Adding control to arbitrary unknown quantum operations” In Nature Communications 2.1 Nature Publishing Group UK London, 2011, pp. 413 DOI: 10.1038/ncomms1392
- [ZYLB21] Assaf Zubida, Elad Yitzhaki, Netanel H Lindner and Eyal Bairey “Optimal short-time measurements for Hamiltonian learning” In arXiv:2108.08824, 2021 URL: https://arxiv.org/abs/2108.08824
Appendix A Copies to queries reduction
Let be the probability of preparing a state , say by querying a block encoding and measuring some of its ancilla qubits. Viewing the repeated preparation attempts as a series of Bernoulli trials, it is a standard result that queries are necessary and sufficient to successfully prepare copies of the state. In this appendix we prove the statement for completeness, adapting the argument of [CW23, Appendix B.6] to the level of generality appropriate for our purposes.
Lemma A.1.
Fix integers . Let be i.i.d. Bernoulli random variables with . Define . In order to observe the event with probability at least ,
(133) |
trials are necessary and sufficient.
Proof.
We invoke Chernoff’s inequality to bound the tail probability for the event . For any , the multiplicative Chernoff bound states that
(134) |
Since , we can set to get
(135) |
This bounds the failure probability of the entire process, so we set the right-hand side equal to for some constant . Taking logarithms and rearranging yields
(136) |
Solving for , we find
(137) |
is sufficient. Conversely, observe that for to hold, we must have , so is also necessary.
Finally, we remark that only when , so let us consider the case separately. It is clear that . If then the result is trivial; meanwhile for each , there exists some such that . Combined with , we get
(138) |
which implies that when as well. Note that even when we can choose to get some small failure probability . ∎
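As a concrete illustration of the reduction, the following is a minimal numerical sketch, assuming the sufficient trial count takes the standard multiplicative-Chernoff form $N = \lceil (k + \ln(1/\delta) + \sqrt{\ln^2(1/\delta) + 2k\ln(1/\delta)})/p \rceil$; the constants may differ from those in Lemma A.1, and the parameters below are illustrative rather than taken from the text.

```python
import numpy as np

def sufficient_trials(p, k, delta):
    """Trial count from the multiplicative Chernoff lower-tail bound,
    chosen so that Pr[Binomial(N, p) < k] <= delta.
    (Standard form; the exact constants in Lemma A.1 may differ.)"""
    L = np.log(1.0 / delta)
    return int(np.ceil((k + L + np.sqrt(L**2 + 2 * k * L)) / p))

# Illustrative parameters (not from the paper): success probability p,
# desired number of copies k, and failure probability delta.
p, k, delta = 0.3, 25, 1e-2
N = sufficient_trials(p, k, delta)

# Empirically estimate Pr[fewer than k successes in N trials].
rng = np.random.default_rng(0)
runs = 100_000
failures = np.sum(rng.binomial(N, p, size=runs) < k)
print(f"N = {N} trials, empirical failure rate = {failures / runs:.5f} (target delta = {delta})")
```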
Appendix B The galactic algorithm
Here we prove Corollary 1.7. Let us restate it below.
Corollary B.1 (Galactic structure learning without time reversal).
There exists a quantum algorithm that solves 1.2 under the discrete control access model alone. For any real constant , the algorithm queries for a total evolution time of , requires a time resolution of , runs experiments, and uses quantum and classical computation. The discrete control operations act on an ancillary computational register of qubits.
Proof.
The key modification is to shrink the bound from Lemma 6.2. Let be some real constant and for simplicity suppose . Then , which by Lemma 6.2 means that
(139) |
where is a constant. To make this error at most , we can take . Taking , we get
(140) |
Then recall from Theorems 6.7 and 104 that the query complexity for either parameter or structure learning is , and the time resolution from Theorem 7.3 is . ∎
Remark B.2.
One may be concerned that for large , but this is not an issue: always holds because . On the other hand, this means that for very large , we are essentially just constructing a block encoding of , which is why this approximation is so accurate for just two terms of the matrix logarithm. Of course, the trade-off is that suppresses the estimation of the parameters of by a factor of as well, hence the blow-up in this constant.
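Remark B.2 attributes the accuracy of the construction to keeping only two terms of the matrix logarithm. Below is a minimal numerical sketch of that effect, assuming the truncation takes the standard form $\log V \approx (V - \mathbb{1}) - (V - \mathbb{1})^2/2$ applied to $V = e^{-iHt}$; the Hamiltonian, dimension, and time values are hypothetical and are not taken from Lemma 6.2.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Hypothetical toy Hamiltonian: a random 3-qubit Hermitian matrix, normalized to unit spectral norm.
dim = 8
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
H = (A + A.conj().T) / 2
H = H / np.linalg.norm(H, ord=2)

# Two-term truncation of the matrix logarithm, log(V) ~ (V - I) - (V - I)^2 / 2,
# applied to V = exp(-iHt). Since ||Ht|| < pi here, log(V) = -iHt exactly, so the
# error below is purely the truncation error, which shrinks rapidly as t decreases.
for t in [0.5, 0.1, 0.02]:
    V = expm(-1j * H * t)
    X = V - np.eye(dim)
    log_approx = X - X @ X / 2
    err = np.linalg.norm(log_approx + 1j * H * t, ord=2)
    print(f"t = {t:5.2f}: two-term truncation error = {err:.2e}")
```

The rapid decay of the error with the effective evolution time is consistent with the remark's observation that rescaling the dynamics makes the two-term approximation highly accurate, at the cost of correspondingly suppressing the parameters to be estimated.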