Maximum-Likelihood Quantum State Tomography by Cover’s Method with Non-Asymptotic Analysis
Abstract
We propose an iterative algorithm that computes the maximum-likelihood estimate in quantum state tomography. The optimization error of the algorithm converges to zero at an $O((\log d)/k)$ rate, where $k$ denotes the number of iterations and $d$ denotes the dimension of the quantum state. The per-iteration computational complexity of the algorithm is $O(d^3 + n d^2)$, where $n$ denotes the number of measurement outcomes. The algorithm can be considered as a parameter-free correction of the $R\rho R$ method [A. I. Lvovsky. Iterative maximum-likelihood reconstruction in quantum homodyne tomography. J. Opt. B: Quantum Semiclass. Opt. 2004] [G. Molina-Terriza et al. Triggered qutrits for quantum communication protocols. Phys. Rev. Lett. 2004].
1 Introduction
Quantum state tomography aims to estimate the quantum state of a physical system, given measurement outcomes (see, e.g., [Pv04] for a complete survey). There are various approaches to quantum state tomography, such as trace regression [FGLE12, GLF+10, OWV97, YJZS20, YFT19], maximum-likelihood estimation [Hra97, HvFJ04], Bayesian estimation [BK10a, BK10b], and recently proposed deep learning-based methods [AMnNK20, QFN21]. (A confusion the authors frequently encounter is that many people conflate state tomography with the notion of shadow tomography introduced by Aaronson [ACH+18, Aar20]. State tomography aims at estimating the quantum state, whereas shadow tomography aims at estimating the probability distribution of measurement outcomes. Indeed, one interesting conclusion by Aaronson is that shadow tomography requires much less data than state tomography.) Among existing approaches, the maximum-likelihood estimation approach has been standard in practice and enjoys favorable asymptotic statistical guarantees (see, e.g., [HvFJ04, SBK18]). The maximum-likelihood estimator is given by the optimization problem:
$\hat{\rho}_{\mathrm{ML}} \in \operatorname*{arg\,min}_{\rho \in \mathcal{D}} f(\rho), \qquad f(\rho) := -\frac{1}{n} \sum_{i=1}^{n} \log \langle A_i, \rho \rangle \qquad (1)$

for some Hermitian positive semi-definite matrices $A_1, \ldots, A_n \in \mathbb{C}^{d \times d}$, where $\langle X, Y \rangle := \operatorname{tr}(X^* Y)$ denotes the Hilbert–Schmidt inner product and $\mathcal{D}$ denotes the set of quantum density matrices, i.e.,

$\mathcal{D} := \{ \rho \in \mathbb{C}^{d \times d} : \rho = \rho^*,\ \rho \succeq 0,\ \operatorname{tr} \rho = 1 \},$

and $n$ denotes the number of measurement outcomes. We write $X^*$ for the conjugate transpose of a matrix $X$.
$R\rho R$ is a numerical method developed to solve (1) [Lvo04, MTVv+04]. Given a positive definite initial iterate $\rho_1 \in \mathcal{D}$, $R\rho R$ iterates as

$\rho_{t+1} = \mathcal{N}\big( \nabla f(\rho_t)\, \rho_t\, \nabla f(\rho_t) \big),$

where the mapping $\mathcal{N}$ scales its input such that $\operatorname{tr} \mathcal{N}(X) = 1$, i.e., $\mathcal{N}(X) := X / \operatorname{tr} X$, and $\nabla f$ denotes the gradient mapping of $f$. $R\rho R$ is parameter-free (i.e., it does not require parameter tuning) and typically converges fast in practice. Unfortunately, one can construct a synthetic data set on which $R\rho R$ does not converge [vHKL07].
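To make the update concrete, below is a minimal NumPy sketch of one $R\rho R$ iteration; the helper names and the dense-matrix representation are our illustration, not taken from [Lvo04, MTVv+04].

```python
import numpy as np

def grad_f(rho, As):
    """Gradient of f(rho) = -(1/n) * sum_i log <A_i, rho> at a full-rank rho,
    where <X, Y> = tr(X* Y) is the Hilbert-Schmidt inner product."""
    n = len(As)
    return -sum(A / np.trace(A @ rho).real for A in As) / n

def rpr_step(rho, As):
    """One R*rho*R step: rho <- N(grad_f(rho) rho grad_f(rho)), where N
    scales to unit trace. The two gradient factors make the sign irrelevant."""
    G = grad_f(rho, As)
    X = G @ rho @ G
    return X / np.trace(X).real
```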
According to [Lvo04], $R\rho R$ is inspired by Cover's method [Cov84] for solving the following optimization problem (indeed, Cover's method coincides with the expectation maximization method for solving (2) and hence is typically called expectation maximization in the literature; nevertheless, Cover's and our derivations and convergence analyses do not need and are not covered by existing results on expectation maximization, so we do not call the method expectation maximization to avoid possible confusion):
$\hat{x} \in \operatorname*{arg\,min}_{x \in \Delta} g(x), \qquad g(x) := -\frac{1}{n} \sum_{i=1}^{n} \log \langle a_i, x \rangle \qquad (2)$

for some entry-wise non-negative vectors $a_1, \ldots, a_n \in \mathbb{R}^d$, where $\Delta$ denotes the probability simplex in $\mathbb{R}^d$, i.e.,

$\Delta := \{ x \in \mathbb{R}^d : x \geq 0 \text{ entry-wise},\ \textstyle\sum_{j=1}^{d} x_j = 1 \},$

and the inner product is the one associated with the Euclidean norm. The optimization problem (2) appears when one wants to compute the growth-optimal portfolio for long-term investment (see, e.g., [MTZ12]). Given an entry-wise strictly positive initial iterate $x_1 \in \Delta$, Cover's method iterates as

$x_{t+1} = x_t \odot \big( -\nabla g(x_t) \big),$
where $\odot$ denotes the entry-wise product, a.k.a. the Schur product. Cover's method is guaranteed to converge to the optimum [Cov84]. Indeed, if the matrices $A_1, \ldots, A_n$ in (1) share the same eigenbasis, then it is easily checked that (1) is equivalent to (2), but $R\rho R$ is not equivalent to Cover's method. (Cover's method does not need the scaling mapping $\mathcal{N}$; one can check that its iterates are already in $\Delta$.) This explains why $R\rho R$ does not inherit the convergence guarantee of Cover's method.
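For comparison, here is a minimal sketch of Cover's update for (2); the names are ours, and the final comment checks the claim above that no scaling is needed.

```python
import numpy as np

def cover_step(x, a_list):
    """One step of Cover's method for g(x) = -(1/n) * sum_i log <a_i, x>."""
    n = len(a_list)
    r = sum(a / np.dot(a, x) for a in a_list) / n   # r = -grad g(x), entry-wise positive
    # Entry-wise (Schur) product; no normalization is needed because
    # sum_j x_j * r_j = (1/n) * sum_i <a_i, x> / <a_i, x> = 1.
    return x * r
```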
Řeháček et al. proposed a diluted correction of $R\rho R$ [vHKL07]. Given a positive definite initial iterate $\rho_1 \in \mathcal{D}$, diluted $R\rho R$ iterates as

$\rho_{t+1} = \mathcal{N}\big( (I - \alpha \nabla f(\rho_t))\, \rho_t\, (I - \alpha \nabla f(\rho_t)) \big),$

where the parameter $\alpha > 0$ is chosen by exact line search. Later, Gonçalves et al. proposed a variant of diluted $R\rho R$ by adopting Armijo line search [GGRL14]. Both versions of diluted $R\rho R$ are guaranteed to converge to the optimum. Unfortunately, the convergence guarantees of both versions of diluted $R\rho R$ are asymptotic and do not allow us to characterize the iteration complexity, i.e., the number of iterations required to obtain an approximate solution of (1). In particular, the dimension $d$ grows exponentially with the number of qubits and can be huge, but the dependence of the iteration complexities of the diluted $R\rho R$ methods on $d$ is unclear.
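For concreteness, here is a sketch of one diluted $R\rho R$ step; it is our paraphrase of [vHKL07], with the exact line search implemented by a bounded scalar minimization (scipy.optimize.minimize_scalar) over an interval we chose for simplicity.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f_val(rho, As):
    """Objective f(rho) = -(1/n) * sum_i log <A_i, rho>."""
    return -np.mean([np.log(np.trace(A @ rho).real) for A in As])

def diluted_step(rho, As):
    """One diluted R*rho*R step: rho <- N((I + a*R) rho (I + a*R)),
    with R = -grad f(rho) and a >= 0 chosen by (approximate) exact line search."""
    n, d = len(As), rho.shape[0]
    R = sum(A / np.trace(A @ rho).real for A in As) / n
    def next_rho(a):
        B = np.eye(d) + a * R
        X = B @ rho @ B
        return X / np.trace(X).real
    # Search over a bounded interval for simplicity; the cap is our assumption.
    a = minimize_scalar(lambda a: f_val(next_rho(a), As),
                        bounds=(0.0, 10.0), method="bounded").x
    return next_rho(a)
```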
We propose the following algorithm.
• Set $\rho_1 = I/d$, where $I$ denotes the $d \times d$ identity matrix.
• For each $t \in \mathbb{N}$, compute

$\rho_{t+1} = \mathcal{N}\big( \exp\big( \log \rho_t + \log( -\nabla f(\rho_t) ) \big) \big),$

where $\exp$ and $\log$ denote the matrix exponential and logarithm, respectively.
Notice that the objective function $f$ implicitly requires that the inner products $\langle A_i, \rho \rangle$ be strictly positive; otherwise, $\nabla f(\rho)$ does not exist as $\log \langle A_i, \rho \rangle$ is not well-defined. Our initialization and iteration rule guarantee that the iterates $\rho_t$ are full-rank and hence $\langle A_i, \rho_t \rangle$ are strictly positive.
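A self-contained sketch of the proposed method on synthetic data follows; scipy.linalg.expm and scipy.linalg.logm implement the matrix exponential and logarithm, and the rank-one effects $A_i = v_i v_i^*$ are our own toy example, not data from the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

def grad_f(rho, As):
    """Gradient of f(rho) = -(1/n) * sum_i log <A_i, rho>."""
    n = len(As)
    return -sum(A / np.trace(A @ rho).real for A in As) / n

def proposed_step(rho, As):
    """rho <- N(exp(log rho + log(-grad f(rho)))), with N scaling to unit trace."""
    X = expm(logm(rho) + logm(-grad_f(rho, As)))
    X = (X + X.conj().T) / 2                 # re-Hermitize against rounding noise
    return X / np.trace(X).real

rng = np.random.default_rng(0)
d, n, T = 4, 30, 200
vs = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
As = [np.outer(v, v.conj()) for v in vs]     # Hermitian PSD; common kernel {0} a.s.

f = lambda rho: -np.mean([np.log(np.trace(A @ rho).real) for A in As])
rho, avg = np.eye(d) / d, np.zeros((d, d), dtype=complex)
for t in range(T):
    avg += rho / T                           # running average of rho_1, ..., rho_T
    rho = proposed_step(rho, As)
print(f(avg), np.log(d) / T)                 # Theorem 1: f(avg) - min f <= log(d)/T
```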
Let us discuss the computational complexity of the proposed algorithm. The computational complexity of computing $\nabla f(\rho_t)$ is $O(n d^2)$. The computational complexities of computing the matrix logarithm and exponential are $O(d^3)$. The per-iteration computational complexity is hence $O(d^3 + n d^2)$.
We can observe that the proposed algorithm recovers Cover's method when $\rho_t$ and $\nabla f(\rho_t)$ share the same eigenbasis. We show that the proposed algorithm indeed converges and its iteration complexity is logarithmic in the dimension $d$.
Theorem 1
Assume that $\bigcap_{i=1}^{n} \ker A_i = \{0\}$. Let $(\rho_t)_{t \in \mathbb{N}}$ be the sequence of iterates generated by the proposed method. Define $\bar{\rho}_T := \frac{1}{T} \sum_{t=1}^{T} \rho_t$. Then, for every $\varepsilon > 0$, we have $f(\bar{\rho}_T) - \min_{\rho \in \mathcal{D}} f(\rho) \leq \varepsilon$ if $T \geq (\log d)/\varepsilon$. □
Remark 1
Suppose $\bigcap_{i=1}^{n} \ker A_i \neq \{0\}$. Let $V$ be a matrix whose columns form an orthonormal basis of the orthogonal complement of $\bigcap_{i=1}^{n} \ker A_i$. Then, it suffices to solve (1) on a lower-dimensional space by replacing $A_i$ with $V^* A_i V$ in the objective function. □
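A sketch of this reduction follows; constructing $V$ via an eigendecomposition of $\sum_i A_i$ is our choice, and any orthonormal basis of the orthogonal complement of $\bigcap_i \ker A_i$ works.

```python
import numpy as np

def reduce_effects(As, tol=1e-12):
    """Replace each A_i by V* A_i V, where the columns of V form an
    orthonormal basis of the orthogonal complement of the common kernel.
    For PSD A_i, a vector v lies in every ker A_i iff (sum_i A_i) v = 0."""
    w, U = np.linalg.eigh(sum(As))   # sum(As) = U diag(w) U*
    V = U[:, w > tol]                # eigenvectors with non-zero eigenvalues
    return [V.conj().T @ A @ V for A in As]
```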
Recall that (2) and Cover's method are special cases of (1) and the proposed algorithm, respectively. Moreover, Cover's method is equivalent to the expectation maximization method for Poisson inverse problems [VL93]. Theorem 1 is hence of independent interest even for computing the growth-optimal portfolio by Cover's method and solving Poisson inverse problems by expectation maximization, showing that the iteration complexities of both are also $O((\log d)/\varepsilon)$. This supplements the asymptotic convergence results in [Cov84] and [VSK85]. Whereas the same iteration complexity bound for Cover's method in growth-optimal portfolio selection is immediate from a lemma due to Iusem [Ius92, Lemma 2.2], it is currently unclear to us how to extend Iusem's analysis to the quantum setup.
2 Proof of Theorem 1
For convenience, let $A$ be a random matrix following the empirical probability distribution of $\{A_1, \ldots, A_n\}$. (Notice the derivation here is not restricted to the empirical probability distribution.) Then, we have $f(\rho) = -\mathbb{E} \log \langle A, \rho \rangle$, where $\mathbb{E}$ denotes the mathematical expectation. Define

$R(\rho) := \mathbb{E} \left[ \frac{A}{\langle A, \rho \rangle} \right] = -\nabla f(\rho),$

where the inner product, as well as all inner products in the rest of this section, is the Hilbert–Schmidt inner product $\langle X, Y \rangle := \operatorname{tr}(X^* Y)$. We start with an error upper bound.
Lemma 1
For any density matrix $\rho$ such that $R(\rho)$ exists and any $\sigma \in \mathcal{D}$,

$f(\rho) - f(\sigma) \leq \log \langle R(\rho), \sigma \rangle.$
□
Remark 2
Notice that $R(\rho_t)$ is always positive definite and $\log R(\rho_t)$ is always well-defined. Otherwise, suppose there exists some non-zero vector $v$ such that $\langle v, R(\rho_t) v \rangle = 0$. Then, as $A_1, \ldots, A_n$ are positive semi-definite, we have $A_i v = 0$ for all $i$, violating the assumption that $\bigcap_{i=1}^{n} \ker A_i = \{0\}$. □
Proof (Lemma 1)
By Jensen's inequality and the concavity of the logarithm, we write

$f(\rho) - f(\sigma) = \mathbb{E} \log \frac{\langle A, \sigma \rangle}{\langle A, \rho \rangle} \leq \log \mathbb{E} \frac{\langle A, \sigma \rangle}{\langle A, \rho \rangle} = \log \left\langle \mathbb{E} \frac{A}{\langle A, \rho \rangle}, \sigma \right\rangle = \log \langle R(\rho), \sigma \rangle.$
■
Deriving the following lemma is the major technical challenge in our convergence analysis. The lemma shows that the mapping $\rho \mapsto \log R(\rho)$ is operator convex.
Lemma 2
For any density matrix $\sigma \in \mathcal{D}$, the function $h_\sigma(\rho) := \langle \sigma, \log R(\rho) \rangle$ is convex. □
Proof
Equivalently, we want to show that $\frac{\mathrm{d}^2}{\mathrm{d} s^2} \log R(\rho + s \Delta) \big|_{s=0} \succeq 0$ for all $\rho$ at which $R$ exists and Hermitian $\Delta$. Define $\varphi(s) := R(\rho + s \Delta)$ and $\ell(s) := \langle A, \rho + s \Delta \rangle$, so that $\varphi(s) = \mathbb{E}[A/\ell(s)]$. By [HP14, Example 3.22 and Exercise 3.24], we have

$\mathrm{D} \log(X)[H] = \int_0^\infty (X + u I)^{-1} H (X + u I)^{-1} \, \mathrm{d} u, \qquad \mathrm{D}^2 \log(X)[H, H] = -2 \int_0^\infty (X + u I)^{-1} H (X + u I)^{-1} H (X + u I)^{-1} \, \mathrm{d} u.$

Define $M := \varphi(0) = R(\rho)$. Recall the chain rule for the second-order Fréchet derivative (see, e.g., [Bha97, p. 316]):

$\frac{\mathrm{d}^2}{\mathrm{d} s^2} \log \varphi(s) \Big|_{s=0} = \mathrm{D}^2 \log(M)[\varphi'(0), \varphi'(0)] + \mathrm{D} \log(M)[\varphi''(0)].$

Then, we write

$\frac{\mathrm{d}^2}{\mathrm{d} s^2} \log \varphi(s) \Big|_{s=0} = \int_0^\infty (M + u I)^{-1} K(u) (M + u I)^{-1} \, \mathrm{d} u,$

where

$K(u) := \varphi''(0) - 2 \varphi'(0) (M + u I)^{-1} \varphi'(0), \qquad \varphi'(0) = -\mathbb{E} \left[ \frac{\ell'(0)}{\ell(0)^2} A \right], \qquad \varphi''(0) = 2\, \mathbb{E} \left[ \frac{\ell'(0)^2}{\ell(0)^3} A \right].$

Since the congruence $X \mapsto (M + u I)^{-1} X (M + u I)^{-1}$ obviously preserves positive semi-definiteness, it suffices to show that $K(u)$ is positive semi-definite for all $u \geq 0$. We write, using $(M + u I)^{-1} \preceq M^{-1}$,

$K(u) \succeq \varphi''(0) - 2 \varphi'(0) M^{-1} \varphi'(0) = 2 \left( \mathbb{E} \left[ \frac{\ell'(0)^2}{\ell(0)^3} A \right] - \mathbb{E} \left[ \frac{\ell'(0)}{\ell(0)^2} A \right] \left( \mathbb{E} \left[ \frac{A}{\ell(0)} \right] \right)^{-1} \mathbb{E} \left[ \frac{\ell'(0)}{\ell(0)^2} A \right] \right),$

which is positive semi-definite by an extension of the Cauchy–Schwarz inequality due to Lavergne [Lav08]. (Whereas Lavergne considers the real matrix case, we notice the proof directly extends to the Hermitian matrix case.) ■
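Operator convexity of $\rho \mapsto \log R(\rho)$ can also be spot-checked numerically. The following midpoint test on random instances is a sanity check of Lemma 2 that we add for illustration; it is not part of the proof.

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(1)
d, n = 3, 10
vs = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
As = [np.outer(v, v.conj()) for v in vs]     # Hermitian PSD effects

def R(rho):
    """R(rho) = -grad f(rho) = (1/n) * sum_i A_i / <A_i, rho>."""
    return sum(A / np.trace(A @ rho).real for A in As) / n

def rand_density():
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    P = B @ B.conj().T                       # positive definite a.s.
    return P / np.trace(P).real

for _ in range(100):
    r1, r2 = rand_density(), rand_density()
    # Midpoint operator convexity: (log R(r1) + log R(r2))/2 - log R((r1+r2)/2) is PSD.
    gap = (logm(R(r1)) + logm(R(r2))) / 2 - logm(R((r1 + r2) / 2))
    gap = (gap + gap.conj().T) / 2           # re-Hermitize against rounding noise
    assert np.linalg.eigvalsh(gap).min() >= -1e-8
print("Lemma 2 midpoint check passed on 100 random instances")
```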
Now, we are ready to prove Theorem 1.
Proof (Theorem 1)
By the Golden–Thompson inequality, we write

$\operatorname{tr} \exp\big( \log \rho_t + \log R(\rho_t) \big) \leq \operatorname{tr}\big( R(\rho_t)\, \rho_t \big) = \mathbb{E} \frac{\langle A, \rho_t \rangle}{\langle A, \rho_t \rangle} = 1.$

Therefore, by the iteration rule of the proposed algorithm (whose scaling mapping $\mathcal{N}$ divides its input by a trace that is at most one, so that $\rho_{t+1} \succeq \exp(\log \rho_t + \log R(\rho_t))$) and operator monotonicity of the matrix logarithm, we have

$\log \rho_{t+1} \succeq \log \rho_t + \log R(\rho_t). \qquad (3)$

Then, for any $\sigma \in \mathcal{D}$, we write

$f(\bar{\rho}_T) - f(\sigma) \leq \log \langle R(\bar{\rho}_T), \sigma \rangle \leq \lambda_{\max}\Big( \frac{1}{T} \sum_{t=1}^{T} \log R(\rho_t) \Big) \leq \lambda_{\max}\Big( \frac{\log \rho_{T+1} - \log \rho_1}{T} \Big) \leq \frac{\log d}{T},$

where $\lambda_{\max}$ denotes the largest eigenvalue, which is monotone with respect to the Loewner order; the first inequality follows from Lemma 1, the second follows from Lemma 2 and Jensen's inequality, together with the fact that $\log \langle X, \sigma \rangle \leq \lambda_{\max}(\log X)$ for any $X \succ 0$ and $\sigma \in \mathcal{D}$, the third follows from (3), and the last follows from the fact that $\log \rho_{T+1} \preceq 0$ and $\rho_1 = I/d$. Choosing $\sigma$ as a minimizer of $f$ on $\mathcal{D}$ shows that $f(\bar{\rho}_T) - \min_{\rho \in \mathcal{D}} f(\rho) \leq (\log d)/T \leq \varepsilon$ whenever $T \geq (\log d)/\varepsilon$. ■
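Both matrix inequalities used above, the Golden–Thompson trace bound and (3), admit a quick numerical sanity check; the snippet below is ours, for illustration only.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
d, n = 3, 10
vs = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
As = [np.outer(v, v.conj()) for v in vs]

rho = np.eye(d) / d
R = sum(A / np.trace(A @ rho).real for A in As) / n     # R(rho) = -grad f(rho)
X = expm(logm(rho) + logm(R))

# Golden-Thompson: tr exp(log rho + log R) <= tr(R rho) = 1.
print(np.trace(X).real, "<=", np.trace(R @ rho).real)

# (3): log N(X) - (log rho + log R) is PSD, since N only divides by tr X <= 1.
gap = logm(X / np.trace(X).real) - (logm(rho) + logm(R))
gap = (gap + gap.conj().T) / 2                          # re-Hermitize
print(np.linalg.eigvalsh(gap).min() >= -1e-10)
```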
Acknowledgement
C.-M. Lin and Y.-H. Li are supported by the Young Scholar Fellowship (Einstein Program) of the Ministry of Science and Technology of Taiwan under grant numbers MOST 109-2636-E-002-025 and MOST 110-2636-E-002-012. H.-C. Cheng is supported by the Young Scholar Fellowship (Einstein Program) of the Ministry of Science and Technology in Taiwan (R.O.C.) under grant numbers MOST 109-2636-E-002-001 and MOST 110-2636-E-002-009, and is supported by the Yushan Young Scholar Program of the Ministry of Education in Taiwan (R.O.C.) under grant numbers NTU-109V0904 and NTU-110V0904.
References
- [Aar20] Scott Aaronson. Shadow tomography of quantum states. SIAM J. Comput., 49(5):STOC18–368–STOC18–394, 2020.
- [ACH+18] Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak. Online learning of quantum states. In Adv. Neural Information Processing Systems 31, 2018.
- [AMnNK20] Shahnawaz Ahmed, Carlos Sánchez Muñoz, Franco Nori, and Anton Frisk Kockum. Quantum state tomography with conditional generative adversarial networks, 2020. arXiv:2008.03240.
- [Bha97] Rajendra Bhatia. Matrix Analysis. Springer, New York, NY, 1997.
- [BK10a] Robin Blume-Kohout. Hedged maximum likelihood quantum state estimation. Phys. Rev. Lett., 105, 2010.
- [BK10b] Robin Blume-Kohout. Optimal, reliable estimation of quantum states. New J. Phys., 12, 2010.
- [Cov84] Thomas M. Cover. An algorithm for maximizing expected log investment return. IEEE Trans. Inf. Theory, IT-30(2):369–373, 1984.
- [FGLE12] Steven T. Flammia, David Gross, Yi-Kai Liu, and Jens Eisert. Quantum tomography via compressed sensing: Error bounds, sample complexity and efficient estimators. New J. Phys., 14, 2012.
- [GGRL14] D. S. Gonçalves, M. A. Gomes-Ruggiero, and C. Lavor. Global convergence of diluted iterations in maximum-likelihood quantum tomography. Quantum Inf. Comput., 14(11&12):966–980, 2014.
- [GLF+10] David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert. Quantum state tomography via compressed sensing. Phys. Rev. Lett., 105, 2010.
- [HP14] Fumio Hiai and Dénes Petz. Introduction to Matrix Analysis and Applications. Springer, Cham, 2014.
- [Hra97] Z. Hradil. Quantum-state estimation. Phys. Rev. A, 55(3), 1997.
- [HvFJ04] Zdeněk Hradil, Jaroslav Řeháček, Jaromír Fiurášek, and Miroslav Ježek. Maximum-likelihood methods in quantum mechanics. In Quantum State Estimation, chapter 3, pages 59–112. Springer, Berlin, 2004.
- [Ius92] Alfredo N. Iusem. A short convergence proof of the EM algorithm for a specific Poisson model. Rev. Bras. Probab. Estat., 6(1):57–67, 1992.
- [Lav08] Pascal Lavergne. A Cauchy-Schwarz inequality for expectation of matrices. Discussion Papers dp08-07, Department of Economics, Simon Fraser University, 2008.
- [Lvo04] A. I. Lvovsky. Iterative maximum-likelihood reconstruction in quantum homodyne tomography. J. Opt. B: Quantum Semiclass. Opt., 6, 2004.
- [MTVv+04] G. Molina-Terriza, A. Vaziri, J. Řeháček, Z. Hradil, and A. Zeilinger. Triggered qutrits for quantum communication protocols. Phys. Rev. Lett., 92(16), 2004.
- [MTZ12] Leonard C. MacLean, Edward O. Thorp, and William T. Ziemba, editors. The Kelly Capital Growth Investment Criterion. World Scientific, Singapore, 2012.
- [OWV97] T. Opatrný, D.-G. Welsch, and W. Vogel. Least-squares inversion for density-matrix reconstruction. Phys. Rev. A, 56(3), 1997.
- [Pv04] Matteo Paris and Jaroslav Řeháček, editors. Quantum State Estimation. Springer, Berlin, 2004.
- [QFN21] Yihui Quek, Stanislav Fort, and Hui Khoon Ng. Adaptive quantum state tomography with neural networks. npj Quantum Inf., 7, 2021.
- [SBK18] Travis L. Scholten and Robin Blume-Kohout. Behavior of maximum likelihood in quantum state tomography. New J. Phys., 20, 2018.
- [vHKL07] Jaroslav Řeháček, Zdeněk Hradil, E. Knill, and A. I. Lvovsky. Diluted maximum-likelihood algorithm for quantum tomography. Phys. Rev. A, 75, 2007.
- [VL93] Y. Vardi and D. Lee. From image deblurring to optimal investments: Maximum likelihood solutions for positive linear inverse problems. J. R. Stat. Soc., Ser. B, 55(3):569–612, 1993.
- [VSK85] Y. Vardi, L. A. Shepp, and L. Kaufman. A statistical model for positron emission tomography. J. Am. Stat. Assoc., 80(389):8–20, 1985.
- [YFT19] Akram Youssry, Christopher Ferrie, and Marco Tomamichel. Efficient online quantum state estimation using a matrix-exponentiated gradient method. New J. Phys., 21(033006), 2019.
- [YJZS20] Feidiao Yang, Jiaqing Jiang, Jialin Zhang, and Xiaoming Sun. Revisiting online quantum state learning. In Proc. AAAI Conf. Artificial Intelligence, 2020.