Learning Quantum Dissipation by Neural Ordinary Differential Equation
Abstract
Quantum dissipation arises from the unavoidable coupling between a quantum system and its surrounding environment, and is known as a major obstacle to the quantum processing of information. Beyond acknowledging its existence, how to trace the dissipation from observational data is a crucial topic that may inspire ways to suppress it. In this paper, we propose to learn the quantum dissipation from dynamical observations using the neural ordinary differential equation, and then demonstrate this method concretely on two open quantum-spin systems: a large spin system and a spin-1/2 chain. We also investigate the learning efficiency of the dataset, which provides useful guidance for data acquisition in experiments. Our work may facilitate effective modeling and decoherence suppression in open quantum systems.
I Introduction
Quantum dissipation is closely related to such phenomena as decoherence, spectrum broadening and heating, all of which stand as serious obstacles in research areas ranging from quantum computation Ladd2010 ; Nielsen2010 ; Preskill2018 and simulation Georgescu2014 ; Bloch2012 , quantum information storage Lvovsky2009 , to quantum metrology Giovannetti2011 ; Pezze2018 , and sensing Degen2017 . The microscopic origin of dissipation is the breaking of isolation of the quantum system, i.e., the system inevitably interacts with its surrounding environment such that the information leakage occurs as the ambient degrees of freedom are traced out. Dissipation severely impairs the accuracy of modeling and manipulation of quantum systems.
Considerable efforts have been made to counteract the negative effects of dissipation. For example, quantum-nondemolition-mediated feedback has been used to suppress the decoherence of a Schrödinger cat state in a cavity Wiseman1993 ; Vitali1997 ; spin-echo Hahn1950 and dynamical decoupling protocols Viola1999 have been widely applied to nitrogen-vacancy centers Lange2010 ; Du2009 ; Abobeih2018 ; Laraoui2013 , cold atoms Almog2010 , trapped ions Wang2017 and superconducting circuits Guo2018 ; Pokharel2018 . These techniques often rely on certain prior knowledge of the system-environment interactions. For example, in nitrogen-vacancy centers, strong bias fields were introduced to circumvent the transverse coupling Du2009 or to effectively establish the bath correlations of the carbon nuclei Laraoui2013 . Furthermore, the manner of coupling may also need to be recognized in advance, e.g., whether it is magnetic or electric, linear or quadratic. However, such prior information is generally unavailable, especially for many-body systems interacting with complex environments.
In this paper, we propose a data-driven scheme to reconstruct Markovian open quantum systems, where the dataset comes from discrete observations of the relaxation dynamics under certain probes. Based on the dataset, we adopt the neural ordinary differential equation (NODE), a recently developed machine learning algorithm Chen2019 , to learn the Liouvillian of the open system by inversely solving the Lindblad (or Gorini-Kossakowski-Sudarshan-Lindblad) master equation Lindblad1976 ; Gorini1976 ; Gardiner2004 . A number of relevant works have been reported on this topic. The operator expansion Franco2009 and the eigensystem realization algorithm (ERA) Zhang2014 ; Sone2017A were first used for the time-trace identification of Hamiltonians; the ERA was later extended to Markovian dissipative systems Zhang2015 and to the estimation of the system size Sone2017B . Several eigenstate- Qi2019 ; Bairey2019 or steady-state-based approaches Bairey2020 were further developed. A generic approach for local Hamiltonian tomography was reported using the initial and final observations of quench dynamics Li2020 . Several traditional machine-learning architectures, such as the fully connected neural network Xin2019 , long short-term memory Che2021 , and convolutional neural network Ma2021 , have also recently been applied to this topic.
The novelty and contributions of our present work mainly lie in the following points. First, our approach can be applied to either Hamiltonian tomography (closed system) or the reconstruction of Markovian dissipations (open system), with no need for prior information on the specific structures of the Hamiltonian or Liouvillian. Second, we adopt a relatively new machine-learning algorithm (namely the NODE Chen2019 ) to deal with the gradient, avoiding issues such as gradient explosion and vanishing Pascanu2012 that are encountered by traditional machine-learning algorithms. Furthermore, we also study the learning efficiency of datasets to facilitate data acquisition in realistic experiments. We thus expect our work to play an active role in effective modeling and in guiding new system-environment decoupling protocols for various open quantum systems.

II General Method
We consider a quantum system being coupled to the environment, as is schematically displayed in Fig. 1(a). Under the Born-Markov approximation, the equation of motion of the system is governed by the Lindblad master equation Lindblad1976 ; Gorini1976 ; Gardiner2004
(1) $\dot{\rho} = \mathcal{L}[\rho]$,
where $\rho$ is the density matrix of the quantum system and $\mathcal{L}$ is the Liouvillian super-operator of the form (setting $\hbar = 1$)
(2) $\mathcal{L}[\rho] = -i \left[ H, \rho \right] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \left\{ L_k^\dagger L_k, \rho \right\} \right)$.
Here, $H$ denotes the dissipation-independent Hamiltonian, $L_k$ is the dissipative operator (or jump operator) in the $k$-th channel with strength $\gamma_k$, and the anticommutator term accounts for the normalization of $\rho$ when no jump occurs.
Conventionally, with a given Liouvillian (namely $H$ and $L_k$), one can obtain the solution $\rho(t)$ by numerically solving the Lindblad equation (1) using ordinary differential equation (ODE) solvers such as the Euler or the Runge-Kutta methods Scherer2013 . In contrast, the goal of this paper is to determine the dissipation $L_k$, and even $H$, when the dynamical behavior of the density matrix $\rho(t)$ or of certain observations $\langle O(t) \rangle$ is given. In other words, we aim to solve an inverse problem: reproducing $\mathcal{L}$ from the sequential dataset $D$, as illustrated in Fig. 1(b). Additionally, for the sake of experimental application, we consider a dataset $D$ purely constructed from observations $\langle O(t) \rangle$ in the following discussion, since acquiring $\langle O(t) \rangle$ is experimentally simpler and more feasible than acquiring $\rho(t)$.
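To make the forward problem concrete, the following minimal Python sketch propagates the Lindblad equation for a single dephasing qubit with a fixed-step Runge-Kutta integrator. The system, rates, and function names here are illustrative choices rather than the paper's actual setup.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lindblad_rhs(rho, H, Ls):
    """Right-hand side d(rho)/dt = -i[H, rho] + sum_k (L rho L^dag - 1/2 {L^dag L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def evolve_rk4(rho0, H, Ls, T, steps):
    """Fixed-step fourth-order Runge-Kutta integration of the density matrix."""
    rho, dt = rho0.copy(), T / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, Ls)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Ls)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Ls)
        k4 = lindblad_rhs(rho + dt * k3, H, Ls)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

omega, gamma = 1.0, 0.1
H = 0.5 * omega * sz
Ls = [np.sqrt(gamma) * sz]                               # one dephasing channel
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
rhoT = evolve_rk4(rho0, H, Ls, T=5.0, steps=500)
```

For the dephasing channel chosen here, the off-diagonal element obeys $|\rho_{01}(t)| = \frac{1}{2} e^{-2\gamma t}$, which provides a direct analytic check of the integrator.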
We adopt two different probes to generate the dataset $D$. i) Time-dependent probe: for a fixed initial state $\rho_0$, one imposes a $t$-dependent control on the Hamiltonian, i.e.,
(3) $H(t) = H_0 + \sum_m c_m(t) O_m$,
where $c_m(t)$ are smooth time series and $O_m$ the corresponding control operators. ii) Time-independent probe: with $H$ and $L_k$ fixed, one diversifies the dynamics by varying the initial state $\rho_0$, which suits experimental scenarios in which preparing different initial states is more convenient. For each evolution, we perform the measurement uniformly at discrete times $t_j$ within the time range $[0, T]$, with $N_t$ the number of measurement time steps, which forms a data batch. The entire dataset is constructed from different batches, i.e.,
(4) $D = \left\{ \langle O(t_j) \rangle_b \;\middle|\; j = 1, \dots, N_t;\ b = 1, \dots, N_b \right\}$,
where $N_b$ denotes the batch size. The total number of data points in $D$ is therefore $N = N_b N_t$. We note that discrete-time observations with equal intervals are considered for experimental convenience; this is, however, not a necessary condition for the learning algorithm shown below.
To learn the Liouvillian from $D$, we adopt a machine learning algorithm called the NODE Chen2019 . The NODE builds upon an ansatz $\mathcal{L}_\theta$ with parameters $\theta = \{ \theta_H, \theta_L \}$, where $\theta_H$ and $\theta_L$ denote the parameters to be learned in $H$ and $L_k$, respectively. The learning process is illustrated in Fig. 1(c), and it contains two parts. First, with $\mathcal{L}_\theta$ and the initial state $\rho_0$, we propagate the Lindblad equation (1) forward using certain ODE solvers, and obtain a series of predictive solutions; the loss function $\mathcal{L}_{\rm loss}$ is defined as a functional of the data and the NODE predictions, which effectively measures the distance between them. The purpose of learning is to minimize $\mathcal{L}_{\rm loss}$ by adjusting $\theta$. To this end, the NODE introduces an adjoint field defined as the derivative of the loss with respect to the state. Through backward propagation from the final time to the initial time, we obtain the gradient $\partial \mathcal{L}_{\rm loss} / \partial \theta$, which is then used to update the parameters as $\theta \to \theta - \eta\, \partial \mathcal{L}_{\rm loss} / \partial \theta$, with $\eta$ being the learning rate. More calculation details about the NODE algorithm can be found in Appendix A.
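The inverse-problem loop described above can be miniaturized into a runnable sketch: the code below recovers a single dephasing rate of a qubit from observed $\langle \sigma_x(t) \rangle$ data by gradient descent on an MSE loss. For transparency, a finite-difference gradient stands in for the adjoint backward pass, and exact exponentiation of the vectorized Liouvillian stands in for the ODE solver; the toy model and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def liouvillian(gamma, omega=1.0):
    """Vectorized Liouvillian (column-stacking) for H = omega/2 * sz, L = sqrt(gamma) * sz."""
    H = 0.5 * omega * sz
    Lv = -1j * (np.kron(I2, H) - np.kron(H.T, I2))
    return Lv + gamma * (np.kron(sz, sz) - np.eye(4))  # sz is real and sz^2 = 1

def observe(gamma, ts, rho0):
    """Predicted <sigma_x>(t_j) from the exact propagator exp(Lv * t)."""
    vec0 = rho0.reshape(-1, order="F")
    Lv = liouvillian(gamma)
    return np.array([np.real(np.trace(sx @ (expm(Lv * t) @ vec0).reshape(2, 2, order="F")))
                     for t in ts])

ts = np.linspace(0.25, 5.0, 20)
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
data = observe(0.2, ts, rho0)                            # synthetic "experimental" batch

def loss(gamma):
    return np.mean((observe(gamma, ts, rho0) - data) ** 2)

gamma_hat, lr, eps = 0.05, 0.03, 1e-4
for _ in range(400):
    # finite-difference stand-in for the adjoint gradient of the MSE loss
    grad = (loss(gamma_hat + eps) - loss(gamma_hat - eps)) / (2 * eps)
    gamma_hat -= lr * grad
```

Starting from a wrong guess, the descent drives `gamma_hat` toward the true rate used to generate the data, mirroring the role of the adjoint-based update in the full NODE pipeline.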
III Examples
We apply this method to two concrete examples. In the first example, we consider a one-body spin system with spin quantum number $s = 3/2$. Large spin systems are active in various research areas of quantum physics, ranging from high-spin quantum dots Klochan2011 ; Doherty2013 and multi-component quantum gases Kawaguchi2012 to unconventional superconductors Wang2018 . In the second example, our system is a many-body spin-1/2 chain. Spin chains stand as fundamental models in condensed matter physics and quantum computation, closely related to quantum criticality Sachdev2011 , topological phases of matter Chiu2016 ; Wen2017 , etc.
III.1 Spin-3/2 system
The general spin-3/2 system refers to a four-level system whose spin-vector operators are the generalized Pauli matrices. Since spin-tensor operators may also be involved due to, for example, the quadratic Zeeman effect, we generally expand the Hamiltonian in the SU(4) generators, i.e.,
(5) $H = \sum_{a=1}^{15} h_a \lambda_a$,
where $\lambda_a$ denote the 15 Hermitian generators satisfying the traceless and orthogonality conditions, and $h_a$ are the corresponding coefficients. Furthermore, we assume that the system possesses weak dissipations in these Hermitian channels, namely
(6) $L_a = \sqrt{\gamma_a}\, \lambda_a$,
with $\gamma_a$ being the dissipative strengths. The weakness is reflected in $\bar{\gamma} \ll \bar{h}$, where $\bar{h}$ and $\bar{\gamma}$ respectively denote the mean strengths of $h_a$ and $\gamma_a$, and $N_h = N_\gamma = 15$ are the numbers of parameters. We remark that our goal is to learn the parameters $\{ h_a, \gamma_a \}$ from the dataset $D$.
The dataset $D$ is generated by evolving the Lindblad equation (1) within the measurement time range and by uniformly measuring the transverse magnetization with the generalized Pauli operators. For generality, the parameters $h_a$ and $\gamma_a$ are randomized within fixed intervals. As mentioned before, we adopt two different probes: for the t-dependent probe [probe i)], we fix $\rho_0$ to be a magnetized state and introduce a randomized control magnetic field; for the t-independent probe [probe ii)], we take $\rho_0$ to be random pure states. The total number of data points in $D$ is $N = N_b N_t$.

In the learning process, we minimize the mean square error (MSE) loss function
(7) $\mathcal{L}_{\rm loss} = \frac{1}{N} \sum_{b=1}^{N_b} \sum_{j=1}^{N_t} \left[ \langle O(t_j) \rangle_b - \langle \tilde{O}(t_j) \rangle_b \right]^2$,
and monitor the averaged relative errors with respect to the parameters, i.e.,
(8) $\epsilon_h = \overline{ \frac{1}{N_h} \sum_a \frac{ | \tilde{h}_a - h_a | }{ \bar{h} } }, \qquad \epsilon_\gamma = \overline{ \frac{1}{N_\gamma} \sum_a \frac{ | \tilde{\gamma}_a - \gamma_a | }{ \bar{\gamma} } }$,
where the tilde denotes the NODE predictions, and the additional overscore on the right-hand side means taking a further average over different initializations of the parameters.
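A minimal implementation of the two monitored quantities is given below. The exact normalization of Eq. (8) is elided in the text; dividing by the mean true magnitude is one plausible convention, used here purely for illustration.

```python
import numpy as np

def mse_loss(pred, target):
    """MSE over all batches and time points, in the spirit of Eq. (7)."""
    return np.mean((np.asarray(pred) - np.asarray(target)) ** 2)

def relative_error(theta_pred, theta_true):
    """Averaged relative parameter error, normalized by the mean true magnitude."""
    theta_pred, theta_true = np.asarray(theta_pred), np.asarray(theta_true)
    return np.mean(np.abs(theta_pred - theta_true)) / np.mean(np.abs(theta_true))

rng = np.random.default_rng(0)
h_true = rng.uniform(0.5, 1.5, size=15)          # 15 Hamiltonian coefficients, as in Eq. (5)
h_pred = h_true + 0.01 * rng.standard_normal(15)  # a nearly converged prediction
err = relative_error(h_pred, h_true)
```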
In Fig. 2, we present the learning results using the t-dependent probe, while leaving those using the t-independent probe to Appendix B. Specifically, Fig. 2(a) shows the variation of the loss and the relative errors $\epsilon_h$ and $\epsilon_\gamma$ with the training epochs, from which one can read off the epoch at which the algorithm converges. After convergence, we compare the predictive parameters $\tilde{h}_a$ and $\tilde{\gamma}_a$ (labeled by circles with error bars) with their realistic values (solid triangles) in subfigures (c) and (d), respectively, where the error bars indicate the standard deviation over different initializations. Clearly, the algorithm successfully reproduces the parameters with small relative errors $\epsilon_h$ and $\epsilon_\gamma$. Fig. 2(b) presents the spin dynamics of a typical batch, where the solid, dashed, and dot-dashed lines correspond to the realistic dynamics, the predictive curve before training, and the predictive curve at a later training epoch, respectively, with shadings indicating the predictive fluctuations due to initializations. As the training progresses, the prediction gradually approaches the realistic dynamics, accompanied by a diminishing of the predictive fluctuations.
III.2 Spin-1/2 Chain

The second example is a spin-1/2 chain with nearest-neighbor interactions, whose Hamiltonian is written as
(9) $H = \sum_{i, \alpha} h_i^\alpha \sigma_i^\alpha + \sum_{i, \alpha\beta} J_i^{\alpha\beta} \sigma_i^\alpha \sigma_{i+1}^\beta$.
The first term accounts for the local terms, with $\sigma_i^\alpha$ ($\alpha = x, y, z$) the spin-1/2 Pauli operators and $h_i^\alpha$ the corresponding strengths. The second term characterizes the two-body interactions with strengths $J_i^{\alpha\beta}$. One-body and two-body dissipations are considered in the corresponding channels,
(10) $L_i^\alpha = \sqrt{\gamma_i^\alpha}\, \sigma_i^\alpha, \qquad L_i^{\alpha\beta} = \sqrt{\gamma_i^{\alpha\beta}}\, \sigma_i^\alpha \sigma_{i+1}^\beta$,
with $\gamma_i^\alpha$ and $\gamma_i^{\alpha\beta}$ the corresponding strengths. As a proof-of-principle demonstration, we set the chain length to 5 under a periodic boundary condition, which keeps the numerical complexity within the computational power of a PC with two GPUs Complexity . In such a case, there is a total of 120 parameters to be learned, 60 each for the Hamiltonian and the dissipation. Again, we generate two datasets using the t-dependent and the t-independent probes. For the former, the initial state is magnetized and the control field couples to the total spin operators; for the latter, the initial states are chosen to be random product states. The measured observables are the total spins.
In Fig. 3, we display the learning task with the t-independent probe, while leaving the other task (using the t-dependent probe) to Appendix B. All conventions of Fig. 3 are the same as those used in Fig. 2, except that the observable now denotes the total magnetization. The accurate predictions of both the Hamiltonian and dissipative parameters shown in Figs. 3(c) and (d) clearly demonstrate the feasibility of the algorithm for many-body spin systems.
Here, we would like to make several additional comments. First, the system shown above carries no symmetry, which in principle allows us to learn all the parameters by looking at a single global (or local) observable. However, if the system intrinsically carries certain symmetries, then the operators dynamically coupled to the measured observable may be limited to certain subspaces, which form the accessible set Zhang2014 . In this case, only the parameters related to the accessible set can be determined by the current time-trace approach. Second, there is a case in which the NODE fails to make unique predictions even for systems without intrinsic symmetry. The NODE has access to three pieces of information: the initial state, the probe field, and the measured data. Hence it cannot make unique predictions if all three share a common ”symmetry”. For example, in the learning task using the t-dependent probe (see Appendix B), the initial state, the probe, and the observable are all translationally invariant (namely, independent of the local spin index), which leads to unfavorable learning results because of the ”symmetry”-induced ambiguity. This problem, however, does not occur for the task with the t-independent probe illustrated in Fig. 3, since the employed initial states explicitly break the translational ”symmetry”.
IV Learning Efficiency
In the above, we illustrated the capability of the algorithm in learning open quantum systems. Now, we turn to the question of learning efficiency: how should the data points be collected to make the learning more efficient? We emphasize the importance of this question for data acquisition in realistic experiments, especially when collecting data points is expensive or time-consuming.
To make this question simple and clear, let us focus on the situation in which the dissipative strengths $\gamma$ are the only unknown parameters to be learned. Moreover, since different batches are independent of each other in $D$, we consider a dataset containing only one batch, such that the total number of data points is $N = N_t$. As mentioned before, the NODE adjusts parameters according to the gradient of the loss, which motivates us to define
(11) $S = \frac{1}{N_t} \sum_{j=1}^{N_t} \left| \frac{\partial \ell_j}{\partial \gamma} \right|$,
where $\ell_j$ denotes the local loss function with respect to a single data point $\langle O(t_j) \rangle$. The physical meaning of $S$ is quite clear: it characterizes the averaged sensitivity of the loss on $\gamma$, such that a large $S$ would speed up the learning process. To further simplify Eq. (11), we replace the average of the individual derivatives by the derivative of the mean value, i.e.,
(12) $S \approx \left| \frac{\partial}{\partial \gamma} \frac{1}{N_t} \sum_{j=1}^{N_t} \ell_j \right|$.
For weak dissipation, $S$ is closely related to
(13) $\frac{\partial \langle O(t) \rangle}{\partial \gamma} \simeq f(t)\, t\, e^{-t/\tau}$,
which is composed of two parts: $f(t)$ is a fast-oscillating term characterized by the energy scale of $H$, while $t\, e^{-t/\tau}$ is a slowly varying envelope depicting the dissipation-induced damping of $\langle O(t) \rangle$, where $\tau = 1/|\mathrm{Re}\, \Delta|$ with $\mathrm{Re}\, \Delta$ the real part of the Liouvillian gap. Obviously, the envelope of Eq. (13) is a non-monotonic function maximized at $t^* = \tau$ [see Fig. 4(a)], with $\tau$ being the decoherence time.
The two most commonly used loss functions, the mean absolute error (MAE) and the MSE, are respectively linearly and quadratically proportional to the parameter deviation as $\tilde{\gamma}$ approaches its realistic value $\gamma$, i.e.,
(14) $\Delta \mathcal{L}_{\rm MAE} \approx \frac{ | \tilde{\gamma} - \gamma | }{N_t} \sum_j \left| \frac{\partial \langle O(t_j) \rangle}{\partial \gamma} \right|, \qquad \Delta \mathcal{L}_{\rm MSE} \approx \frac{ ( \tilde{\gamma} - \gamma )^2 }{N_t} \sum_j \left( \frac{\partial \langle O(t_j) \rangle}{\partial \gamma} \right)^2$,
where the MAE is defined by $\mathcal{L}_{\rm MAE} = \frac{1}{N} \sum_{b,j} | \langle O(t_j) \rangle_b - \langle \tilde{O}(t_j) \rangle_b |$, while the definition of the MSE is presented in Eq. (7). For uniform measurement times $t_j$, the summations in Eq. (14) can be evaluated analytically in the large-$N_t$ limit, i.e.,
(15) $\Delta \mathcal{L}_{\rm MAE} \propto \frac{1}{T} \int_0^T t\, e^{-t/\tau}\, dt, \qquad \Delta \mathcal{L}_{\rm MSE} \propto \frac{1}{T} \int_0^T t^2\, e^{-2t/\tau}\, dt$,
where both are non-monotonic functions of $T$ with interior maxima of the order of a few $\tau$. This indicates that the optimal strategy of uniform data acquisition is to take $T$ comparable to the decoherence time, as is schematically illustrated by the dots in Fig. 4(a). We benchmark the above analysis on the spin-3/2 learning task using the t-dependent probe and the MSE loss. Practically, we vary the measurement range $T$ and plot the variations of the sensitivity and the relative error in Figs. 4(b) and (c), respectively, where the sensitivity is numerically calculated using Eq. (11). The case with $T$ near the optimum exhibits the largest sensitivity and the fastest decay of the error throughout the learning process, in good agreement with our previous discussion.
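The non-monotonicity in $T$ can be checked numerically. The sketch below adopts the stylized assumption that the sensitivity envelope is $t\, e^{-t/\tau}$, as in Eq. (13), and evaluates the uniform-sampling averages entering Eq. (15); the constants and grid sizes are illustrative.

```python
import numpy as np

tau, Nt = 1.0, 400   # decoherence time and number of data points per batch

def mae_gain(T):
    """MAE-type sensitivity average for uniform sampling on [0, T]."""
    t = np.linspace(0.0, T, Nt)
    return np.mean(t * np.exp(-t / tau))

def mse_gain(T):
    """MSE-type sensitivity average for uniform sampling on [0, T]."""
    t = np.linspace(0.0, T, Nt)
    return np.mean((t * np.exp(-t / tau)) ** 2)

Ts = np.linspace(0.1, 10 * tau, 500)
T_mae = Ts[np.argmax([mae_gain(T) for T in Ts])]   # optimal range for the MAE loss
T_mse = Ts[np.argmax([mse_gain(T) for T in Ts])]   # optimal range for the MSE loss
```

Both gains peak at an interior value of $T$ on the order of the decoherence time and decay for much larger $T$, where most samples fall in the fully damped regime and carry little sensitivity.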

Finally, we briefly discuss the effect of the total data number $N_t$. To this end, we fix the measurement range and show the dependence of the learning results on $N_t$ in Fig. 4(d). An apparent feature is that cases with too few data points exhibit poor learning results, which is understandable since we need at least as many data points as unknown parameters to uniquely determine all of them, just as one needs at least $n$ equations to determine $n$ variables. Furthermore, increasing $N_t$ can accelerate the learning process; however, this trend does not continue endlessly [compare the curves in Fig. 4(d)]. For a fairly large $N_t$, neighboring data points are barely distinguishable, such that adding more data points is of little help to the learning efficiency.
V Summary and Outlook
We proposed a scheme to learn the quantum dissipation of open systems based on the NODE, a machine learning algorithm capable of reproducing the Liouvillian from dynamical observations. The learning process can be accelerated by optimizing the strategy of data collection. Many follow-up questions remain. The Lindblad master equation relies on the Born-Markov approximation, which limits the current method to weakly dissipative systems with short correlation times. The generalization from Markovian to non-Markovian dynamics is not straightforward, since the memory effect leads to an integro-differential master equation Gardiner2000 ; Vega2017 . More advanced techniques are needed to deal with the gradients in relation to the integral kernel. Whether an effective Markovian description can be found for non-Markovian dynamics also remains open. Furthermore, the full-Liouvillian calculation shown above cannot easily be extended to large-scale many-body systems due to the exponential growth of the Hilbert space. One possible solution is to combine the NODE with quantum-trajectory approaches Gardiner2000 ; Daley2014 such as the truncated Wigner method Schachenmayer2015 ; Huber2022 and the tDMRG-quantum-trajectory method Daley2009 . In this regard, several recently developed machine-learning-based solvers Carleo2017 ; Nagy2019 ; Hartmann2019 ; Vicentini2019 ; Mazza2021 ; Liu2022 ; Luo2022 may provide valuable insights. Additionally, given a learned dissipative model, can machine learning algorithms help design the corresponding protocols for system-environment decoupling? We expect this work, as well as these questions, to stimulate more interdisciplinary studies in the fields of machine learning and open quantum systems.
Acknowledgements.
L.C. would like to thank Hui Zhai, Ce Wang, Juan Yao, Lei Pan, Yang Shen, and Sen Yang for fruitful discussions. L.C. acknowledges support from the National Natural Science Foundation of China (Grant Nos. 12174236 and 12147215) and the postdoctoral fellowship offered by Hui Zhai. Part of this work was done during the fellowship at Tsinghua University, Beijing.
Appendix A Neural Ordinary Differential Equation
The NODE Chen2019 inherits the basic idea of the residual network He2016 and is able to reconstruct the differential equations satisfied by certain sequential data $\{ x(t_j) \}$. The NODE makes the ansatz $\dot{x} = f_\theta(x, t)$. If the specific form of $f$ is not known a priori, a deep neural network can be used to establish the mapping from $x$ to $\dot{x}$. However, if certain prior information is available, e.g., $x$ satisfies the Lindblad equation as in the situation considered in this work, the task reduces to determining the unknown parameters $\theta$ in $f_\theta$. Given an initial state $x(t_0)$, one can propagate the NODE forward from $t_0$ to $t_N$ using certain ODE solvers and obtain the predictive solutions $\tilde{x}(t_j)$. The loss function $\mathcal{L}$ is defined as an effective distance between the predictions and the realistic values $x(t_j)$. Next, we show how to obtain the derivative $\partial \mathcal{L} / \partial \theta$, which is based on the adjoint field method Chen2019 .
The augmented adjoint state is defined as the derivative of the loss with respect to the augmented state $x_{\rm aug} = [x, \theta, t]$, i.e.,
(16) $a_{\rm aug} = \left[ a, a_\theta, a_t \right] = \left[ \frac{\partial \mathcal{L}}{\partial x}, \frac{\partial \mathcal{L}}{\partial \theta}, \frac{\partial \mathcal{L}}{\partial t} \right]$,
which satisfies the differential equation
(17) $\frac{d a_{\rm aug}}{dt} = - a_{\rm aug} \frac{\partial f_{\rm aug}}{\partial x_{\rm aug}}$,
where
(18) $f_{\rm aug} = \frac{d x_{\rm aug}}{dt} = \left[ f_\theta, 0, 1 \right]$
is the derivative of the augmented state with respect to $t$, and
(19) $\frac{\partial f_{\rm aug}}{\partial x_{\rm aug}} = \begin{pmatrix} \partial f / \partial x & \partial f / \partial \theta & \partial f / \partial t \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$
is the Jacobian matrix. Substituting Eqs. (19) and (16) into Eq. (17), we have
(20) $\frac{da}{dt} = - a \frac{\partial f}{\partial x}, \qquad \frac{d a_\theta}{dt} = - a \frac{\partial f}{\partial \theta}, \qquad \frac{d a_t}{dt} = - a \frac{\partial f}{\partial t}$,
and through backward propagation of these equations from $t_N$ to $t_0$ we obtain the derivative $\partial \mathcal{L} / \partial \theta = a_\theta(t_0)$. In particular, the boundary condition for the backward propagation is given by
(21) $a(t_N) = \frac{\partial \mathcal{L}}{\partial x(t_N)}, \qquad a_\theta(t_N) = 0, \qquad a_t(t_N) = a(t_N)\, f_\theta\big( x(t_N), t_N \big)$,
where $\partial \mathcal{L} / \partial x(t_N)$ can be directly calculated using automatic differentiation toolboxes Baydin2018 ; Paszke2017 .
Note that, for sequential data with intermediate data points at $t_1 < \dots < t_{N-1}$, the total loss function is a summation of the individual losses, and hence $\partial \mathcal{L} / \partial \theta$ is simply averaged over all the periods of back propagation from $t_{j+1}$ to $t_j$.
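As a sanity check of the adjoint equations (20)-(21), the sketch below applies them to a scalar toy ODE $\dot{y} = -\theta y$ with loss $\mathcal{L} = \frac{1}{2}(y(T) - y_{\rm target})^2$, for which $\partial \mathcal{L} / \partial \theta$ is known analytically. The toy model and names are illustrative; the time component $a_t$ is omitted since $T$ is fixed.

```python
import numpy as np

theta, y0, T, y_target, steps = 0.7, 1.0, 2.0, 0.3, 1000
dt = T / steps

def rk4_step(f, z, h):
    """One fixed-step fourth-order Runge-Kutta step of size h."""
    k1 = f(z); k2 = f(z + 0.5 * h * k1); k3 = f(z + 0.5 * h * k2); k4 = f(z + h * k3)
    return z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# forward pass: y(0) -> y(T)
y = y0
for _ in range(steps):
    y = rk4_step(lambda v: -theta * v, y, dt)

# backward pass of the augmented state [y, a, a_theta]:
# da/dt = -a df/dy = theta * a,  da_theta/dt = -a df/dtheta = a * y   [cf. Eq. (20)]
def aug(z):
    yb, a, _ = z
    return np.array([-theta * yb, theta * a, a * yb])

z = np.array([y, y - y_target, 0.0])   # boundary condition at t = T  [cf. Eq. (21)]
for _ in range(steps):
    z = rk4_step(aug, z, -dt)
grad_adjoint = z[2]                    # a_theta(0) = dL/dtheta

# analytic gradient from y(T) = y0 * exp(-theta * T)
grad_exact = (y0 * np.exp(-theta * T) - y_target) * (-T * y0 * np.exp(-theta * T))
```

The backward integration reproduces the analytic gradient to high precision, confirming the sign conventions used in Eqs. (20) and (21).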
Appendix B Complementary Result of the Two Examples
Here, we provide more information on the two examples. The 4-by-4 Hamiltonian of the spin-3/2 model can be expanded in the SU(4) Hermitian generators in the fundamental representation. We practically take the 15 generalized Gell-Mann matrices Bertlmann2008 ,
(22) $\lambda_{jk}^{(s)} = E_{jk} + E_{kj}, \qquad \lambda_{jk}^{(a)} = -i \left( E_{jk} - E_{kj} \right), \qquad 1 \le j < k \le 4$,
$\lambda_{l}^{(d)} = \sqrt{ \frac{2}{l(l+1)} } \left( \sum_{j=1}^{l} E_{jj} - l\, E_{l+1, l+1} \right), \qquad 1 \le l \le 3$,
where $E_{jk}$ denotes the matrix unit with a single nonzero entry at row $j$ and column $k$.

These generators naturally satisfy the traceless condition $\mathrm{Tr}\, \lambda_a = 0$ and the orthogonality condition $\mathrm{Tr} \left( \lambda_a \lambda_b \right) = 2 \delta_{ab}$, which are also the properties that the Lindblad dissipative operators are supposed to satisfy. One may check that the one-body and two-body dissipative operators of the second example (the spin-1/2 chain) also satisfy these two conditions. Both conditions would be broken if any dissipative operator $L_k$ carried a finite trace. However, since the Lindblad equation (2) is invariant under the shift
(23) $L_k \to L_k + c_k, \qquad H \to H + \frac{1}{2i} \sum_k \gamma_k \left( c_k^* L_k - c_k L_k^\dagger \right)$,
with $c_k$ complex constants, one can always recover the conditions by absorbing the trace into the Hamiltonian. Note that the Lindblad equation has another invariance, $L_k \to e^{i \phi_k} L_k$ with $\phi_k$ an arbitrary phase factor, which implies that the overall phase of $L_k$ is meaningless; hence we set $\sqrt{\gamma_k}$ positive throughout this work.
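The generalized Gell-Mann construction of Eq. (22), together with a numerical check of the traceless and orthogonality conditions, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def gellmann(d=4):
    """Generalized Gell-Mann matrices: d^2 - 1 traceless Hermitian SU(d) generators
    normalized so that Tr(g_a g_b) = 2 * delta_ab."""
    gens = []
    for j in range(d):
        for k in range(j + 1, d):
            E = np.zeros((d, d), dtype=complex)
            E[j, k] = 1.0
            gens.append(E + E.T)            # symmetric off-diagonal generator
            gens.append(-1j * (E - E.T))    # antisymmetric off-diagonal generator
    for l in range(1, d):
        diag = np.zeros(d)
        diag[:l], diag[l] = 1.0, -l
        gens.append(np.sqrt(2.0 / (l * (l + 1))) * np.diag(diag).astype(complex))
    return gens

gens = gellmann(4)                           # the 15 SU(4) generators
gram = np.array([[np.trace(a @ b).real for b in gens] for a in gens])
```

The Gram matrix `gram` equals $2 \times \mathbb{1}_{15}$, verifying the orthogonality condition stated above.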
The first row of Fig. 5 shows the learning result of the spin-3/2 model using the t-independent probe, where the subfigures show the dependence of the loss and the relative errors on the training epochs (a1), the spin dynamics before and during training (a2), and the predictive parameters $\tilde{h}_a$ (a3) and $\tilde{\gamma}_a$ (a4) after training, respectively. The results indicate that the NODE can also accurately reproduce the spin-3/2 model with small predictive errors. The second row of Fig. 5 displays the result for the spin-chain model using the t-dependent probe. Specifically, in Fig. 5(b1) the data points were collected by measuring the total spins; there, the loss function decreases but the predictive errors barely decrease. In contrast, if the measurement is performed on the local spins, favorable results are obtained, as illustrated in Figs. 5(b2)-(b4). These behaviors verify the ”symmetry” remarks discussed in the main text, since local observations explicitly break the translational ”symmetry”.
Appendix C General Solution of Lindblad Equation
We obtain the solution of the Lindblad equation by mapping the Liouvillian into the double Hilbert space, a procedure generally known as the Choi-Jamiołkowski isomorphism Choi1975 ; Jamiolkowski1972 or vectorization, i.e.,
(24) $\mathcal{L}[\rho] \to \mathcal{L}_v \vec{\rho}, \qquad \mathcal{L}_v = -i \left( \mathbb{1} \otimes H - H^T \otimes \mathbb{1} \right) + \sum_k \gamma_k \left[ L_k^* \otimes L_k - \frac{1}{2} \mathbb{1} \otimes L_k^\dagger L_k - \frac{1}{2} \left( L_k^\dagger L_k \right)^T \otimes \mathbb{1} \right]$,
where $\mathcal{L}_v$ is the vectorized Liouvillian operator and $\mathbb{1}$ is an identity operator of the same shape as $H$. Generally, $\mathcal{L}_v$ is non-Hermitian, with the spectral structure $\mathcal{L}_v = \sum_n \Delta_n | r_n \rangle \langle l_n |$, where $\Delta_n$ denotes the Liouvillian spectrum, and $| r_n \rangle$ and $\langle l_n |$ are the right and left eigenvectors, respectively. In such a framework, the general solution of the Lindblad equation can be written as
(25) $\vec{\rho}(t) = \sum_n c_n e^{\Delta_n t} | r_n \rangle$,
where $\vec{\rho}$ is the vectorized density operator, and
(26) $c_n = \langle l_n | \vec{\rho}_0 \rangle$
characterizes the projective coefficients with respect to the initial state $\vec{\rho}_0$.
The Liouvilian spectrum is generally complex, where the imaginary part characterizes the undamped oscillations, while the real part accounts for the damping of during evolution. In the spectrum, it is well-known that there exists an undamped state with , which is the steady state to which the system relaxes after a long evolution. The energy gap between the slowest damping state and the steady state is the Liouvilian gap. The real gap determines the decoherent time in the way of . For weak dissipation , is linearly proportional to the dissipative strength , i.e., , with a non-universal factor depending on the number of dissipation channels and the particular form of external probes, whereas the imaginary gap is characterized by the energy scale of . Hence, we generally have .
The dataset is constructed from the measurement
(27) $\langle O(t) \rangle = \langle\!\langle O | \vec{\rho}(t) \rangle = \sum_n c_n e^{\Delta_n t} \langle\!\langle O | r_n \rangle \approx c_0 \langle\!\langle O | r_0 \rangle + 2 \left| c_1 \langle\!\langle O | r_1 \rangle \right| \cos\left( \mathrm{Im}\, \Delta\, t + \phi \right) e^{\mathrm{Re}\, \Delta\, t}$,
where $\langle\!\langle O |$ is the vectorized observable. In the second line, we have neglected the contributions of faster-damping states with $| \mathrm{Re}\, \Delta_n | > | \mathrm{Re}\, \Delta |$. Clearly, the first term is a $t$-independent constant; the second term exhibits a fast oscillation modulated by the slowly damped envelope $e^{\mathrm{Re}\, \Delta\, t}$. Taking the derivative of Eq. (27) with respect to $\gamma$ leads to Eq. (13), based on which one can obtain $\Delta \mathcal{L}$ as $\tilde{\gamma}$ approaches the realistic value $\gamma$. In particular, for the MAE loss we have
(28) $\Delta \mathcal{L}_{\rm MAE} \approx \frac{ | \tilde{\gamma} - \gamma | }{N_t} \sum_j \left| \frac{\partial \langle O(t_j) \rangle}{\partial \gamma} \right| \propto \frac{1}{T} \int_0^T t\, e^{-t/\tau}\, dt$.
On the other hand, for the MSE loss function, $\Delta \mathcal{L}$ can be calculated as
(29) $\Delta \mathcal{L}_{\rm MSE} \approx \frac{ ( \tilde{\gamma} - \gamma )^2 }{N_t} \sum_j \left( \frac{\partial \langle O(t_j) \rangle}{\partial \gamma} \right)^2 \propto \frac{1}{T} \int_0^T t^2\, e^{-2t/\tau}\, dt$.
References
- (1) T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O’Brien, Quantum Computers, Nature 464, 45 (2010).
- (2) M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University Press, Cambridge, 2010).
- (3) J. Preskill, Quantum Computing in the NISQ Era and Beyond, Quantum 2, 79 (2018).
- (4) I. M. Georgescu, S. Ashhab, and F. Nori, Quantum Simulation, Rev. Mod. Phys. 86, 153 (2014).
- (5) I. Bloch, J. Dalibard, and S. Nascimbène, Quantum Simulations with Ultracold Quantum Gases, Nat. Phys. 8, 267 (2012).
- (6) A. I. Lvovsky, B. C. Sanders, and W. Tittel, Optical Quantum Memory, Nature Photon 3, 706 (2009).
- (7) V. Giovannetti, S. Lloyd, and L. Maccone, Advances in Quantum Metrology, Nature Photon 5, 222 (2011).
- (8) L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum Metrology with Nonclassical States of Atomic Ensembles, Rev. Mod. Phys. 90, 035005 (2018).
- (9) C. L. Degen, F. Reinhard, and P. Cappellaro, Quantum Sensing, Rev. Mod. Phys. 89, 035002 (2017).
- (10) H. M. Wiseman and G. J. Milburn, Quantum Theory of Optical Feedback via Homodyne Detection, Phys. Rev. Lett. 70, 548 (1993).
- (11) D. Vitali, P. Tombesi, and G. J. Milburn, Controlling the Decoherence of a “Meter” via Stroboscopic Feedback, Phys. Rev. Lett. 79, 2442 (1997).
- (12) E. L. Hahn, Spin Echoes, Phys. Rev. 80, 580 (1950).
- (13) L. Viola, E. Knill, and S. Lloyd, Dynamical Decoupling of Open Quantum Systems, Phys. Rev. Lett. 82, 2417 (1999).
- (14) G. de Lange, Z. H. Wang, D. Ristè, V. V. Dobrovitski, and R. Hanson, Universal Dynamical Decoupling of a Single Solid-State Spin from a Spin Bath, Science 330, 60 (2010).
- (15) J. Du, X. Rong, N. Zhao, Y. Wang, J. Yang, and R. B. Liu, Preserving Electron Spin Coherence in Solids by Optimal Dynamical Decoupling, Nature 461, 1265 (2009).
- (16) M. H. Abobeih, J. Cramer, M. A. Bakker, N. Kalb, M. Markham, D. J. Twitchen, and T. H. Taminiau, One-Second Coherence for a Single Electron Spin Coupled to a Multi-Qubit Nuclear-Spin Environment, Nat Commun 9, 2552 (2018).
- (17) A. Laraoui, F. Dolde, C. Burk, F. Reinhard, J. Wrachtrup, and C. A. Meriles, High-Resolution Correlation Spectroscopy of 13C Spins near a Nitrogen-Vacancy Centre in Diamond, Nat. Commun. 4, 1651 (2013).
- (18) Y. Sagi, I. Almog, and N. Davidson, Process Tomography of Dynamical Decoupling in a Dense Cold Atomic Ensemble, Phys. Rev. Lett. 105, 053201 (2010).
- (19) Y. Wang, M. Um, J. Zhang, S. An, M. Lyu, J.-N. Zhang, L.-M. Duan, D. Yum, and K. Kim, Single-Qubit Quantum Memory Exceeding Ten-Minute Coherence Time, Nature Photon 11, 646 (2017).
- (20) Q. Guo, S.-B. Zheng, J. Wang, C. Song, P. Zhang, K. Li, W. Liu, H. Deng, K. Huang, D. Zheng, X. Zhu, H. Wang, C.-Y. Lu, and J.-W. Pan, Dephasing-Insensitive Quantum Information Storage and Processing with Superconducting Qubits, Phys. Rev. Lett. 121, 130501 (2018).
- (21) B. Pokharel, N. Anand, B. Fortman, and D. A. Lidar, Demonstration of Fidelity Improvement Using Dynamical Decoupling with Superconducting Qubits, Phys. Rev. Lett. 121, 220502 (2018).
- (22) R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, Neural Ordinary Differential Equations, arXiv:1806.07366 [cs, stat] (2018).
- (23) G. Lindblad, On the Generators of Quantum Dynamical Semigroups, Commun. Math. Phys. 48, 119 (1976).
- (24) V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, Completely Positive Dynamical Semigroups of N‐level Systems, J. Math. Phys. 17, 821 (1976).
- (25) C. W. Gardiner and P. Zoller, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics, 3rd ed (Springer, Berlin; New York, 2004).
- (26) C. Di Franco, M. Paternostro, and M. S. Kim, Hamiltonian Tomography in an Access-Limited Setting without State Initialization, Phys. Rev. Lett. 102, 187203 (2009).
- (27) J. Zhang and M. Sarovar, Quantum Hamiltonian Identification from Measurement Time Traces, Phys. Rev. Lett. 113, 080401 (2014).
- (28) A. Sone and P. Cappellaro, Hamiltonian Identifiability Assisted by a Single-Probe Measurement, Phys. Rev. A 95, 022335 (2017).
- (29) J. Zhang and M. Sarovar, Identification of Open Quantum Systems from Observable Time Traces, Phys. Rev. A 91, 052121 (2015).
- (30) A. Sone and P. Cappellaro, Exact Dimension Estimation of Interacting Qubit Systems Assisted by a Single Quantum Probe, Phys. Rev. A 96, 062334 (2017).
- (31) X.-L. Qi and D. Ranard, Determining a Local Hamiltonian from a Single Eigenstate, Quantum 3, 159 (2019).
- (32) E. Bairey, I. Arad, and N. H. Lindner, Learning a Local Hamiltonian from Local Measurements, Phys. Rev. Lett. 122, 020504 (2019).
- (33) E. Bairey, C. Guo, D. Poletti, N. H. Lindner, and I. Arad, Learning the Dynamics of Open Quantum Systems from Their Steady States, New J. Phys. 22, 032001 (2020).
- (34) Z. Li, L. Zou, and T. H. Hsieh, Hamiltonian Tomography via Quantum Quench, Phys. Rev. Lett. 124, 160502 (2020).
- (35) T. Xin, S. Lu, N. Cao, G. Anikeeva, D. Lu, J. Li, G. Long, and B. Zeng, Local-Measurement-Based Quantum State Tomography via Neural Networks, Npj Quantum Inf 5, 109 (2019).
- (36) L. Che, C. Wei, Y. Huang, D. Zhao, S. Xue, X. Nie, J. Li, D. Lu, and T. Xin, Learning Quantum Hamiltonians from Single-Qubit Measurements, Phys. Rev. Research 3, 023246 (2021).
- (37) X. Ma, Z. C. Tu, and S.-J. Ran, Deep Learning Quantum States for Hamiltonian Estimation, Chinese Phys. Lett. 38, 110301 (2021).
- (38) R. Pascanu, T. Mikolov, and Y. Bengio, On the Difficulty of Training Recurrent Neural Networks, arXiv:1211.5063 (2012).
- (39) P. O. J. Scherer, Computational Physics: Simulation of Classical and Quantum Systems (Springer International Publishing, Heidelberg, 2013).
- (40) M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. L. Hollenberg, The Nitrogen-Vacancy Colour Centre in Diamond, Physics Reports 528, 1 (2013).
- (41) O. Klochan, A. P. Micolich, A. R. Hamilton, K. Trunov, D. Reuter, and A. D. Wieck, Observation of the Kondo Effect in a Spin- Hole Quantum Dot, Phys. Rev. Lett. 107, 076805 (2011).
- (42) Y. Kawaguchi and M. Ueda, Spinor Bose-Einstein Condensates, Physics Reports 520, 253 (2012).
- (43) H. Kim, K. Wang, Y. Nakajima, R. Hu, S. Ziemak, P. Syers, L. Wang, H. Hodovanets, J. D. Denlinger, P. M. R. Brydon, D. F. Agterberg, M. A. Tanatar, R. Prozorov, and J. Paglione, Beyond Triplet: Unconventional Superconductivity in a Spin-3/2 Topological Semimetal, Sci. Adv. 4, eaao4513 (2018).
- (44) S. Sachdev and B. Keimer, Quantum Criticality, Physics Today 64, 29 (2011).
- (45) C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of Topological Quantum Matter with Symmetries, Rev. Mod. Phys. 88, 035005 (2016).
- (46) X.-G. Wen, Colloquium: Zoo of Quantum-Topological Phases of Matter, Rev. Mod. Phys. 89, 041004 (2017).
- (47) Dealing with an open quantum system of spins is equivalent to dealing with a closed system of twice as many spins, since the open system can be viewed as two coupled closed systems under the Choi-Jamiołkowski isomorphism [see Eq. (24) in Appendix C]. The computational complexity is further enlarged by the numbers of batches and initializations. Practically, we rent two Nvidia GTX 1080 Ti graphics cards to speed up the numerics, which keeps the training time of the spin-chain task at about one day.
- (48) C. W. Gardiner and P. Zoller, Quantum Noise (Springer Berlin Heidelberg, Berlin, Heidelberg, 2000).
- (49) I. de Vega and D. Alonso, Dynamics of Non-Markovian Open Quantum Systems, Rev. Mod. Phys. 89, 015001 (2017).
- (50) A. J. Daley, Quantum Trajectories and Open Many-Body Quantum Systems, Advances in Physics 63, 77 (2014).
- (51) J. Schachenmayer, A. Pikovski, and A. M. Rey, Many-Body Quantum Spin Dynamics with Monte Carlo Trajectories on a Discrete Phase Space, Phys. Rev. X 5, 011022 (2015).
- (52) J. Huber, A. M. Rey, and P. Rabl, Realistic Simulations of Spin Squeezing and Cooperative Coupling Effects in Large Ensembles of Interacting Two-Level Systems, Phys. Rev. A 105, 013716 (2022).
- (53) A. J. Daley, J. M. Taylor, S. Diehl, M. Baranov, and P. Zoller, Atomic Three-Body Loss as a Dynamical Three-Body Interaction, Phys. Rev. Lett. 102, 040402 (2009).
- (54) G. Carleo and M. Troyer, Solving the Quantum Many-Body Problem with Artificial Neural Networks, Science 355, 602 (2017).
- (55) A. Nagy and V. Savona, Variational Quantum Monte Carlo Method with a Neural-Network Ansatz for Open Quantum Systems, Phys. Rev. Lett. 122, 250501 (2019).
- (56) M. J. Hartmann and G. Carleo, Neural-Network Approach to Dissipative Quantum Many-Body Dynamics, Phys. Rev. Lett. 122, 250502 (2019).
- (57) F. Vicentini, A. Biella, N. Regnault, and C. Ciuti, Variational Neural-Network Ansatz for Steady States in Open Quantum Systems, Phys. Rev. Lett. 122, 250503 (2019).
- (58) P. P. Mazza, D. Zietlow, F. Carollo, S. Andergassen, G. Martius, and I. Lesanovsky, Machine Learning Time-Local Generators of Open Quantum Dynamics, Phys. Rev. Research 3, 023084 (2021).
- (59) Z. Liu, L.-M. Duan, and D.-L. Deng, Solving Quantum Master Equations with Deep Quantum Neural Networks, Phys. Rev. Research 4, 013097 (2022).
- (60) D. Luo, Z. Chen, J. Carrasquilla, and B. K. Clark, Autoregressive Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation, Phys. Rev. Lett. 128, 090501 (2022).
- (61) K. He, X. Zhang, S. Ren, and J. Sun, Deep Residual Learning for Image Recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, Las Vegas, NV, USA, 2016), pp. 770–778.
- (62) R. A. Bertlmann and P. Krammer, Bloch Vectors for Qudits, J. Phys. A: Math. Theor. 41, 235303 (2008).
- (63) A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind. Automatic Differentiation in Machine Learning: a Survey. The Journal of Machine Learning Research, 18, 1 (2018).
- (64) A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, Automatic Differentiation in PyTorch, in NIPS 2017 Workshop on Autodiff (Long Beach, California, USA, 2017).
- (65) M.-D. Choi, Completely Positive Linear Maps on Complex Matrices, Linear Algebra and Its Applications 10, 285 (1975).
- (66) A. Jamiołkowski, Linear Transformations Which Preserve Trace and Positive Semidefiniteness of Operators, Reports on Mathematical Physics 3, 275 (1972).