
From short-sighted to far-sighted: A comparative study of recursive machine learning approaches for open quantum systems

Arif Ullah [email protected] School of Physics and Optoelectronic Engineering, Anhui University, Hefei, 230601, Anhui, China
Abstract

Accurately modeling the dynamics of open quantum systems is critical for advancing quantum technologies, yet traditional methods often struggle with balancing accuracy and efficiency. Machine learning (ML) offers a promising alternative, particularly through recursive models that predict a system's evolution from its past history. While these models have shown success in predicting single observables, their effectiveness in more complex tasks, such as forecasting the full reduced density matrix (RDM), remains unclear. In this work, we extend history-based recursive ML approaches to complex quantum systems, comparing four physics-informed neural network (PINN) architectures: (i) single-RDM-predicting PINN (SR-PINN), (ii) SR-PINN with simulation parameters (PSR-PINN), (iii) multi-RDMs-predicting PINN (MR-PINN), and (iv) MR-PINN with simulation parameters (PMR-PINN). We apply these models to two representative open quantum systems: the spin-boson (SB) model and the Fenna-Matthews-Olson (FMO) complex. Our results demonstrate that single-RDM-predicting models (SR-PINN and PSR-PINN) are limited by a narrow history window, failing to capture the full complexity of quantum evolution and resulting in unstable long-term predictions, especially in nonlinear and highly correlated dynamics. In contrast, multi-RDMs-predicting models (MR-PINN and PMR-PINN) provide more accurate predictions by extending the forecast horizon, incorporating long-range temporal correlations, and mitigating error propagation. Surprisingly, explicitly including simulation parameters such as temperature and reorganization energy in PSR-PINN and PMR-PINN does not consistently improve accuracy and, in some cases, even reduces performance. This suggests that these parameters are already implicitly encoded in the RDM evolution, making their inclusion redundant and adding unnecessary complexity. These findings highlight the limitations of short-sighted recursive forecasting in complex quantum systems and demonstrate the superior stability and accuracy of far-sighted approaches for long-term predictions.

Open quantum systems describe quantum systems interacting with their environment, playing a fundamental role in quantum computing, quantum memory, quantum transport, proton tunneling in DNA, and energy transfer in photosynthesis.[1, 2, 3, 4, 5] Their dynamics are captured by the reduced density matrix (RDM), which evolves under both the system's internal dynamics and the influence of its environment.

Modeling the influence of the environment is challenging due to its high-dimensional nature. Mixed quantum-classical methods[6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] simplify the problem by treating the system quantum mechanically while approximating the environment classically, significantly reducing computational cost. However, these methods often struggle to capture detailed balance[18, 19, 20] or subtle quantum correlations.[21] Fully quantum approaches, including path-integral[22, 23, 24, 25, 26, 27, 28, 29, 30] and quantum master equation-based methods,[31, 32, 33, 34, 35, 36, 37, 38] provide more accurate descriptions but are computationally expensive, particularly in regimes with strong system-environment coupling or where fine discretization is needed for numerical stability.

Recently, machine learning (ML) has emerged as a promising tool for learning complex spatiotemporal dynamics in high-dimensional systems.[39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61] One widely used ML strategy is the recursive approach, where the future evolution of a quantum state is predicted iteratively based on a short history of past evolution. This method has been successfully applied to the relaxation dynamics of the two-state spin-boson (SB) model,[40, 43, 44, 60] even enabling extrapolation beyond the trained time window.[40] However, previous applications have been limited to predicting a single observable—such as the population difference in the SB model—and have relied solely on single-step prediction models.

In this work, we extend recursive ML approaches to more complex quantum systems, focusing on predicting the full RDM rather than just a single observable. We examine four physics-informed neural network (PINN)-based architectures: (i) the single-RDM-predicting PINN (SR-PINN), (ii) the SR-PINN with simulation parameters (PSR-PINN), (iii) the multi-RDMs-predicting PINN (MR-PINN), and (iv) the MR-PINN with simulation parameters (PMR-PINN). These architectures are tested on the relaxation dynamics of the SB model and the exciton energy transfer (EET) process in the Fenna-Matthews-Olson (FMO) complex.

Our results underscore the limitations of short-sighted, single-RDM-predicting models (SR-PINN and PSR-PINN) in capturing long-term system dynamics, especially in systems with intricate behavior. These models, constrained by a narrow history window, fail to predict long-term quantum evolution accurately, as they cannot fully capture the complexity of system evolution. In contrast, far-sighted models—such as MR-PINN and PMR-PINN—overcome these limitations by extending the forecast horizon, allowing them to incorporate long-range temporal correlations and achieve more stable predictions.

Although we initially anticipated that incorporating simulation parameters such as reorganization energy ($\lambda$), characteristic frequency ($\gamma$), and temperature ($T$) would improve accuracy, our findings show that these parameters do not consistently enhance performance and, in some instances, actually degrade it. This suggests that the relevant effects of these parameters are already implicitly encoded in the RDM evolution, making their explicit inclusion unnecessary in certain cases.

To build our case, let us consider an open quantum system (S) consisting of $n$ states interacting with an external environment (E). As stated before, the dynamics of the system is governed by the RDM, which evolves non-unitarily due to environmental effects. While the full system follows unitary evolution described by the Liouville–von Neumann equation, tracing out the environmental degrees of freedom introduces a superoperator $\mathbf{R}$ that encodes dissipation and decoherence. Under the assumption that the initial state is separable between the system and environment ($\rho(0)=\rho_{\rm S}(0)\otimes\rho_{\rm E}(0)$), the reduced dynamics can be written as

$\rho_{\rm S}(t) = \mathrm{Tr}_{\rm E}\left(\mathbf{U}(t,0)\,\rho(0)\,\mathbf{U}^{\dagger}(t,0)\right),$
$\dot{\rho}_{\rm S}(t) = -i\left[\mathbf{H}_{\rm S},\rho_{\rm S}(t)\right]+\mathbf{R}\left[\rho_{\rm S}(t)\right]$, (1)

where $\rho_{\rm S}(t)$ is the RDM of the system at time $t$, $\mathrm{Tr}_{\rm E}$ denotes the partial trace over the environment, $\mathbf{R}$ is a superoperator that encodes the effects of the environment, and $\mathbf{U}(t,0)$ and $\mathbf{U}^{\dagger}(t,0)$ are the forward and backward time-evolution operators, respectively.
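For concreteness, the partial trace over the environment can be carried out numerically by reshaping the full density matrix and contracting the environmental indices. The following is a minimal numpy sketch, assuming the full Hilbert space is ordered as system ⊗ environment; it is an illustration only, not the workflow used to generate the reference data.

    import numpy as np

    def partial_trace_env(rho, dim_s, dim_e):
        """Trace out the environment from a full density matrix of shape
        (dim_s*dim_e, dim_s*dim_e), assuming system (x) environment ordering."""
        rho4 = rho.reshape(dim_s, dim_e, dim_s, dim_e)   # indices (i, a, j, b)
        return np.einsum('iaja->ij', rho4)               # sum over a = b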

In the recursive ML framework, modeling the time evolution of Eq. (1) is formulated as learning a mapping function $\mathcal{M}$ that maps the input descriptors to predicted RDMs. In general, we have

$\mathcal{M}:\{\mathbb{R}^{n\times n}\}^{k^{\prime}}\to\{\mathbb{R}^{n\times n}\}^{l}$, (2)

where $\{\mathbb{R}^{n\times n}\}^{k^{\prime}}$ is a collection of $k^{\prime}$ input matrices (of size $n\times n$) that encode physical information such as historical RDM data, initial conditions, and simulation parameters, and $\{\mathbb{R}^{n\times n}\}^{l}$ is a sequence of $l$ predicted RDMs corresponding to different time steps. In our study, we consider four distinct approaches for predicting the time evolution of the RDM:

The SR-PINN approach: This method predicts the RDM at the next time step based solely on a fixed-length history of past RDMs. The recursive mapping function is defined as

$\mathcal{M}_{\rm rec}:\{\mathbb{R}^{n\times n}\}^{k^{\prime}}\to\mathbb{R}^{n\times n}$, (3)

with

$\mathcal{M}_{\rm rec}\Big[\rho_{\rm S}(t_{k-k^{\prime}+1}),\,\rho_{\rm S}(t_{k-k^{\prime}+2}),\,\dots,\,\rho_{\rm S}(t_{k})\Big]=\rho_{\rm S}(t_{k+1})\,.$ (4)

The procedure is applied iteratively: after predicting $\rho_{\rm S}(t_{k+1})$, this new RDM is appended to the history while the oldest entry is removed, keeping the memory size constant at $k^{\prime}$.
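A minimal Python sketch of this rollout is given below; `model` is a hypothetical callable standing in for the trained SR-PINN, mapping a stack of $k^{\prime}$ past RDMs to the next one.

    import numpy as np

    def sr_pinn_rollout(model, seed_history, n_steps):
        """Sketch of the SR-PINN recursion of Eq. (4): predict one RDM at a
        time and slide the fixed-length history window forward."""
        history = list(seed_history)              # the k' seed RDMs
        trajectory = list(seed_history)
        for _ in range(n_steps):
            rho_next = model(np.stack(history))   # predict rho_S(t_{k+1})
            trajectory.append(rho_next)
            history.pop(0)                        # discard the oldest entry
            history.append(rho_next)              # memory size stays at k'
        return trajectory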

The PSR-PINN approach: To improve prediction accuracy, additional simulation parameters $\mathbf{p}$ (e.g., system–environment coupling, characteristic frequency, temperature) are incorporated into the input. The mapping function becomes

$\mathcal{M}_{\rm rec}:\mathbb{R}^{p}\times\{\mathbb{R}^{n\times n}\}^{k^{\prime}}\to\mathbb{R}^{n\times n}$, (5)

such that

$\mathcal{M}_{\rm rec}\Big[\mathbf{p},\,[\rho_{\rm S}(t_{k-k^{\prime}+1}),\,\dots,\,\rho_{\rm S}(t_{k})]\Big]=\rho_{\rm S}(t_{k+1})\,.$ (6)

As with the standard SR-PINN, the process is applied recursively with a fixed history length.

The MR-PINN approach: Rather than predicting a single RDM at a time, the MR-PINN approach forecasts a block of future RDMs in one step. Its mapping function is defined by

$\mathcal{M}_{\rm rec}:\{\mathbb{R}^{n\times n}\}^{k^{\prime}}\to\{\mathbb{R}^{n\times n}\}^{N_{f}}$, (7)

with

$\mathcal{M}_{\rm rec}\Big[\rho_{\rm S}(t_{k-k^{\prime}+1}),\,\dots,\,\rho_{\rm S}(t_{k})\Big]=\Big[\rho_{\rm S}(t_{k+1}),\,\rho_{\rm S}(t_{k+2}),\,\dots,\,\rho_{\rm S}(t_{k+N_{f}})\Big]\,.$ (8)

In this case, the model outputs $N_{f}$ future RDMs simultaneously, thus providing a multi-step prediction without requiring iterative updating.

The PMR-PINN approach: This variant extends the MR-PINN method by including simulation parameters in the prediction. The mapping is defined as

$\mathcal{M}_{\rm rec}:\mathbb{R}^{p}\times\{\mathbb{R}^{n\times n}\}^{k^{\prime}}\to\{\mathbb{R}^{n\times n}\}^{N_{f}}$, (9)

so that

$\mathcal{M}_{\rm rec}\Big[\mathbf{p},\,[\rho_{\rm S}(t_{k-k^{\prime}+1}),\,\dots,\,\rho_{\rm S}(t_{k})]\Big]=\Big[\rho_{\rm S}(t_{k+1}),\,\rho_{\rm S}(t_{k+2}),\,\dots,\,\rho_{\rm S}(t_{k+N_{f}})\Big]\,.$ (10)

By integrating the simulation parameters $\mathbf{p}$, the model can adjust its predictions to account for different physical conditions while forecasting multiple future time steps concurrently.

Each of these approaches leverages the past history of RDMs (and optionally simulation parameters) to predict the future dynamics of the system, differing primarily in whether they predict a single RDM or multiple RDMs in one go.

To evaluate the proposed methods, we analyze the relaxation dynamics of the SB model and the EET process in the FMO complex (see Methods section for details). The models are implemented using a hybrid deep learning architecture that integrates convolutional neural networks (CNNs) with long short-term memory (LSTM) layers, followed by fully connected dense layers (CNN-LSTM). Following the approach outlined in Ref. 39, training is optimized using a composite loss function, expressed as:

$\mathcal{L}=\alpha_{1}\mathcal{L}_{1}+\alpha_{2}\mathcal{L}_{2}+\alpha_{3}\mathcal{L}_{3}+\alpha_{4}\mathcal{L}_{4}$, (11)

where each loss term is defined as follows.

The first term, $\mathcal{L}_{1}$, represents the mean squared error (MSE) between the predicted elements of the RDM, $\rho_{\text{S}}$, and the reference values, $\tilde{\rho}_{\text{S}}$:

$\mathcal{L}_{1}=\frac{1}{N_{t}\cdot n^{2}}\sum_{t=1}^{N_{t}}\sum_{i,j=1}^{n}\left(\tilde{\rho}_{\text{S},i,j}(t)-\rho_{\text{S},i,j}(t)\right)^{2}\,.$ (12)

Here, $N_{t}$ denotes the number of time steps.

To ensure trace conservation of the density matrix, the second loss term, $\mathcal{L}_{2}$, penalizes deviations of the trace from unity:

$\mathcal{L}_{2}=\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}\left(\mathrm{Tr}\,\rho_{\text{S}}(t)-1\right)^{2}\,.$ (13)

The third term, $\mathcal{L}_{3}$, enforces positive semi-definiteness by penalizing negative eigenvalues $\mu_{i}(t)$ of the density matrix:

$\mathcal{L}_{3}=\frac{1}{N_{t}\cdot n}\sum_{t=1}^{N_{t}}\sum_{i=1}^{n}\max\left(0,-\mu_{i}(t)\right)^{2}\,.$ (14)

Additionally, $\mathcal{L}_{4}$ ensures that all eigenvalues remain within the valid range $[0,1]$, enforcing a key physical constraint of the RDM:

$\mathcal{L}_{4}=\frac{1}{N_{t}\cdot n}\sum_{t=1}^{N_{t}}\sum_{i=1}^{n}\left(\mathrm{clip}\left(\mu_{i}(t),0,1\right)-\mu_{i}(t)\right)^{2}\,.$ (15)

The clipping function used here is defined as:

$\mathrm{clip}(\mu_{i}(t),0,1)=\begin{cases}0,&\text{if }\mu_{i}(t)<0,\\ \mu_{i}(t),&\text{if }0\leq\mu_{i}(t)\leq 1,\\ 1,&\text{if }\mu_{i}(t)>1.\end{cases}$ (16)

The weighting coefficients $\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}$ control the relative contributions of these loss terms. In our case, we set them all to unity ($\alpha_{1}=\alpha_{2}=\alpha_{3}=\alpha_{4}=1.0$). Collectively, these loss components ensure that the predicted RDM satisfies key physical properties: accuracy ($\mathcal{L}_{1}$), trace conservation ($\mathcal{L}_{2}$), positive semi-definiteness ($\mathcal{L}_{3}$), and eigenvalue constraints ($\mathcal{L}_{4}$).
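As an illustration, the composite loss of Eqs. (11)-(16) can be assembled in a few lines. The numpy sketch below assumes the predicted and reference RDMs are stored as complex arrays of shape (N_t, n, n); it is a schematic of the loss terms, not the exact training implementation.

    import numpy as np

    def composite_loss(rho_pred, rho_ref, alphas=(1.0, 1.0, 1.0, 1.0)):
        """Sketch of Eq. (11): accuracy, trace, positivity, and eigenvalue-range terms."""
        a1, a2, a3, a4 = alphas
        # L1: mean squared error over all RDM elements, Eq. (12)
        l1 = np.mean(np.abs(rho_ref - rho_pred) ** 2)
        # L2: deviation of the trace from unity, Eq. (13)
        traces = np.trace(rho_pred, axis1=1, axis2=2).real
        l2 = np.mean((traces - 1.0) ** 2)
        # Eigenvalues mu_i(t) of each (assumed Hermitian) predicted RDM
        mu = np.linalg.eigvalsh(rho_pred)
        # L3: penalize negative eigenvalues, Eq. (14)
        l3 = np.mean(np.maximum(0.0, -mu) ** 2)
        # L4: penalize eigenvalues outside [0, 1], Eqs. (15)-(16)
        l4 = np.mean((np.clip(mu, 0.0, 1.0) - mu) ** 2)
        return a1 * l1 + a2 * l2 + a3 * l3 + a4 * l4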

For demonstration, we use data from the publicly available QD3SET-1 database[62] for both the SB model and the FMO complex. The models are trained on 80% of the simulations, with the remaining 20% reserved for testing. Further details on the dataset and training process can be found in the Methods section.

Figure 1: Time evolution of the RDM elements, including both population and coherence terms, as predicted by the SR-PINN, PSR-PINN, MR-PINN, and PMR-PINN models. The first column shows the RDM evolution for the symmetric (Sym) SB model, while the second column displays the corresponding dynamics for the asymmetric (Asym) SB model. Predictions are generated recursively, starting from an initial seed dynamics of time-length $4/\Delta$, and are compared to reference results (shown as dots). For the symmetric case, the parameters used correspond to an unseen set: $\varepsilon/\Delta=0.0$, $\gamma/\Delta=3.0$, $\lambda/\Delta=0.6$, and $\beta\Delta=1.0$. In the asymmetric case, the parameters are $\varepsilon/\Delta=1.0$, $\gamma/\Delta=9.0$, $\lambda/\Delta=0.6$, and $\beta\Delta=1.0$.
Figure 2: Time evolution of the RDM elements for the FMO complex with initial excitation on site-1, as predicted by SR-PINN, PSR-PINN, MR-PINN, and PMR-PINN. The first column presents the population dynamics of exciton energy transfer (EET), while the second highlights selected coherence elements. Predictions are generated recursively using an initial seed dynamics of 0.2 ps and are compared to reference dynamics (shown as dots). The test trajectory corresponds to simulation parameters $\gamma=400~\text{cm}^{-1}$, $\lambda=40~\text{cm}^{-1}$, and $T=90~\text{K}$.

Figure 1 presents the predictive performance of all four models for the time evolution of RDM elements in both the symmetric and asymmetric SB models. Each model is provided with an initial short-time seed ($4/\Delta$) and tasked with recursively forecasting the system's future evolution.

The results highlight the limitations of SR-PINN, which exhibits significant errors in both diagonal and off-diagonal terms (population and coherence), leading to a rapid divergence from the expected dynamics. PSR-PINN, despite incorporating simulation parameters, further degrades accuracy, indicating that the model's history window remains insufficient for stable recursive predictions. In contrast, MR-PINN, which leverages a longer forecasting horizon, effectively mitigates error accumulation and successfully captures both population and coherence dynamics across the prediction window. PMR-PINN performs similarly to MR-PINN, suggesting that the inclusion of simulation parameters does not provide additional benefits in this setting.

To test a larger system, in Figure 2 we showcase the predicted evolution of RDM elements for the FMO complex under initial excitation on site-1. The models are provided with an initial short-time seed (0.2 ps) and recursively predict the system's future dynamics.

SR-PINN exhibits considerable inaccuracies, particularly in long-term dynamics, leading to deviations from the expected population transfer trends. PSR-PINN, despite integrating simulation parameters, fails to improve performance and even amplifies errors, especially in diagonal elements. As in the SB model, MR-PINN achieves significantly enhanced accuracy, demonstrating robust predictions of both energy transfer and coherence decay. PMR-PINN yields results comparable to MR-PINN, reinforcing the observation that the longer forecasting window is the primary factor driving predictive stability.

A quantitative analysis of model performance, summarized in the accompanying table (Table 1), further substantiates these findings. As in the SB model, SR-PINN struggles to maintain accuracy, particularly for coherence elements, and PSR-PINN further aggravates errors. MR-PINN consistently outperforms both single-time-step approaches, achieving the lowest mean absolute errors (MAE) across all RDM elements. The inclusion of simulation parameters in PMR-PINN does not lead to meaningful improvements over MR-PINN.

Notably, the errors are higher for the asymmetric SB model compared to the symmetric case, indicating that prediction accuracy degrades as system complexity increases. This trend suggests that more intricate dynamical behaviors impose additional challenges for PINN-based models, particularly when using single-RDM-predicting training strategies.

For the FMO complex, the disparity between methods remains evident. SR-PINN and PSR-PINN perform poorly, with PSR-PINN producing the highest errors for both population and coherence terms, particularly when the initial excitation occurs at site-6 (as shown in the table). MR-PINN and PMR-PINN provide a marked improvement, although predictive errors remain higher compared to the SB model, reflecting the increased complexity of the system. The trend observed in the SB model, where errors increase with system complexity, is also evident in the FMO complex. The lack of significant gains from PMR-PINN over MR-PINN suggests that the longer prediction window is the dominant factor in improving accuracy, while the inclusion of simulation parameters has a limited effect.

Table 1: Time-averaged mean absolute error (MAE) for the diagonal (Diag) and off-diagonal (Off-diag) elements of the RDMs predicted by the SR-PINN, PSR-PINN, MR-PINN, and PMR-PINN models for the test trajectory of the SB model and FMO complex. Off-diagonal errors represent the average MAE for both real and imaginary components. Values are expressed in the form $10^{x}$ (e.g., 1.6e-2 denotes $1.6\times 10^{-2}$).
Model      | SB Model (Sym)                   | SB Model (Asym)
           | Diag     Off-diag (Real, Imag)   | Diag     Off-diag (Real, Imag)
SR-PINN    | 1.6e-2   (4.4e-2, 1.7e-3)        | 2.9e-2   (3.8e-2, 7.8e-3)
PSR-PINN   | 1.3e-2   (1.3e-1, 1.1e-3)        | 1.0e-1   (7.9e-2, 3.6e-3)
MR-PINN    | 4.9e-4   (7.3e-4, 5.1e-4)        | 1.4e-3   (3.9e-3, 9.1e-4)
PMR-PINN   | 6.7e-4   (5.1e-3, 8.8e-4)        | 1.3e-3   (1.2e-2, 1.2e-3)

Model      | FMO Complex (site-1)             | FMO Complex (site-6)
           | Diag     Off-diag (Real, Imag)   | Diag     Off-diag (Real, Imag)
SR-PINN    | 1.4e-2   (4.1e-3, 8.6e-4)        | 1.3e-2   (2.3e-3, 1.6e-4)
PSR-PINN   | 6.3e-2   (1.6e-2, 4.8e-4)        | 1.1e-1   (2.9e-2, 2.9e-3)
MR-PINN    | 2.6e-2   (6.1e-3, 2.8e-4)        | 2.5e-2   (2.3e-3, 1.5e-4)
PMR-PINN   | 2.5e-2   (5.9e-3, 4.2e-4)        | 2.6e-2   (2.3e-3, 1.4e-4)

In summary, this work investigated four PINN-based architectures for predicting the time evolution of the RDM in selected open quantum systems: single-RDM-predicting models (SR-PINN, PSR-PINN) and multi-RDMs-predicting models (MR-PINN, PMR-PINN). These models use historical RDM data to predict future dynamics, with some incorporating environment-specific parameters such as temperature and system-bath coupling.

Our findings reveal the limitations of single-RDM-predicting models (SR-PINN and PSR-PINN) in capturing long-term quantum dynamics. These models, constrained by a narrow history window, struggle to predict long-term quantum evolution accurately, as they fail to capture the full complexity of system evolution. However, as demonstrated in previous works, [40, 43, 44, 60] when trained on simpler observables—such as the population difference in the spin-boson model—they yield reasonable predictions, suggesting that single-variable evolution is easier to propagate recursively.

In contrast, multi-RDMs-predicting models (MR-PINN and PMR-PINN) consistently provide stable and accurate long-term predictions across various scenarios. By predicting multiple RDMs in one step, these models mitigate cumulative errors and better capture long-range temporal correlations, improving their ability to generalize to unseen conditions. This emphasizes that extending the forecast horizon is more effective than merely increasing the historical input length, as explicitly forecasting future states stabilizes predictions more effectively than relying solely on past dynamics.

Surprisingly, explicitly incorporating simulation parameters—such as reorganization energy, characteristic frequency, and temperature (in PSR-PINN and PMR-PINN)—did not consistently improve predictive accuracy and, in some cases, slightly reduced performance. This suggests that the effects of these parameters are already implicitly captured in the RDM evolution, rendering their explicit inclusion redundant and potentially introducing unnecessary complexity.

Overall, this work underscores the limitations of short-sighted, single-step recursive models in complex quantum systems and reinforces the advantages of far-sighted, multi-step approaches for robust, long-term predictions. Our findings highlight that incorporating a longer predictive horizon is key to improving prediction stability, capturing complex dynamics, and reducing the impact of short-term fluctuations in open quantum systems.

I Methods

Hamiltonians of the SB model and FMO complex: The SB model describes a two-level system interacting with an environment composed of independent harmonic oscillators. The Hamiltonian of the system is given by:

$H=\epsilon\sigma_{z}+\Delta\sigma_{x}+\sum_{k}\omega_{k}\mathbf{b}_{k}^{\dagger}\mathbf{b}_{k}+\sigma_{z}\sum_{k}c_{k}(\mathbf{b}_{k}^{\dagger}+\mathbf{b}_{k}),$ (17)

where $\sigma_{z}$ and $\sigma_{x}$ are Pauli matrices, $\epsilon$ denotes the energy difference between the two states, and $\Delta$ represents their coupling strength. The surrounding environment consists of harmonic oscillators characterized by creation and annihilation operators $\mathbf{b}_{k}^{\dagger}$ and $\mathbf{b}_{k}$, corresponding to mode $k$ with frequency $\omega_{k}$. The system-bath interaction is governed by the coupling coefficient $c_{k}$ for each mode.

Our next system of interest, the FMO complex, is a trimeric protein found in green sulfur bacteria, where it plays a crucial role in photosynthetic energy transfer. Each monomer of the FMO complex contains multiple chlorophyll molecules (typically seven or eight) that facilitate exciton transport.[63] The excitonic dynamics within a monomer can be described by the Frenkel exciton model Hamiltonian:[64]

$\mathbf{H}=\sum_{i=1}^{n}|i\rangle\epsilon_{i}\langle i|+\sum_{i\neq j}^{n}|i\rangle J_{ij}\langle j|$
$\quad+\sum_{i=1}^{n}\sum_{k}\left(\frac{1}{2}\mathbf{P}_{k,i}^{2}+\frac{1}{2}\omega_{k,i}^{2}\mathbf{Q}_{k,i}^{2}\right)\mathbf{I}$
$\quad-\sum_{i=1}^{n}\sum_{k}|i\rangle c_{k,i}\mathbf{Q}_{k,i}\langle i|+\sum_{i=1}^{n}|i\rangle\lambda_{i}\langle i|,$ (18)

where $n$ denotes the number of chlorophyll sites, $\epsilon_{i}$ is the site energy, and $J_{ij}$ represents the electronic coupling between sites $i$ and $j$. The operators $\mathbf{P}_{k,i}$ and $\mathbf{Q}_{k,i}$ correspond to the momentum and position of the $k$-th vibrational mode associated with site $i$, while $\omega_{k,i}$ is its frequency. The identity matrix $\mathbf{I}$ ensures proper dimensional consistency in the model. The coupling strength between site $i$ and the $k$-th vibrational mode is given by $c_{k,i}$, and $\lambda_{i}$ represents the reorganization energy of site $i$.

In both the SB model and the FMO complex, the environmental influence is characterized by the Debye spectral density:

$J(\omega)=2\lambda\frac{\gamma\omega}{\omega^{2}+\gamma^{2}},$ (19)

where $\lambda$ is the reorganization energy, and $\gamma$ is the characteristic frequency, defined as the inverse of the relaxation time ($\gamma=1/\tau$). For the FMO complex, we assume that all chlorophyll sites experience the same environmental conditions.
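For reference, the Debye spectral density of Eq. (19) is straightforward to evaluate; a short sketch (all quantities assumed to be in consistent units, e.g., cm$^{-1}$ for the FMO parameters quoted above):

    import numpy as np

    def debye_spectral_density(omega, lam, gamma):
        """Debye spectral density J(omega) of Eq. (19); lam is the reorganization
        energy and gamma the characteristic frequency (gamma = 1/tau)."""
        omega = np.asarray(omega, dtype=float)
        return 2.0 * lam * gamma * omega / (omega ** 2 + gamma ** 2)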

Data extraction: For training our models, we utilized precomputed RDMs provided by the QD3SET-1 database,[62] which contains simulations based on the hierarchical equations of motion (HEOM) method.[22, 65, 66, 27] In the case of the SB model, our dataset, labeled $\mathcal{D}_{\mathrm{sb}}$, comprises 1000 simulations covering a four-dimensional parameter space defined by $\varepsilon/\Delta$, $\lambda/\Delta$, $\gamma/\Delta$, and $\beta\Delta$, corresponding to the system bias, bath reorganization energy, bath relaxation rate, and inverse temperature, respectively. For the seven-site FMO complex, we also used 1000 simulations from QD3SET-1 that detail the exciton dynamics starting from excitations at site-1 and site-6, spanning the parameter set $(\lambda,\gamma,T)$. In this dataset, the dynamics was generated using the trace-conserving local thermalizing Lindblad master equation (LTLME),[67] with Hamiltonian parameters taken from the work of Adolphs and Renger.[68] Specifically, the FMO Hamiltonian, $\mathbf{H}_{\mathrm{S}}$, is expressed as

$\mathbf{H}_{\mathrm{S}}=\begin{pmatrix}200&-87.7&5.5&-5.9&6.7&-13.7&-9.9\\ -87.7&320&30.8&8.2&0.7&11.8&4.3\\ 5.5&30.8&0&-53.5&-2.2&-9.6&6.0\\ -5.9&8.2&-53.5&110&-70.7&-17.0&-63.6\\ 6.7&0.7&-2.2&-70.7&270&81.1&-1.3\\ -13.7&11.8&-9.6&-17.0&81.1&420&39.7\\ -9.9&4.3&6.0&-63.6&-1.3&39.7&230\end{pmatrix},$ (20)

with an added diagonal offset of $12210\,\mathrm{cm}^{-1}$.
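For clarity, the system Hamiltonian of Eq. (20), including the diagonal offset, can be assembled directly; the numpy snippet below simply reproduces the matrix quoted above (all entries in cm$^{-1}$).

    import numpy as np

    # Seven-site FMO Hamiltonian of Eq. (20), in cm^-1
    H_S = np.array([
        [ 200.0, -87.7,   5.5,  -5.9,   6.7, -13.7,  -9.9],
        [ -87.7, 320.0,  30.8,   8.2,   0.7,  11.8,   4.3],
        [   5.5,  30.8,   0.0, -53.5,  -2.2,  -9.6,   6.0],
        [  -5.9,   8.2, -53.5, 110.0, -70.7, -17.0, -63.6],
        [   6.7,   0.7,  -2.2, -70.7, 270.0,  81.1,  -1.3],
        [ -13.7,  11.8,  -9.6, -17.0,  81.1, 420.0,  39.7],
        [  -9.9,   4.3,   6.0, -63.6,  -1.3,  39.7, 230.0],
    ])
    # Add the 12210 cm^-1 offset to the diagonal
    H_S = H_S + 12210.0 * np.eye(7)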

Data Preparation: To construct the training dataset, each RDM, $\rho_{\rm S}(t)$, along with its associated coefficients, is flattened into a one-dimensional vector. Given the Hermitian property of the RDM ($\rho_{\text{S},ij}(t)=\rho_{\text{S},ji}(t)^{*}$), we retain only the real components of the diagonal elements while including both the real and imaginary parts of the upper triangular off-diagonal elements.
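A minimal sketch of this flattening, assuming a complex Hermitian $(n,n)$ array, is shown below; for the seven-site FMO complex it yields 7 diagonal entries plus $2\times 21$ off-diagonal entries, i.e., 49 real features per time step.

    import numpy as np

    def flatten_rdm(rho):
        """Flatten a Hermitian RDM: real diagonal plus real and imaginary
        parts of the upper-triangular (i < j) off-diagonal elements."""
        n = rho.shape[0]
        iu = np.triu_indices(n, k=1)
        return np.concatenate([rho.diagonal().real, rho[iu].real, rho[iu].imag])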

Each simulation trajectory is then divided into multiple training samples. In the recursive training framework, an initial segment of the system's dynamics, $\{\rho_{\rm S}(t_{0}),\rho_{\rm S}(t_{1}),\dots,\rho_{\rm S}(t_{k})\}$, serves as the input sequence. For the single-RDM-predicting models (SR-PINN and PSR-PINN), the dataset is structured to predict the immediate next RDM, $\rho_{\rm S}(t_{k+1})$. The input sequence is updated at each step by appending the newly predicted RDM while discarding the earliest one ($\rho_{\rm S}(t_{0})$), maintaining a fixed sequence length. This iterative process continues until the final time step $t_{K}$ is reached.

For the multi-RDMs-predicting models (MR-PINN and PMR-PINN), the target output consists of a block of $N_{f}$ future RDMs. In the SB model, the prediction window is set to $N_{f}=40$ time steps, computed as $N_{f}=2/\Delta\times\Delta/dt$. For the FMO complex, $N_{f}=80$ time steps, which corresponds to $0.4\,\text{ps}$ at the selected time step.

More generally, given a propagation period from $t_{k}$ to $t_{K}$ with a prediction window of $dt$, the total number of training samples per simulation is determined by $(t_{K}-t_{k})/dt$. For the single-RDM-predicting approaches, the SB model uses $t_{K}=20/\Delta$, $t_{k}=4/\Delta$, and $dt=0.05$, while the FMO complex adopts $t_{K}=1\,\text{ps}$, $t_{k}=0.2\,\text{ps}$, and $dt=0.005\,\text{ps}$. In the multi-RDMs setting, the effective $dt$ is $2/\Delta$ for the SB model and $0.4\,\text{ps}$ for the FMO complex.

It is important to note that in the PSR-PINN and PMR-PINN models, simulation parameters were normalized by their respective maximum values. The normalized simulation parameters are expressed as $\lambda/\lambda_{\rm max}$, $\gamma/\gamma_{\rm max}$, $\beta/\beta_{\rm max}$, and $T/T_{\rm max}$, where $\lambda_{\rm max}$, $\gamma_{\rm max}$, $\beta_{\rm max}$, and $T_{\rm max}$ correspond to the maximum values of $\lambda$, $\gamma$, $\beta$, and $T$, respectively.

Training and Prediction Strategies: To improve training efficiency, we utilize farthest point sampling[69, 41] to select a representative subset of simulation trajectories. For each case in the SB model ($\varepsilon/\Delta=0$ and $1$) and the FMO complex (initial excitations on site-1 and site-6), 400 simulation trajectories are allocated for training, with the remaining data reserved for testing. The training is conducted using a CNN-LSTM architecture, where convolutional layers are followed by LSTM layers and fully connected dense layers. For the SB model, a single CNN-LSTM model is trained for both cases ($\varepsilon/\Delta=0$ and $1$), whereas for the FMO complex, separate models are trained for initial excitations on site-1 and site-6. To ensure a fair comparison, all models share an identical architecture, and during inference, models are selected based on comparable training and validation loss values.
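One plausible realization of such a CNN-LSTM is sketched below in PyTorch; the framework choice, layer widths, kernel size, and activations are illustrative assumptions rather than the exact architecture used here. For the single-RDM models the output size is the flattened RDM dimension, while for the multi-RDMs models it is $N_{f}$ times that.

    import torch
    import torch.nn as nn

    class CNNLSTM(nn.Module):
        """Sketch of a CNN-LSTM regressor: Conv1d over the history axis,
        an LSTM, and a dense head that emits the flattened prediction."""
        def __init__(self, n_features, n_out, hidden=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.head = nn.Sequential(
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_out),
            )

        def forward(self, x):                    # x: (batch, k', n_features)
            z = self.conv(x.transpose(1, 2))     # -> (batch, hidden, k')
            z, _ = self.lstm(z.transpose(1, 2))  # -> (batch, k', hidden)
            return self.head(z[:, -1])           # predict from the last time step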

Inference follows the same approach as the training data preparation. In single-RDM prediction, a short sequence of past RDMs, $\{\rho_{\text{S}}(t_{0}),\rho_{\text{S}}(t_{1}),\dots,\rho_{\text{S}}(t_{k})\}$, serves as the input seed. The model predicts the next RDM, $\rho_{\text{S}}(t_{k+1})$, which is then appended to the input sequence while the oldest RDM is removed. This iterative process continues until the entire trajectory is predicted.

For multi-RDMs prediction, a similar strategy is employed, but instead of predicting a single RDM, the model generates a window of $N_{f}$ future RDMs in each step. The input sequence is then updated with the last $N_{k}$ RDMs from the newly predicted block, allowing for the efficient generation of extended dynamics.
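A minimal sketch of this block-wise rollout is given below; `model` is again a hypothetical callable standing in for the trained MR-PINN (or PMR-PINN).

    import numpy as np

    def mr_pinn_rollout(model, seed_history, n_blocks, n_keep):
        """Sketch of multi-RDMs inference: each call yields a block of N_f
        future RDMs; the last N_k (= n_keep) predictions re-seed the input."""
        history = np.stack(seed_history)
        trajectory = list(seed_history)
        for _ in range(n_blocks):
            block = np.asarray(model(history))   # shape (N_f, n, n)
            trajectory.extend(block)
            history = block[-n_keep:]            # carry over the last N_k RDMs
        return trajectory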

II Acknowledgments

A.U. acknowledges funding from the National Natural Science Foundation of China (No. W2433037) and Natural Science Foundation of Anhui Province (No. 2408085QA002).

III Data availability

The code and data supporting this work are available at https://github.com/Arif-PhyChem/rc-pinn-comparison.

IV Competing interests

The author declares no competing interests.

References

  • [1] Breuer HP, Laine EM, Piilo J, Vacchini B. Colloquium: Non-Markovian dynamics in open quantum systems. Reviews of Modern Physics. 2016;88(2):021002.
  • [2] Khodjasteh K, Sastrawan J, Hayes D, Green TJ, Biercuk MJ, Viola L. Designing a practical high-fidelity long-time quantum memory. Nature Communications. 2013;4(1):2045.
  • [3] Cui P, Li XQ, Shao J, Yan Y. Quantum transport from the perspective of quantum open systems. Physics Letters A. 2006;357(6):449-53.
  • [4] Slocombe L, Sacchi M, Al-Khalili J. An open quantum systems approach to proton tunnelling in DNA. Communications Physics. 2022;5(1):109.
  • [5] Zerah Harush E, Dubi Y. Do photosynthetic complexes use quantum coherence to increase their efficiency? Probably not. Science advances. 2021;7(8):eabc4631.
  • [6] Miller WH. The Semiclassical Initial Value Representation: A Potentially Practical Way for Adding Quantum Effects to Classical Molecular Dynamics Simulations. J Phys Chem A. 2001;105(13):2942-55.
  • [7] Cotton SJ, Miller WH. Symmetrical windowing for quantum states in quasi-classical trajectory simulations: Application to electronically non-adiabatic processes. The Journal of chemical physics. 2013;139(23).
  • [8] Liu J, He X, Wu B. Unified formulation of phase space mapping approaches for nonadiabatic quantum dynamics. Accounts of chemical research. 2021;54(23):4215-28.
  • [9] Runeson JE, Richardson JO. Spin-mapping approach for nonadiabatic molecular dynamics. The Journal of Chemical Physics. 2019;151(4):044119.
  • [10] Runeson JE, Richardson JO. Generalized spin mapping for quantum-classical dynamics. The Journal of chemical physics. 2020;152(8).
  • [11] Mannouch JR, Richardson JO. A partially linearized spin-mapping approach for nonadiabatic dynamics. I. Derivation of the theory. The Journal of chemical physics. 2020;153(19).
  • [12] Mannouch JR, Richardson JO. A partially linearized spin-mapping approach for nonadiabatic dynamics. II. Analysis and comparison with related approaches. The Journal of chemical physics. 2020;153(19).
  • [13] Mannouch JR, Richardson JO. A partially linearized spin-mapping approach for simulating nonlinear optical spectra. The Journal of Chemical Physics. 2022;156(2).
  • [14] Tao G. A multi-state trajectory method for non-adiabatic dynamics simulations. The Journal of Chemical Physics. 2016;144(9).
  • [15] Mannouch JR, Richardson JO. A mapping approach to surface hopping. The Journal of Chemical Physics. 2023;158(10).
  • [16] Crespo-Otero R, Barbatti M. Recent advances and perspectives on nonadiabatic mixed quantum–classical dynamics. Chemical reviews. 2018;118(15):7026-68.
  • [17] Qiu J, Lu Y, Wang L. Multilayer subsystem surface hopping method for large-scale nonadiabatic dynamics simulation with hundreds of thousands of states. Journal of Chemical Theory and Computation. 2022;18(5):2803-15.
  • [18] Schmidt JR, Parandekar PV, Tully JC. Mixed quantum-classical equilibrium: Surface hopping. J Chem Phys. 2008;129(4):044104.
  • [19] Amati G, Runeson JE, Richardson JO. On detailed balance in nonadiabatic dynamics: From spin spheres to equilibrium ellipsoids. J Chem Phys. 2023;158:064113.
  • [20] Amati G, Mannouch JR, Richardson JO. Detailed balance in mixed quantum–classical mapping approaches. J Chem Phys. 2023;159:214114.
  • [21] Mannouch JR, Kelly A. Toward a Correct Description of Initial Electronic Coherence in Nonadiabatic Dynamics Simulations. J Phys Chem Lett. 2024;15:11687-95.
  • [22] Tanimura Y, Kubo R. Time evolution of a quantum system in contact with a nearly Gaussian-Markoffian noise bath. Journal of the Physical Society of Japan. 1989;58(1):101-14.
  • [23] Makarov DE, Makri N. Path integrals for dissipative systems by tensor multiplication. Condensed phase quantum dynamics for arbitrarily long time. Chemical physics letters. 1994;221(5-6):482-91.
  • [24] Su Y, Chen ZH, Wang Y, Zheng X, Xu RX, Yan Y. Extended dissipaton equation of motion for electronic open quantum systems: Application to the Kondo impurity model. The Journal of Chemical Physics. 2023;159(2).
  • [25] Yan Y, Xu M, Li T, Shi Q. Efficient propagation of the hierarchical equations of motion using the Tucker and hierarchical Tucker tensors. The Journal of Chemical Physics. 2021;154(19).
  • [26] Gong H, Ullah A, Ye L, Zheng X, Yan Y. Quantum entanglement of parallel-coupled double quantum dots: A theoretical study using the hierarchical equations of motion approach. Chinese Journal of Chemical Physics. 2018;31(4):510.
  • [27] Xu M, Yan Y, Shi Q, Ankerhold J, Stockburger J. Taming quantum noise for efficient low temperature simulations of open quantum systems. Physical Review Letters. 2022;129(23):230601.
  • [28] Bai S, Zhang S, Huang C, Shi Q. Hierarchical Equations of Motion for Quantum Chemical Dynamics: Recent Methodology Developments and Applications. Accounts of Chemical Research. 2024;57(21):3151-60.
  • [29] Wang Y, Mulvihill E, Hu Z, Lyu N, Shivpuje S, Liu Y, et al. Simulating open quantum system dynamics on NISQ computers with generalized quantum master equations. Journal of Chemical Theory and Computation. 2023;19(15):4851-62.
  • [30] Makri N. Quantum Dynamics Methods Based on the Real-Time Path Integral. In: Comprehensive Computational Chemistry, First Edition: Volume 1-4. Elsevier; 2023. p. V4-293.
  • [31] Han L, Chernyak V, Yan YA, Zheng X, Yan Y. Stochastic Representation of Non-Markovian Fermionic Quantum Dissipation. Physical review letters. 2019;123(5):050601.
  • [32] Han L, Ullah A, Yan YA, Zheng X, Yan Y, Chernyak V. Stochastic equation of motion approach to fermionic dissipative dynamics. I. Formalism. The Journal of Chemical Physics. 2020;152(20):204105.
  • [33] Ullah A, Han L, Yan YA, Zheng X, Yan Y, Chernyak V. Stochastic equation of motion approach to fermionic dissipative dynamics. II. Numerical implementation. The Journal of Chemical Physics. 2020;152(20):204106.
  • [34] Chen L, Bennett DI, Eisfeld A. Simulation of absorption spectra of molecular aggregates: A hierarchy of stochastic pure state approach. The Journal of Chemical Physics. 2022;156(12).
  • [35] Dan X, Xu M, Yan Y, Shi Q. Generalized master equation for charge transport in a molecular junction: Exact memory kernels and their high order expansion. The Journal of Chemical Physics. 2022;156(13).
  • [36] Stockburger JT. Exact propagation of open quantum systems in a system-reservoir context. EPL (Europhysics Letters). 2016;115(4):40010.
  • [37] Lyu N, Mulvihill E, Soley MB, Geva E, Batista VS. Tensor-train thermo-field memory kernels for generalized quantum master equations. Journal of Chemical Theory and Computation. 2023;19(4):1111-29.
  • [38] Liu Yy, Yan Ym, Xu M, Song K, Shi Q. Exact generator and its high order expansions in time-convolutionless generalized master equation: Applications to spin-boson model and excitation energy transfer. Chinese Journal of Chemical Physics. 2018;31(4):575-83.
  • [39] Ullah A, Richardson JO. Machine learning meets $\mathfrak{su}(n)$ Lie algebra: Enhancing quantum dynamics learning with exact trace conservation. arXiv preprint arXiv:2502.15141. 2025.
  • [40] Ullah A, Dral PO. Speeding up quantum dissipative dynamics of open systems with kernel methods. New Journal of Physics. 2021.
  • [41] Ullah A, Dral PO. Predicting the future of excitation energy transfer in light-harvesting complex with artificial intelligence-based quantum dynamics. Nature communications. 2022;13(1930):1-8.
  • [42] Ullah A, Dral PO. One-Shot Trajectory Learning of Open Quantum Systems Dynamics. The Journal of Physical Chemistry Letters. 2022;13(26):6037-41.
  • [43] Rodríguez LEH, Ullah A, Espinosa KJR, Dral PO, Kananenka AA. A comparative study of different machine learning methods for dissipative quantum dynamics. Machine Learning: Science and Technology. 2022;3(4):045016.
  • [44] Herrera Rodríguez LE, Kananenka AA. Convolutional neural networks for long time dissipative quantum dynamics. The Journal of Physical Chemistry Letters. 2021;12(9):2476-83.
  • [45] Ge F, Zhang L, Hou YF, Chen Y, Ullah A, Dral PO. Four-dimensional-spacetime atomistic artificial intelligence models. The Journal of Physical Chemistry Letters. 2023;14(34):7732-43.
  • [46] Zhang L, Ullah A, Pinheiro Jr M, Dral PO, Barbatti M. Excited-state dynamics with machine learning. In: Quantum Chemistry in the Age of Machine Learning. Elsevier; 2023. p. 329-53.
  • [47] Wu D, Hu Z, Li J, Sun X. Forecasting nonadiabatic dynamics using hybrid convolutional neural network/long short-term memory network. The Journal of Chemical Physics. 2021;155(22):224104.
  • [48] Lin K, Peng J, Xu C, Gu FL, Lan Z. Automatic evolution of machine-learning-based quantum dynamics with uncertainty analysis. Journal of Chemical Theory and Computation. 2022;18(10):5837-55.
  • [49] Bandyopadhyay S, Huang Z, Sun K, Zhao Y. Applications of neural networks to the simulation of dynamics of open quantum systems. Chemical Physics. 2018;515:272-8.
  • [50] Yang B, He B, Wan J, Kubal S, Zhao Y. Applications of neural networks to dynamics simulation of Landau-Zener transitions. Chemical Physics. 2020;528:110509.
  • [51] Lin K, Peng J, Xu C, Gu FL, Lan Z. Trajectory Propagation of Symmetrical Quasi-classical Dynamics with Meyer-Miller Mapping Hamiltonian Using Machine Learning. The Journal of Physical Chemistry Letters. 2022;13:11678-88.
  • [52] Tang D, Jia L, Shen L, Fang WH. Fewest-Switches Surface Hopping with Long Short-Term Memory Networks. The Journal of Physical Chemistry Letters. 2022;13(44):10377-87.
  • [53] Shakiba M, Philips AB, Autschbach J, Akimov AV. Machine Learning Mapping Approach for Computing Spin Relaxation Dynamics. The Journal of Physical Chemistry Letters. 2024. In press.
  • [54] Lin K, Gao X. Enhancing Open Quantum Dynamics Simulations Using Neural Network-Based Non-Markovian Stochastic Schrödinger Equation Method. arXiv preprint arXiv:2411.15914. 2024.
  • [55] Zeng H, Kou Y, Sun X. How Sophisticated Are Neural Networks Needed to Predict Long-Term Nonadiabatic Dynamics? Journal of Chemical Theory and Computation. 2024;20(22):9832-48.
  • [56] Long C, Cao L, Ge L, Li QX, Yan Y, Xu RX, et al. Quantum neural network approach to Markovian dissipative dynamics of many-body open quantum systems. The Journal of Chemical Physics. 2024 08;161(8):084105. Available from: https://doi.org/10.1063/5.0220357.
  • [57] Cao L, Ge L, Zhang D, Li X, Wang Y, Xu RX, et al. Neural Network Approach for Non-Markovian Dissipative Dynamics of Many-Body Open Quantum Systems. arXiv preprint arXiv:2404.11093. 2024.
  • [58] Zhang J, Chen L. A non-Markovian neural quantum propagator and its application in the simulation of ultrafast nonlinear spectra. Phys Chem Chem Phys. 2025;27:182-9. Available from: http://dx.doi.org/10.1039/D4CP03736G.
  • [59] Zhang J, Benavides-Riveros CL, Chen L. Artificial-Intelligence-Based Surrogate Solution of Dissipative Quantum Dynamics: Physics-Informed Reconstruction of the Universal Propagator. The Journal of Physical Chemistry Letters. 2024;15(13):3603-10.
  • [60] Herrera Rodríguez LE, Kananenka AA. A short trajectory is all you need: A transformer-based model for long-time dissipative quantum dynamics. The Journal of Chemical Physics. 2024;161(17).
  • [61] Zhang J, Benavides-Riveros CL, Chen L. Neural quantum propagators for driven-dissipative quantum dynamics. Phys Rev Res. 2025 Jan;7:L012013. Available from: https://link.aps.org/doi/10.1103/PhysRevResearch.7.L012013.
  • [62] Ullah A, Rodríguez LEH, Dral PO, Kananenka AA. QD3SET-1: A Database with Quantum Dissipative Dynamics Data Sets. Frontiers in Physics. 2023;11:1223973.
  • [63] Am Busch MS, Müh F, Madjet MEA, Renger T. The eighth bacteriochlorophyll completes the excitation energy funnel in the FMO protein. The journal of physical chemistry letters. 2011;2(2):93-8.
  • [64] Ishizaki A, Fleming GR. Unified treatment of quantum coherent and incoherent hopping dynamics in electronic energy transfer: Reduced hierarchy equation approach. The Journal of chemical physics. 2009;130(23):234111.
  • [65] Shi Q, Chen L, Nan G, Xu RX, Yan Y. Efficient hierarchical Liouville space propagator to quantum dissipative dynamics. The Journal of chemical physics. 2009;130(8):084105.
  • [66] Chen ZH, Wang Y, Zheng X, Xu RX, Yan Y. Universal time-domain Prony fitting decomposition for optimized hierarchical quantum master equations. The Journal of Chemical Physics. 2022;156:221102.
  • [67] Mohseni M, Rebentrost P, Lloyd S, Aspuru-Guzik A. Environment-assisted quantum walks in photosynthetic energy transfer. J Chem Phys. 2008;129(17):11B603.
  • [68] Adolphs J, Renger T. How proteins trigger excitation energy transfer in the FMO complex of green sulfur bacteria. Biophysical journal. 2006;91(8):2778-97.
  • [69] Dral PO. MLatom: A program package for quantum chemical research assisted by machine learning. Journal of computational chemistry. 2019;40(26):2339-47.