
Affiliations: Department of Physics, College of Sciences, Shanghai University, Shanghai 200444, People's Republic of China; Shanghai Key Laboratory of High Temperature Superconductors, Shanghai University, Shanghai 200444, People's Republic of China; Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou 225009, People's Republic of China

Toward the nonequilibrium thermodynamic analog of complexity and the Jarzynski identity

Chen Bai†,*, Wen-hao Li†,*,‡, Xian-hui Ge (corresponding author)
Emails: [email protected], [email protected], [email protected]
Abstract

The Jarzynski identity can describe small-scale nonequilibrium systems through stochastic thermodynamics by considering fluctuating trajectories in phase space. Complexity geometry frames quantum computational complexity in terms of Riemannian geometry, building a bridge between optimal quantum circuits and classical geodesics in the space of unitary operators; it thus allows methods of classical physics to be applied to purely quantum problems. By combining these two frameworks, the Jarzynski identity and complexity geometry, we derive a complexity analog of the Jarzynski identity, replacing the fluctuating trajectories in phase space with a set of geodesics in the space of unitary operators. The resulting complexity version of the Jarzynski identity strengthens the evidence for a well-defined resource theory of uncomplexity and supports an extended discussion of the second law of complexity. Furthermore, in analogy with the thermodynamic fluctuation-dissipation theorem, we propose a version of the fluctuation-dissipation theorem for complexity. Although this study does not focus on holographic fluctuations, the results turn out to be well suited to capturing their information. These nonequilibrium methods may contribute to understanding the nature of complexity and to studying the features of holographic fluctuations.

1 Introduction

After Wheeler proposed "It from bit" 1 , an increasing number of concepts from information theory have been introduced into every corner of physics and have played important roles. One fascinating and novel example is quantum computational complexity (hereafter simply "complexity"), defined as the minimal number of primitive quantum gates required to generate a given unitary operator $U$:

U=\underbrace{g_{N}\cdots g_{2}g_{1}}_{\mathrm{Complexity}=N},\qquad(1)

where the fixed gate set $\{g_{1},g_{2},\dots,g_{N}\}$ comprises the primitive gates required to generate $U$. Complexity has been introduced as a theoretical tool for quantifying the difficulty of implementing a desired quantum computational task. It measures the hardness of constructing a given unitary operator $U$ (unitary complexity) or of approximating a target quantum state from a reference state (state complexity).

In the context of the AdS/CFT correspondence 12 ; 13 ; 14 , several information quantities have natural duals in terms of geometric objects, which are considered to encode features of the holographic spacetime (e.g., entanglement entropy is dual to the area of extremal surfaces) 15 . Similar to the entanglement entropy, complexity was recently conjectured to have a holographic dual. The two main holographic correspondences for complexity are "Complexity=Volume" 16 ; 17 ; 18 and "Complexity=Action" 19 ; 20 . Chemissany and Osborne 21 developed a procedure to directly associate a pseudo-Riemannian manifold with the dual AdS space (bulk spacetime) arising from a natural causal set induced by local quantum circuits. Additionally, they studied the fluctuations of the AdS space, which are caused by the dynamics of the dual boundary quantum system via the principle of minimal complexity (explained in the next section), and argued that Brownian motion in the space of unitary operators might simulate such fluctuations. Furthermore, they introduced a partition function via a path integral in the space of unitary operators to capture information on holographic fluctuations.

To obtain a more quantitative understanding of complexity, a geometric treatment was proposed in 2 and solidified in 3 ; 4 ; 5 . Based on these works, a framework called "complexity geometry" was gradually established 6 ; 7 . Its main idea can be summarized as follows: one introduces a Riemannian (or Finsler) metric in the space of unitary operators (in this article, the group manifold, the space of unitary operators, and the configuration space all refer to the special unitary group equipped with such a metric structure; its elements act on a given number of qubits). The distance or the action functional obtained from this metric then serves as a measure of complexity. In this way, purely quantum complexity-related problems (the quantum scenario) are converted into geometric problems that can be solved with classical mechanics (the classical scenario) 8 . In particular, geodesics in the space of unitary operators can be obtained from the geodesic equations. Recently, complexity and its geometry have been used as efficient tools to study a wide range of topics, such as the second law of complexity 8 , black hole thermodynamics 9 , the accelerated expansion of the universe 10 , and quantum gravity 11 .

In the past few decades, the study of nonequilibrium systems in high energy physics has become increasingly popular. Accordingly, several remarkable theoretical frameworks have been implemented to capture the features of nonequilibrium systems. One of the most eye-catching frameworks is the Jarzynski identity 22 ; 23 , which connects equilibrium quantities with nonequilibrium processes. In particular, the Jarzynski identity builds a bridge between the equilibrium free energy difference, ΔF\Delta F, and the work done on the system during a non-equilibrium process, WW. The Jarzynski identity is expressed in the following form:

\left\langle\mathrm{exp}(-\beta W)\right\rangle=\mathrm{exp}(-\beta\Delta F),\qquad(2)

where $\beta$ denotes the inverse temperature and the bracket $\left\langle\cdots\right\rangle$ represents the ensemble average over all possible values of $W$. There are several proofs of the Jarzynski identity 24 ; 25 ; 26 ; 27 ; 28 . Hummer and Szabo proposed an elegant path integral proof 27 of the Jarzynski identity based on the Feynman-Kac formula 29 ; 30 ; 31 ; 32 , which has played a pivotal role in stochastic thermodynamics. Furthermore, as a novel tool, the Jarzynski identity has been applied to areas as diverse as the renormalization group 33 , out-of-time-order correlators (OTOCs) 34 , and the Rényi entropy 35 in holography and quantum information. Thus, it is natural to propose that the Jarzynski identity can be generalized to connect with another important information quantity, the complexity, which may provide deeper insights into high-energy physics.
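As a concrete illustration of Eq. (2) (a minimal numerical sketch of our own, not part of the original derivations), one can verify the identity for an overdamped Brownian particle dragged in a harmonic trap; since the trap shape never changes, $\Delta F=0$ and the identity predicts $\langle\mathrm{e}^{-\beta W}\rangle=1$ even though the average work is positive.

```python
import numpy as np

# Minimal sketch (not from the paper): Monte Carlo check of the Jarzynski
# identity, Eq. (2), for an overdamped Brownian particle in a harmonic trap
# whose center is dragged at constant speed v.  The trap shape is unchanged,
# so Delta F = 0 and <exp(-beta W)> should equal 1.
rng = np.random.default_rng(0)

beta, k, v = 1.0, 1.0, 1.0          # inverse temperature, stiffness, drag speed
T, n_steps, n_traj = 1.0, 1000, 20000
dt = T / n_steps

# Sample initial positions from the t = 0 Boltzmann distribution ~ exp(-beta k x^2 / 2).
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k), size=n_traj)
work = np.zeros(n_traj)

for n in range(n_steps):
    t = n * dt
    # Work increment from the explicit time dependence of the potential:
    # dW = (dV/dt) dt = -k v (x - v t) dt
    work += -k * v * (x - v * t) * dt
    # Overdamped Langevin step (unit friction, diffusion constant 1/beta).
    x += -k * (x - v * t) * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=n_traj)

print("<W>            =", work.mean())                  # positive: dissipation
print("<exp(-beta W)> =", np.exp(-beta * work).mean())  # ~ 1 = exp(-beta*DeltaF)
```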

The Jarzynski identity not only interrelates with the second law of thermodynamics but also characterizes several fluctuation relations, including the fluctuation-dissipation theorem, which is closely connected with entropy. Based on this, one of the core ideas explored in this study is the derivation of a version of the Jarzynski identity for complexity using the path integral approach 27 . In addition, we generalize the discussion in 8 to systems with time-dependent Hamiltonians and derive a version of the fluctuation-dissipation relation for complexity. Because 21 found that the fluctuations of a boundary quantum system are directly associated with those of the bulk spacetime, we suggest that the proposed fluctuation-dissipation theorem is a feasible tool for quantitatively exploring holographic fluctuations (we will not delve into this issue or invoke any gravitational model in this paper; we mainly focus on the relationship between complexity itself and the Jarzynski identity).

In this study, we derived a complexity version of the Jarzynski identity, which is our main result. In addition, we argued that the obtained identity might bring us insights into several topics about complexity, in particular uncomplexity as a computational resource, generalization of the second law of complexity, complexity fluctuation-dissipation theorem, and holographic fluctuations. The remainder of this paper is organized as follows. In Section 2, we briefly review the central concepts of the complexity geometry and the complexity version of the least action principle, namely the principle of minimal complexity. We review a special derivation of the Jarzynski identity based on the path integral method. In Section 3, we introduce the path integral in the space of unitary operators, and apply it to obtain the complexity version of the Jarzynski identity. The application of the Hamilton-Jacobi equation helps us rewrite the identity to a more intuitive form. In Section 4, we discuss four issues about the complexity based on the obtained Jarzynski identity, which are the resource theory of uncomplexity, generalization of the second law of complexity for stochastic auxiliary system 𝒜\mathcal{A}, fluctuation-dissipation theorem in the context of quantum complexity, and holographic fluctuations. In Section 5, we perform a numerical simulation of the transverse field Ising model to support our discussions on the second law of complexity, where the complexity version of the Jarzynski identity plays a vital role. Finally, in Section 6, we summarize our results and provide the conclusions and outlooks. This paper is structured in accordance with the flowchart presented in Fig. 1.

Figure 1: Flowchart of the paper. The reason for connecting “Fluctuation-dissipation theorem” and “Holographic fluctuations” with a dashed line is that the former might be a potential tool to help us quantitatively study the latter.

2 Preliminaries

In this section, we first briefly review the notion of complexity geometry. Two significant concepts are revisited: the $\mathcal{Q}$-$\mathcal{A}$ correspondence and the principle of minimal complexity. Second, we review the derivation of the Jarzynski identity given in 27 , which relies on the Feynman-Kac formula and the path integral method. Finally, we generalize this derivation to suit our later discussions.

2.1 Complexity geometry

Complexity geometry is a powerful tool for quantifying the hardness of generating a specific unitary operator $U\in\mathrm{SU}(\mathrm{dim}[\mathcal{H}])$ from the identity $I\in\mathrm{SU}(\mathrm{dim}[\mathcal{H}])$, where $\mathrm{dim}[\mathcal{H}]$ denotes the dimension of the Hilbert space $\mathcal{H}$ of a quantum system comprising a fixed number of qubits.

Figure 2: The minimal number of polyline segments (black line) can be understood as the gate complexity, and the action of trajectory γ(t)\gamma(t) (red line) can be understood as a smooth representation of the complexity.

Instead of using gate complexity, we minimize a smooth function (the cost) in a smooth manifold (the space of unitary operators) 12 . Then, our purpose changes from how to determine the optimal quantum circuit comprising quantum gates to how to generate the target unitary operator

U=\gamma(T)=\mathcal{P}e^{-i\int_{0}^{T}H(t)\mathrm{d}t}\qquad(3)

from a given Hamiltonian $H(t)$ with minimal cost (the central idea is presented in Fig. 2) in a certain time interval $T$, where $\mathcal{P}$ represents the time-ordering operator and $\gamma(t)$ denotes the trajectory in the space of unitary operators. A cost with fixed boundary conditions $\gamma(0)=I$, $\gamma(T)=U$ is defined as follows:

A_{a}(U)\equiv\int_{0}^{T}L_{a}[\gamma(t),\dot{\gamma}(t)]\mathrm{d}t,\qquad(4)

where $L_{a}$ is a local functional of $\gamma(t)\in\mathrm{SU}(\mathrm{dim}[\mathcal{H}])$ in the space of unitary operators (the subscript "$a$" indicates the auxiliary system described in Section 2.1.2) that has the following four characteristics:

  • (1) Continuity: $L_{a}[\gamma,\dot{\gamma}]\in C^{\infty}$.

  • (2) Non-negativity: $L_{a}[\gamma,\dot{\gamma}]\geq 0$, with equality if and only if $\dot{\gamma}=0$.

  • (3) Positive homogeneity: $\forall\lambda\geq 0$, $L_{a}[\gamma,\lambda\dot{\gamma}]=\lambda L_{a}[\gamma,\dot{\gamma}]$.

  • (4) Triangle inequality: $\forall\dot{\gamma},\dot{\gamma}^{\prime}$, $L_{a}$ satisfies $L_{a}[\gamma,\dot{\gamma}]+L_{a}[\gamma,\dot{\gamma}^{\prime}]\geq L_{a}[\gamma,\dot{\gamma}+\dot{\gamma}^{\prime}]$.

If we regard LaL_{a} in Eq. (4) as a Lagrangian, Aa(U)A_{a}(U) becomes the action functional of the trajectory connecting endpoints II and UU in the space of unitary operators. Thus, complexity is defined as the minimal value of AaA_{a},

C(U)\equiv\underset{\gamma}{\mathrm{inf}}\,A_{a}(U)=\underset{\gamma}{\mathrm{inf}}\int_{0}^{T}L_{a}[\gamma(t),\dot{\gamma}(t)]\mathrm{d}t,\qquad(5)

where the infimum is over all possible trajectories. The four properties of $L_{a}$ define a smooth manifold $\mathcal{M}$ (to be precise, a Finsler manifold, a generalization of a Riemannian manifold; see Chapter 8 of 36 for Finsler geometry) equipped with a local metric, called the complexity metric,

Gγ(,):Tγ×Tγ,G_{\gamma}(\cdot,\cdot):T_{\gamma}\mathcal{M}\times T_{\gamma}\mathcal{M}\to\mathbb{R}, (6)

where TγT_{\gamma}\mathcal{M} is the tangent space at γ\gamma\in\mathcal{M}. Note that in this study, \mathcal{M} is nothing but the SU(dim[])\mathrm{SU}(\mathrm{dim}[\mathcal{H}]) group manifold. We will use \mathcal{M} to represent the group manifold SU(dim[])\mathrm{SU}(\mathrm{dim}[\mathcal{H}]) (the space of unitary operators) throughout this study. More details can be found in 2 ; 3 ; 4 ; 5 ; 6 ; 7 .

2.1.1 Real quantum system “𝒬\mathcal{Q}

The quantum system we consider comprises $K$ qubits. The interactions between these qubits are taken to be $k$-local, meaning that the Hamiltonian contains interaction terms involving at most $k$ qubits. For example, a $2$-local Hamiltonian reads

H(t)=i,jhij(t),H(t)=\sum_{i,j}h_{ij}(t), (7)

where hij(t)h_{ij}(t) is a Hermitian operator acting on two arbitrary qubits ii and jj. The general expression of a kk-local Hamiltonian is

H(t)=\sum_{i_{1}<i_{2}<\cdots<i_{k}}\;\sum_{a_{1}=\{x,y,z\}}\cdots\sum_{a_{k}=\{x,y,z\}}J^{a_{1},\cdots,a_{k}}_{i_{1},\cdots,i_{k}}(t)\,\sigma_{i_{1}}^{a_{1}}\sigma_{i_{2}}^{a_{2}}\cdots\sigma_{i_{k}}^{a_{k}}.\qquad(8)

Schematically, it can be written as

H(t)=J^{M}(t)\sigma_{M},\qquad(9)

where $\sigma_{M}$ and $J^{M}(t)$ denote the generalized Pauli matrices and coupling functions, respectively. The index $M$ runs over all $(4^{K}-1)$ Pauli matrices with corresponding nonzero couplings ($(4^{K}-1)$ is also the dimension of the $\mathrm{SU}(2^{K})$ group). We follow Einstein's summation convention here and thereafter. This time-dependent Hamiltonian generates Eq. (3) and determines the dynamics of the quantum system.

What are the dynamics of the quantum system $\mathcal{Q}$? The system $\mathcal{Q}$ is a standard quantum system; when referring to its dynamics, we consider the evolution of states in its $2^{K}$-dimensional Hilbert space (the space of states). We choose a reference state $\ket{\Omega}$ and a target state $\ket{\Psi(T)}$ as the initial and final system states, respectively, and define a moving point $\gamma(t)\in\mathcal{M}$. In the Schrödinger picture, the evolution from the initial state at time $t=0$ to the final state at $t=T$ is achieved by applying a particular unitary operator $U=\gamma(T)$,

|Ψ(T)=U|Ω,\ket{\Psi(T)}=U\ket{\Omega}, (10)

and the time-evolution (with constraints γ(0)=I\gamma(0)=I and γ(T)=U\gamma(T)=U) of γ(t)\gamma(t) itself satisfies the Schrödinger equation

\frac{\mathrm{d}\gamma(t)}{\mathrm{d}t}=-iH(t)\gamma(t),\qquad(11)

where H(t)H(t) is a traceless Hermitian operator, i.e., Eq. (9).
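For concreteness, the following minimal sketch (our own illustration; the couplings and their time dependence are chosen arbitrarily) builds a time-dependent $2$-local Hamiltonian of the form of Eq. (9) for $K=2$ qubits and integrates Eq. (11) with small product steps.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# Minimal sketch (our illustration, not code from the paper): a random
# time-dependent 2-local Hamiltonian H(t) = J^M(t) sigma_M, Eq. (9), on K = 2
# qubits, and the Schroedinger evolution of gamma(t), Eq. (11).
rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Generalized Pauli basis on 2 qubits: all tensor products except identity x identity.
sigma = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)][1:]  # 15 = 4^2 - 1

# Smooth random couplings J^M(t); any time dependence would do for this sketch.
amps = rng.normal(size=len(sigma))
phases = rng.uniform(0, 2 * np.pi, size=len(sigma))

def hamiltonian(t):
    J = amps * np.cos(t + phases)
    return sum(Jm * s for Jm, s in zip(J, sigma))   # traceless Hermitian, Eq. (9)

# gamma(0) = I; gamma(t + dt) ~ exp(-i H(t) dt) gamma(t), a first-order product step.
T, n_steps = 1.0, 2000
dt = T / n_steps
gamma = np.eye(4, dtype=complex)
for n in range(n_steps):
    gamma = expm(-1j * hamiltonian(n * dt) * dt) @ gamma

print("unitarity check ||gamma^dag gamma - I|| =",
      np.linalg.norm(gamma.conj().T @ gamma - np.eye(4)))
```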

In this study, we mainly focus on the time-dependent Hamiltonian form of Eq. (9), which is a generalization of the case presented in 8 . The only difference between these two is that the coupling JJ in the present study varies with time but is constant in 8 . Two typical examples with the time-independent Hamiltonian are the SYK model 37 ; 38 ; 39 ; 40 and thermofield-double (TFD) state 41 ; 42 .

2.1.2 Auxiliary classical system “𝒜\mathcal{A}

To apply the complexity geometry, we define a classical auxiliary system $\mathcal{A}$, along the lines of 8 , as a system that describes the evolution of the unitary operators of the quantum system $\mathcal{Q}$. To define such a classical auxiliary system, we must consider the following questions:

  • (1)

    What does system 𝒜\mathcal{A} look like?

  • (2)

    How do we define the distance (metric) in the configuration space of 𝒜\mathcal{A}?

  • (3)

    What is the equation of motion?

Let us answer these questions one by one. The classical auxiliary system 𝒜\mathcal{A} describes the evolution of the unitary operators in \mathcal{M}; thus, the configuration space is the space of unitary operators \mathcal{M} and each point in \mathcal{M} corresponds to an element of the special unitary group. The number of degrees of freedom of system 𝒜\mathcal{A} is equal to the dimension of \mathcal{M}. Moreover, the evolution of a unitary operator in the space of unitary operators can be regarded as the motion of a fictitious nonrelativistic free particle with unit mass in the configuration space. The particle velocity is described by a tangent vector γ˙\dot{\gamma} along the trajectory in \mathcal{M}. The tangent vector’s dimension is consistent with the dimension of \mathcal{M} (the number of degrees of freedom of system 𝒜\mathcal{A}). For instance, consider a system 𝒬\mathcal{Q} comprising KK qubits and =SU(2K)\mathcal{M}=\mathrm{SU}(2^{K}). Then, the dual auxiliary system 𝒜\mathcal{A} has (4K1)(4^{K}-1) degrees of freedom and the tangent vectors of \mathcal{M} are described by (4K1)(4^{K}-1)-dimensional variables.

Because any Hermitian operator can be expanded in generalized Pauli matrices, such as Eq. (9), we can consider the set of Pauli matrices as a set of basis in \mathcal{M}. The generalized Pauli matrices satisfy

TrσMσN=δMN,\mathrm{Tr}\sigma_{M}\sigma_{N}=\delta_{MN}, (12)

where we assume that the trace “Tr\mathrm{Tr}” is always normalized and δMN\delta_{MN} denotes the Kronecker-delta. Therefore, coupling JM(t)J^{M}(t) can be solved as

J^{M}(t)=\delta^{MN}\mathrm{Tr}[i\dot{\gamma}(t)\gamma^{\dagger}(t)\sigma_{N}].\qquad(13)

If we set γ(0)=I\gamma(0)=I, then at t=0t=0 Eq. (13) is in form of

JM(0)=δMNTr[iγ˙(t)σN]|t=0,J^{M}(0)=\delta^{MN}\mathrm{Tr}[i\dot{\gamma}(t)\sigma_{N}]|_{t=0}, (14)

where the right-hand side is the projection of the initial velocity onto the tangent space axes oriented along the Pauli basis. Brown and Susskind regarded $J^{M}(0)$ as the initial velocity of a fictitious particle of the system $\mathcal{A}$ (i.e., $V^{M}(0)\equiv J^{M}(0)$). This is called the velocity-coupling correspondence 8 . Hence, $J^{M}(t)$ plays the role of a time-dependent velocity, and the couplings can be written in terms of general (local) coordinates, that is, $\{J^{M}(t)\}\to\{\dot{X}^{M}(t)\}$, where $\{X^{M}(t)\}$ denotes the coefficients (components) of vectors in $\mathcal{M}$ expanded in the Pauli basis. Notably, the choice of local coordinates is not unique; an example of a different choice is presented in 43 .
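The velocity-coupling correspondence is easy to check numerically. The following sketch (our own illustration, with a randomly chosen constant Hamiltonian) recovers the couplings from a finite-difference estimate of $\dot{\gamma}$ via Eq. (13), using the normalized trace of Eq. (12).

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

# Minimal sketch (our illustration): verify Eq. (13),
# J^M(t) = delta^{MN} Tr[i gamma_dot gamma^dagger sigma_N], on K = 2 qubits.
# "Tr" is the normalized trace, tr[...]/2^K, so that Tr[sigma_M sigma_N] = delta_MN (Eq. (12)).
rng = np.random.default_rng(2)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)][1:]

J_true = rng.normal(size=len(sigma))
H = sum(J * s for J, s in zip(J_true, sigma))      # constant Hamiltonian for simplicity

t, dt = 0.7, 1e-6
gamma = expm(-1j * H * t)
gamma_dot = (expm(-1j * H * (t + dt)) - expm(-1j * H * (t - dt))) / (2 * dt)

ntr = lambda A: np.trace(A) / 4.0                   # normalized trace on 2 qubits
J_rec = np.array([ntr(1j * gamma_dot @ gamma.conj().T @ s) for s in sigma]).real

print("max |J_recovered - J_true| =", np.abs(J_rec - J_true).max())
```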

Next, let us introduce the standard inner-product metric (bi-invariant) in \mathcal{M}, that is,

\mathrm{d}s^{2}|_{\mathrm{inner-product}}=\mathrm{Tr}[\mathrm{d}U^{\dagger}\mathrm{d}U]=\delta_{MN}\mathrm{Tr}[iU^{\dagger}\mathrm{d}U\sigma^{M}]\,\mathrm{Tr}[iU^{\dagger}\mathrm{d}U\sigma^{N}],\qquad(15)

which treats all tangent directions $\sigma_{M}$ equally. Mathematically, this means that if $\mathcal{M}$ is equipped with a bi-invariant metric, then $\mathcal{M}$ is homogeneous and isotropic. Such a bi-invariant metric endows the system $\mathcal{A}$ with characteristics similar to those of a classical system in Euclidean space.

Recall that the complexity is a tool for measuring how difficult it is to generate a target unitary UU. We generalize δMN\delta_{MN} to a symmetric positive-definite penalty factor GMNG_{MN} by extending Eq. (15) to the complexity geometry condition 6 . The metric then becomes

\mathrm{d}s^{2}=G_{MN}\mathrm{Tr}[iU^{\dagger}\mathrm{d}U\sigma^{M}]\,\mathrm{Tr}[iU^{\dagger}\mathrm{d}U\sigma^{N}],\qquad(16)

which is a right-invariant local metric on \mathcal{M}. The metric is rewritten in terms of general coordinates as follows:

\mathrm{d}s^{2}=G_{MN}\mathrm{d}X^{M}\mathrm{d}X^{N}.\qquad(17)

Eq. (17) defines a homogeneous but anisotropic curved space (with negative curvature for a large number of qubits 6 ; 44 ) as the configuration space. "Anisotropic" means that it is hard for a particle of the system $\mathcal{A}$ to move in certain directions. In quantum computation, this means that it is hard to apply quantum gates along those directions to generate the unitary operator $U$, because they are severely penalized; a more detailed discussion can be found in 6 . In the following sections, we mainly consider (irreducible; in this study all Markov chains are taken to be irreducible) Markov processes in $\mathcal{M}$. The state space $\mathcal{S}$ of these processes consists of the possible trajectories starting at the origin $I\in\mathcal{M}$ with a fixed endpoint $U\in\mathcal{M}$, $\mathcal{S}\equiv\{\gamma_{i}\}$, where $i\in\{0,1,\cdots,N\}$ labels each moment and satisfies $\gamma_{0}=\gamma(t_{0})=I$ and $\gamma_{N}=\gamma(t_{N})=U$. Using the metric in Eq. (17), the action functional of a trajectory in $\mathcal{M}$ is

A_{a}=\int\frac{1}{2}G_{MN}\dot{X}^{M}\dot{X}^{N}\mathrm{d}t,\qquad(18)

where subscript “aa” denotes a quantity of the system 𝒜\mathcal{A}. Eq. (18) is a rewritten form of Eq. (4) after considering the Lagrangian

L_{a}=\frac{1}{2}G_{MN}\dot{X}^{M}\dot{X}^{N}.\qquad(19)

Next, the complexity is calculated by minimizing AaA_{a} in Eq. (18) as

C=inf𝛾12GMNX˙MX˙Ndt,C=\underset{\gamma}{\mathrm{inf}}\int\frac{1}{2}G_{MN}\dot{X}^{M}\dot{X}^{N}\mathrm{d}t, (20)

which is an explicit form of Eq. (5).

For any classical system, the equation of motion can be derived from an action functional by applying the Euler-Lagrange equation. Thus, for any auxiliary system 𝒜\mathcal{A}, the equation of motion reads

\frac{\partial L_{a}}{\partial X^{M}}-\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L_{a}}{\partial\dot{X}^{M}}\right)=0.\qquad(21)

Substituting the right-hand side of Eq. (19) into Eq. (21), we obtain

\ddot{X}^{M}+\Gamma^{M}_{YN}\dot{X}^{Y}\dot{X}^{N}=0,\qquad(22)

where the Christoffel symbol ΓYNM\Gamma^{M}_{YN} is defined as

\Gamma^{M}_{YN}\equiv\frac{1}{2}G^{MS}(\partial_{N}G_{SY}+\partial_{Y}G_{SN}-\partial_{S}G_{NY}),\qquad(23)

and $\partial_{M}\equiv\frac{\partial}{\partial X^{M}}$. Eq. (22) is the geodesic equation in $\mathcal{M}$, which is equivalent to the equation of motion (another equivalent form, commonly used in the Lie-algebra language, is the Euler-Arnold equation 45 ).
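As a toy illustration of Eqs. (22)-(23) (our own two-dimensional example, not the full $\mathrm{SU}(2^{K})$ geometry), one can integrate the geodesic equation on a simple penalized metric and check that the Lagrangian of Eq. (19) is conserved along the solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch (a toy of our own): integrate the geodesic equation (22) on a
# two-dimensional "penalized" configuration space with metric
# ds^2 = (dX^1)^2 + p(X^1) (dX^2)^2, p(x) = 1 + x^2, so motion along X^2 becomes
# increasingly expensive.  The nonzero Christoffel symbols of Eq. (23) are
# Gamma^1_{22} = -p'/2 and Gamma^2_{12} = Gamma^2_{21} = p'/(2p).
p  = lambda x: 1.0 + x**2
dp = lambda x: 2.0 * x

def geodesic_rhs(t, y):
    x1, x2, v1, v2 = y
    a1 = 0.5 * dp(x1) * v2**2             # from -Gamma^1_{22} v2^2
    a2 = -(dp(x1) / p(x1)) * v1 * v2      # from -2 Gamma^2_{12} v1 v2
    return [v1, v2, a1, a2]

y0 = [0.5, 0.0, 0.3, 1.0]                 # initial point and velocity
sol = solve_ivp(geodesic_rhs, (0.0, 5.0), y0, rtol=1e-10, atol=1e-12)

# Along a true geodesic the kinetic Lagrangian of Eq. (19) is conserved.
x1, x2, v1, v2 = sol.y
L = 0.5 * (v1**2 + p(x1) * v2**2)
print("Lagrangian drift along the geodesic:", L.max() - L.min())
```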

We further extend the discussion made by Brown and Susskind 8 . Consider a system 𝒬\mathcal{Q} governed by a time-dependent Hamiltonian H(t)H(t). Consequently, the dual auxiliary system 𝒜\mathcal{A} is a stochastic classical system, and the evolution of the unitary operator corresponds to a Markov process in \mathcal{M}. The equation of motion becomes a stochastic differential equation, e.g., Quantum Brownian Circuit 46 ,

\mathrm{d}\gamma(t)=-\frac{1}{2}\gamma(t)\mathrm{d}t+\frac{i}{\sqrt{8K(K-1)}}\sum_{j<k}\sum_{\alpha_{j},\alpha_{k}=0}^{3}\sigma_{j}^{\alpha_{j}}\otimes\sigma_{k}^{\alpha_{k}}\,\gamma(t)\,\mathrm{d}B_{j,k,\alpha_{j},\alpha_{k}}(t),\qquad(24)

where dBj,k,αj,αk(t)\mathrm{d}B_{j,k,\alpha_{j},\alpha_{k}}(t) are independent Wiener processes with a unit variance per unit time. KK denotes the number of qubits in the quantum system 𝒬\mathcal{Q}. The configuration space \mathcal{M} is SU(2K)\mathrm{SU}(2^{K}) (equipped with a complexity metric). The system 𝒜\mathcal{A} has (4K1)(4^{K}-1) degrees of freedom.
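A minimal numerical sketch of Eq. (24) (our own illustration; Euler-Maruyama is one standard discretization and is not claimed to be the scheme of 46 ) for $K=2$ qubits reads as follows.

```python
import numpy as np
from itertools import product

# Minimal sketch (our illustration): Euler-Maruyama simulation of the quantum
# Brownian circuit SDE, Eq. (24), for K = 2 qubits.  The drift -gamma dt / 2 is
# the Ito correction that keeps gamma(t) unitary on average; the discrete scheme
# only preserves unitarity up to O(sqrt(dt)), which we monitor below.
rng = np.random.default_rng(3)
K = 2
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# For K = 2 the only pair is (j, k) = (1, 2): 16 two-site Pauli strings.
strings = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]
prefactor = 1.0 / np.sqrt(8.0 * K * (K - 1))

T, dt = 1.0, 1e-4
gamma = np.eye(2**K, dtype=complex)
for _ in range(int(T / dt)):
    dB = rng.normal(0.0, np.sqrt(dt), size=len(strings))  # independent Wiener increments
    noise = sum(b * s for b, s in zip(dB, strings))
    gamma = gamma - 0.5 * gamma * dt + 1j * prefactor * noise @ gamma

print("deviation from unitarity:",
      np.linalg.norm(gamma.conj().T @ gamma - np.eye(2**K)))
```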

In summary, although the understanding of the complexity geometry is still incomplete, it helps us change pure quantum problems (i.e., finding the optimal circuits) to classical geometric problems (i.e., finding geodesics in \mathcal{M}). This quantum-classical duality is called the 𝒬\mathcal{Q}-𝒜\mathcal{A} correspondence.

2.1.3 Principle of minimal complexity

The principle of minimal complexity is the complexity version of the principle of least action, namely the application of the principle of least action to the auxiliary system $\mathcal{A}$ 21 . Its statement is the principle of least action 47 rephrased: "A true dynamical trajectory of the system $\mathcal{A}$ between an initial and a final configuration in a specified time interval is found by imagining all possible trajectories that the system could conceivably take, computing the complexity for each of these trajectories, and selecting the one that makes the complexity stationary (or 'minimal'). Thus, true trajectories are those that have minimal complexity."

Applying the variational method to first order in Eq. (20) yields the Euler-Lagrange equation for the principle of minimal complexity. However, Eq. (21) is only a necessary condition for the complexity to be minimal; to determine whether a trajectory is truly minimal, we must consider its second-order variation, which must be positive.

A similar statement arises from the complexity=action conjecture, that is, the principle of least computation 20 . The conjecture states that the complexity of the boundary state is proportional to the on-shell action of the bulk spacetime. Therefore, we can apply the principle of least action to obtain the equations of motion in the bulk spacetime and minimize the complexity.

2.2 Jarzynski identity

As one of the most remarkable achievements in recent decades, the Jarzynski identity can be derived or proved by various means such as microscopic 22 or stochastic 27 dynamics. In this study, our discussion is mainly based on the path integral derivation of the Jarzynski identity presented by Hummer and Szabo 27 .

2.2.1 Path integral derivation

First, suppose there is a system whose phase space is denoted by x\vec{x}. The evolution of the system follows the canonical Liouville equation, i.e.,

P(x,t)t=LtP(x,t),\frac{\partial P(\vec{x},t)}{\partial t}=L_{t}P(\vec{x},t), (25)

where $P(\vec{x},t)$ is the phase space density function and $L_{t}$ is a time-dependent operator whose stationary solution is the Boltzmann distribution, $L_{t}\mathrm{e}^{-\beta H(\vec{x},t)}=0$ 48 . Therefore, we consider a distribution $P(\vec{x},t)$ that satisfies the stationary-solution condition $L_{t}P(\vec{x},t)=0$ at time $t$. At the same time, it obeys

P(x,t)t=β(H(x,t)t)P(x,t).\frac{\partial P(\vec{x},t)}{\partial t}=-\beta\left(\frac{\partial H(\vec{x},t)}{\partial t}\right)P(\vec{x},t). (26)

If we combine the stationary solution condition of Eq. (25) and Eq. (26), a Fokker-Planck type equation with a sink term can be obtained, i.e.,

P(x,t)t=LtP(x,t)β(H(x,t)t)P(x,t).\frac{\partial P(\vec{x},t)}{\partial t}=L_{t}P(\vec{x},t)-\beta\left(\frac{\partial H(\vec{x},t)}{\partial t}\right)P(\vec{x},t). (27)

Now, we consider a system that evolves from an equilibrium state at $t=0$ to a nonequilibrium state at $t=T$ under an arbitrary driving. Under this condition, Hummer and Szabo showed that the solution of Eq. (27) can be expressed using the Feynman-Kac formula 29 ; 30 ; 31 as follows:

P(\vec{x},T)=\left\langle\delta(\vec{x}-\vec{x}(T))\,\mathrm{exp}\!\left[-\beta\int_{0}^{T}\frac{\partial H}{\partial t}(\vec{x},t)\mathrm{d}t\right]\right\rangle,\qquad(28)

where the bracket \langle{\cdots}\rangle represents the ensemble average. Each trajectory in the phase space is weighted by a factor that can be defined as the external work done on the system,

W(T)0TH(x,t)tdt.W(T)\equiv\int_{0}^{T}\frac{\partial H(\vec{x},t)}{\partial t}\mathrm{d}t. (29)

Using the well-known relation between the free energy and the partition function in statistical mechanics, $F(t)=-\beta^{-1}\mathrm{log}Z(t)$, the exponential of the free energy difference $\Delta F(T)=F(T)-F(0)$ is given by

\mathrm{e}^{-\beta\Delta F(T)}=\frac{Z(T)}{Z(0)}=\frac{\int\mathrm{d}\vec{x}\,\mathrm{e}^{-\beta H(\vec{x},T)}}{\int\mathrm{d}\vec{y}\,\mathrm{e}^{-\beta H(\vec{y},0)}}.\qquad(30)

The Boltzmann distribution reads

P(x,T)=eβH(x,T)dyeβH(y,0).P(\vec{x},T)=\frac{\mathrm{e}^{-\beta H(\vec{x},T)}}{\int\mathrm{d}\vec{y}\mathrm{e}^{-\beta H(\vec{y},0)}}. (31)

In Eq. (31), the denominator is the partition function at $t=0$ because the initial distribution is the exact equilibrium one. Thus, the Jarzynski identity

\mathrm{exp}(-\beta\Delta F(T))=\langle\mathrm{exp}(-\beta W(T))\rangle\qquad(32)

is derived by integrating both sides of Eq. (28) over x\vec{x}.

2.2.2 Generalized form

To generalize this formalism to a configuration space, we must reinterpret the meaning of each symbol in Eq. (25). We reinterpret $\vec{x}$ and $P(\vec{x},t)$ as a random variable and a distribution function, respectively. The time-dependent operator $L_{t}$ then becomes the Fokker-Planck operator of a Fokker-Planck equation 49 . Consequently, the stationary solution of $L_{t}$ becomes Gaussian and can be expressed as

LteηA=0,L_{t}\mathrm{e}^{-\eta A}=0, (33)

where η\eta is a positive constant. The quadratic action AA is given by

A=120Tδijx˙ix˙jdt,A={\frac{1}{2}}\int_{0}^{T}\delta_{ij}\dot{x}^{i}\dot{x}^{j}\mathrm{d}t, (34)

where xix^{i} and xjx^{j} represent components of x\vec{x}. If we consider a curved space, we only need to transform the Kronecker-delta δij\delta_{ij} into a general metric tensor gijg_{ij}. Then, Eq. (26) becomes

P(x,t)t=η(A(x,t)t)P(x,t).\frac{\partial P(\vec{x},t)}{\partial t}=-\eta\left(\frac{\partial A(\vec{x},t)}{\partial t}\right)P(\vec{x},t). (35)

Similarly, combining this equation with Eq. (33) yields

P(x,t)t=LtP(x,t)η(A(x,t)t)P(x,t).\frac{\partial P(\vec{x},t)}{\partial t}=L_{t}P(\vec{x},t)-\eta\left(\frac{\partial A(\vec{x},t)}{\partial t}\right)P(\vec{x},t). (36)

We apply the Feynman-Kac formula to Eq. (36) to obtain

P(x,T)=δ(xx(T))exp[η0TAt(x,t)dt].P(\vec{x},T)=\left\langle{\delta(\vec{x}-\vec{x}(T))\mathrm{exp}{[-\eta\int_{0}^{T}\frac{\partial A}{\partial t}(\vec{x},t)\mathrm{d}t]}}\right\rangle. (37)

We can further define the generalized work (we sometimes refer to the action functional as the energy functional, which is why the generalized work is defined in this way; see, e.g., 33 ) as

W(t)0tAs(x,s)ds.W(t)\equiv\int_{0}^{t}\frac{\partial A}{\partial s}(\vec{x},s)\mathrm{d}s. (38)

By integrating x\vec{x} on both sides of Eq. (37), we obtain a Jarzynski-like identity, that is,

\frac{\int D\vec{x}\,\mathrm{e}^{-\eta A(\vec{x},T)}}{\int D\vec{y}\,\mathrm{e}^{-\eta A(\vec{y},0)}}\equiv\frac{Z(T)}{Z(0)}=\langle\mathrm{exp}(-\eta W(T))\rangle,\qquad(39)

where Z(t)Z(t) is the partition function and DxD\vec{x} and DyD\vec{y} are suitable path integral measures in the configuration space. Note that if we regard the constant η\eta as the inverse temperature of the system and set T=1T=1, using the F(t)=η1logZ(t)F(t)=-\eta^{-1}\mathrm{log}Z(t) relation, Eq. (39) becomes the Jarzynski identity (i.e., Eq. (32)).

To apply this generalized method to the space of unitary operators, we will introduce the Haar measure and ergodicity in the subsequent section. Furthermore, we will use the Hamilton-Jacobi (HJ) equation to rewrite the complexity version of the Jarzynski identity.

3 Jarzynski identity under the background of the complexity geometry

In Section 3.1, we introduce the Haar measure and the path integral in $\mathcal{M}$. Then, we apply the same logic as in the path integral derivation of the last section to obtain the Jarzynski identity for complexity. We argue that the Hamilton-Jacobi equation can be used to rewrite the obtained identity in a more meaningful form, which raises several nonequilibrium analogs of dynamical issues for complexity; these are presented in the next section.

3.1 Path integral in \mathcal{M}

3.1.1 Path integral

Before discussing the path integral in $\mathcal{M}$ (mathematically, the path integral is closely related to the Wiener measure; see 50 for its definition), we must first clarify a prerequisite that enables integration over the group manifold. The Haar measure is the unique measure that is invariant under translations by group elements. In terms of general coordinates, it reads

[\mathrm{d}\gamma]\equiv\frac{1}{N_{c}}\sqrt{G_{MN}(\gamma)}\,\mathrm{d}X^{1}\mathrm{d}X^{2}\cdots\mathrm{d}X^{\mathrm{dim}(\mathcal{M})}.\qquad(40)

We use $[\mathrm{d}\gamma]$ to represent the Haar measure (see 43 for an example of a specific parameterization of a normalized Haar measure), and $N_{c}$ is the normalization coefficient. $\sqrt{G_{MN}(\gamma)}\equiv|\mathrm{det}(J_{d})|$ denotes the determinant of the Jacobian matrix $J_{d}$, which renders the measure invariant under coordinate transformations. We follow the notation of Eq. (17); that is, $\{X^{M}\}$ are the coefficients (components) of $\gamma(t)\in\mathcal{M}$ expanded on a local basis throughout the remainder of this paper. The Haar measure must meet two requirements:

  • (1) normalization condition: $\int_{\mathcal{M}}[\mathrm{d}\gamma]=1$; and

  • (2) orthogonal completeness condition: $\int_{\mathcal{M}}[\mathrm{d}\gamma]\ket{\gamma}\bra{\gamma}=I$.

Here, |γ\ket{\gamma} is considered as the group representation of some pseudo-quantum states that satisfies γ|γ=δ(γγ)\langle\gamma|\gamma^{\prime}\rangle=\delta(\gamma-\gamma^{\prime}). Note that the system described by these pseudo-quantum states is not a real quantum system, but a hypothetical quantum system obtained by applying “stochastic quantization51 to system 𝒜\mathcal{A}, a classical system.

Note that the pseudo-quantum states only help us derive the path integral. In quantum mechanics, the evolution kernel of the path integral is obtained by repeatedly inserting the orthogonal completeness condition, a procedure familiar to most physicists. Thus, we can assume that there exists a hypothetical quantum system with such an orthogonal completeness condition and derive the path integral by inserting it, instead of introducing the less familiar machinery of "stochastic quantization" 51 . In this hypothetical system, a stochastic differential equation, e.g., the Langevin equation, is analogous to the Heisenberg operator equation in quantum mechanics. Although the introduction of pseudo-quantum states is not mathematically rigorous, it is a convenient way to introduce a path integral in $\mathcal{M}$; a more rigorous discussion of stochastic quantization can be found in 51 . It must be emphasized that the hypothetical quantum system is essentially a classical stochastic system, whose uncertainty comes from stochastic motion rather than from the Heisenberg uncertainty principle. Therefore, the system described here is purely classical even though we use Dirac notation.
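As a purely numerical aside (our own remark), Haar-random unitaries with respect to the bi-invariant metric of Eq. (15) can be sampled by a QR decomposition of a complex Ginibre matrix; sampling with respect to the penalized metric of Eq. (16) would instead require the weight $\sqrt{G_{MN}(\gamma)}$ of Eq. (40).

```python
import numpy as np

# Minimal sketch (our illustration): sample a Haar-random unitary on U(2^K)
# (the unpenalized, bi-invariant Haar measure of Eq. (15)) via QR of a complex
# Ginibre matrix with a phase fix; fixing the overall phase would restrict to SU(2^K).
def haar_unitary(dim, rng):
    ginibre = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2.0)
    q, r = np.linalg.qr(ginibre)
    # Multiply each column of Q by the phase of the corresponding diagonal of R
    # so that the resulting distribution is exactly Haar.
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(4)
U = haar_unitary(2**3, rng)            # a Haar-random unitary on K = 3 qubits
print("unitarity check:", np.linalg.norm(U.conj().T @ U - np.eye(8)))
```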

Once a Haar measure is defined, we can integrate over $\mathcal{M}$ and hence derive the evolution kernel. Consider a Markov chain in $\mathcal{M}$ with a finite state space, $\mathcal{S}=\{\gamma_{0}=I,\gamma_{1},\cdots,\gamma_{N}=U\}$, at times $0=t_{0}<t_{1}<t_{2}<\cdots<t_{N-1}<t_{N}=T$. We set the unit time interval as $t_{i}-t_{i-1}=\Delta t=T/N$ for any $i\in\{1,2,\cdots,N\}$. The propagator $\mathcal{K}_{a}(\gamma_{i+1},t_{i+1};\gamma_{i},t_{i})$ is then defined as

𝒦a(γi+1,ti+1;γi,ti)γi+1,ti+1|γi,ti=eiLa[γ(ti+1);γ(ti)]Δt+O(Δt2).\mathcal{K}_{a}{(\gamma_{i+1},t_{i+1};\gamma_{i},t_{i})}\equiv\braket{\gamma_{i+1},t_{i+1}}{\gamma_{i},t_{i}}=e^{iL_{a}[\gamma(t_{i+1});\gamma(t_{i})]\Delta t+O(\Delta t^{2})}. (41)

We can obtain the evolution kernel by repeatedly inserting the orthogonal completeness condition similar to what we usually do in Feynman path integrals. If we take N \to\infty such that Δt0\Delta t\to 0 , the evolution kernel (or heat kernel) Ka(U,T;I,0)K_{a}(U,T;I,0) from II at t=0t=0 to UU at t=Tt=T is obtained as

K_{a}(U,T;I,0)\equiv\int_{\mathcal{M}}\prod_{i=0}^{N-1}[\mathrm{d}\gamma_{i}]\,\mathcal{K}_{a}(\gamma_{i+1},t_{i+1};\gamma_{i},t_{i})=\prod_{i=0}^{N-1}K_{a}(\gamma_{i+1},t_{i+1};\gamma_{i},t_{i}).\qquad(42)

The sum of Lagrangians, LaL_{a}, can be written in the form of the complexity

Ka(U,T;I,0)=i=0N1[dγi]eiC(T).K_{a}(U,T;I,0)=\int_{\mathcal{M}}\prod_{i=0}^{N-1}[\mathrm{d}\gamma_{i}]\mathrm{e}^{iC(T)}. (43)

However, the path integral in a curved configuration space inevitably introduces a correction term with a scalar curvature in the complexity CC to maintain the covariance of the path integral under any coordinate transformation. Consequently, we must deal with an additional curvature term when calculating the path integral.

Fortunately, an elegant proposal 52 provides a form free of any curvature modification; it introduces a factor called the Van Vleck-Morette determinant 53 ; 54 into Eq. (43):

K_{a}(U,T;I,0)=\int_{\mathcal{M}}\prod_{i=1}^{N-1}[\mathrm{d}\gamma_{i}]\,\frac{N_{c_{i}}|\Delta(\gamma_{*};\gamma_{i})|}{\sqrt{G_{MN}(\gamma_{*})}}\,\mathrm{e}^{iC(T)},\qquad(44)

where NciN_{c}{}_{i} denotes the normalized factor of the iith Haar measure and |Δ(γ;γ)||\Delta(\gamma_{*};\gamma)| is the Van Vleck-Morette determinant:

|Δ(γ;γ)|Nc2GMN(γ)GMN(γ)det(δ2λ(γ;γ)δXMδXN),|\Delta(\gamma_{*};\gamma)|\equiv\frac{N_{c}^{2}}{\sqrt{G_{MN}(\gamma_{*})}\sqrt{G_{MN}(\gamma)}}\mathrm{det}\left(-\frac{\delta^{2}\lambda(\gamma_{*};\gamma)}{\delta X^{M}\delta X_{*}^{N}}\right), (45)

where λ(γ;γ)\lambda(\gamma_{*};\gamma) is defined as the geodesic interval 55 ; 56 between a fixed point γ\gamma_{*}\in\mathcal{M} and γ\gamma\in\mathcal{M}. Note that

λ(γ;γ)12D2(γ;γ),\lambda(\gamma_{*};\gamma)\equiv\frac{1}{2}D^{2}(\gamma_{*};\gamma), (46)

where D(γ;γ)D(\gamma_{*};\gamma) is the length of the geodesic connecting point γ\gamma_{*} to point γ\gamma. With the replacement, i.e., Dγi=1N1[dγi]Nci|Δ(γ;γi)|GMN(γ)\int_{\mathcal{M}}D\gamma\equiv\int_{\mathcal{M}}\prod_{i=1}^{N-1}[\mathrm{d}\gamma_{i}]\frac{N_{c_{i}}|\Delta(\gamma_{*};\gamma_{i})|}{\sqrt{G_{MN}(\gamma_{*})}}, the expression of evolution kernel becomes

Ka(U,T;I,0)=DγeiC(T).K_{a}(U,T;I,0)=\int_{\mathcal{M}}D\gamma\mathrm{e}^{iC(T)}. (47)

The detailed derivation is presented in Appendix B (see also 52 ). In the long-time limit, we consider the continuation of time $t$ to the complex plane; a Wick rotation, $t\to i\eta t$, is then applied to Eq. (47) so that the evolution kernel can be rewritten as

Z_{a}(T)=\int_{\mathcal{M}}D\gamma\,\mathrm{e}^{-\eta C(T)},\qquad(48)

where $Z_{a}(T)$ refers to the partition function of the system $\mathcal{A}$ and $\eta$ is a positive constant (it has two meanings: a Lagrange multiplier and the inverse temperature of the system $\mathcal{A}$ 8 ). Let us make a remark on this equation. In principle, one can consider a trajectory with a large complexity in $\mathcal{M}$; its contribution to Eq. (48) is, however, exponentially small because the complexity enters through the factor $e^{-\eta C}$. We further consolidate this fact in Appendix C by discussing the relationship between the principle of minimal complexity and the second law of complexity.

3.1.2 Ergodicity

Ergodic motion in the configuration space ensures that the contributions of all possible trajectories are included in Eq. (48). Ergodicity for the system $\mathcal{A}$ can be fulfilled in two ways: partial ergodicity (chaotic evolution with a time-independent Hamiltonian 8 ) and complete ergodicity (a stochastic process with a time-dependent Hamiltonian 21 ).

A time-independent kk-local Hamiltonian generates the partial ergodic motion in \mathcal{M}. To understand this, consider a quantum system comprising KK qubits that evolves by applying the unitary operator

U=eiHT=n=12KeiEnT|EnEn|,U=\mathrm{e}^{-iHT}=\sum_{n=1}^{2^{K}}\mathrm{e}^{-iE_{n}T}\ket{E_{n}}\bra{E_{n}}, (49)

where |En\ket{E_{n}} are the eigenstates of Hamiltonian HH with eigenvalues EnE_{n}. Because ergodicity is equivalent to the incommensurability of the energy eigenvalues in this case and there are 2K2^{K} energy eigenvalues for the Hamiltonian HH, the unitary operator UU moves on a 2K2^{K} dimensional torus (subspace of \mathcal{M}) 8 in an ergodic motion.

Complete ergodicity can be achieved with a time-dependent $k$-local Hamiltonian, i.e., Eq. (8). In this case, the motion in $\mathcal{M}$ is an ergodic Markov process filling up all $(4^{K}-1)$ dimensions of $\mathcal{M}$. An ergodic Markov process must be irreducible and aperiodic, with all states persistent 57 . For an irreducible Markov chain, the existence of a stationary distribution is equivalent to the chain being ergodic. According to our settings, we can write a stationary distribution as follows:

P(\gamma,t)=\mathcal{N}_{a}\,\mathrm{exp}(-\eta C(t)),\qquad(50)

where 𝒩a\mathcal{N}_{a} represents the normalized constant. Thus, the ergodicity can be satisfied. In conclusion, the existence of a stationary distribution indicates that the motion of the unitary operator in \mathcal{M} is ergodic when system 𝒬\mathcal{Q} is governed by a time-dependent Hamiltonian.

3.2 Complexity version of the Jarzynski identity

3.2.1 Derivation of the Jarzynski identity

In this section, we derive the complexity version of the Jarzynski identity from the Fokker-Planck equation with a sink term in \mathcal{M}. Consider a stochastic auxiliary system 𝒜\mathcal{A} that describes a particle moving from a fixed point γ(0)\gamma(0)\in\mathcal{M} to another fixed point γ(T)\gamma(T)\in\mathcal{M}. The dual quantum system 𝒬\mathcal{Q} is governed by a time-dependent Hamiltonian H(t)H(t) evolving in a time interval TT. Here, the fixed endpoints of the trajectories in \mathcal{M} play a similar role to the two fixed states in a common Jarzynski case. The evolution equation of the system 𝒜\mathcal{A} is a Fokker-Planck equation, i.e.,

P(γ,t)t=LtP(γ,t),\frac{\partial P(\gamma,t)}{\partial t}=L_{t}P(\gamma,t), (51)

where $P(\gamma,t)$ represents the distribution function and the time-dependent operator $L_{t}$ denotes the Fokker-Planck operator (the derivation of the Fokker-Planck equation is presented in Appendix B). Because of Eq. (50) and Eq. (48), we can construct a distribution as the stationary solution of Eq. (51) at $t=T$, similar to what we did in Section 2.2:

P(γ,T)=1Za(0)exp(ηC(T)),P(\gamma,T)=\frac{1}{Z_{a}(0)}\mathrm{exp}(-\eta C(T)), (52)

where Za(0)Z_{a}(0) refers to the partition function at t=0t=0 and complexity CC plays the role of the action AA in Section 2.2, such that P(γ,t)P(\gamma,t) satisfies LtP(γ,t)=0L_{t}P(\gamma,t)=0. Moreover, one can check that

P(γ,t)t=η(C(t)t)P(γ,t).\frac{\partial P(\gamma,t)}{\partial t}=-\eta\left(\frac{\partial C(t)}{\partial t}\right)P(\gamma,t). (53)

Hence, using this equation and LtP(γ,t)=0L_{t}P(\gamma,t)=0, we can obtain the Fokker-Planck equation with a sink term

P(γ,t)t=LtP(γ,t)η(C(t)t)P(γ,t).\frac{\partial P(\gamma,t)}{\partial t}=L_{t}P(\gamma,t)-\eta\left(\frac{\partial C(t)}{\partial t}\right)P(\gamma,t). (54)

Solving this equation with constraint γ(T)=γ\gamma(T)=\gamma, we obtain

P(\gamma,T)=\left\langle\delta(\gamma-\gamma(T))\,\mathrm{exp}\!\left[-\eta\int_{0}^{T}\frac{\partial C}{\partial t}(\gamma,t)\mathrm{d}t\right]\right\rangle.\qquad(55)

The ensemble average is over all possible trajectories departing from the identity $I$ and reaching the fixed point $U$ at $t=T$; the Dirac delta function enforces the termination condition. Equating this expression with Eq. (52) gives

\frac{1}{Z_{a}(0)}\mathrm{exp}(-\eta C(T))=\left\langle\delta(\gamma-\gamma(T))\,\mathrm{exp}\!\left[-\eta\int_{0}^{T}\frac{\partial C}{\partial t}(\gamma,t)\mathrm{d}t\right]\right\rangle.\qquad(56)

By integrating both sides of this equality over $\gamma$ and defining a quantity called the computational work $W_{a}(t)$, a generalized work of the type in Eq. (38),

W_{a}(t)\equiv\int_{0}^{t}\frac{\partial C}{\partial s}(\gamma,s)\mathrm{d}s,\qquad(57)

the complexity version of Jarzynski identity is given as

\frac{Z_{a}(T)}{Z_{a}(0)}=\left\langle\mathrm{exp}(-\eta W_{a}(T))\right\rangle,\qquad(58)

which is one of our main proposals in this paper.

To simplify Eq. (58), we introduce an analog of the thermodynamic free energy in complexity, that is, the “computational free energy”:

F_{a}(t)\equiv-\eta^{-1}\mathrm{log}Z_{a}(t).\qquad(59)

If we regard $\eta=1/T_{a}$ as the inverse temperature of the system $\mathcal{A}$ and set $t=1$, then $F_{a}$ can be regarded as the thermodynamic free energy of the system $\mathcal{A}$, and $Z_{a}(t)$ takes the same form as the partition function of a free particle (essentially, this is the relationship between the partition function obtained from the path integral approach and the thermodynamic free energy 58 ). A similar discussion relating complexity-related and thermodynamic quantities was given in 8 . Substituting Eq. (59) into Eq. (58), the equality takes a more familiar form:

\mathrm{exp}(-\eta\Delta F_{a}(T))=\left\langle\mathrm{exp}(-\eta W_{a}(T))\right\rangle,\qquad(60)

where ΔFa(T)=Fa(T)Fa(0)\Delta F_{a}(T)=F_{a}(T)-F_{a}(0) depends on the two endpoints of evolution in system 𝒜\mathcal{A}.
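Before proceeding, we note a direct corollary of Eq. (60) (a standard step, added here for completeness, that carries over verbatim from the thermodynamic Jarzynski identity): applying Jensen's inequality, $\left\langle\mathrm{e}^{-x}\right\rangle\geq\mathrm{e}^{-\left\langle x\right\rangle}$, to the right-hand side gives

\mathrm{exp}(-\eta\Delta F_{a}(T))=\left\langle\mathrm{exp}(-\eta W_{a}(T))\right\rangle\geq\mathrm{exp}(-\eta\left\langle W_{a}(T)\right\rangle)\quad\Longrightarrow\quad\left\langle W_{a}(T)\right\rangle\geq\Delta F_{a}(T),

so the average computational work is bounded below by the computational free energy difference, a second-law-like statement of the kind discussed further in Section 4.2.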

Even though we have defined the computational work $W_{a}$, its physical meaning remains hard to grasp intuitively. Hence, we expect that a more instructive interpretation of $W_{a}$ exists within the above discussion of the complexity version of the Jarzynski identity. Eq. (57) shows that the definition of $W_{a}$ contains the time derivative of the complexity; we therefore rewrite the complexity version of the Jarzynski identity using the Hamilton-Jacobi (HJ) equation, which describes the change of the complexity.

3.2.2 Hamilton-Jacobi equation

To further explore the Jarzynski identity in the context of complexity, a proper rewriting of the expression is required. In this section, we start from the derivation of the HJ equation considering the complexity geometry and use the HJ equation to rewrite Eq. (60). The rewriting of the Jarzynski identity will facilitate the development of the thermodynamic analog of complexity, particularly for the second law of complexity and the related topic uncomplexity as a resource 8 .

Recall that the trajectories in \mathcal{M} follow the principle of minimum complexity, that is,

δC(γ,γ˙)=δ0TLa(γ,γ˙,t)dt=0.\delta C(\gamma,\dot{\gamma})=\delta\int_{0}^{T}L_{a}(\gamma,\dot{\gamma},t)\mathrm{d}t=0. (61)
Figure 3: The red arrow represents the direction of motion of a particle in the auxiliary system (tangent to the trajectory γ(t)\gamma(t)), the green arrow represents the direction of the particle’s momentum (perpendicular to the blue curve), and the blue curve represents a surface of equal complexity.

Now, let us consider the principle of minimal complexity from a different perspective, that of Hamiltonian mechanics. We first fix the starting and ending points in the configuration space, $(\gamma(0)=I,\,t=0)$ and $(\gamma(T)=U,\,t=T)$, respectively. Next, we assume that the trajectories connecting these points are obtained from Eq. (61) and thus satisfy Eq. (21). The generalized momentum is defined as

PMLaX˙M,P_{M}\equiv\frac{\partial L_{a}}{\partial\dot{X}^{M}}, (62)

whose direction is depicted in Fig. 3. Next, we rewrite Eq. (61) as

δC=0T{LaX˙MδX˙M+LaXMδXM}dt.\delta C=\int_{0}^{T}\left\{\frac{\partial L_{a}}{\partial\dot{X}^{M}}\delta\dot{X}^{M}+\frac{\partial L_{a}}{\partial X^{M}}\delta X^{M}\right\}\mathrm{d}t. (63)

By substituting Eq. (21) into this equation and assuming a small variation at the endpoint, namely δγ(T)=δγ0\delta\gamma(T)=\delta\gamma\not=0, Eq. (61) is reformulated as

\delta C=\int_{0}^{T}\left\{\frac{\partial L_{a}}{\partial\dot{X}^{M}}\delta\dot{X}^{M}+\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L_{a}}{\partial\dot{X}^{M}}\right)\delta X^{M}\right\}\mathrm{d}t=\int_{0}^{T}\frac{\mathrm{d}}{\mathrm{d}t}\left\{\frac{\partial L_{a}}{\partial\dot{X}^{M}}\delta X^{M}\right\}\mathrm{d}t=P_{M}\delta X^{M}.\qquad(64)

We take the limit $\delta\gamma\to 0$, such that $P_{M}=\frac{\partial C}{\partial X^{M}}$. (In Lagrangian mechanics, the variations at the endpoints are always set to zero when deriving the Euler-Lagrange equation. Why, then, do we consider an infinitesimal nonzero variation at the endpoint when deriving the HJ equation, even though both equations describe the same classical system? The difference comes from different settings. In Lagrangian mechanics, we identify the equations of motion by varying trajectories between two fixed endpoints. When deriving the HJ equation, we instead take the trajectory to already satisfy the equation of motion; rather than varying trajectories, we infinitesimally vary the endpoint of the trajectory to study the corresponding change of the action. The same applies in the complexity story. In particular, 59 ; 60 investigated a specific dynamical law, the first law of complexity, by studying the variation of a trajectory's endpoint in the complexity geometry.) Thus, the complexity can be regarded as a functional of $\gamma$, and its infinitesimal variation is written as

δC=CXMδXM+Ctdt.\delta C=\frac{\partial C}{\partial X^{M}}\delta X^{M}+\frac{\partial C}{\partial t}\mathrm{d}t. (65)

Dividing both sides by $\mathrm{d}t$ and using $\frac{\mathrm{d}C}{\mathrm{d}t}=L_{a}$, we obtain

\frac{\partial C}{\partial t}=L_{a}-P_{M}\dot{X}^{M}=-H_{a},\qquad(66)

where $H_{a}$ is the Hamiltonian of the system $\mathcal{A}$, and Eq. (66) represents the HJ equation in the complexity story. A natural question arises: what is the form of $H_{a}$? Returning to the auxiliary Lagrangian of Eq. (19), the momentum $P_{M}$ takes the following form:

P_{M}=\frac{\partial C}{\partial X^{M}}=\int_{0}^{T}\frac{\partial L_{a}}{\partial X^{M}}\mathrm{d}t=\int_{0}^{T}\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L_{a}}{\partial\dot{X}^{M}}\right)\mathrm{d}t=G_{MN}\dot{X}^{N}.\qquad(67)

The last equality holds because the metric tensor $G_{MN}$ is a Hessian 2 , that is, $G_{MN}\equiv\frac{\partial^{2}L_{a}}{\partial\dot{X}^{M}\partial\dot{X}^{N}}$. Based on the above construction, we have

Ha=La=12GMNX˙MX˙N,H_{a}=L_{a}=\frac{1}{2}G_{MN}\dot{X}^{M}\dot{X}^{N}, (68)

for which we have

Ct+La=0.\frac{\partial C}{\partial t}+L_{a}=0. (69)

Substituting this into Eq. (57), the computational work is recast as

W_{a}(T)=-\int_{0}^{T}L_{a}(t)\mathrm{d}t=-C(T).\qquad(70)

We then obtain the equivalent expression of the Jarzynski identity as

\mathrm{exp}(-\eta\Delta F_{a}(T))=\left\langle\mathrm{exp}(\eta C(T))\right\rangle.\qquad(71)

This equality directly builds a bridge between the computational free energy difference and the complexity.

In the next section, we will argue that the thermodynamic analogs of complexity should be explored based on Eq. (71), because complexity has similarities with the thermodynamic entropy 8 and the Jarzynski identity strongly connects with the entropy.

4 On the nonequilibrium thermodynamic analog of complexity

The Jarzynski identity can provide a theoretical framework for exploring the thermodynamics of nonequilibrium systems, including stochastic systems. Because we have derived the Jarzynski identity, Eq. (71), and the system 𝒜\mathcal{A} is stochastic, this section primarily aims to construct the thermodynamic analogs and deepen our understanding of complexity. In addition, several interesting issues are discussed in this section. First, we will review the content and development for each topic. Next, we will further explore these topics based on the previously obtained results.

4.1 Uncomplexity as a computational resource

To understand this statement, we must first know what a resource is. The resource theory has a wide range of applications in quantum physics 61 ; 62 ; 63 ; 64 ; 65 ; 66 , and we do not need to know all about them. All we need to learn from the resource theory in this paper can be summarized by the following sentence: a resource is something one needs to do X 67 . For example, negentropy is a resource needed for doing work 61 ; 62 ; 63 , which is defined as the difference between the maximal and actual entropies,

Negentropy(t)SmaxS(t).\mathrm{Negentropy}(t)\equiv S_{\mathrm{max}}-S(t). (72)

The system that does work (to achieve some goals) must expend negentropy. Therefore, negentropy is a resource for doing work. Because complexity shows its analogs with classical entropies 8 , by analogy, a complexity version of negentropy, namely uncomplexity, is defined as

Uncomplexity(t)CmaxC(t),\mathrm{Uncomplexity}(t)\equiv C_{\mathrm{max}}-C(t), (73)

where CmaxC_{\mathrm{max}} is the possible maximal complexity and C(t)C(t) denotes the actual complexity of the system 𝒬\mathcal{Q} at a certain moment. This quantity is a resource that can be expended for doing direct computations 8 ; 67 . The central idea is expressed as

FaC,F_{a}\propto-C, (74)

where FaF_{a} refers to the thermodynamic free energy of system 𝒜\mathcal{A} 8 ; 67 obtained by assuming η=1/Ta\eta=1/T_{a} as the inverse temperature of the system 𝒜\mathcal{A} and setting T=1T=1 in Eq. (59). Equivalently,

R(t)CmaxC(t),R(t)\equiv C_{\mathrm{max}}-C(t), (75)

where resource is denoted by R(t)R(t).

Suppose that a particle is initialized at II\in\mathcal{M} with zero complexity in the system 𝒜\mathcal{A}. Equivalently, no gate is initially applied to any qubit in the system 𝒬\mathcal{Q}. Recall that uncomplexity is the space for complexity to grow 8 . We can write

Fa(0)=R(0)=Cmax,F_{a}(0)=R(0)=C_{\mathrm{max}}, (76)

because $F_{a}$ represents the ability of the system $\mathcal{Q}$ to do computation ($-\Delta F_{a}$ is always non-negative because $F_{a}(0)=C_{\mathrm{max}}\geq F_{a}(T)$), analogous to the thermodynamic case. Substituting Eq. (75) and Eq. (76) into Eq. (71), we obtain

\mathrm{exp}(-\eta F_{a}(T))=\left\langle\mathrm{exp}(-\eta R(T))\right\rangle=\left\langle\mathrm{exp}\left[-\eta(C_{\mathrm{max}}-C(T))\right]\right\rangle.\qquad(77)

This equation provides new evidence supporting the existence of a well-defined resource theory of uncomplexity 87 .
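
As a purely illustrative consistency check (not part of the original framework of 8 ; 67 ), the chain of relations in Eqs. (71), (75), (76), and (77) can be verified numerically on a toy ensemble of complexity values. In the following sketch, the values of η, Cmax, and the sampling distribution of the complexity are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 1.0        # analog inverse temperature (assumed value)
C_max = 10.0     # assumed maximal complexity of the toy ensemble

# Toy ensemble of complexity values from repeated "experiments" (cf. Eq. (100));
# the uniform distribution is an arbitrary illustrative choice.
C = rng.uniform(0.0, C_max, size=100_000)

# Jarzynski-type estimator of the computational free energy difference, Eq. (71).
dF_a = -np.log(np.mean(np.exp(eta * C))) / eta

# Uncomplexity/resource, Eq. (75), with F_a(0) = C_max, Eq. (76).
R = C_max - C
F_a_T = C_max + dF_a

# Consistency check of Eq. (77): exp(-eta F_a(T)) = <exp(-eta R(T))>.
print(np.exp(-eta * F_a_T), np.mean(np.exp(-eta * R)))
```

The two printed numbers agree by construction, illustrating how Eq. (77) simply repackages the Jarzynski-type relation Eq. (71) in terms of the resource R(T).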

4.2 Second law of complexity and fluctuation theorem

The second law of complexity is obtained by applying the thermodynamic method to the auxiliary system 𝒜\mathcal{A} and has been studied when the system 𝒜\mathcal{A} is chaotic 8 . Based on the same logic, we should use stochastic thermodynamics to study the second law of complexity when the system 𝒜\mathcal{A} is stochastic rather than chaotic. We need two important pieces to complete this “puzzle”: the second law of thermodynamics and the trajectory thermodynamics 28 . In Section 4.2.1, we will first review these two important pieces in the framework of nonequilibrium thermodynamics. Subsequently, in Section 4.2.2, we will discuss the second law of complexity for stochastic auxiliary systems using an analogy with discussions of Section 4.2.1. To avoid confusion, we specify a few significant notations and make a clarification before the discussion.

Notations: in the following content, we use

P(γ)P(γ,T)=eηC(γ)Za(0)P(\gamma)\equiv P(\gamma,T)=\frac{e^{-\eta C(\gamma)}}{Z_{a}(0)} (78)

to represent the stationary distribution of a trajectory γ\gamma\in\mathcal{M} in the forward process (running from γ(0)=I\gamma(0)=I to γ(T)=U\gamma(T)=U). A superscript “tilde” denotes quantities related to the reverse process. By implementing the time reversal tt~=Ttt\to\tilde{t}=T-t and γγ~(t~)=γ(t)\gamma\to\tilde{\gamma}(\tilde{t})=\gamma(t) in P(γ)P(\gamma), we define the stationary distribution of a time-reversed trajectory γ~\tilde{\gamma}\in\mathcal{M} (running from γ~(0)=U\tilde{\gamma}(0)=U to γ~(T)=I\tilde{\gamma}(T)=I) as

P~(γ~)P~(γ~,T~)=eηC(γ~)Z~a(0~),\tilde{P}(\tilde{\gamma})\equiv\tilde{P}(\tilde{\gamma},\tilde{T})=\frac{e^{-\eta C(\tilde{\gamma})}}{\tilde{Z}_{a}(\tilde{0})}, (79)

where Z~a(0~)\tilde{Z}_{a}(\tilde{0}) refers to the partition function satisfying the initial conditions of the reverse process. In the common thermodynamic case, the stationary distribution of a forward trajectory x\vec{x} in phase space is 70

P(x)=P(x,t)=eβH(x)Z(0),P(\vec{x})=P(\vec{x},t)=\frac{e^{-\beta H(\vec{x})}}{Z(0)}, (80)

where the normalization Z(0)Z(0) involves an integral over the phase space. Similar to Eq. (79), the counterpart of Eq. (80), P~(x~)\tilde{P}(\tilde{\vec{x}}), represents the stationary distribution of a time-reversed trajectory x~\tilde{\vec{x}} in the phase space.

D(p(x)||q(y))D(p(x)||q(y)) represents the relative entropy between any two distributions p(x)p(x) and q(y)q(y)

D(p(x)||q(y))p(x)logp(x)q(y)dx0,D(p(x)||q(y))\equiv\int p(x)\mathrm{log}\frac{p(x)}{q(y)}\mathrm{d}x\geq 0, (81)

which is equal to zero if and only if p(x)=q(y),x,yp(x)=q(y),\forall x,y. Such a quantity provides a measure of distinguishability and is a handy tool for quantifying time-asymmetry in thermodynamics.
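
For concreteness, a minimal numerical sketch of the discrete version of Eq. (81) is given below; the two example distributions are arbitrary and serve only to illustrate the non-negativity and the vanishing condition.

```python
import numpy as np

def relative_entropy(p, q):
    """Discrete version of Eq. (81): D(p||q) = sum_x p(x) log(p(x)/q(x)), in nats.
    q is assumed to be strictly positive wherever p is."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                      # the convention 0*log 0 = 0 is used
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(relative_entropy(p, q))   # > 0: the two distributions are distinguishable
print(relative_entropy(p, p))   # 0.0: vanishes if and only if the distributions coincide
```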

Clarification: our discussion on the second law of complexity is an extension of that made by Brown and Susskind 8 . However, there are two main differences between our discussion and theirs. The first difference is that the system 𝒜\mathcal{A} they discussed was a classical chaotic system. The Hamiltonian of the corresponding quantum system 𝒬\mathcal{Q} was time-independent. In contrast, our system 𝒜\mathcal{A} is a classical stochastic system, whose dual quantum system 𝒬\mathcal{Q} has a time-dependent Hamiltonian. The second difference is that we used different methods to study the second law of complexity. In particular, we used the approach developed by Brock and Esposito 28 , namely trajectory thermodynamics, to explore the second law of complexity by analogy with their discussions of the second law of thermodynamics for nonequilibrium systems.

4.2.1 Second law of thermodynamics and trajectory thermodynamics

This section provides a brief review of the second law of thermodynamics for nonequilibrium systems and the trajectory thermodynamics 28 . Note that the most common expression of the second law of thermodynamics is the Clausius inequality, that is,

βQΔS,\beta\langle Q\rangle\leq\Delta S, (82)

where QQ is the heat absorbed by the system during a process and β\beta and SS represent the constant inverse temperature and the system’s thermodynamic entropy, respectively. We define the free energy of the system as

FUβ1S,F\equiv U-\beta^{-1}S, (83)

where UU denotes the system’s internal energy. By combining this definition with the first law of thermodynamics,

ΔU=W+Q,\Delta U=W+Q, (84)

we obtain

WΔF,\langle W\rangle\geq\Delta F, (85)

which corresponds to the Kelvin-Planck statement of the second law of thermodynamics: it is impossible to extract energy from a single heat bath and convert all of that energy into work without introducing any other influence. Equivalently, Eq. (85) can be written as

ΔStotal=ΔiSβWdissβ(WΔF)0,\langle\Delta S_{total}\rangle=\langle\Delta_{i}S\rangle\equiv\beta\langle W_{diss}\rangle\equiv\beta(\langle W\rangle-\Delta F)\geq 0, (86)

where ΔStotal\langle\Delta S_{total}\rangle denotes the combined entropy change of the system and environment 70 and WdissWΔF\langle W_{diss}\rangle\equiv\langle W\rangle-\Delta F is the average dissipated work for the forward process171717Because Wdiss\langle W_{diss}\rangle is a physical measure quantifying the dissipation, Eq. (86) is also a measure of dissipation.. ΔiS\Delta_{i}S is defined as the cumulative entropy production along a trajectory 28 , which is the time integration of the entropy production S˙idSidt\dot{S}_{i}\equiv\frac{\mathrm{d}S_{i}}{\mathrm{d}t}. Therefore, the non-negativity of ΔiS\langle\Delta_{i}S\rangle can be expressed in the following equivalent form:

S˙i0\langle\dot{S}_{i}\rangle\geq 0 (87)

which is one of the basic features of the thermodynamic second law 28 . Eq. (85) can also be derived from Eq. (2) by directly applying Jensen’s inequality, that is, exex\langle e^{x}\rangle\geq e^{\langle x\rangle}. Thus, Eq. (2) is closely related to the second law of thermodynamics.

Next, we review the second piece of the “puzzle,” that is, the trajectory thermodynamics 28 . The cumulative entropy production along a forward trajectory x\vec{x} in phase space is defined as the log-ratio of the distributions for observing its trajectory in the forward and reverse processes.

ΔiS(x)=logP(x)P~(x~)=β(WΔF),\Delta_{i}S(\vec{x})=\mathrm{log}\frac{P(\vec{x})}{\tilde{P}(\tilde{\vec{x}})}=\beta(W-\Delta F), (88)

where WW denotes the work done on the system in a forward experiment. Instead of working directly with the trajectories in phase space, we treat the cumulative entropy production as a random variable because it encodes each trajectory. Consequently, the distribution of the cumulative entropy production is given by the path integral in phase space in combination with Eq. (80):

P(ΔiS)\displaystyle P(\Delta_{i}S) δ(ΔiSΔiS(x))P(x)dx\displaystyle\equiv\int\delta(\Delta_{i}S-\Delta_{i}S(\vec{x}))P(\vec{x})\mathrm{d}\vec{x} (89)
=exp(ΔiS)δ(ΔiSΔiS(x))P~(x~)dx\displaystyle=\mathrm{exp}(\Delta_{i}S)\int\delta(\Delta_{i}S-\Delta_{i}S(\vec{x}))\tilde{P}(\tilde{\vec{x}})\mathrm{d}\vec{x}
=exp(ΔiS)δ(ΔiSΔiS~(x~))P~(x~)dx~\displaystyle=\mathrm{exp}(\Delta_{i}S)\int\delta\left(-\Delta_{i}S-\Delta_{i}\tilde{S}(\tilde{\vec{x}})\right)\tilde{P}(\tilde{\vec{x}})\mathrm{d}\tilde{\vec{x}}
exp(ΔiS)P~(ΔiS),\displaystyle\equiv\mathrm{exp}(\Delta_{i}S)\tilde{P}(-\Delta_{i}S),

and because x~~=x\tilde{\tilde{\vec{x}}}=\vec{x} and P~~=P\tilde{\tilde{P}}=P, the cumulative entropy production along a reverse trajectory x~\tilde{\vec{x}} is obtained by

ΔiS~(x~)=logP~(x~)P(x)=ΔiS(x).\Delta_{i}\tilde{S}(\tilde{\vec{x}})=\mathrm{log}\frac{\tilde{P}(\tilde{\vec{x}})}{P(\vec{x})}=-\Delta_{i}S(\vec{x}). (90)

Furthermore, because the Jacobian for the transformation to the time-reverse variables is equal to one 28 , we can conclude from Eq. (89) that

P(ΔiS)P~(ΔiS)=exp(ΔiS),\frac{P(\Delta_{i}S)}{\tilde{P}(-\Delta_{i}S)}=\mathrm{exp}(\Delta_{i}S), (91)

which is called the detailed fluctuation theorem 77 . Eq. (91) has the corresponding statement 28 : an increase of the stochastic entropy in the forward process is exponentially more probable than the corresponding decrease in the reverse process. We can rewrite Eq. (91) as

exp(ΔiS)=1\langle\mathrm{exp}(-\Delta_{i}S)\rangle=1 (92)

by integrating over ΔiS\Delta_{i}S in Eq. (91). Hence, by directly applying Jensen’s inequality to Eq. (92), we recover Eq. (86). Finally, we note that the average cumulative entropy production is given by:

ΔiS=β(WΔF)=D(P(x)||P~(x~))0,\langle\Delta_{i}S\rangle=\beta(\langle W\rangle-\Delta F)=D(P(\vec{x})||\tilde{P}(\tilde{\vec{x}}))\geq 0, (93)

where D(P(x)||P~(x~))D(P(\vec{x})||\tilde{P}(\tilde{\vec{x}})) represents the relative entropy between P(x)P(\vec{x}) and P~(x~)\tilde{P}(\tilde{\vec{x}}).
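
Before moving to the complexity version, it may help to see Eqs. (86), (88), and (92) at work in a simple numerical toy. The sketch below assumes a Gaussian work distribution, for which the Jarzynski identity fixes ΔF = ⟨W⟩ − βσ_W²/2; all parameter values are illustrative assumptions and the model is not tied to any system discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
mean_W, sigma_W = 2.0, 1.5            # assumed parameters of a toy Gaussian work distribution

# For Gaussian work, the Jarzynski identity fixes dF = <W> - beta*sigma_W^2/2.
dF = mean_W - beta * sigma_W**2 / 2.0

W = rng.normal(mean_W, sigma_W, size=200_000)
delta_iS = beta * (W - dF)                       # cumulative entropy production, Eq. (88)

print(np.mean(np.exp(-delta_iS)))                # ~ 1, the integral fluctuation theorem, Eq. (92)
print(np.mean(delta_iS))                         # ~ beta^2*sigma_W^2/2 >= 0, Eqs. (86) and (93)
```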

4.2.2 Discussions on the second law of complexity

The second law of complexity was first conjectured in 44 and developed in 8 ; 67 . This conjecture has two equivalent statements:

  • 1.

    Conditioning on the complexity being less than maximal, it will most likely increase, both into the future and into the past (Statement 1).

  • 2.

    Decreasing complexity is unstable (Statement 2).

These statements initially described the features of complexity growth for a chaotic auxiliary system, which is dual to a quantum system with a time-independent Hamiltonian. To avoid confusion, we stipulate that the first statement is called “Statement 1,” and the second statement is called “Statement 2.” We mainly focus on Statement 2. Analogous to the discussion in Section 4.2.1, we extend the discussion on the second law of complexity to the case in which system 𝒜\mathcal{A} is stochastic (i.e., corresponds to a quantum system with a time-dependent Hamiltonian). Notably, we argue that the complexity version of the Jarzynski identity and trajectory thermodynamics provide a “Kelvin-Planck-like” statement and a new version of Statement 2 of the second law of complexity for stochastic auxiliary systems.

The distribution for a forward trajectory γ\gamma in \mathcal{M} is presented as Eq. (78). Moreover, the distribution for its reverse γ~\tilde{\gamma} is represented by Eq. (79). By analogy with Eq. (88) we introduce a new quantity similar to the cumulative entropy production as follows:

ΔiC(γ)=logP(γ)P~(γ~),\Delta_{i}C(\gamma)=\log{\frac{P(\gamma)}{\tilde{P}(\tilde{\gamma})}}, (94)

and we refer to it as the cumulative complexity production along the forward trajectory γ\gamma. Moreover, we replace ΔiS\Delta_{i}S, work WW, and free energy difference ΔF\Delta F in Eq. (93) with Eq. (94), computational work WaW_{a}, and computational free energy difference ΔFa\Delta F_{a}, respectively. We obtain

ΔiC=η(WaΔFa)=η(C+ΔFa)0.\langle\Delta_{i}C\rangle=\eta\left(\langle W_{a}\rangle-\Delta F_{a}\right)=-\eta\left(\langle C\rangle+\Delta F_{a}\right)\geq 0. (95)

This inequality can be obtained by applying Jensen’s inequality to Eq. (71). Thus, the complexity version of the Clausius inequality is obtained as follows:

CΔFa.\langle C\rangle\leq-\Delta F_{a}. (96)

Because Eq. (95) and Eq. (96) are similar to Eqs. (86) and (85), respectively, we conclude that Eqs. (95) and (96) are the mathematical expressions of the second law of complexity for stochastic auxiliary systems. Eq. (87) is the equivalent expression of Eq. (86) and describes the increase of entropy for nonequilibrium systems. By analogy with Eq. (86), Eq. (95) corresponds to a “Kelvin-Planck-like” statement of the second law of complexity and describes the increasing nature of complexity; it is the stochastic generalization of the second law of complexity.

Let us consider ΔiC(γ)\Delta_{i}C(\gamma) as a random variable. Analogous to Eq. (89), the resulting distribution for ΔiC\Delta_{i}C can be obtained by doing a path integral in \mathcal{M}:

P(ΔiC)\displaystyle P(\Delta_{i}C) δ(ΔiCΔiC(γ))P(γ)Dγ\displaystyle\equiv\int\delta(\Delta_{i}C-\Delta_{i}C(\gamma))P(\gamma)D\gamma (97)
=exp(ΔiC)δ(ΔiCΔiC(γ))P~(γ~)Dγ\displaystyle=\mathrm{exp}(\Delta_{i}C)\int\delta(\Delta_{i}C-\Delta_{i}C(\gamma))\tilde{P}(\tilde{\gamma})D\gamma
=exp(ΔiC)δ(ΔiC+ΔiC~(γ~))P~(γ~)Dγ~\displaystyle=\mathrm{exp}(\Delta_{i}C)\int\delta(\Delta_{i}C+\Delta_{i}\tilde{C}(\tilde{\gamma}))\tilde{P}(\tilde{\gamma})D\tilde{\gamma}
exp(ΔiC)P~(ΔiC)\displaystyle\equiv\mathrm{exp}(\Delta_{i}C)\tilde{P}(-\Delta_{i}C)

that can be rewritten as

P(ΔiC)P~(ΔiC)=exp(ΔiC).\frac{P(\Delta_{i}C)}{\tilde{P}(-\Delta_{i}C)}=\mathrm{exp}(\Delta_{i}C). (98)

Eq. (98) is the complexity version of the detailed fluctuation theorem analogous to Eq. (91) and the mathematical expression of the new version of Statement 2 for the stochastic system 𝒜\mathcal{A}. We summarize the statement as follows: the probability of an increase in the stochastic complexity production is exponentially greater than that of a corresponding decrease. The average cumulative complexity production is equal to zero if and only if P(γ)=P~(γ~)P(\gamma)=\tilde{P}(\tilde{\gamma}), \forall\gamma\in\mathcal{M}. This reveals that the time-asymmetry only vanishes when all possible trajectories in \mathcal{M} are reversible (i.e., the maintenance of time-reversal symmetry). However, this vanishing condition is extremely hard to satisfy because the probability of reversing even a small fraction of the trajectory in the space of unitary operators is negligible for a stochastic system 𝒜\mathcal{A}, as already discussed in Section 9 of 67 .

This section ends with an exploration of the probability of observing a complexity value that rises beyond the complexity upper bound obtained by applying Jensen’s inequality to Eq. (71), that is, ΔFa-\Delta F_{a}. As before, let us first address this problem within thermodynamics and then give the complexity analog. To experimentally obtain the average work, we measure the fluctuating work WiW_{i} of a single trajectory in a specific realization ii of the experiment in statistical mechanics 69 . A protocol defines a family of Hamiltonians {H(t)}\{H(t)\} governing the system evolution from t=0t=0 to t=Tt=T. The experiments are run by controlling a parameter, such as

Λ(t)Λ0+(ΛTΛ0)×tT,with0tT,\Lambda(t)\equiv\Lambda_{0}+(\Lambda_{T}-\Lambda_{0})\times\frac{t}{T},\quad\mathrm{with}\quad 0\leq t\leq T, (99)

where Λ={Λi}\Lambda=\{\Lambda_{i}\} is a set of externally controlled parameters that change in time. One can consider a similar situation in the complexity context, because the trajectories in \mathcal{M} are generated by time-dependent Hamiltonians and each trajectory corresponds to a specific value of complexity. In particular, we consider a similar protocol that defines a family of {H(t)}\{H(t)\} and evolves the unitary operator181818Note that we evolve a unitary operator using a time-dependent Hamiltonian instead of evolving a quantum state. in a time interval TT. Then, we repeat the experiment nn times and compute the complexity of the quantum system 𝒬\mathcal{Q} for each experiment191919We must reinitialize the system 𝒬\mathcal{Q} to the same initial state after each experiment.. Collecting these values builds a complexity ensemble (i.e., Ω={Ci}\Omega=\{C_{i}\}), from which we can construct a probability distribution P(C)P(C) for the complexity in the limit nn\to\infty. The average complexity is obtained as

C(T)=limn1ni=1nCi(T)=P(C)CdC.\langle C(T)\rangle=\underset{n\to\infty}{\mathrm{lim}}\frac{1}{n}\sum_{i=1}^{n}C_{i}(T)=\int P(C)C\mathrm{d}C. (100)

Notably, the considered Hamiltonians are time-dependent and the evolution of the system 𝒜\mathcal{A} follows a Markov process202020If the system has a time-independent Hamiltonian (e.g., the SYK model), the randomness in the distribution function P(C)P(C) comes from the random couplings {J}\{J\}. We leave this for our future study.. Now, suppose several experiments with C>ΔFa+ζC>-\Delta F_{a}+\zeta are included in the ensemble Ω\Omega. In combination with Eq. (71), the probability of their appearance is as follows:

pv[C>ΔFa+ζ]\displaystyle p_{v}[C>-\Delta F_{a}+\zeta] ΔFa+ζP(C)dC\displaystyle\equiv\int_{-\Delta F_{a}+\zeta}^{\infty}P(C)\mathrm{d}C (101)
ΔFa+ζexp[η(C+ΔFaζ)]P(C)dC\displaystyle\leq\int_{-\Delta F_{a}+\zeta}^{\infty}\mathrm{exp}[\eta(C+\Delta F_{a}-\zeta)]P(C)\mathrm{d}C
exp[η(ΔFaζ)]0exp(ηC)P(C)dC\displaystyle\leq\mathrm{exp}[\eta(\Delta F_{a}-\zeta)]\int_{0}^{\infty}\mathrm{exp}(\eta C)P(C)\mathrm{d}C
=exp(ηζ),\displaystyle=\mathrm{exp}(-\eta\zeta),

where ζ\zeta is an arbitrary positive number. Eq. (101) shows a behavior similar to the thermodynamic case 70 : the tail of the distribution P(C)P(C) is exponentially suppressed in the forbidden region C>ΔFaC>-\Delta F_{a}. Consequently, it is hard to measure a complexity value that exceeds ΔFa-\Delta F_{a} by significantly more than a few multiples of 1/η1/\eta, which can be considered as a phenomenon related to the second law of complexity. Unlike nonequilibrium thermodynamics 76 , the lower limit of the integral in Eq. (101) is not negative infinity but zero because the complexity metric should be non-negative 2 .
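
A hedged numerical illustration of the bound in Eq. (101) is sketched below. The complexity samples are drawn from an arbitrary truncated-Gaussian toy distribution, and −ΔF_a is estimated from the same samples through Eq. (71); the observed tail probabilities should then lie below exp(−ηζ).

```python
import numpy as np

rng = np.random.default_rng(2)
eta = 1.0
# Toy complexity samples: a Gaussian truncated to C >= 0 (an arbitrary illustrative choice).
C = rng.normal(5.0, 1.0, size=500_000)
C = C[C >= 0.0]

# -dF_a from Eq. (71) plays the role of the average-complexity upper bound.
minus_dF_a = np.log(np.mean(np.exp(eta * C))) / eta

for zeta in (0.5, 1.0, 2.0):
    p_tail = np.mean(C > minus_dF_a + zeta)          # l.h.s. of Eq. (101)
    print(zeta, p_tail, np.exp(-eta * zeta))         # tail probability vs. the bound exp(-eta*zeta)
```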

4.3 Fluctuation-dissipation theorem and complexity

Because we obtained a fluctuation theorem for the complexity in Eq. (98), it is interesting to ask whether it is possible to relate the complexity fluctuation to the cumulative complexity production, which represents the dissipation of the system 𝒜\mathcal{A}, by analogy with the thermodynamic fluctuation-dissipation theorem. To answer this question, we propose a complexity version of the fluctuation-dissipation theorem in this section.

Let us start by discussing the situation in thermodynamics. Using Eq. (2) and the nonequilibrium work to derive the equilibrium free energy difference, we obtain

βΔF=logexp(βW).-\beta\Delta F=\mathrm{log}\langle\mathrm{exp}(-\beta W)\rangle. (102)

We can expand the right-hand side of this equation as a cumulant series, truncated at the second-order term:

βΔF=(β)W+12!(β)2[W2W2],-\beta\Delta F=(-\beta)\langle W\rangle+\frac{1}{2!}(-\beta)^{2}[\langle W^{2}\rangle-\langle W\rangle^{2}], (103)

where the second cumulant

σS2(β)2[W2W2]\sigma_{S}^{2}\equiv(\beta)^{2}[\langle W^{2}\rangle-\langle W\rangle^{2}] (104)

is identified as the variance of the entropy SS. Notably, Eq. (104) represents the fluctuation of the entropy SS. The discussion of the entropy production can alternatively be phrased in terms of the dissipated work (i.e., Wdiss=β1ΔiSW_{\mathrm{diss}}=\beta^{-1}\Delta_{i}S). From the Callen-Welton theorem 71 ,

dissipationfluctuation,\mathrm{dissipation}\propto\mathrm{fluctuation}, (105)

the fluctuation-dissipation theorem can be obtained by combining Eq. (103) with Eq. (88), namely

2ΔiS=σS2.2\langle\Delta_{i}S\rangle=\sigma_{S}^{2}. (106)

This fluctuation-dissipation theorem was studied in 72 and has many potential applications, including the construction of some hydrodynamic approaches 57 .

We now go back to the content of complexity. Eq. (71) gives

ηΔFa=logexp(ηC),-\eta\Delta F_{a}=\mathrm{log}\langle\mathrm{exp}(\eta C)\rangle, (107)

where the average value of the exponential function on the right-hand side is obtained using the probability P(C)P(C):

exp(ηC)=exp(ηC)P(C)dC.\langle\mathrm{exp}(\eta C)\rangle=\int\mathrm{exp}(\eta C)P(C)\mathrm{d}C. (108)

We expand the right-hand side of Eq. (107) into an infinite cumulant series:

ηΔFa=ηζ1(C)+(η)22!ζ2(C,C2)+n=3(η)nn!ζn(C,C2,,Cn),-\eta\Delta F_{a}=\eta\zeta_{1}(C)+\frac{(\eta)^{2}}{2!}\zeta_{2}(C,C^{2})+\sum_{n=3}^{\infty}\frac{(\eta)^{n}}{n!}\zeta_{n}(C,C^{2},...,C^{n}), (109)

where ζn\zeta_{n} denotes the nn-th-order cumulant212121Note the distinction between ζn\zeta_{n} and the positive number ζ\zeta in Section 4.2.. Among these cumulants, the first- and second-order terms are

ζ1(C)=C,ζ2=σC2=C2C2,\zeta_{1}(C)=\langle C\rangle,\quad\zeta_{2}=\sigma_{C}^{2}=\langle C^{2}\rangle-\langle C\rangle^{2}, (110)

and the second-order cumulant denotes the variance (fluctuation) of the complexity. Recall that the state space 𝒮={γ0,γ1,,γN}\mathcal{S}=\{\gamma_{0},\gamma_{1},\cdots\cdots,\gamma_{N}\} contains (N+1)(N+1) mutually independent random variables. If we take the limit NN\to\infty, then P(C)P(C) becomes approximately Gaussian by the central limit theorem:

P(C)\approx\frac{1}{\sigma_{C}\sqrt{2\pi}}\mathrm{exp}\left(-\frac{1}{2}\frac{[C-\langle C\rangle]^{2}}{\sigma^{2}_{C}}\right). (111)

Moreover, in this case, we can expand Eq. (107) only to the second-order terms. Then, Eqs. (109) and (95) imply

2ΔiC=η2σC2.2\langle\Delta_{i}C\rangle=\eta^{2}\sigma_{C}^{2}. (112)

This is the version of the fluctuation-dissipation theorem for complexity that connects the fluctuation of complexity with the dissipation of the auxiliary system 𝒜\mathcal{A} during the evolution.

Notably, Eq. (112) essentially links the fluctuation of the trajectories (each trajectory corresponds to a specific complexity value) in \mathcal{M} with the time-dependent perturbation applied to the quantum system 𝒬\mathcal{Q} because any trajectory in \mathcal{M} is generated by the Hamiltonian H(t)H(t) of the system 𝒬\mathcal{Q}. This connection implies that Eq. (112) may play a vital role in quantifying holographic fluctuations, which will be discussed in Section 4.4.
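
As a minimal numerical check of Eq. (112) (a toy calculation, not a simulation of the actual auxiliary system 𝒜), one can draw complexity samples from a Gaussian distribution as in Eq. (111), estimate ΔF_a from Eq. (71) and ⟨Δ_iC⟩ from Eq. (95), and compare 2⟨Δ_iC⟩ with η²σ_C². The parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 1.0
# Toy Gaussian complexity distribution (the central-limit regime of Eq. (111)); parameters assumed.
C = rng.normal(5.0, 0.8, size=500_000)

dF_a = -np.log(np.mean(np.exp(eta * C))) / eta             # Eq. (71)
mean_diC = -eta * (np.mean(C) + dF_a)                      # Eq. (95)

print(2.0 * mean_diC, eta**2 * np.var(C))                  # both sides of Eq. (112) should agree
```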

4.4 Remarks on holographic fluctuations and complexity

The discussions in this section are inspired by the remarkable work of Chemissany and Osborne 21 , who developed a method for identifying the relation between the fluctuation of the bulk geometry and the perturbation applied to the boundary quantum system via the principle of minimal complexity. We argue that the obtained Jarzynski framework provides a potential tool for quantitatively investigating the holographic fluctuations. We divide this section into two subsections. First, we briefly review the settings and main contribution of 21 . Second, we give a remark based on the Jarzynski framework obtained in the previous sections.

4.4.1 Basic settings and construction of the bulk spacetime

The boundary system is a 22-local quantum system comprising KK distinct subsystems (qubits)222222For simplicity, we only consider a system with a 22-local Hamiltonian. In principle, one can consider a kk-local Hamiltonian for any kk., which is initialized in a trivial reference state |Ω\ket{\Omega}. We use different numbers that form a point set {1,2,K}\{1,2\cdots,K\} to label the different subsystems. A unitary operator UU, which diagonalizes a Hamiltonian HH of the boundary system, is generated in a certain time interval TT. The focus here is the evolution of the unitary operator from II to UU. It is equivalent to the evolution of the trajectory γ\gamma of a fictitious particle with unit mass (of the system 𝒜\mathcal{A}) moving on \mathcal{M} from II to UU. The time interval TT forms another set [0,T][0,T], and a topological space (𝒳,𝒯)(\mathcal{X},\mathcal{T}) is identified as the bulk spacetime, where 𝒳={1,2,K}×[0,T]\mathcal{X}=\{1,2\cdots\cdots,K\}\times[0,T] and 𝒯\mathcal{T} is an undetermined topology encoding the causality of the bulk spacetime. The point set 𝒳\mathcal{X} corresponds to a holographic spacetime with discrete boundary spatial coordinates j{1,2,,K}j\in\{1,2,\cdots\cdots,K\} and a “radial” holographic time coordinate t[0,T]t\in[0,T]. We can completely identify a bulk spacetime from the trajectories {γ}\{\gamma\} via the principle of minimal complexity by determining the topology 𝒯\mathcal{T} 21 .

The form of the target unitary UU for the boundary system is presented in Eq. (3). This expression can be approximately replaced by a discrete quantum circuit UV=VTV2V1U\approx V=V_{T}\cdots V_{2}V_{1}, where Vt,t{1,2,,T}V_{t},t\in\{1,2,\cdots\cdots,T\} denote the gates acting on one or two qubits at each moment. Therefore, the set 𝒳\mathcal{X} becomes 𝒳={1,2,K}×{1,2,,T}\mathcal{X}=\{1,2\cdots\cdots,K\}\times\{1,2,\cdots\cdots,T\}. This forms the simple graph shown in Fig. 4.

Figure 4: Each line represents a qubit and edges connecting two lines refer to local gates.

We put an edge between two vertices if a two-qubit gate acts nontrivially on the corresponding pair of qubits. To obtain the topology (causality) of the bulk spacetime, we first sample points from a Poisson distribution on the set 𝒳\mathcal{X} with density ρ\rho to give a new finite set 𝒴\mathcal{Y}. The causality relation on 𝒴\mathcal{Y} is then constructed by sending a detectable signal from a spacetime point x=(i,s)x=(i,s) to another point y=(j,t)y=(j,t) via a unitary process γ\gamma. We are allowed to interrupt the evolution of γ\gamma by introducing arbitrarily fast local interventions232323Local unitary operations introduce these interventions 21 . at any holographic time t=twt=t_{w}. Consequently, this method of building causal structures associated with the trajectory γ\gamma gives us a topology for building the topological space (𝒳,𝒯)(\mathcal{X},\mathcal{T}) regarded as the bulk spacetime.

4.4.2 Holographic fluctuations and Jarzynski identity

According to the above discussion, any geodesic γ\gamma\in\mathcal{M} gives rise to the bulk spacetime. Therefore, the fluctuating trajectories in \mathcal{M} (i.e., the trajectories with near-minimal complexity) can be interpreted as fluctuations in the bulk geometry, which are regarded as holographic fluctuations. To capture the structure of the holographic fluctuations, we only need to describe the structure of the fluctuating trajectories in \mathcal{M}, based on the following three premises:

  • 1.

    The complexity, C(γ)C(\gamma), is sensitive to the applied 2-local interactions (quantum gates) between an arbitrary pair of qubits, but not to a particular pair of qubits to which the unitary gate is applied 21 .

  • 2.

    A complexity functional determines a geodesic in \mathcal{M}, just as an action functional specifies a geodesic in classical mechanics.

  • 3.

    Any trajectory in \mathcal{M} arises from the boundary system via the 𝒬\mathcal{Q}-𝒜\mathcal{A} correspondence; thus, perturbing the boundary system by inserting quantum gates is equivalent to perturbing the trajectory γ\gamma in \mathcal{M}.

Fig. 5 depicts the structure of the holographic fluctuations summarized as follows: the trajectories are equal to γ(t)\gamma(t) for all tt, except at one moment t=twt=t_{w} when a local unitary gate242424One can regard this gate as the arbitrary fast local intervention. is applied to an arbitrary pair of qubits ii and jj, followed immediately by its inverse gate 21 . Because the applied gate generates an instantaneous interaction between qubits ii and jj and the inverse gate cancels the interaction effect, a “wormhole” is created between two points (i,tw)(i,t_{w}) and (j,tw)(j,t_{w}) in the dual bulk spacetime that immediately “evaporates”. In 21 , Eq. (48) was introduced to model these fluctuations. Recall that the complexity has a quadratic action form in Eq. (48); hence, the fluctuating trajectories in \mathcal{M} can be understood as the stochastic trajectories of the Brownian motions in \mathcal{M} and are the solutions of Eq. (24) invoked as a toy model of the black hole in 46 . In summary, the bulk geometry modeled by Eq. (48) constitutes a spacetime where “wormholes” are fluctuating in and out of existence between all pairs of spacetime points 21 .

Figure 5: This can be seen as the structure of holographic fluctuation which is composed of qubits (lines) and gates (connections between lines). The red part represents a fluctuation, that is, in a time-slice, a 22-local gate is applied to an arbitrary pair of qubits followed by its inverse.

Now, we make a remark on the holographic fluctuations from the perspective of the obtained Jarzynski framework. Applying the results obtained in the previous sections helps us achieve a better quantitative understanding of the holographic fluctuations, and several clues strengthen our confidence in this. First, the holographic fluctuation follows a stochastic process in \mathcal{M}, which means that we can introduce the partition function Eq. (48) using the path integral approach to model its structure. From this partition function, the Jarzynski identity Eq. (71) follows, which in turn provides a version of the fluctuation theorem for complexity, Eq. (98), describing the complexity fluctuations. Second, the complexity fluctuations equivalently describe the fluctuations of trajectories in \mathcal{M} based on the second premise, and the fluctuating trajectories give rise to the bulk spacetime. Therefore, the fluctuation theorem describes the fluctuations of both complexity and bulk geometry. Third, the fluctuation-dissipation theorem connects the average cumulative complexity production with the fluctuations of complexity; such a theorem is usually used to study the response of a system to external influences. Thus, Eq. (112) may be applicable to detect the response of the bulk geometry to perturbations applied to the boundary quantum system. In particular, this equality can be used to quantitatively measure the fluctuations of the complexity by capturing the information on the average cumulative complexity production.

5 Example: transverse field Ising model

The obtained Jarzynski framework gives us a few interesting conclusions that need testing. Camilo and Teixeira 73 studied the complexity of the transverse field Ising model (TFIM). We follow their steps to numerically test two of our main proposals, Eqs. (95) and (96). For simplicity, we only consider two phases: the phase with ferromagnetic order along the zz direction (FMZ) and the paramagnetic phase (PM). We do not focus on the detailed derivation here but only present a brief review of it with minimal effort. One can refer to 73 for a detailed derivation.

5.1 Model settings

The TFIM is determined as follows by the time-dependent Hamiltonian

H(t)=Jj=1Nσj3σj+13g(t)j=1Nσj1,H(t)=-J\sum_{j=1}^{N}\sigma_{j}^{3}\sigma_{j+1}^{3}-g(t)\sum_{j=1}^{N}\sigma_{j}^{1}, (113)

where JJ denotes the coupling constant and σj3\sigma_{j}^{3} and σj1\sigma_{j}^{1} are the Pauli matrices acting on the jjth lattice site. g(t)=g0+g1(t)cos(ξt)g(t)=g_{0}+g_{1}(t)\mathrm{cos}(\xi t) is the transverse field denoting the perturbation, comprising a constant g0g_{0} and a monochromatic driving term with frequency ξ\xi. Assuming that the system is a closed lattice with periodic boundary conditions σN+1α=σ1α\sigma_{N+1}^{\alpha}=\sigma_{1}^{\alpha}, restricting NN to be even, and applying the Fourier transformation, the Hamiltonian can be rewritten in terms of the momentum-space Jordan-Wigner fermions, c_{j}=\frac{\mathrm{e}^{i\frac{\pi}{4}}}{\sqrt{N}}\sum_{k\in B}c_{k}\mathrm{e}^{ikj}, as H(t)=k>0Hk(t)H(t)=\sum_{k>0}H_{k}(t):

Hk(t)=[2g(t)ωk](ckck+ckck)+Δk(ckck+ckck)ωk,H_{k}(t)=[2g(t)-\omega_{k}](c_{k}^{\dagger}c_{k}+c_{-k}^{\dagger}c_{-k})+\Delta_{k}(c_{k}^{\dagger}c_{-k}^{\dagger}+c_{-k}c_{k})-\omega_{k}, (114)

where B={±πN,±3πN,±5πN,,±(N1)πN}B=\{\pm\frac{\pi}{N},\pm\frac{3\pi}{N},\pm\frac{5\pi}{N},\cdots\cdots,\pm\frac{(N-1)\pi}{N}\} denotes the Brillouin zone, ωk=2Jcosk\omega_{k}=2J\mathrm{cos}k, Δk=2Jsink\Delta_{k}=2J\mathrm{sin}k, and the trivial contribution 2Ng(t)-2Ng(t) is neglected. Eq. (114) is called the Bogoliubov-de Gennes (BdG) Hamiltonian, which conserves momentum and parity; the latter implements the 2\mathbb{Z}_{2} symmetry, resulting in a decomposition of the Hilbert space into a direct sum of Neveu-Schwarz (NS) sectors. The system evolution obeys the Schrödinger equation. The dynamics is confined to the two-level Nambu subspace spanned by {|0k0k,|1k1k}\{\ket{0_{-k}0_{k}},\ket{1_{-k}1_{k}}\}. The system state at any time tt takes the following form:

|Ψ(t)=k>0[uk(t)|1k1k+vk(t)|0k0k],\ket{\Psi(t)}=\otimes_{k>0}[u_{k}(t)\ket{1_{-k}1_{k}}+v_{k}(t)\ket{0_{-k}0_{k}}], (115)

where the coefficients follow the Schrödinger equation, and the spinor is denoted by the symbol Ψk(t)[uk(t)\Psi_{k}(t)\equiv[u_{k}(t) vk(t)]Tv_{k}(t)]^{\mathrm{T}}.

Imagine that the system is initialized in state |Ω=k>0|0k0k\ket{\Omega}=\otimes_{k>0}\ket{0_{-k}0_{k}} at t=0t=0 and evolves during a time interval TT to a target state |Ψ(T)=U|Ω\ket{\Psi(T)}=U\ket{\Omega} through a specific unitary operator U=γ(T)=k>0UkU=\gamma(T)=\otimes_{k>0}U_{k}, where UkU_{k} represents the kkth momentum sector of UU. The boundary conditions γk(0)=I\gamma_{k}(0)=I and γk(T)=Uk\gamma_{k}(T)=U_{k} are fixed. The application of the Bogoliubov transformation suggests that the complexity metric for each momentum sector is presented as follows in terms of Hopf coordinates (ϕ1,ϕ2,ω)(\phi_{1},\phi_{2},\omega)

ds2|k=dω2+cos2ωdϕ12+sin2ωdϕ22,\mathrm{d}s^{2}|_{k}=\mathrm{d}\omega^{2}+\mathrm{cos}^{2}\omega\mathrm{d}\phi_{1}^{2}+\mathrm{sin}^{2}\omega\mathrm{d}\phi_{2}^{2}, (116)

where ϕ1\phi_{1} and ϕ2\phi_{2} correspond to two phases and

ω(t)=tT×|arcsin(Δkθ(l)ϵ(k,l)sin(ϵ(k,l)t))|,t[0,T]\omega(t)=\frac{t}{T}\times\left|{\mathrm{arcsin}\left(\frac{\Delta_{k}\theta^{(l)}}{\epsilon_{(k,l)}}\mathrm{sin}(\epsilon_{(k,l)}t)\right)}\right|,\quad t\in[0,T] (117)

denotes the linear profile, where ll\in\mathbb{Z}. The eigenvalues of the BdG Hamiltonian and the anisotropic parameter 74 ; 75 are represented by

ϵ(k,l)=(δg0(l)ωk)2+(Δkθ(l))2,θ(l)=(1)l𝒥l(4g1η),\epsilon_{(k,l)}=\sqrt{(\delta g_{0}^{(l)}-\omega_{k})^{2}+(\Delta_{k}\theta^{(l)})^{2}},\quad\theta^{(l)}=(-1)^{l}\mathcal{J}_{l}\left(\frac{4g_{1}}{\eta}\right), (118)

respectively. They are obtained from the Bogoliubov transformation and the high-frequency driving approximation 73 . Here 𝒥l(x)\mathcal{J}_{l}(x) represents the Bessel functions and δg0(l)g0lξ/4\delta g_{0}^{(l)}\equiv g_{0}-l\xi/4 is called the detuning parameter. After summing over all kk for Eq. (116), from Eq. (20), the complexity is derived in the following form:

C(t)\displaystyle C(t) =inf𝛾k>0Ck(t)\displaystyle=\underset{\gamma}{\mathrm{inf}}\sum_{k>0}C_{k}(t) (119)
12k>0|arcsin(Δkθ(l)ϵ(k,l)sin(ϵ(k,l)t))|2,\displaystyle\equiv\frac{1}{2}\sum_{k>0}\left|{\mathrm{arcsin}\left(\frac{\Delta_{k}\theta^{(l)}}{\epsilon_{(k,l)}}\mathrm{sin}(\epsilon_{(k,l)}t)\right)}\right|^{2},

where CkC_{k} is defined as the complexity of the kkth momentum sector alone.
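
For reference, the sum in Eq. (119) is straightforward to evaluate numerically. The sketch below implements Eqs. (118) and (119) directly; the use of η = 1 inside the Bessel-function argument of Eq. (118) follows the text, while the specific values of t, J, and g_1 are illustrative assumptions rather than the choices used for Fig. 6.

```python
import numpy as np
from scipy.special import jv          # Bessel function of the first kind

def tfim_complexity(t, N=1000, l=2, J=0.01, g1=1.0, eta=1.0, delta_g0=0.0):
    """Complexity C(t) of the driven TFIM from Eqs. (118)-(119); parameters are illustrative."""
    k = np.pi * np.arange(1, N, 2) / N            # positive momenta of the Brillouin zone B
    omega_k = 2.0 * J * np.cos(k)
    Delta_k = 2.0 * J * np.sin(k)
    theta_l = (-1) ** l * jv(l, 4.0 * g1 / eta)   # anisotropy parameter, Eq. (118)
    eps_kl = np.sqrt((delta_g0 - omega_k) ** 2 + (Delta_k * theta_l) ** 2)
    amp = np.arcsin(Delta_k * theta_l / eps_kl * np.sin(eps_kl * t))
    return 0.5 * np.sum(np.abs(amp) ** 2)         # Eq. (119)

print(tfim_complexity(t=50.0, delta_g0=0.0))      # FMZ-like parameters (delta_g0 = 0)
print(tfim_complexity(t=50.0, delta_g0=0.02))     # PM-like parameters (delta_g0 = 2J)
```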

A numerical simulation is performed after the parameters NN, ll, JJ, g1g_{1}, and δg0(l)\delta g_{0}^{(l)} and the η\eta value (η=1\eta=1) are set. We use the time average instead of the ensemble average in the simulation because the motion on \mathcal{M} satisfies ergodicity. Therefore, we take the limit TT\to\infty and replace the ensemble average with the time average:

Qaensemble=Qa(T)timelimT1T0TQa(t)dt,\left\langle Q_{a}\right\rangle_{\mathrm{ensemble}}=\left\langle Q_{a}(T)\right\rangle_{\mathrm{time}}\equiv\underset{T\to\infty}{\mathrm{lim}}\frac{1}{T}\int_{0}^{T}Q_{a}(t)\mathrm{d}t, (120)

where QaQ_{a} represents two related quantities, that is, complexity CC and the cumulative complexity production ΔiC\Delta_{i}C.
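
A minimal sketch of this time-averaging procedure is given below; the time grid, the total interval, and the stand-in complexity signal are illustrative assumptions (in practice one would substitute the TFIM complexity of Eq. (119)). The sketch estimates ⟨C⟩ from Eq. (120), −ΔF_a from Eq. (71), and ⟨Δ_iC⟩ from Eq. (95), so that Eqs. (95) and (96) can be checked directly.

```python
import numpy as np
from scipy.special import logsumexp

# Stand-in complexity signal C(t): any bounded, ergodic-looking function suffices for
# illustrating Eq. (120); replace it with, e.g., the TFIM complexity of Eq. (119).
def C_of_t(t):
    return 2.0 + np.abs(np.sin(0.3 * t)) + 0.5 * np.abs(np.sin(1.7 * t))

eta, T = 1.0, 2000.0
ts = np.linspace(0.0, T, 200_001)
C_t = C_of_t(ts)

mean_C = np.mean(C_t)                                        # time average, Eq. (120)
dF_a = -(logsumexp(eta * C_t) - np.log(C_t.size)) / eta      # Jarzynski estimator, Eq. (71)
mean_diC = -eta * (mean_C + dF_a)                            # Eq. (95)

print(mean_C, -dF_a, mean_diC)   # expect <C> <= -dF_a (Eq. (96)) and <Delta_i C> >= 0 (Eq. (95))
```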

5.2 Numerical results

Two features of the numerical simulation support our previous analytical results:

  • 1.

    The computational free energy difference provides an average complexity upper bound, which is not violated. This supports Eq. (96).

  • 2.

    The non-negative average cumulative complexity production supports Eq. (95), which is the mathematical expression corresponding to the “Kelvin-Planck-like” statement of the second law of complexity.

The Hamiltonian Eq. (113) corresponds to various regimes according to the different values of the detuning parameter δg0(l)\delta g_{0}^{(l)}. The phases can be changed from the FMZ phase through a quantum critical point (QCP) to the PM phase by varying δg0(l)\delta g_{0}^{(l)} from 0 to JJ and to 2J2J 73 . For simplicity, we do not consider the critical behavior of the QCP herein and use Eq. (120) to calculate the time-averaged values of the complexity-related quantities for the FMZ and PM phases.

(a) Evolution of Ferromagnetic Phase
(b) Evolution of Paramagnetic Phase
Figure 6: Here CC, C\langle C\rangle, ΔFa\Delta F_{a} and ΔiC\langle\Delta_{i}C\rangle represent complexity, average complexity, the computational free energy difference and the average cumulative complexity production, respectively, and the parameters are set to η=1\eta=1, N=1000N=1000, l=2l=2, J=0.01ΩJ=0.01\Omega, g1=Ωg_{1}=\Omega, and δg0(l)={0,2J}\delta g_{0}^{(l)}=\{0,2J\} that correspond to the ferromagnetic phase along the zz direction and the paramagnetic phase, respectively.

Recall that δg0(l)={0,2J}\delta g_{0}^{(l)}=\{0,2J\} correspond to the FMZ and PM phases, respectively. We plot the complexity CC, the time-averaged complexity C\langle C\rangle, the computational free energy difference ΔFa-\Delta F_{a}, and the average cumulative complexity production ΔiC\langle\Delta_{i}C\rangle as functions of the time interval TT in Figs. 6(a) and 6(b).

The FMZ and PM phases both present approximately linear growth initially but subsequently show distinct behaviors. Because the FMZ phase is susceptible to the time-dependent transverse field, its complexity struggles to stabilize and fluctuates violently around a certain value over a long time interval (Fig. 6(a)). In contrast, the complexity of the PM phase remains stable around a certain value as the time interval grows (Fig. 6(b)). The average cumulative complexity production of the PM phase gradually becomes negligible and eventually tends to zero. Physically, this comes from the disordered character of the PM phase: “non-local operations are required to create order in a state of the PM phase, but local operations would maintain disorder of such a state. Consequently, the influence of the transverse field is suppressed to prevent the system from creating non-local gates to order the system when g0g_{0} is large 73 .” However, for the FMZ phase, even though the average cumulative complexity production gradually decreases, it does not drop to zero over a long period. The computational free energy differences, obtained by applying the Jarzynski identity, correspond to the changes for the FMZ and PM phases shown in Figs. 6(a) and 6(b), respectively.

We have simulated the dissipation for the FMZ (Fig. 6(a)) and PM (Fig. 6(b)) phases and obtained a fluctuation-dissipation relation for the complexity. Therefore, based on Eq. (112), we can further discuss the dissipative behaviors of the two phases and their relation with the holographic fluctuations. First, let us discuss the evolution of the average cumulative complexity production (dissipative behavior) in the large TT regime, where P(C)P(C) is simply Gaussian because of the central limit theorem. Fig. 6(a) shows that for the FMZ phase, no steady state can be found within a short period (the complexity fluctuates violently) and its average cumulative complexity production always takes large values. Meanwhile, the average cumulative complexity production of the PM phase (plotted in Fig. 6(b)) shows a downward trend and tends to zero for TT\to\infty, indicating that the dissipation vanishes for large TT. Physically, this means that the quantum system 𝒬\mathcal{Q} reaches its average complexity upper bound (i.e., CΔFa\left\langle C\right\rangle\to-\Delta F_{a}), such that no resource can be extracted from the system 𝒬\mathcal{Q} because the time-asymmetry vanishes for any possible trajectory in \mathcal{M} 28 and reversibility holds for all possible trajectories. Additionally, since ΔiC=0\left\langle\Delta_{i}C\right\rangle=0 can only be obtained when TT\to\infty, we regard TT\to\infty as a complexity quasi-static limit analogous to thermodynamics. By taking this limit, Eq. (96) is saturated and the average complexity of system 𝒬\mathcal{Q} reaches its upper bound252525Brown and Susskind called this “complexity equilibrium” 8 . However, because “equilibrium” is usually used to describe macroscopic quantities in thermodynamics, to avoid confusion, we do not use this word in this study.. As mentioned in Section 4.4.2, the complexity fluctuations can be regarded as bulk geometry fluctuations and the fluctuation-dissipation theorem states that ΔiCσC2\langle\Delta_{i}C\rangle\sim\sigma^{2}_{C}; hence, theoretically, we can construct a bulk spacetime from a topological space and simulate its fluctuation using Eq. (112). In particular, let the TFIM be our boundary quantum system 𝒬\mathcal{Q} with a time-dependent perturbation (the transverse field). Note that the transverse field in the system 𝒬\mathcal{Q} causes the geodesics in \mathcal{M} to fluctuate. We can then utilize the method of 21 to construct a dual topological space from the TFIM as our bulk spacetime. The geometric structure of the bulk spacetime changes because the transverse field causes the complexity to vary. We leave the exploration of this part to our future work.

We end this section with a remark. In the sense of averaging, Eq. (120) may not be sufficiently accurate in describing the small-TT regime because it holds strictly only when TT\to\infty. Therefore, performing the time average might not be the best approach for running simulations. In comparison, employing the Metropolis algorithm over Monte Carlo sweeps may be more practical, in analogy with performing the ensemble average; this approach has already been used to model a similar scenario for the common Jarzynski identity 76 .
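
To make the last remark concrete, the following sketch shows how a Metropolis sampler over discretized trajectories, weighted by e^{−ηC[γ]}, could replace the time average of Eq. (120). The quadratic complexity functional used here is only a stand-in for the complexity metric of the paper, and all numerical parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
eta, n_steps, n_sweeps = 1.0, 64, 20_000

def complexity(path):
    """Stand-in quadratic complexity functional for a discretized toy trajectory;
    this is an illustrative assumption, not the complexity metric of the paper."""
    return 0.5 * np.sum(np.diff(path) ** 2) * n_steps

path = np.zeros(n_steps + 1)                         # endpoints fixed: path[0] = 0, path[-1] = 1
path[-1] = 1.0
samples = []
for sweep in range(n_sweeps):
    i = rng.integers(1, n_steps)                     # never move the fixed endpoints
    trial = path.copy()
    trial[i] += rng.normal(0.0, 0.1)
    dC = complexity(trial) - complexity(path)
    if np.log(rng.random()) < -eta * dC:             # Metropolis acceptance for the weight e^{-eta C}
        path = trial
    samples.append(complexity(path))

samples = np.array(samples[n_sweeps // 2:])          # discard burn-in
print(samples.mean(), samples.var())                 # ensemble-averaged complexity statistics
```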

6 Conclusions and Outlooks

This study is motivated by Nielsen’s complexity geometry and the elegant proof of the Jarzynski identity by Hummer and Szabo. We introduced the path integral in the context of the complexity geometry and used it to derive a complexity version of the Jarzynski identity. In addition, we made remarks on several complexity-related topics based on the obtained identity. The first remark is that exp(ηFa(T))=exp(ηR(T))=exp[η(CmaxC(T))]\mathrm{exp}(-\eta F_{a}(T))=\left\langle\mathrm{exp}(-\eta R(T))\right\rangle=\left\langle\mathrm{exp}\left[-\eta(C_{\mathrm{max}}-C(T))\right]\right\rangle provides new evidence for the existence of a well-defined resource theory of uncomplexity 8 ; 87 . The second and most crucial remark is an extension of the proposal made by Brown and Susskind, that is, the second law of complexity 8 . Our focus differed slightly from theirs in that the quantum system 𝒬\mathcal{Q} we considered is governed by a time-dependent Hamiltonian and corresponds to a classical auxiliary system 𝒜\mathcal{A} with stochastic features, whereas Brown and Susskind considered a quantum system with a time-independent Hamiltonian forming a chaotic auxiliary system. We extended their original second law to the case involving a stochastic auxiliary system. Based on the trajectory thermodynamics, we argued that the complexity version of the Jarzynski identity provides two mathematical expressions of the second law of complexity. Third, we derived a fluctuation-dissipation theorem for complexity by analogy with thermodynamics, which links the fluctuation of complexity to a crucial quantity, namely the average cumulative complexity production. The last remark is on holographic fluctuations. Because any geodesic in the space of unitary operators encodes a bulk spacetime with an extra dimension via the principle of minimal complexity 21 and any geodesic in the space of unitary operators corresponds to a value of complexity, the complexity fluctuations can play the role of bulk geometry fluctuations. Furthermore, because our framework connects with the complexity fluctuations, our results can provide a new perspective on the exploration of holographic fluctuations through the complexity version of the Jarzynski framework.

We have only touched on some aspects of these issues, and many topics remain to be tackled. Several of them are presented below:

  • \bullet

    To explore the holographic fluctuations, one must sample points from the Poisson distribution on the point set 𝒳\mathcal{X}, which is a discrete process. Consequently, integrals in \mathcal{M} are hard to solve. It is important to ask whether a proper continuum limit exists. Moreover, upon taking the continuum limit, the resulting bulk spacetime for CFTs should converge to AdS 21 .

  • \bullet

    In our discussions, the considered boundary system is a normal quantum system comprising qubits but not a standard quantum field theory. The path integral complexity 78 ; 79 should be a candidate for generalizing our formalism to the quantum field theory. Choosing a suitable definition of quantum complexity would facilitate directly linking the quantum computational complexity with holographic complexity 80 . This generalization may provide us with deeper insights into the AdS/CFT duality, e.g., for Complexity=Action 19 ; 20 and Complexity=Volume conjectures 16 ; 17 ; 18 .

  • \bullet

    We chose the TFIM as an example. One would like to know whether our results are applicable to other models, such as the SYK model 37 ; 38 ; 39 ; 40 , or whether we can directly simulate the Quantum Brownian Circuit. The Quantum Brownian Circuit is quite complicated, and a quantum simulation might be needed. As a reference, 81 recently proposed a quantum simulation for calculating the Jarzynski identity.

Acknowledgements.
We would like to thank Peng Cheng, Shao-Feng Wu and Yu-qi Lei for helpful discussions. This work is partly supported by NSFC (No.11875184).

Appendix A Fokker-Planck equations

The time-dependent operators LtL_{t} in Eqs. (25) and (51) are Fokker-Planck operators. Hence, understanding the derivation of the Fokker-Planck equations is helpful 82 . We first review the common derivation of the Fokker-Planck equation and generalize it to the cases in a curved space equipped with a non-Euclidean metric 83 . The latter can be directly used in Eq. (25).

Let us consider the stochastic differential equations (SDEs)

dx(t)=f(x,t)dt+g(x,t)dB(t),\mathrm{d}x(t)=f(x,t)\mathrm{d}t+g(x,t)\mathrm{d}B(t), (121)

where xx denotes a stochastic variable, ff and gg are the functions of xx, and dB(t)\mathrm{d}B(t) denotes the independent Brownian motion with unit variance per unit time. We introduce an arbitrary function h(x)h(x) and use Itô’s rule to derive the Fokker-Planck equation:

dh(x)=(dhdx)f(x,t)dt+(d2hdx2)g2(x,t)2dt+(dhdx)g(x,t)dB(t).\mathrm{d}h(x)=\left(\frac{\mathrm{d}h}{\mathrm{d}x}\right)f(x,t)\mathrm{d}t+\left(\frac{\mathrm{d}^{2}h}{\mathrm{d}x^{2}}\right)\frac{g^{2}(x,t)}{2}\mathrm{d}t+\left(\frac{\mathrm{d}h}{\mathrm{d}x}\right)g(x,t)dB(t). (122)

If we take averages on both sides, we immediately obtain:

dh(x)dt\displaystyle\frac{\mathrm{d}\langle h(x)\rangle}{\mathrm{d}t} =f(x,t)(dhdx)+g2(x,t)2(d2hdx2)\displaystyle=\left\langle f(x,t)\left(\frac{\mathrm{d}h}{\mathrm{d}x}\right)\right\rangle+\left\langle\frac{g^{2}(x,t)}{2}\left(\frac{\mathrm{d}^{2}h}{\mathrm{d}x^{2}}\right)\right\rangle (123)
=[f(x,t)(dhdx)+g2(x,t)2(d2hdx2)]P(x,t)dx,\displaystyle=\int_{-\infty}^{\infty}\left[f(x,t)\left(\frac{\mathrm{d}h}{\mathrm{d}x}\right)+\frac{g^{2}(x,t)}{2}\left(\frac{\mathrm{d}^{2}h}{\mathrm{d}x^{2}}\right)\right]P(x,t)\mathrm{d}x,

where P(x,t)P(x,t) represents the distribution function satisfying P(x±,t)=0P(x\to\pm\infty,t)=0 and P(x,t)dx=1\int_{-\infty}^{\infty}P(x,t)\mathrm{d}x=1. Using integration by parts and h(x)=P(x,t)h(x)dx\langle h(x)\rangle=\int_{-\infty}^{\infty}P(x,t)h(x)\mathrm{d}x, we obtain

h(x)P(x,t)tdx=h(x){x[f(x,t)P(x,t)]+122x2[g2(x,t)P(x,t)]}dx,\int_{-\infty}^{\infty}h(x)\frac{\partial P(x,t)}{\partial t}\mathrm{d}x=\int_{-\infty}^{\infty}h(x)\left\{-\frac{\partial}{\partial x}\left[f(x,t)P(x,t)\right]+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\left[g^{2}(x,t)P(x,t)\right]\right\}\mathrm{d}x, (124)

where h(x)h(x) is independent of tt. This equality can be transformed into the Fokker-Planck equation, i.e.

P(x,t)t=LtP(x,t)=[f(x,t)P(x,t)]+122[g2(x,t)P(x,t)],\frac{\partial P(x,t)}{\partial t}=L_{t}P(x,t)=-\nabla\cdot\left[f(x,t)P(x,t)\right]+\frac{1}{2}\nabla^{2}\left[g^{2}(x,t)P(x,t)\right], (125)

where the time-dependent operator LtL_{t} refers to the Fokker-Planck operator and \nabla represents the nabla operator. Eq. (25) is the Fokker-Planck equation for a vector Itô stochastic equation, in which xxx\to\vec{x}, fff\to\vec{f}, and g(x,t)g(x,t) becomes a matrix with the same dimension as x\vec{x}.
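
A minimal numerical sketch of the relation between the SDE (121) and the Fokker-Planck equation (125) is given below. The drift f(x,t) = −x and the constant noise amplitude g = √2 are illustrative choices (an Ornstein-Uhlenbeck process), not the dynamics of this paper; the Euler-Maruyama samples should reproduce the moments of the corresponding stationary Fokker-Planck solution.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps, n_paths = 1e-3, 20_000, 5_000

# Euler-Maruyama integration of the SDE (121) for the illustrative choice
# f(x, t) = -x and g(x, t) = sqrt(2) (an Ornstein-Uhlenbeck process, our assumption).
x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -x * dt + np.sqrt(2.0) * np.sqrt(dt) * rng.normal(size=n_paths)

# The corresponding Fokker-Planck equation (125) has the stationary solution
# P(x) = exp(-x^2/2)/sqrt(2*pi); compare its first two moments with the samples.
print(x.mean(), x.var())     # expect approximately 0 and 1
```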

In the context of the complexity geometry, the configuration space is the group manifold \mathcal{M} equipped with a non-Euclidean metric, Eq. (17). Thus, we must introduce some modifications to derive the Fokker-Planck equation governing the time evolution of distributions on a Riemannian manifold. The first modification concerns the volume element

dx[dγ],\mathrm{d}\vec{x}\to[\mathrm{d}\gamma], (126)

where [dγ][\mathrm{d}\gamma] denotes the Haar measure, namely Eq. (40), which makes the volume element independent of the choice of coordinate systems. The gradient, divergence, and Laplacian (or Laplacian-Beltrami operator 83 ) are modified accordingly.

hgrad(h)MGMNNh,\nabla h\to\mathrm{grad}(h)_{M}\equiv G_{MN}\partial^{N}h, (127)
fdiv(f)MfM,\nabla\cdot\vec{f}\to\mathrm{div}\left(\vec{f}\right)\equiv\partial_{M}f^{M}, (128)

where fMf^{M} denotes the component of vector f\vec{f}\in\mathcal{M} and

2hdiv(grad(h))|GMN(γ)|12M(|GMN(γ)|12GMNNh).\nabla^{2}h\to\mathrm{div}\left(\mathrm{grad}(h)\right)\equiv|G_{MN}(\gamma)|^{-\frac{1}{2}}\partial_{M}\left(|G_{MN}(\gamma)|^{\frac{1}{2}}G^{MN}\partial_{N}h\right). (129)

hh represents an arbitrary function. Applying these modifications to the Fokker-Planck equation yields

P(γ,t)t=LtP(γ,t)=div[f(γ,t)P(γ,t)]+12div{grad[g2(γ,t)P(γ,t)]},\frac{\partial P(\gamma,t)}{\partial t}=L_{t}P(\gamma,t)=-\mathrm{div}\left[\vec{f}(\gamma,t)P(\gamma,t)\right]+\frac{1}{2}\mathrm{div}\left\{\mathrm{grad}\left[g^{2}(\gamma,t)P(\gamma,t)\right]\right\}, (130)

as the Fokker-Planck equation for the group manifold \mathcal{M}, where f\vec{f} is the vector in \mathcal{M}. We generally regard Eq. (130) as the general form of the Fokker-Planck equation governing the time evolution of distributions in any curved space by considering γ\gamma as a vector in that space.

We provide a special example, i.e. Eq. (24), to obtain a better understanding of the Fokker-Planck equation. Note that i8K(K1)j<kαj,αk=03σjαjσkαk\frac{i}{\sqrt{8K(K-1)}}\sum_{j<k}\sum_{\alpha_{j},\alpha_{k}=0}^{3}\sigma_{j}^{\alpha_{j}}\otimes\sigma_{k}^{\alpha_{k}} is independent of γ\gamma; thus, we use g=g(γ,t)g=g(\gamma,t) to represent it. Recall that gg is a generalized Pauli matrix that is an anti-Hermitian operator. Hence, the square of gg is given as follows:

g^{2}=\left(\frac{i}{\sqrt{8K(K-1)}}\sum_{j<k}\sum_{\alpha_{j},\alpha_{k}=0}^{3}\sigma_{j}^{\alpha_{j}}\otimes\sigma_{k}^{\alpha_{k}}\right)^{\dagger}\left(\frac{i}{\sqrt{8K(K-1)}}\sum_{j<k}\sum_{\alpha_{j},\alpha_{k}=0}^{3}\sigma_{j}^{\alpha_{j}}\otimes\sigma_{k}^{\alpha_{k}}\right). (131)

We notice that f(γ,t)=12γ\vec{f}(\gamma,t)=-\frac{1}{2}\gamma. Substituting f\vec{f} and Eq. (131) into Eq. (130), we finally obtain the Fokker-Planck equation governing the time evolution of the distributions of the Quantum Brownian Circuit in \mathcal{M}.

Appendix B Path integral in \mathcal{M}

A path integral measure is required to perform path integrals in \mathcal{M}, which is a curved manifold equipped with the complexity metric. Such a measure generally contains an additional curvature modification in its exponent because the vector operation between two points on \mathcal{M} is involved in the derivation of the path integral. However, because the additional term cannot be included in the measure, this leads to a partition function that differs from Eq. (48). Ref. 52 provided a method for performing the path integral in a curved space without any additional curvature modification of the exponent, by introducing a factor that is contained in the measure. Therefore, we briefly introduce this method here.

For consistency, we set |γ,t\ket{\gamma,t} as the eigenstates of the position operators denoted by symbol X^M\hat{X}^{M}.

X^M|γ,t=XM(t)|γ,t.\hat{X}^{M}\ket{\gamma,t}=X^{M}(t)\ket{\gamma,t}. (132)

We consider that a particle in the system 𝒜\mathcal{A} evolves from γ1\gamma_{1}\in\mathcal{M} at t=t1t=t_{1} to γ2\gamma_{2}\in\mathcal{M} at t=t2t=t_{2}. Before deriving the more general case, we first assume that \mathcal{M} is flat, so that the complexity metric reduces to the standard inner-product metric. We also suppose that a source term J(t)J(t) is added, contributing to the complexity. The complexity CC then becomes CJC_{J}.

CJ(γ,γ)=C(γ)+t1t2JM(t)(XM(t)XM(t))dtC_{J}(\gamma,\gamma_{*})=C(\gamma)+\int_{t_{1}}^{t_{2}}J_{M}(t)\left(X^{M}(t)-X^{M}_{*}(t)\right)\mathrm{d}t (133)

by introducing a fixed point γM\gamma_{*}\in M, where (XM(t)XM(t))\left(X^{M}(t)-X^{M}_{*}(t)\right) represents the tangent vector to the geodesic from γ\gamma to γ\gamma_{*}. To generalize Eq. (133), we replace this vector with the geodesic interval λ(γ;γ)\lambda(\gamma_{*};\gamma) 55 ; 56 . By definition, it is presented in the following form:

(XM(t)XM(t))λ(γ;γ)12D2(γ;γ),\left(X^{M}_{*}(t)-X^{M}(t)\right)\to\lambda(\gamma_{*};\gamma)\equiv\frac{1}{2}D^{2}(\gamma_{*};\gamma), (134)

where D(γ;γ)D(\gamma_{*};\gamma) is the relative geodesic length between points γ\gamma and γ\gamma_{*}; thus, the tangent vector to the geodesic at γ\gamma_{*} refers to

λM(γ;γ)=GMN(γ)δδXNλ(γ;γ).\lambda^{M}(\gamma_{*};\gamma)=G^{MN}(\gamma_{*})\frac{\delta}{\delta X^{N}_{*}}\lambda(\gamma_{*};\gamma). (135)

Because the source J(t)J(t) transforms like a covariant vector at γ\gamma_{*} independent of γ\gamma, Eq. (133) is changed to

CJ=C(γ)t1t2JM(t)λM(γ;γ)dt.C_{J}=C(\gamma)-\int_{t_{1}}^{t_{2}}J_{M}(t)\lambda^{M}(\gamma_{*};\gamma)\mathrm{d}t. (136)

Moreover, the Schwinger action principle 83 ; 84 states that

δKa(γ2,t2;γ1,t1)[J]=iγ2,t2|δCJ|γ1,t1[J]=0,\delta K_{a}(\gamma_{2},t_{2};\gamma_{1},t_{1})[J]=i\left\langle\gamma_{2},t_{2}|\delta C_{J}|\gamma_{1},t_{1}\right\rangle[J]=0, (137)

from which the equation of motion may be inferred as

δCδλM(J)=JM.\frac{\delta C}{\delta\lambda^{M}}(J)=J_{M}. (138)

We proceed by expanding Ka(γ2,t2;γ1,t1)[J]K_{a}(\gamma_{2},t_{2};\gamma_{1},t_{1})[J] in a Taylor series about JM=0J_{M}=0 as

Ka(γ2,t2;γ1,t1)[J]\displaystyle K_{a}(\gamma_{2},t_{2};\gamma_{1},t_{1})[J] =n=01n!JM1JM2JMnδnγ2,t2|γ1,t1δJM1δJM2δJMn[J=0]\displaystyle=\sum_{n=0}^{\infty}\frac{1}{n!}J_{M1}J_{M2}\cdots J_{Mn}\frac{\delta^{n}\left\langle\gamma_{2},t_{2}|\gamma_{1},t_{1}\right\rangle}{\delta J_{M1}\delta J_{M2}\cdots\delta J_{Mn}}[J=0] (139)
=n=01n!(i)nJM1JMnγ2,t2|𝒫(λM1λMn)|γ1,t1[J=0]\displaystyle=\sum_{n=0}^{\infty}\frac{1}{n!}(-i)^{n}J_{M1}\cdots J_{Mn}\left\langle\gamma_{2},t_{2}|\mathcal{P}(\lambda^{M1}\cdots\lambda^{Mn})|\gamma_{1},t_{1}\right\rangle[J=0]
=γ2,t2|𝒫{exp(iJMλM)}|γ1,t1[J=0].\displaystyle=\left\langle\gamma_{2},t_{2}|\mathcal{P}\left\{\mathrm{exp}({-iJ_{M}\lambda^{M}})\right\}|\gamma_{1},t_{1}\right\rangle[J=0].

We acquire the following functional-differential equation by combining Eq. (138) with Eq. (139):

δCδλM(J)Ka(γ2,t2;γ1,t1)[J]=JMγ2,t2|γ1,t1[J],\frac{\delta C}{\delta\lambda^{M}}(J)K_{a}(\gamma_{2},t_{2};\gamma_{1},t_{1})[J]=J_{M}\left\langle\gamma_{2},t_{2}|\gamma_{1},t_{1}\right\rangle[J], (140)

whose integration gives rise to the path integral. If we integrate over all λM\lambda^{M} with boundary conditions λ(γ;γ)λ(γ;γ1)\lambda(\gamma_{*};\gamma)\equiv\lambda(\gamma_{*};\gamma_{1}) at t=t1t=t_{1} and λ(γ;γ)λ(γ;γ2)\lambda(\gamma_{*};\gamma)\equiv\lambda(\gamma_{*};\gamma_{2}) at t=t2t=t_{2} to solve Eq. (140), we find that

Ka(γ2,t2;γ1,t1)[J]=Mdim()dλMei(CJMλM),K_{a}(\gamma_{2},t_{2};\gamma_{1},t_{1})[J]=\int_{\mathcal{M}}\prod_{M}^{\mathrm{dim}(\mathcal{M})}\mathrm{d}\lambda^{M}\mathrm{e}^{i(C-J_{M}\lambda^{M})}, (141)

where any vector in \mathcal{M} has \mathrm{dim}(\mathcal{M}) components. Changing the integration variables according to the general rule, namely

M=1dim()dλM\displaystyle\prod_{M=1}^{\mathrm{dim}(\mathcal{M})}\mathrm{d}\lambda^{M} =N=1dim()dXN|detδλMδXN|\displaystyle=\prod_{N=1}^{\mathrm{dim}(\mathcal{M})}\mathrm{d}X^{N}\left|\mathrm{det}\frac{\delta\lambda^{M}}{\delta X^{N}}\right| (142)
=Mdim()dXMGMN(γ)GMN(γ)|Δ(γ;γ)|,\displaystyle=\prod_{M}^{\mathrm{dim}(\mathcal{M})}\mathrm{d}X^{M}\frac{\sqrt{G_{MN}(\gamma)}}{\sqrt{G_{MN}(\gamma_{*})}}|\Delta(\gamma_{*};\gamma)|,

where |Δ(γ;γ)||\Delta(\gamma_{*};\gamma)| is defined in Eq. (45) as the Van Vleck-Morette determinant. The original Haar measure Eq. (40) becomes

[dγ]Nc|Δ(γ;γ)|GMN(γ)[dγ].[\mathrm{d}\gamma]\to\frac{N_{c}|\Delta(\gamma_{*};\gamma)|}{\sqrt{G_{MN}(\gamma_{*})}}[\mathrm{d}\gamma]. (143)

Assuming that J=0J=0 for Eq. (141), we substitute Eq. (142) into Eq. (43) and utilize the following property

Ka(γN,tN;γ0,t0)=i=0N1Ka(γi+1,ti+1;γi,ti)K_{a}(\gamma_{N},t_{N};\gamma_{0},t_{0})=\prod_{i=0}^{N-1}K_{a}(\gamma_{i+1},t_{i+1};\gamma_{i},t_{i}) (144)

to obtain Eq. (48) by absorbing all factors into the path integral measure.
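The change-of-variables rule of Eq. (142) can be checked in a finite-dimensional toy setting before being promoted to the functional level. The following Python sketch (the $3\times 3$ matrix $A$ playing the role of $\delta\lambda^{M}/\delta X^{N}$ and the Gaussian test function are arbitrary illustrative choices) confirms by Monte Carlo that integrating a test function over $\lambda$ agrees with integrating over $X$ once the Jacobian determinant is included.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3, 0.0],
              [0.0, 1.5, 0.2],
              [0.1, 0.0, 1.0]])        # toy Jacobian d(lambda)/d(X)

def f(lam):                            # any integrable test function
    return np.exp(-0.5 * np.sum(lam**2, axis=-1))

# Monte Carlo estimate of the integral over a large box [-L, L]^3
L = 8.0
X = rng.uniform(-L, L, size=(400_000, 3))
vol = (2 * L)**3
lhs = vol * f(X).mean()                                        # integrate in lambda directly
rhs = vol * (f(X @ A.T) * abs(np.linalg.det(A))).mean()        # integrate in X with |det A|
print(lhs, rhs)                        # both approximate (2*pi)^{3/2} ~ 15.75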

Appendix C Principle of minimal complexity and second law of complexity

Analogous to the relationship between the least-action principle and the maximum entropy implied by the thermodynamic second law discussed in 86 , we provide here some remarks on the connection between the principle of minimal complexity and the second law of complexity. We first need to identify a stationary distribution $P(\gamma)$. According to Jaynes's principle, the entropy of the auxiliary system (i.e., the Shannon entropy of $\mathcal{A}$) is maximized under two constraints, namely

\langle C\rangle=\int_{\mathcal{M}}P(\gamma)C(\gamma)D\gamma\quad\mathrm{and}\quad\int_{\mathcal{M}}P(\gamma)D\gamma=\mathrm{Constant} (145)

to obtain the optimal distribution, where the first constraint is guaranteed by ergodicity and the second constraint is the normalization condition. The entropy of $\mathcal{A}$ is denoted by $S_{a}$. Using the Lagrange-multiplier method to maximize $S_{a}$, we obtain

\delta\left[-S_{a}+\alpha\int_{\mathcal{M}}P(\gamma)D\gamma+\eta\int_{\mathcal{M}}P(\gamma)C(\gamma)D\gamma\right]=0, (146)

where $\alpha$ and $\eta$ are Lagrange multipliers262626The stationary distribution requires $\eta$ to be a constant strictly greater than zero 85 .. The entropy is given by

S_{a}\equiv-\int_{\mathcal{M}}P(\gamma)\log P(\gamma)\,D\gamma, (147)

Maximizing $S_{a}$ yields the stationary distribution of Eq. (50), in which $\eta$ is a positive constant. Substituting the stationary distribution into Eq. (147), we find that

S_{a}\propto\langle C\rangle (148)

because $\mathcal{N}_{a}$ and $\eta$ are constants. The average complexity is maximized when system $\mathcal{A}$ reaches thermal equilibrium (the state with the largest $S_{a}$), in line with the second law of complexity 8 .
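The maximization above can also be checked numerically in a discrete toy model. The sketch below (the sampled complexity values, the target mean, and the use of SciPy's SLSQP optimizer are illustrative assumptions, not part of the construction in the text) maximizes the Shannon entropy at fixed mean complexity and confirms that the optimizer returns a Gibbs-like weight $P_{i}\propto\mathrm{e}^{-\eta C_{i}}$ with $\eta>0$, so that $S_{a}$ is linear in $\langle C\rangle$ up to an additive constant.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
C = rng.uniform(0.0, 5.0, size=50)     # toy complexities of 50 "geodesics"
C_target = 1.5                         # imposed mean complexity <C>

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},
        {'type': 'eq', 'fun': lambda p: p @ C - C_target})
p0 = np.full_like(C, 1.0 / C.size)
res = minimize(neg_entropy, p0, constraints=cons,
               bounds=[(0.0, 1.0)] * C.size, method='SLSQP')
p = np.clip(res.x, 1e-12, None)

# The maximizer should satisfy log P_i = log N_a - eta * C_i with eta > 0,
# so that S_a = -log N_a + eta * <C>, i.e. S_a is linear in <C>.
slope, intercept = np.polyfit(C, np.log(p), 1)
eta = -slope
print("eta =", eta)
print("S_a =", -neg_entropy(p), " vs  -log N_a + eta*<C> =", -intercept + eta * C_target)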

We now turn to the principle of minimal complexity. Consider a perturbation $\delta P(C(\gamma))$ of the distribution. Because $P(C(\gamma))$ is stationary, $\delta P(C(\gamma))=0$, that is,

\delta P(C(\gamma))=-\eta P(C(\gamma))\delta C(\gamma)=0. (149)

The condition $\delta C=0$ gives rise to the Euler-Lagrange equation, i.e., Eq. (21), which is exactly the requirement of the principle of minimal complexity. Fig. 7 illustrates the core idea.

Figure 7: Only the trajectories with minimal complexity (solid lines) contribute to the path integral; the contribution of a trajectory with large complexity (red dotted line) is exponentially suppressed by the factor $\mathrm{e}^{-\eta C}$ in Eq. (48).

Combining the above discussions, we learn that averaging over the trajectories satisfying the principle of minimal complexity directly produces the maximum average complexity dictated by the second law of complexity. One may object that the first-order variation $\delta C=0$ only identifies trajectories with extremal, not necessarily minimal, complexity. However, since each trajectory is weighted by the factor $\mathrm{e}^{-\eta C}$ in Eq. (78), a trajectory with large complexity contributes negligibly, and the dominant trajectories are precisely those satisfying the principle of minimal complexity. In other words, we do not need to examine the second-order variation to single out the minimal complexity in the average sense.
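A short numerical illustration of this suppression argument (the complexity values and the value of $\eta$ are arbitrary illustrative inputs):

import numpy as np

eta = 1.0
C = np.array([1.0, 1.2, 1.5, 10.0, 50.0])   # toy trajectory complexities
w = np.exp(-eta * C)                         # weights e^{-eta C}, as in Eq. (78)
print(w / w.sum())                           # the C = 50 trajectory contributes ~2e-22

Trajectories whose complexity is far from the minimum are thus effectively absent from the weighted average.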

Appendix D Glossary and some clarifications

We provide here some supplementary explanations to help readers clearly understand the content of this paper.

Notations: The uppercase letters $Y$, $M$, $N$, and $S$ used in this paper label the components of a vector $\gamma\in\mathcal{M}$. $x^{i}$ and $x^{j}$ denote the components of the vector $\vec{x}$. The subscripts $k$, $i$, and $j$ in Section 4.4 and Eq. (24) label the sites $k$, $i$, and $j$ affected by the local gate. In Section 5, the subscript $k$ indicates the $k$-th momentum sector.

Generalized Pauli matrices: The generalized Pauli matrices throughout this paper have the same meaning as in 8 , specifically footnote 5 of 8 .

$k$-local Hamiltonian: A $k$-local Hamiltonian contains only terms acting on $k$ or fewer qubits; for example, Eq. (113) is a 2-local Hamiltonian. A minimal numerical sketch is given below.
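The sketch below builds such a Hamiltonian explicitly for four qubits (the particular nearest-neighbour $ZZ$ couplings and transverse field are an illustrative choice, not the Hamiltonian of Eq. (113)); every term acts on at most two sites, so the Hamiltonian is 2-local.

import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, site, K, op2=None, site2=None):
    """Tensor a one- or two-site operator into the full 2^K-dimensional space."""
    ops = [I2] * K
    ops[site] = op
    if op2 is not None:
        ops[site2] = op2
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

K, Jz, h = 4, 1.0, 0.5
H = sum(-Jz * embed(Z, i, K, Z, i + 1) for i in range(K - 1))  # 2-local ZZ terms
H = H + sum(-h * embed(X, i, K) for i in range(K))             # 1-local X terms
print(H.shape, np.allclose(H, H.conj().T))                     # (16, 16) True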

Irreducible Markov chain: A Markov chain is irreducible if, between any two states, there exists a chain of steps with positive probability 88 .
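A minimal Python sketch of this criterion (the 3-state transition matrix is an arbitrary example, not one appearing in the paper): the chain is irreducible when every state is reachable from every other state through transitions of positive probability.

import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])        # row-stochastic transition matrix

def is_irreducible(P):
    n = len(P)
    step = (P > 0).astype(int)                     # one-step reachability
    reach = np.eye(n, dtype=int) | step
    for _ in range(n):                             # accumulate paths of length <= n
        reach = ((reach @ step) > 0).astype(int) | reach
    return bool(reach.all())

print(is_irreducible(P))               # True: 0 <-> 1 <-> 2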

Relationship between complexity and entropy: This relationship is explored in 8 ; 67 . The content mainly includes the following points:

  • The entropy of the auxiliary system $\mathcal{A}$ is proportional to the ensemble-averaged complexity of the quantum system $\mathcal{Q}$.
  • The physical law of complexity operates on vastly longer time scales than that of entropy, e.g., the recurrence phenomenon.
  • The complexity along the particle trajectory equals the entropy of a black hole in Rindler units if the energy of the particle in system $\mathcal{A}$ is conserved.

Constant $\eta$ and computational free energy $F_{a}$: In Section 3.2.1, we mentioned that $\eta$ can be regarded as the inverse temperature $1/T_{a}$ of system $\mathcal{A}$ or as a positive Lagrange multiplier (Appendix C). Treating $\eta$ as a small inverse temperature, Eq. (59) represents the thermodynamic free energy of the system $\mathcal{A}$. In 29 , Feynman discussed the relationship between the thermodynamic free energy and the free energy obtained through the path integral. To illustrate this point, we assume that $\mathcal{M}=\mathrm{SU}(2)$, which corresponds to a nonrelativistic free particle moving in a three-dimensional (3D) configuration space, and choose the standard inner-product metric instead of the complexity metric for simplicity. The partition function272727The general expression for the thermodynamic partition function of a three-dimensional free particle reads $Z^{\mathrm{th}}=\left((mk_{B}T_{\mathrm{th}})/(2\pi\hbar^{2})\right)^{\frac{3}{2}}$, where $m$, $k_{B}$, and $T_{\mathrm{th}}$ denote the particle mass, the Boltzmann constant, and the thermodynamic temperature, respectively. For simplicity, we set $\hbar=1$ and $k_{B}=1$ in this work. is then directly written as

Z_{a}^{\mathrm{th}}=\left(\frac{T_{a}}{2\pi}\right)^{\frac{3}{2}}. (150)

The metric we chose is the standard inner-product metric, which yields a Euclidean space. Hence, we evaluate the Gaussian integral of Eq. (48), i.e., the path integral in 3D Euclidean space 58 , to obtain

Z_{a}=\left(\frac{1}{2\pi\eta T}\right)^{\frac{3}{2}}, (151)

where $T$ denotes the time interval. If we let $\eta=1/T_{a}$ and set $T=1$, Eq. (151) coincides with $Z_{a}^{\mathrm{th}}$. Accordingly, the computational free energy $F_{a}$ obtained from Eq. (151) is equivalent to the thermodynamic free energy of the system $\mathcal{A}$. Consequently, from Eq. (96), the computational free energy difference $-\Delta F_{a}(T)=F_{a}(0)-F_{a}(T)$ can be interpreted as the maximal average computational work that system $\mathcal{A}$ can make available for extraction within the time interval $T=1$. From this perspective, the temperature of system $\mathcal{A}$ can be determined in several ways 8 . The simplest choice presented in 8 depends only on the number of qubits $K$ and the locality parameter $k$ ($k$-local):

T_{a}\propto\frac{1}{K^{k-1}}. (152)

Note that $T_{a}$ is the temperature of system $\mathcal{A}$, not that of the quantum system $\mathcal{Q}$.
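As a quick numerical cross-check of the identification made above (the sample value of $T_{a}$ is arbitrary, $\hbar=k_{B}=1$ as in the text, and the standard relation $F_{a}=-T_{a}\log Z_{a}$ is assumed for the free energy):

import numpy as np

T_a = 0.7                                   # arbitrary auxiliary-system temperature
eta, T = 1.0 / T_a, 1.0                     # choices made in the text

Z_th = (T_a / (2 * np.pi))**1.5             # Eq. (150)
Z_a = (1.0 / (2 * np.pi * eta * T))**1.5    # Eq. (151)
F_a = -T_a * np.log(Z_a)                    # computational free energy

print(np.isclose(Z_th, Z_a), F_a)           # True, F_a ~ 2.30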

Boltzmann and Gaussian distributions: A Boltzmann distribution with a quadratic Hamiltonian is also a Gaussian distribution when the velocity is regarded as a stochastic variable.
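For a single velocity component governed by the quadratic Hamiltonian $H=\frac{1}{2}mv^{2}$, for instance, the Boltzmann weight reads (a standard identity written here only for clarity, with $k_{B}=1$)

P(v)\,\mathrm{d}v=\sqrt{\frac{m}{2\pi T_{\mathrm{th}}}}\,\exp\left(-\frac{mv^{2}}{2T_{\mathrm{th}}}\right)\mathrm{d}v,

which is a Gaussian of zero mean and variance $T_{\mathrm{th}}/m$.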

Quasi-static limit: In Section 5, $T\to\infty$ corresponds to the reversible limit, which is analogous to the “quasi-static” limit in thermodynamics. When discussing the thermodynamics of the auxiliary system $\mathcal{A}$, this limit can be regarded as the quasi-static limit of system $\mathcal{A}$. Note that such a limit applies only to the classical system $\mathcal{A}$, not to the quantum system $\mathcal{Q}$.

References

  • (1) J. A. Wheeler, Information, Physics, Quantum: The Search for Links, The Proceedings of the 1988 Workshop on Complexity, Entropy, and the Physics of Information, Westview Press, Santa Fe, New Mexico, Boulder, CO, USA (1990).
  • (2) J. M. Maldacena, The large-N limit of superconformal field theories and supergravity, Adv. Theor. Math. Phys. 2, 231 (1998), arXiv:hep-th/9711200.
  • (3) S. S. Gubser, I. R. Klebanov and A. M. Polyakov, Gauge theory correlators from noncritical string theory, Phys. Lett. B 428, 105 (1998), arXiv:hep-th/9802109.
  • (4) E. Witten, Anti-de Sitter space and holography, Adv. Theor. Math. Phys. 2, 253 (1998), arXiv:hep-th/9802150.
  • (5) S. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, Phys. Rev. Lett. 96, 181602 (2006), arXiv:hep-th/0603001.
  • (6) L. Susskind, Computational complexity and black hole horizons, Fortschr. Phys. 64, 24 (2016), arXiv:1402.5674.
  • (7) D. Stanford, L. Susskind, Complexity and shock wave geometries, Phys. Rev. D 90, 126007 (2014), arXiv:1406.2678.
  • (8) L. Susskind, Entanglement is not enough, Fortschr. Phys. 64, 49 (2016), arXiv:1411.0690.
  • (9) A. Brown, D. A. Roberts, L. Susskind, B. Swingle and Ying Zhao, Holographic Complexity Equals Bulk Action?, Phys. Rev. Lett. 116, 191301 (2016).
  • (10) A. Brown, D. A. Roberts, L. Susskind, B. Swingle and Ying Zhao, Complexity, action, and black holes, Phys. Rev. D 93, 086006 (2016), arXiv:1512.04993.
  • (11) W. Chemissany, T. Osborne, Holographic fluctuations and the principle of minimal complexity, J. High Energ. Phys. 12, 055 (2016), arXiv:1605.07768.
  • (12) M. A. Nielsen, A geometric approach to quantum circuit lower bounds, arXiv:quant-ph/0502070 (2005).
  • (13) M. A. Nielsen, M. R. Dowling, M. Gu, and A. C. Doherty, Quantum Computation as Geometry, Science 311, 1133 (2006).
  • (14) M. A. Nielsen, M. R. Dowling, M. Gu, and A. C. Doherty, Optimal control, geometry, and quantum computing, Phys. Rev. A 73, 062323 (2006), arXiv:quant-ph/0603160.
  • (15) M. R. Dowling and M. A. Nielsen, The geometry of quantum computation, arXiv:quant-ph/0701004 (2006).
  • (16) A. R. Brown and L. Susskind, The Complexity Geometry of a Single Qubit, Phys. Rev. D 100, 046020 (2019), arXiv:1903.12621.
  • (17) R. Auzzi, S. Baiguera, G. B. De Luca, A. Legramandi, G. Nardelli and N. Zenoni, Geometry of quantum complexity, Phys. Rev. D 103, 106021 (2021), arXiv:2001.07601.
  • (18) A. R. Brown and L. Susskind, The Second Law of Quantum Complexity, Phys. Rev. D 97, 086015 (2018), arXiv:1701.01107.
  • (19) Wei Sun, Xian-hui Ge, Complexity growth rate, grand potential and partition function, arXiv:1912.00153 (2019).
  • (20) Xian-hui Ge, Bin Wang, Quantum computational complexity, Einstein’s equations and accelerated expansion of the Universe, JCAP. 02, 047 (2018), arXiv:1708.06811.
  • (21) P. Caputa, J. M. Magan, Quantum Computation as Gravity, Phys. Rev. Lett. 122, 231302 (2019), arXiv:1807.04422.
  • (22) C. Jarzynski, Nonequilibrium Equality for Free Energy Differences, Phys. Rev. Lett. 78, 2690 (1997), arXiv:cond-mat/9610209.
  • (23) C. Jarzynski, Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach, Phys. Rev. E 56, 5018 (1997), arXiv:cond-mat/9707325.
  • (24) C. Jarzynski, Microscopic analysis of Clausius-Duhem processes, J. Stat. Phys. 96, 415 (1999), arXiv:cond-mat/9802249.
  • (25) G. E. Crooks, Nonequilibrium Measurements of Free Energy Differences for Microscopically Reversible Markovian Systems, J. Stat. Phys. 90, 1481 (1998).
  • (26) G. E. Crooks, Path-ensemble averages in systems driven far from equilibrium, Phys. Rev. E 61, 2361 (2000), arXiv:cond-mat/9908420.
  • (27) G. Hummer, A. Szabo, Free energy reconstruction from nonequilibrium single-molecule pulling experiments, PNAS. 98(7), 3658 (2001).
  • (28) C. Van den Broeck, M. Esposito, Ensemble and trajectory thermodynamics: A brief introduction, Physica A 418, 6 (2015), arXiv:1403.1777.
  • (29) R. P. Feynman, A. R. Hibbs, Quantum Mechanics and Path integrals, Higher Education Press, Beijing (2015).
  • (30) M. Kac, On distributions of certain Wiener functionals, Amer. Math. Soc. 65, 1 (1949).
  • (31) B. Oeksendal, Stochastic differential equations, Springer, Heidelberg (2000).
  • (32) R. P. Feynman, Space-Time Approach to Non-Relativistic Quantum Mechanics, Rev. Mod. Phys. 20, 367 (1948).
  • (33) D. Minic, M. Pleimling, The Jarzynski identity and the AdS/CFT Duality, Phys. Lett. B 700, 277 (2011), arXiv:1007.3970.
  • (34) N. Y. Halpern, Jarzynski-like equality for the out-of-time-ordered correlator, Phys. Rev. A 95, 012120 (2017), arXiv:1609.00015.
  • (35) N. Y. Halpern, A. J. P. Garner, O. C. O. Dahlsten and V. Vedral, Maximum one-shot dissipated work from Rényi divergences, Phys. Rev. E 97, 052135 (2018).
  • (36) S. S. Chern, W. H. Chen and K. S. Lam, Lectures on Differential Geometry, World Scientific, New Jersey (1999).
  • (37) J. Maldacena, D. Stanford, Remarks on the Sachdev-Ye-Kitaev model, Phys. Rev. D 94, 106002 (2016).
  • (38) J. Maldacena, D. Stanford and Zhenbin Yang, Conformal symmetry and its breaking in two dimensional Nearly Anti-de-Sitter space, PTEP. 2016, 12 (2016), arXiv:1606.01857.
  • (39) E. Witten, An SYK-Like Model Without Disorder, J. Phys. A 52, 474002 (2019), arXiv:1610.09758.
  • (40) Ying-Fei Gu, Xiao-Liang Qi and D. Stanford, Local criticality, diffusion and chaos in generalized Sachdev-Ye-Kitaev models, J. High Energ. Phys. 2017, 125 (2017), arXiv:1609.07832.
  • (41) W. Cottrell, B. Freivogel, D. M. Hofman, S. F. Lokhande, How to Build the Thermofield Double State, J. High Energ. Phys. 02, 058 (2017), arXiv:1811.11528.
  • (42) S. Chapman, J. Eisert, L. Hackl, M. P. Heller, R. Jefferson, H. Marrochio and R. C. Myers, Complexity and entanglement for thermofield double states, SciPost Phys. 6, 034 (2017), arXiv:1810.05151.
  • (43) C. Spengler, M. Huber, B. C. Hiesmayr, Composite parameterization and Haar measure for all unitary and special unitary groups, J. Math. Phys. 53, 013501 (2012), arXiv:1103.3408.
  • (44) A. R. Brown, L. Susskind, Ying Zhao, Quantum Complexity and Negative Curvature, Phys. Rev. D 95, 045010 (2017), arXiv:1608.02612.
  • (45) V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer, Heidelberg (1989).
  • (46) N. Lashkari, D. Stanford, M. Hastings, T. Osborne and P. Hayden, Towards the fast scrambling conjecture, J. High Energ. Phys. 04, 022 (2012), arXiv:1111.6580.
  • (47) Chris G. Gray, Principle of least action, Scholarpedia, 4(12):8291 (2009).
  • (48) R. Zwanzig, Nonequilibrium statistical mechanics, Oxford University Press, New York (2000).
  • (49) K. Jacobs, Stochastic Processes for Physicists: Understanding Noisy System, Cambridge University Press, Cambridge (2010).
  • (50) Z. Schuss, Theory and Applications of Stochastic Processes: An Analytical Approach, Springer, Heidelberg (2010).
  • (51) B. E. Baaquie, Path Integrals and Hamiltonians: Principles and Methods, Cambridge University Press, Cambridge (2014).
  • (52) D. J. Toms, The Schwinger Action Principle and the Feynman Path Integral for Quantum Mechanics in Curved Space, arXiv:hep-th/0411233 (2004).
  • (53) J. H. Van Vleck, The Correspondence Principle in the Statistical Interpretation of Quantum Mechanics, PNAS. 14, 178 (1928).
  • (54) C. Morette, On the Definition and Approximation of Feynman’s Path Integrals, Phys. Rev. 81, 848 (1951).
  • (55) B. S. DeWitt, Dynamical Theory of Groups and Fields, Gordon and Breach, New York (1964).
  • (56) H. S. Ruse, Taylor’s Theorem in the Tensor Calculus, Proc. London Math. Soc. 32, 87 (1931).
  • (57) R. Livi, P. Politi, Nonequilibrium Statistical Physics: A Modern Perspective, Cambridge University Press, Cambridge (2017).
  • (58) R. P. Feynman, Statistical Mechanics: A Set of Lectures, CRC Press, Los Angeles (2017).
  • (59) A. Bernamonti, F. Galli, J. Hernandez, R. C. Myers, Shan-Ming Ruan and J. Simón, First Law of Holographic Complexity, Phys. Rev. Lett. 123, 081601 (2019).
  • (60) A. Bernamonti, F. Galli, J. Hernandez, R. C. Myers, Shan-Ming Ruan and J. Simón, Aspects of The First Law of Complexity, arXiv:2002.05779 (2020).
  • (61) G. Gour, M. P. Müller, V. Narasimhachar, R.W. Spekkens and N. Y. Halpern, The resource theory of informational nonequilibrium in thermodynamics, Phys. Rep. 583, 1 (2015), arXiv:1309.6586.
  • (62) F. G. S. L. Brandao, M. Horodecki, J. Oppenheim, J. M. Renes and R. W. Spekkens, Resource Theory of Quantum States out of Thermal Equilibrium, Phys. Rev. Lett. 111, 250404 (2013).
  • (63) V. Veitch, S. A. H. Mousavian, D. Gottesman and J. Emerson, The resource theory of stabilizer computation, New J. Phys. 16, 013009 (2014).
  • (64) E. Chitambar, G. Gour, Quantum resource theories, Rev. Mod. Phys. 91, 025001 (2019).
  • (65) M. Horodecki, P. Horodecki and J. Oppenheim, Reversible transformations from pure to mixed states and the unique measure of information, Phys. Rev. A 67, 062104 (2003).
  • (66) M. Horodecki, K. Horodecki, P. Horodecki, R. Horodecki, J. Oppenheim, A. Sen(De) and U. Sen, Local Information as a Resource in Distributed Quantum Systems, Phys. Rev. Lett. 90, 100402 (2003).
  • (67) L. Susskind, Three Lectures on Complexity and Black Holes, arXiv:1810.11563 (2018).
  • (68) N. Y. Halpern, N. B. T. Kothakonda, J. Haferkamp, A. Munson, J. Eisert and P. Faist, Resource theory of quantum uncomplexity, arXiv:2110.11371 (2021).
  • (69) L. Susskind and Y. Zhao, Switchbacks and The Bridge to Nowhere, arXiv:1408.2823 (2014).
  • (70) S. Vinjanampathy and J. Anders, Quantum Thermodynamics, Contemporary Phys. DOI: 10.1080/00107514.2016.1201896 (2016) arXiv: 1508.06099.
  • (71) C. Jarzynski, Equalities and Inequalities: Irreversibility and the Second Law of Thermodynamics at the Nanoscale, Annu. Rev. Condens. Matter Phys. 2, 329 (2011).
  • (72) H. B. Callen, T. A. Welton, Irreversibility and Generalized Noise, Phys. Rev. 83, 34 (1951).
  • (73) J. Hermans, Simple Analysis of Noise and Hysteresis in (Slow-Growth) Free Energy Simulations, J. Phys. Chem. 95, 9029 (1991).
  • (74) G. Camilo, Complexity and Floquet dynamics: Non-equilibrium Ising phase transition, Phys. Rev. B 102, 174304 (2020), arXiv:2009.00069.
  • (75) E. Lieb, T. Schultz, D. Mattis, Two soluble models of an antiferromagnetic chain, Ann. Phys. 16, 407 (1961).
  • (76) E. Barouch, B. M. McCoy, M. Dresden, Statistical Mechanics of the XY Model. I, Phys. Rev. A 2, 1075 (1970).
  • (77) M. S. Kalyan, G. A. Prasad, V. S. S. Sastry, K. P. N. Murthy, A Note on Non-equilibrium Work Fluctuations and Equilibrium Free Energies, Physica A 390, 1240 (2011), arXiv:1011.4413.
  • (78) M. Esposito and C. V. den Broeck, Three detailed fluctuation theorems, Phys. Rev. Lett. 104, 090601 (2010).
  • (79) R. Jefferson, R. C. Myers, Circuit complexity in quantum field theory, J. High Energ. Phys. 2017, 107 (2017), arXiv:1707.08570.
  • (80) R. Jefferson, R. C. Myers, Circuit complexity in quantum field theory, J. High Energ. Phys. 2017, 107 (2017), arXiv:1707.08570.
  • (81) Run-Qiu Yang, Yu-Sen An, Chao Niu, Cheng-Yong Zhang, Keun-Young Kim, What kind of "complexity" is dual to holographic complexity, arXiv:2011.14636 (2021).
  • (82) L. Bassman, K. Klymko, N. M. Tubman and Wibe A. de Jong, Computing Free Energies with Fluctuation Relations on Quantum Computers, arXiv:2103.09846 (2021).
  • (83) K. Jacobs, Stochastic Processes for Physicists: Understanding Noisy Systems, Cambridge University Press, Cambridge (2010).
  • (84) G. S. Chirikjian, Stochastic Models, Information theory, and Lie Groups, Springer, Heidelberg (2000).
  • (85) J. Schwinger, The Theory of Quantized Fields. I, Phys. Rev. 82, 914 (1951).
  • (86) J. Schwinger, The Theory of Quantized Fields. II, Phys. Rev. 91, 713 (1953).
  • (87) Qiuping A. Wang, Maximum path information and the principle of least action for chaotic system, Chaos, Solitons & Fractals 23, 1253 (2005), arXiv:cond-mat/0405373.
  • (88) https://brilliant.org/wiki/markov-chains/markov-chain.