
Department of Physics, Stanford University, Stanford, California 94305, USA

Operator Krylov complexity in random matrix theory

Haifeng Tang [email protected]
Abstract

Krylov complexity, as a novel measure of operator complexity under Heisenberg evolution, exhibits many interesting universal behaviors and also bounds many other complexity measures. In this work, we study Krylov complexity $\mathcal{K}(t)$ in Random Matrix Theory (RMT). In the large $N$ limit: (1) At infinite temperature, we analytically show that the Lanczos coefficients $\{b_n\}$ saturate to a constant plateau $\lim\limits_{n\rightarrow\infty}b_n=b$, rendering a linearly growing complexity $\mathcal{K}(t)\sim t$, in contrast to the exponential-in-time growth of chaotic local systems in the thermodynamic limit. After numerically comparing this plateau value $b$ to a large class of chaotic local quantum systems, we find that, up to small fluctuations, it actually bounds the $\{b_n\}$ of chaotic local quantum systems. We therefore conjecture that in chaotic local quantum systems, after the scrambling time, the speed of linear growth of Krylov complexity cannot exceed that in RMT. (2) At low temperature, we analytically show that $b_n$ first grows linearly in $n$, with a slope that saturates the famous chaos bound. After hitting the same plateau $b$, $b_n$ then remains constant. This indicates $\mathcal{K}(t)\sim e^{2\pi t/\beta}$ before the scrambling time $t_*\sim O(\beta\log\beta)$, after which it grows linearly in time with the same speed as at infinite temperature. We finally remark on the effect of finite $N$ corrections.

Keywords:
Krylov complexity, Random Matrix Theory, Scrambling

1 Introduction

Operator complexity describes the phenomenon that a simple operator $O$ becomes complex under Heisenberg evolution $O(t)$ in chaotic local quantum systems. Many complexity measures Roberts2017 ; Cotler2017chaos ; Jefferson2017 ; Roberts2018 ; Yang2018 ; Khan2018 ; Qi2019 ; Lucas2019 ; Balasubramanian2020 ; Balasubramanian2021 make this manifest. An intuitive one is the operator size and OTOC Roberts2018 ; Qi2019 ; Lucas2019 , which counts the average size of the spatial support of an operator written in a basis built from tensor products of simple operators. Since operator size is bounded by the size of the whole system, one would expect the size to cease growing and saturate soon after the operator spreads onto the whole system, at the time scale of the scrambling time $t_*$ Maldacena2016bound . However, as the Heisenberg evolution proceeds, one would expect the operator to keep growing more and more complex, just not in the sense of the size of its spatial support.

Behaviour of Krylov complexity in generic chaotic local systems.

Krylov complexity Parker2019 , denoted $\mathcal{K}(t)$ and proposed as a measure of operator complexity, achieves this goal. Namely, it successfully captures the entire time scale and the different stages of complexity dynamics. In generic chaotic local quantum systems, starting from a simple operator, the dynamics of $\mathcal{K}(t)$ often exhibits three stages:

  • 1.

    Stage one: exponential-in-time growth, $\mathcal{K}(t)\sim e^{2\alpha t}$. This exponential growth extends up to the scrambling time $t_*\sim\log(S)$, where $\mathcal{K}(t)$ reaches a value of order $O(S)$, with $S$ the number of degrees of freedom. The complexity growth in this stage mainly comes from spreading a simple operator over the whole system, and therefore describes the same physics as operator size. The temperature-dependent exponent $\alpha$ is related to, and actually bounds, the Lyapunov exponent of the OTOC Maldecena2016 ; Maldacena2016bound .

  • 2.

    Stage two: linear-in-time growth, $\mathcal{K}(t)\sim vt$. By the scrambling time of stage one, the growth of 'complexity in real space' has finished. In this linear-growth region, $\mathcal{K}(t)$ probes the growth of 'complexity in Hilbert space'. Roughly speaking, $\mathcal{K}(t)$ in this stage characterizes the fact that $O(t)$ explores more and more linearly independent basis directions that it has not occupied before, in an operator vector space as large as $O(e^{2S})$. This stage lasts up to a time scale $O(e^{2S})$, when $\mathcal{K}(t)$ reaches a value of order $O(e^{2S})$ Rabinovici2021 , the dimension of the operator vector space.

  • 3.

    Stage three: remaining constant, $\mathcal{K}(t)\sim\text{Const}$. At the end of the linear-in-time stage, $O(t)$ has explored the edge of the Krylov subspace and occupied as many linearly independent operator basis directions as it can; therefore $\mathcal{K}(t)$ remains constant afterward. This constant value is precisely $\frac{1}{2}K$ Rabinovici2021 , half of the Krylov subspace dimension $K\sim O(e^{2S})$. The prefactor $\frac{1}{2}$ means that $O(t)$ is evenly distributed over the Krylov subspace basis, indicating that it is fully scrambled. There might be a Poincaré recurrence at a time of order $O(e^{e^{2S}})$, corresponding to doubly-non-perturbative physics that is beyond our scope Rabinovici2021 .

Figure 1: Schematic plot of the main result of this work. In (a) and (b), we show the behavior of the Krylov complexity $\mathcal{K}(t)$ and the Lanczos coefficients $\{b_n\}$ in Random Matrix Theory in the large $N$ limit at finite or infinite temperature. The plateau value equals unity because the radius of the Wigner semicircle is normalized to unity. In (c) and (d), we compare the random matrix results with generic chaotic local quantum systems.

Motivation of our work.

In this paper, we mainly focus on the linear-in-time stage, and sometimes on the exponential-in-time stage. Our motivation is that, at the junction of stages one and two, $O(t)$ has just finished its 'scrambling in real space' and begun its 'scrambling in Hilbert space'. Therefore, in stage two, $O(t)$ has completely abandoned any spatial locality structure, or equivalently, the local Hamiltonian $H$ no longer looks local from the perspective of $O(t)$. In this stage, $H$ acts like a chaotic Hamiltonian without spatial locality, resembling a typical random matrix.

This is our main reasoning and motivation for studying Krylov complexity in Random Matrix Theory (RMT), which intuitively should dominate the behavior of complexity growth after the scrambling time. This is consistent with the fact that in stage one, the exponential growth in time crucially relies on the locality of $O(t)$ and $H$ Parker2019 . Without locality, we expect the growth rate to decrease to linear.

Another important motivation is that the linear growth of complexity characterizes the expanding volume of the Einstein-Rosen bridge in the dual holographic gravity susskind2014computational . We expect our RMT analysis to shed light on bulk gravitational systems.

Main results and structure of the paper.

In this paper, we obtain several analytical results for Krylov complexity in a specific random matrix ensemble, the GUE (Gaussian Unitary Ensemble), in the large $N$ limit, thanks to large $N$ factorization.

In section 2, we review the construction of the Krylov basis, which originates from an orthogonalization procedure, the definition of Krylov complexity, and the associated Lanczos coefficients $\{b_n\}$.

In section 3.1, we study the Krylov complexity of an arbitrary traceless initial operator $O$, and we show that if we scale the radius of the Wigner semicircle to unity, the Lanczos coefficients $\{b_n\}$ satisfy:

\lim\limits_{n\rightarrow\infty}\lim\limits_{N\rightarrow\infty}b_{n}=1 (1)

This indicates that in the large $N$ limit, $\mathcal{K}(t)$ in RMT grows linearly in time, in contrast to chaotic local quantum systems in the thermodynamic limit, whose K-complexity exhibits exponential growth (in the thermodynamic limit, the scrambling time $t_*$ goes to infinity, so a chaotic local system only has stage one, without stages two and three). We remark that this does not mean the K-complexity in RMT is smaller than in chaotic local systems. This is because in RMT we usually scale the Hamiltonian such that its eigenenergies are of order $O(1)$, whereas in chaotic local systems the energy spectrum is extensive, of order $O(S)$. Since the energy scale is conjugate to the time scale, we should properly rescale our RMT result.

Indeed, in section 3.2, we numerically compare the Lanczos coefficients $\{b_n\}$ of generic chaotic local systems in stage two with those from properly rescaled RMT, and find that the former are generally of the same order as, and smaller than, the latter, up to fluctuations. This indicates that the stage-two growth in chaotic local systems originates from the onset of random matrix behavior, as claimed in the motivation. Based on these empirical findings from numerics and the theoretical reasoning above, we conjecture that stage-two growth in chaotic local systems is bounded by the rescaled RMT growth. This is schematically summarized in figure 1(c), (d).

Nevertheless, the complexity dynamics of RMT is interesting in its own right. In section 3.3, still working in the large $N$ limit, we move away from infinite temperature and study K-complexity in RMT at finite, and especially low, temperature. We analytically show that $\{b_n\}$ first grows linearly in $n$ with slope $\alpha$ proportional to temperature $T$, until it reaches the plateau value $b_n=1$, the same as in the infinite-temperature case. Through numerical simulations, we fix $\alpha=\pi\beta^{-1}$ ($\beta=T^{-1}$ is the inverse temperature). Translating the behavior of $\{b_n\}$ into the dynamics of K-complexity, we find that $\mathcal{K}(t)$ first grows exponentially with a Lyapunov exponent saturating the chaos bound Maldecena2016 ; Maldacena2016bound ; Parker2019 , and after a scrambling time of order $O(\beta\log\beta)$, $\mathcal{K}(t)$ switches to linear growth in time, with the same velocity as at infinite temperature. This is summarized schematically in figure 1(a), (b).

In section 4 we remark on finite $N$ corrections, and in section 5 we summarize our results and discuss some possible future directions.

Related works.

In references Bhattacharyya2023 ; Erdmenger2023 , the authors studied the 'state' Krylov complexity generated by a random matrix Hamiltonian. Namely, they expand $|\psi(t)\rangle=e^{-iHt}|\psi_0\rangle$ on the Krylov basis constructed from the Lanczos orthogonalization of $\{H^n|\psi_0\rangle\}$. Despite the same name, in our work we study the 'operator' Krylov complexity generated by Heisenberg evolution in RMT, which is a different quantity. In reference Kar2022 , the authors considered an 'energy-refined' version of Krylov complexity and its relation with RMT universality classes. Though bearing similar motivations to ours, they studied different quantities.

2 Review of Krylov complexity

In this section, we briefly review the definition of Krylov complexity and the construction of the Krylov basis Parker2019 . Experts on this topic may skip this section and dive directly into the next one, which contains new results.

Given a Hermitian initial operator $O\in\mathcal{B}(\mathcal{H})$, a bounded operator on a Hilbert space $\mathcal{H}$ of dimension $N$, its Heisenberg evolution is given by $O(t)=e^{iHt}Oe^{-iHt}$, where $H$ is the Hamiltonian governing the time evolution of the quantum system. Since we are interested in the dynamics of $O(t)$, it is convenient to work in the vector space $\mathcal{B}(\mathcal{H})$ of operators, with dimension $N^2$. This is achieved through the operator-to-state mapping $O\rightarrow|O)$.

In order to have a notion of normalized and orthogonal vectors, we need to equip this operator linear space with an inner product structure $(A|B)$, defined as

(A|B)\equiv\begin{cases}\operatorname{tr}[A^{\dagger}B],\ &\beta=0\\ \operatorname{tr}[e^{-\beta H/2}A^{\dagger}e^{-\beta H/2}B],\ &\beta\neq 0\end{cases} (2)

for infinite and finite temperature, respectively. The motivation for inserting the thermal density matrix into the finite-temperature inner product is that it resembles the thermal Wightman two-point function Parker2019 .

The Heisenberg evolution is also mapped to a superoperator, called the Liouvillian $\mathcal{L}$, defined as the commutator with the Hamiltonian: $\mathcal{L}O\equiv[H,O]$. One can easily show that such an inner product renders the Liouvillian a Hermitian superoperator, namely $(A|\mathcal{L}B)=(\mathcal{L}A|B)$. By the Baker-Campbell-Hausdorff formula, the time evolution is expanded in the basis $\{|\mathcal{L}^{n}O)\}$:

|O(t))=e^{i\mathcal{L}t}|O)=\sum\limits_{n=0}^{\infty}\frac{(it)^{n}}{n!}|\mathcal{L}^{n}O) (3)

As we claimed in section 1, the idea of Krylov complexity is that it can quantify the complexity 'beyond real space': the complexity in Hilbert space. Namely, we expect the growth of $\mathcal{K}(t)$ to characterize the process in which $O(t)$ explores more and more linearly independent basis directions in $\mathcal{B}(\mathcal{H})$ that it has not occupied before. Certainly $\{|\mathcal{L}^{n}O)\}$ are linearly independent in general; however, they are neither normalized nor orthogonal. Therefore, to make this idea explicit, we need to perform an orthogonalization procedure on $\{|\mathcal{L}^{n}O)\}$ and generate an orthonormal basis $\{|O_{n})\}$.

This procedure is the Lanczos algorithm. Starting from $|O)$, define a normalized vector $|O_{0})=(O|O)^{-1/2}|O)$. The second step is to define $|O_{1})=b_{1}^{-1}\mathcal{L}|O_{0})$ where $b_{1}=(\mathcal{L}O_{0}|\mathcal{L}O_{0})^{1/2}$. Then inductively define:

\begin{aligned}|A_{n})&=\mathcal{L}|O_{n-1})-b_{n-1}|O_{n-2})\\ b_{n}&=(A_{n}|A_{n})^{1/2}\\ |O_{n})&=b_{n}^{-1}|A_{n})\end{aligned} (4)

The output of this algorithm is an orthonormal basis $\{|O_{n})\}$ satisfying $(O_{n}|O_{m})=\delta_{nm}$ and a set of real, positive coefficients $\{b_{n}\}$ called the Lanczos coefficients. One also notices that since $O$ is Hermitian, $|O_{n})$ is also a Hermitian operator, which can be expressed as a degree-$n$ polynomial of $\mathcal{L}$ acting on $|O)$.
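For concreteness, the recursion (4) can be implemented in a few lines. The following is a minimal numpy sketch (not part of the original derivation) using the infinite-temperature inner product $(A|B)=\operatorname{tr}[A^{\dagger}B]$; the full re-orthogonalization against all previous basis elements is an implementation choice added for numerical stability, and the small random example at the end is purely illustrative.

```python
import numpy as np

def lanczos_b(H, O, n_max, reorth=True):
    """Lanczos coefficients b_n for the Liouvillian L = [H, .] acting on O,
    using the infinite-temperature inner product (A|B) = tr(A^dag B)."""
    inner = lambda A, B: np.trace(A.conj().T @ B)
    liouville = lambda A: H @ A - A @ H

    O0 = O / np.sqrt(inner(O, O).real)            # |O_0)
    basis = [O0]
    bs = []
    prev, b_prev = np.zeros_like(O0), 0.0         # |O_{-1}) = 0 by convention
    for n in range(1, n_max + 1):
        A = liouville(basis[-1]) - b_prev * prev  # |A_n) = L|O_{n-1}) - b_{n-1}|O_{n-2})
        if reorth:                                # optional full re-orthogonalization
            for Q in basis:
                A = A - inner(Q, A) * Q
        b = np.sqrt(inner(A, A).real)
        if b < 1e-12:                             # Krylov space exhausted
            break
        bs.append(b)
        prev, b_prev = basis[-1], b
        basis.append(A / b)
    return np.array(bs)

# illustrative example: a random 4x4 Hermitian H and a traceless diagonal O
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
O = np.diag([1.0, -1.0, 1.0, -1.0]).astype(complex)
print(lanczos_b(H, O, n_max=10))
```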

One also notices that the matrix of the superoperator $\mathcal{L}$ written in the basis $\{|O_{n})\}$ is a tridiagonal real Hermitian matrix with zero diagonal entries and off-diagonal entries equal to $\{b_{n}\}$, namely:

L_{mn}=(O_{m}|\mathcal{L}|O_{n})=\begin{bmatrix}0&b_{1}&0&0&\cdots\\ b_{1}&0&b_{2}&0&\cdots\\ 0&b_{2}&0&b_{3}&\cdots\\ 0&0&b_{3}&0&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix} (5)

We also briefly remark that the dimension of the Krylov space (the span of $\{|O_{n})\}$), denoted $K$, is of the same order as, but slightly smaller than, the dimension of $\mathcal{B}(\mathcal{H})$. This is because the Hamiltonian commutes with any function of itself; in other words, $\{|H^{n})\}$ spans the kernel of $\mathcal{L}$. Therefore, even if the initial operator $|O)$ has nonzero occupation on $\ker(\mathcal{L})$, all other $\{\mathcal{L}^{n}|O),n\geq 1\}$ are perpendicular to $\ker(\mathcal{L})$. Therefore at most we have $K\leq N^{2}-\dim(\ker(\mathcal{L}))+1$. A careful examination shows that Rabinovici2021 $K\leq N^{2}-N+1$. Nevertheless, in this paper we will not probe the physics near the edge of the Krylov space.
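As an aside, this counting can be checked without running the Lanczos algorithm: for a Hermitian $H$, the Krylov dimension equals the number of distinct eigenvalues $\lambda_i-\lambda_j$ of $\mathcal{L}$ on which $|O)$ has nonzero weight, with the whole kernel counting once. A small illustrative sketch (with an arbitrary random $H$ and $O$) is:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (M + M.conj().T) / 2

O = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O = (O + O.conj().T) / 2                 # Hermitian initial operator

lam, U = np.linalg.eigh(H)
Otil = U.conj().T @ O @ U                # O written in the eigenbasis of H
omega = lam[:, None] - lam[None, :]      # eigenvalues of L = [H, .]
freqs = omega[np.abs(Otil) > 1e-12]      # frequencies that carry weight of O
K = len(np.unique(np.round(freqs, 10)))  # distinct active frequencies
print(K, "<=", N * N - N + 1)            # generically saturates N^2 - N + 1
```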

Now we are ready to study the dynamics of $O(t)$ by expanding $|O(t))$ in the basis $\{|O_{n})\}$ with coefficients $\varphi_{n}(t)$:

|O(t))=\sum\limits_{n=0}^{K-1}\varphi_{n}(t)|O_{n}) (6)

Then $\varphi_{n}(t)$ satisfies the following Schrödinger-like equation:

i\partial_{t}\varphi_{n}=-b_{n+1}\varphi_{n+1}-b_{n}\varphi_{n-1} (7)

which describes a single particle initially at the left end that starts hopping on a one-dimensional (semi-infinite, if we imagine $K$ is large) Krylov chain with non-uniform, position-dependent hopping coefficients $\{b_{n}\}$.

The Krylov complexity is then defined as the average position of this wave packet:

\mathcal{K}(t)\equiv\sum\limits_{n=0}^{K-1}n|\varphi_{n}(t)|^{2} (8)

This definition is very intuitive compared with our everyday experience of defining complexity: to handle a complex task (say, building a house), we decompose it into a sequence of simple tasks (say, adding bricks), which we deal with step by step. We then use the number of steps to quantify the overall complexity (say, the number of bricks). Many complexity measures in quantum systems follow this spirit. For instance, in quantum computing we often use circuit complexity to quantify a complex quantum state $|\psi\rangle$, defined as the number of elementary gates needed to assemble a unitary that rotates a simple state $|\psi_0\rangle$ (often a product state with zero entanglement entropy) into $|\psi\rangle$. For our case of Krylov complexity, the simple operation is just the Heisenberg commutator $\mathcal{L}=[H,\cdot]$, and K-complexity counts the average number of operations needed to make a simple $O$ as complex as $O(t)$. This analogy is summarized in table 1.

| | Simple initial object | Simple operation | Complex final object | Complexity measure |
| Circuit complexity | $|\psi_0\rangle$, product state | $\{U_i\}$, set of universal gates | $|\psi\rangle$, highly entangled | number of operations |
| Krylov complexity | $O$, local operator | $\mathcal{L}=[H,\cdot]$, Heisenberg commutator | $O(t)$, highly scrambled | average number of operations |
Table 1: Analogy between circuit complexity (shown above as a state complexity measure, but readily generalized to operator complexity) and Krylov complexity (an operator complexity measure). The universal set of gates $\{U_i\}$ usually consists of one-qubit and two-qubit gates; one choice is all single-qubit rotations together with the CNOT gate.

We also remark on the difference between circuit complexity and Krylov complexity. For the former, the choice of 'simple operation', namely a set of universal one-qubit/two-qubit gates, is ambiguous. For the latter, the choice of the commutator is very natural, since the dynamics is generated by it in the first place.

Going back to $\mathcal{K}(t)$, let us work out the relation between $\{b_{n}\}$ and $\mathcal{K}(t)$ through some intuitive examples. The first example is constant $b_{n}\equiv b$. In this case the hopping coefficient is uniform, so we expect the center of the wave packet to travel at a constant velocity proportional to $b$. Therefore $\mathcal{K}(t)$ grows linearly in time.

The next example of interest is $b_{n}=\alpha n+\gamma$. Before analyzing it, we first take the continuum limit of the wave equation. Define $\phi_{n}(t)=i^{-n}\varphi_{n}(t)$ so that $\phi_{n}(t)$ satisfies the real wave equation $\partial_{t}\phi_{n}(t)=-b_{n+1}\phi_{n+1}+b_{n}\phi_{n-1}$ with initial condition $\phi_{n}(0)=\delta_{n0}$. We define a continuous position variable $x=\epsilon n$, where $\epsilon$ plays the role of a lattice constant. The $\epsilon\rightarrow 0$ limit then yields the following continuum equation Jian2021 :

(\partial_{t}+2\alpha x\partial_{x}+\alpha)\phi(x,t)=0 (9)

Given an initial distribution $\phi_{0}(x)$, we obtain the unique solution:

\phi(x,t)=e^{-\alpha t}\phi_{0}(xe^{-2\alpha t}) (10)

This shows that, up to an overall damping factor $e^{-\alpha t}$ which keeps the normalization of the wavefunction unchanged, the initial wave packet is stretched by a factor $e^{2\alpha t}$. Therefore the average position $\langle x\rangle(t)=e^{2\alpha t}\langle x\rangle(0)$ also grows exponentially, indicating $\mathcal{K}(t)\sim e^{2\alpha t}$.
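To illustrate the two examples above, the sketch below builds the tridiagonal matrix of equation (5), evolves $\varphi_n(t)$ with $e^{iLt}$ as in equations (3) and (7), and evaluates $\mathcal{K}(t)$ from equation (8); the chain length, the times, and the value $\alpha=0.5$ are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def krylov_complexity(bs, ts):
    """K(t) for a particle hopping on the Krylov chain with couplings b_1, b_2, ..."""
    K = len(bs) + 1                              # chain sites n = 0, ..., K-1
    L = np.zeros((K, K))
    for n, b in enumerate(bs, start=1):          # L_{n-1,n} = L_{n,n-1} = b_n
        L[n - 1, n] = L[n, n - 1] = b
    phi0 = np.zeros(K, dtype=complex)
    phi0[0] = 1.0                                # operator starts at |O_0)
    out = []
    for t in ts:
        phi = expm(1j * L * t) @ phi0            # |O(t)) = e^{iLt}|O)
        out.append(np.sum(np.arange(K) * np.abs(phi) ** 2))
    return np.array(out)

ts = np.linspace(0.0, 4.0, 9)
n = np.arange(1, 201)
print(krylov_complexity(np.ones_like(n, dtype=float), ts))  # constant b_n: K(t) grows linearly
print(krylov_complexity(0.5 * n, ts))                       # b_n = alpha*n: K(t) ~ e^{2*alpha*t}
```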

We also notice that $\mathcal{K}(t)$ is fully determined by the two-point auto-correlation function $C(t)\equiv(O|O(t))$. This is because $\mathcal{K}(t)$ is determined by $\{b_{n}\}$, which is in turn determined by the set of moments $\{\mu_{2n}\equiv(O|\mathcal{L}^{2n}|O)\}$, the derivatives of $C(t)$ at $t=0$. It is also worth remarking that the relation between $\{\mu_{2n}\}$ and $\{b_{n}\}$ is highly non-linear Parker2019 :

\mu_{2n}=(L^{2n})_{00}=\sum\limits_{\text{path}}b_{i_{1}}b_{i_{2}}\cdots b_{i_{2n}} (11)

where $L$ is the tridiagonal matrix of $\mathcal{L}$ written in the Krylov basis, defined in equation (5). Here, a Dyck path starts at site $0$ of the Krylov chain and returns to site $0$ after exactly $2n$ steps without ever leaving the chain; the number of such paths is given by the Catalan number $C_{n}=\frac{(2n)!}{(n+1)!n!}$. The inverse relation, from $\{\mu_{2n}\}$ to $\{b_{n}\}$, is also complicated Parker2019 :

b_{1}^{2n}b_{2}^{2(n-1)}\cdots b_{n}^{2}=\operatorname{det}(\mu_{i+j})_{0\leq i,j\leq n} (12)
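As a cross-check of this relation, taking successive ratios of equation (12) gives $b_n^2=D_nD_{n-2}/D_{n-1}^2$ with $D_n\equiv\det(\mu_{i+j})_{0\leq i,j\leq n}$ and $D_{-1}\equiv 1$. The following small numpy sketch (not part of the main derivation, and numerically unstable beyond modest $n$) recovers the constant-$b$ chain from its moments $\mu_{2n}=C_nb^{2n}$ (see equation (37) below):

```python
import numpy as np
from math import comb

def lanczos_from_moments(mu, n_max):
    """b_n from moments mu = [mu_0, mu_1, mu_2, ...] via Hankel determinants:
    b_n^2 = D_n D_{n-2} / D_{n-1}^2, D_n = det(mu_{i+j})_{0<=i,j<=n}, D_{-1} = 1."""
    D = [1.0]                                        # D_{-1}
    for n in range(n_max + 1):
        D.append(np.linalg.det(np.array([[mu[i + j] for j in range(n + 1)]
                                         for i in range(n + 1)])))
    return np.array([np.sqrt(D[n + 1] * D[n - 1] / D[n] ** 2)
                     for n in range(1, n_max + 1)])

# moments of a constant-b chain: mu_{2n} = C_n b^{2n}, odd moments vanish
b = 1.0
mu = [comb(k, k // 2) / (k // 2 + 1) * b ** k if k % 2 == 0 else 0.0 for k in range(16)]
print(lanczos_from_moments(mu, 6))                   # should return b for every n
```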

Given $\{b_{n}\}$, in order to gain some intuition about the scaling behavior of $\{\mu_{2n}\}$ with respect to $n$, which is important for our discussion in later sections, there is an explicit bound on $\mu_{2n}$ if we assume $b_{n}$ is non-decreasing in $n$:

C_{n}(b_{1}^{2}b_{2}^{2}\cdots b_{n}^{2})\geq\mu_{2n}>(b_{1}^{2}b_{2}^{2}\cdots b_{n}^{2}) (13)

This is because $(b_{1}^{2}b_{2}^{2}\cdots b_{n}^{2})$ is the maximal weight among all Dyck paths. For physically relevant cases, $b_{n}$ schematically takes the common form $b_{n}\sim\alpha n^{\delta}$, with $\delta=1,\frac{1}{2},0$ corresponding respectively to Parker2019 : chaotic local systems in stage one; integrable systems in stage one; free systems in stage one or chaotic local systems in stage two. Therefore, taking $n$ to be large but finite, we may apply the Stirling formula to equation (13):

2\delta n\log n+\left(2\log 2+2\log\alpha-2\delta\right)n+O(\log n)\geq\log\mu_{2n}>2\delta n\log n+\left(2\log\alpha-2\delta\right)n+O(\log n) (14)

If we further assume that $\log\mu_{2n}=A_{1}n\log n+A_{2}n+O(\log n)$ with the same scaling, then the leading-order coefficient $A_{1}$ completely fixes $\delta$ and the next-to-leading-order coefficient $A_{2}$ provides a bound on $\alpha$.

Ever since the concept of Krylov complexity was proposed Parker2019 , it has attracted considerable attention from various communities, since it exhibits many universal behaviors in chaotic local systems, described in detail in section 1. This quantity has been studied in many contexts, ranging from condensed matter models focusing on scrambling behaviors Barbón2019 ; Dymarsky2020 ; Tianrui2020 ; Chen2021 ; Noh2021 ; Fabian2022 ; Kim2022 ; Hörnedal2022 ; Bhattacharjee2022 ; Sinong2022 ; Zhongying2022 ; Fan2022 ; Rabinovici2021 ; Rabinovici20221 ; Rabinovici20222 ; Erdmenger2023 ; Kar2022 , to open quantum systems Liu2023 ; Bhattacharya2022 , field theory Dymarsky2021_CFT ; Caputa2021 ; caputa2021geometry ; Adhikari2022 ; ADHIKari2023116263 , and holographic gravity Avdoshkin2020 ; Barbón2020 ; Magán2020 ; MUCK2022115948 ; Banerjee2022 ; Kar2022 ; Jian2021 . We now add one more analytically tractable example, RMT, to this exciting and rapidly developing field.

3 Large N limit of K-complexity in RMT

3.1 Infinite temperature

In this section, we analytically calculate the Krylov complexity of RMT in the large $N$ limit at infinite temperature. Our strategy is to first calculate $\{\mu_{2n}\}$ using RMT techniques. Ideally one would apply equation (12) to get an expression for $\{b_{n}\}$, but this seems hopeless. Instead, we focus on the scaling behavior of $\mu_{2n}$ for large but finite $n$, and then appeal to equation (11) and inequality (14) to estimate the scaling behavior of $\{b_{n}\}$ with respect to $n$.

For the reader's convenience, we first recall some definitions:

\mu_{2n}=(O|\mathcal{L}^{2n}|O),\ \mathcal{L}=[H,\cdot] (15)

where the inner product at infinite temperature is defined as $(A|B)=\operatorname{tr}[A^{\dagger}B]$. We also notice that $\mu_{2n+1}=0$: $\mathcal{L}^{2n+1}O$ is anti-Hermitian, so its trace against the Hermitian matrix $O$ is purely imaginary, and since the moments are real it must vanish. Using the identity:

\mathcal{L}^{2n}O=[H,\cdot]^{2n}O=\sum\limits_{k=0}^{2n}C_{2n}^{k}(-1)^{k}H^{k}OH^{2n-k} (16)

we derive a formal expression for $\mu_{2n}$:

\mu_{2n}=\sum\limits_{k=0}^{2n}C_{2n}^{k}(-1)^{k}\operatorname{tr}[OH^{k}OH^{2n-k}] (17)

For simplicity, we consider the simplest type of RMT, an $N\times N$ random matrix drawn from the GUE (Gaussian Unitary Ensemble). In the GUE, the probability distribution of $H$ is symmetric under any unitary transformation: $P(H)=P(UHU^{\dagger})$. Therefore the moment satisfies $\mu_{2n}(O)=\mu_{2n}(UOU^{\dagger})$ for all $U$, which means that it only depends on the spectrum of $O$.

We can compare with the case of Bhattacharyya2023 ; Erdmenger2023 , where the authors studied state complexity rather than operator complexity generated by a GUE Hamiltonian. In their case, independence of the initial state holds after averaging: $|\psi\rangle\rightarrow U|\psi\rangle$ means all initial states give the same Lanczos coefficients.

To proceed, we use the above-mentioned fact that $P(H)=P(UHU^{\dagger})$, which allows the following replacement:

\mathbb{E}\operatorname{tr}[OH^{k}OH^{2n-k}]=\mathbb{E}\operatorname{tr}[OUH^{k}U^{\dagger}OUH^{2n-k}U^{\dagger}],\ \forall U (18)

where $\mathbb{E}$ denotes the average over the GUE Hamiltonian. Since the above equation is valid for arbitrary $U$, we can keep $H$ fixed and average over $U$ with the Haar measure, using the Weingarten function Roberts2017 ; Cotler2017chaos :

\int dU\ U_{i_{1}j_{1}}U_{i_{2}j_{2}}U^{*}_{i_{3}j_{3}}U^{*}_{i_{4}j_{4}}=\frac{\delta_{i_{1}i_{3}}\delta_{i_{2}i_{4}}\delta_{j_{1}j_{3}}\delta_{j_{2}j_{4}}+\delta_{i_{1}i_{4}}\delta_{i_{2}i_{3}}\delta_{j_{1}j_{4}}\delta_{j_{2}j_{3}}}{N^{2}-1}-\frac{\delta_{i_{1}i_{3}}\delta_{i_{2}i_{4}}\delta_{j_{1}j_{4}}\delta_{j_{2}j_{3}}+\delta_{i_{1}i_{4}}\delta_{i_{2}i_{3}}\delta_{j_{1}j_{3}}\delta_{j_{2}j_{4}}}{N^{3}-N} (19)

Then we arrive at the following useful formula:

\int dU\,\operatorname{tr}[OUAU^{\dagger}OUBU^{\dagger}]=\frac{1}{N^{2}-1}\Big(\operatorname{tr}[O^{2}]\operatorname{tr}[A]\operatorname{tr}[B]+\operatorname{tr}[O]^{2}\operatorname{tr}[AB]-N^{-1}\operatorname{tr}[O]^{2}\operatorname{tr}[A]\operatorname{tr}[B]-N^{-1}\operatorname{tr}[O^{2}]\operatorname{tr}[AB]\Big) (20)

For simplicity we consider a traceless normalized operator with $\operatorname{tr}[O]=0,\ \operatorname{tr}[O^{2}]=(O|O)=1$; then the moment $\mu_{2n}$ is given by:

\begin{aligned}\mathbb{E}\,\mu_{2n}&=\mathbb{E}\,\frac{1}{N^{2}-1}\sum\limits_{k}C_{2n}^{k}(-1)^{k}\Big(\operatorname{tr}[H^{k}]\operatorname{tr}[H^{2n-k}]-N^{-1}\operatorname{tr}[H^{2n}]\Big)\\ &\approx\mathbb{E}\,N^{-2}\sum\limits_{k}C_{2n}^{k}(-1)^{k}\operatorname{tr}[H^{k}]\operatorname{tr}[H^{2n-k}]\\ &=\mathbb{E}\,N^{-2}\sum_{i,j}(\lambda_{i}-\lambda_{j})^{2n}\end{aligned} (21)

where $\lambda_{i}$ are the eigenvalues of $H$. For later convenience, in the second line we only retain the leading-order terms in the large $N$ limit.

As a side remark, the second line of equation (21) can be derived in another way. Instead of insisting that $O$ is a fixed operator, we can take $O$ to be a random Hermitian matrix drawn from a GUE distribution independent of $H$:

\overline{O_{ij}O_{mn}}=N^{-2}\delta_{in}\delta_{jm} (22)

where the variance $N^{-2}$ is determined by the normalization condition:

\overline{(O|O)}=\overline{\operatorname{tr}[O^{2}]}=\sum\limits_{i,j=1}^{N}\overline{O_{ij}O_{ji}}=1 (23)

With the GUE distribution of $O$ we have the identity $\overline{\operatorname{tr}[OAOB]}=N^{-2}\operatorname{tr}[A]\operatorname{tr}[B]$. Applying this to equation (17), we have:

\overline{\mu_{2n}}=N^{-2}\sum\limits_{k}C_{2n}^{k}(-1)^{k}\operatorname{tr}[H^{k}]\operatorname{tr}[H^{2n-k}] (24)

which reproduces the second line of equation (21).

Nevertheless, we will proceed with $O$ a fixed operator. Starting from equation (21), we see that $\mathbb{E}\mu_{2n}$ is identical for any traceless initial operator $O$ and depends only on the spectrum of $H$:

\begin{aligned}\mathbb{E}\,\mu_{2n}&=\mathbb{E}\,N^{-2}\sum\limits_{k}C_{2n}^{k}(-1)^{k}\operatorname{tr}[H^{k}]\operatorname{tr}[H^{2n-k}]\\ &=\mathbb{E}\,N^{-2}\sum\limits_{k,i,j}C_{2n}^{k}(-1)^{k}\lambda_{i}^{k}\lambda_{j}^{2n-k}\\ &=\mathbb{E}\,N^{-2}\sum\limits_{i,j}(\lambda_{i}-\lambda_{j})^{2n}\\ &=N^{-2}\int d\lambda\, d\lambda^{\prime}\,(\lambda-\lambda^{\prime})^{2n}\rho(\lambda,\lambda^{\prime})\end{aligned} (25)

where $\rho(\lambda,\lambda^{\prime})$ is the averaged two-point spectral density of RMT.
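Before proceeding, the third line of equation (25) can be checked by brute force: sample GUE matrices with the normalization $P(H)\propto\exp(-2N\operatorname{tr}H^{2})$ adopted below (so that the semicircle has unit radius) and average the eigenvalue sum. A rough Monte Carlo sketch, with arbitrary sample sizes, is:

```python
import numpy as np

def gue_sample(N, rng):
    """One GUE draw normalized as P(H) ~ exp(-2N tr H^2), i.e. unit semicircle radius."""
    M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (M + M.conj().T) / (4.0 * np.sqrt(N))

rng = np.random.default_rng(1)
N, samples, n_list = 300, 20, np.arange(1, 13)
mu = np.zeros(len(n_list))
for _ in range(samples):
    lam = np.linalg.eigvalsh(gue_sample(N, rng))
    diff = lam[:, None] - lam[None, :]
    mu += np.array([(diff ** (2 * n)).sum() / N**2 for n in n_list]) / samples

print(np.log(mu))
print(np.diff(np.log(mu)))  # successive differences approach 2*log(2) ~ 1.386 from below
```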

Here we shall pause for a moment and make two relevant remarks.

  • 1.

    Order of taking the random average: The order of averaging we take here is the same as when Krylov complexity is studied in the SYK model Maldecena2016 ; Sachdev2015 . There, one first calculates the disorder-averaged two-point auto-correlation function $\mathbb{E}\,C(t)=\mathbb{E}(O|O(t))$. Taking derivatives then gives the disorder-averaged $\mathbb{E}\mu_{2n}$. The Lanczos coefficients $b_{n}=f(\mathbb{E}\mu)$ are obtained subsequently using equation (12), which we schematically denote as $f(\cdot)$. Of course, we could instead directly average $\mathbb{E}b_{n}$; since fluctuations in RMT are controllably small in the large $N$ limit, we expect the order of averaging not to affect the mean value, i.e., $\mathbb{E}f(\mu)=f(\mathbb{E}\mu)$, even though $f(\cdot)$ is a highly nonlinear map. In the right panel of figure 2, we evaluate the statistical variance of $\{b_{n}\}$ from numerical simulation of random matrices, and indeed verify that the variance is suppressed by a positive power of $N^{-1}$. Therefore, for notational simplicity, we will sometimes drop the average symbol $\mathbb{E}$.

  • 2.

    Average over the initial operator $O$: In some recent papers Jafferis20231 ; Jafferis20232 , it was found that if the ensembles of $O$ and $H$ are sampled independently, the averaged $n$-point correlators of $O$ cannot reproduce the known results of RMT for $n\geq 4$. In our case, all data come from the two-point auto-correlation function of $O$, so this possible subtlety does not arise. Nevertheless, to avoid any possible ambiguity, we will proceed with $O$ a fixed operator.

Going back to the main theme, we proceed by introducing some notation and standard results of random matrix theory. The spectral one-point function $\rho(\lambda)$ and two-point function $\rho(\lambda,\lambda^{\prime})$ of the GUE are defined by:

\rho(\lambda)=\mathbb{E}\sum_{i}\delta(\lambda-\lambda_{i})=R_{1}(\lambda) (26)
\rho(\lambda,\lambda^{\prime})=\mathbb{E}\sum_{i,j}\delta(\lambda-\lambda_{i})\delta(\lambda^{\prime}-\lambda_{j})=\delta(\lambda-\lambda^{\prime})R_{1}(\lambda)+R_{2}(\lambda,\lambda^{\prime}) (27)

where $R_{1},R_{2}$ are the one-point and two-point spectral correlators, following the standard notation in RMT mehta2004random . In the large $N$ limit, the one-point function $R_{1}(\lambda)$ is given by the famous Wigner semicircle:

R_{1}(\lambda)=\begin{cases}2\pi^{-1}N\sqrt{1-\lambda^{2}}&\text{for}\ |\lambda|<1\\ 0&\text{for}\ |\lambda|>1\end{cases} (28)

where we scaled the probability distribution of the GUE to $P(H)\propto\exp(-2N\operatorname{tr}H^{2})$ in order to set the radius of the Wigner semicircle to unity.

The spectral two-point function $R_{2}(\lambda,\lambda^{\prime})$ has a disconnected part $R_{1}(\lambda)R_{1}(\lambda^{\prime})$ and a connected part $T_{2}(\lambda,\lambda^{\prime})$, related by:

R_{2}(\lambda,\lambda^{\prime})=R_{1}(\lambda)R_{1}(\lambda^{\prime})-T_{2}(\lambda,\lambda^{\prime}) (29)
Figure 2: Numerical simulation of random matrices from the GUE (Gaussian Unitary Ensemble) with 2000 samples. Left: the average value of $b_{n}$. Right: the log of the variance. We observe that the variance of $b_{n}$ decreases roughly as a power law in $N$. Therefore, in the large $N$ limit the variance is suppressed, which reduces possible Anderson localization due to disorder.

The generic result for $T_{2}(\lambda,\lambda^{\prime})$ is complicated, and it simplifies only when we zoom into the center of the Wigner semicircle within a window whose width is set by the mean level spacing $O(N^{-1})$, i.e., when we take the limit $N\rightarrow+\infty$ with $N\lambda$ and $N\lambda^{\prime}$ held constant:

T_{2}(\lambda,\lambda^{\prime})\approx(2\pi^{-1}N)^{2}\left[\frac{\sin 2N(\lambda-\lambda^{\prime})}{2N(\lambda-\lambda^{\prime})}\right]^{2},\ \text{when}\ N\rightarrow+\infty,\ N\lambda,N\lambda^{\prime}\rightarrow\text{Const} (30)

Plugging everything into the expression, we have:

\mu_{2n}\approx 4\pi^{-2}\int d\lambda\, d\lambda^{\prime}\,(\lambda-\lambda^{\prime})^{2n}\left\{\sqrt{1-\lambda^{2}}\sqrt{1-\lambda^{\prime 2}}-\left[\frac{\sin 2N(\lambda-\lambda^{\prime})}{2N(\lambda-\lambda^{\prime})}\right]^{2}\right\} (31)

We first omit the connected part in the large $N$ limit, so the leading order in large $N$ is:

\begin{aligned}\mu_{2n}&\approx 4\pi^{-2}\int_{-1}^{1}d\lambda\, d\lambda^{\prime}\sqrt{1-\lambda^{2}}\sqrt{1-\lambda^{\prime 2}}(\lambda-\lambda^{\prime})^{2n}\\ &=4\pi^{-2}\int_{0}^{\pi}d\theta\, d\theta^{\prime}\sin^{2}\theta\sin^{2}\theta^{\prime}(\cos\theta-\cos\theta^{\prime})^{2n}\\ &=4\pi^{-2}\sum\limits_{k=0}^{2n}C_{2n}^{k}(-1)^{k}\left[\int_{0}^{\pi}d\theta\sin^{2}\theta\cos^{k}\theta\right]\left[\int_{0}^{\pi}d\theta\sin^{2}\theta\cos^{2n-k}\theta\right]\\ &=4\pi^{-2}\sum\limits_{k=0}^{2n}C_{2n}^{k}(-1)^{k}\left[F_{k}-F_{k+2}\right]\left[F_{2n-k}-F_{2n-k+2}\right]\end{aligned} (32)

where the integral $F_{k}$ is elementary:

F_{k}=\int_{0}^{\pi}d\theta\cos^{k}\theta=\begin{cases}\frac{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{k}{2}+\frac{1}{2}\right)}{\Gamma\left(\frac{k}{2}+1\right)}&\text{for}\ k\ \text{even}\\ 0&\text{for}\ k\ \text{odd}\end{cases} (33)

Plugging in the expression for $F_{k}$, we arrive at the analytical form of $\mu_{2n}$ at leading order in large $N$:

\mu_{2n}=2\pi^{-1}\sum\limits_{m=0}^{n}\frac{(2n)!}{(2m)!(2n-2m)!}\frac{\Gamma\left(m+\frac{1}{2}\right)\Gamma\left(n-m+\frac{1}{2}\right)}{\Gamma\left(m+2\right)\Gamma\left(n-m+2\right)} (34)

It seems hopeless to obtain a closed expression for this summation, so we instead seek the asymptotic behavior of $\mu_{2n}$ for $n\gg 1$. To do this, we appeal to the Stirling formula $\log\Gamma(z)\approx z\log z-z+O(\log z)$:

\begin{aligned}\mu_{2n}&\approx 2\pi^{-1}\sum\limits_{m=0}^{n}\exp\left\{-n\left[2x\log x+2(1-x)\log(1-x)\right]-3\log n-\frac{3}{2}\log x-\frac{3}{2}\log(1-x)\right\}\\ &\approx 2\pi^{-1}n^{-2}\int_{0}^{1}dx\exp\left(-nf(x)+O(\log n)\right),\ \text{where}\ x\equiv\frac{m}{n},\ f(x)\equiv 2x\log x+2(1-x)\log(1-x)\\ &\approx 2\pi^{-1}n^{-2}\int_{-\infty}^{+\infty}dx\exp\left[-nf\left(1/2\right)-\frac{1}{2}nf^{\prime\prime}\left(1/2\right)(x-1/2)^{2}\right]\\ &\approx\exp\left[(2\log 2)n+O(\log n)\right]\end{aligned} (35)

In the first line, we use the Stirling formula; in the second, we approximate the summation by an integral; in the third, we perform the integral using a saddle-point approximation, namely expanding around the minimum of $f(x)$ and performing the Gaussian integral; and in the last line we keep contributions only up to order $O(n)$ in the exponent. This is because the approximation in the first line is already accurate only up to order $n$: to keep terms of order $O(\log n)$ we would need the finer Stirling formula $\log\Gamma(z)\approx z\log z-z-\frac{1}{2}\log z+O(1)$.

To summarize, the scaling behavior of $\mu_{2n}$ is exponential:

\log\mu_{2n}\approx 2n\log 2+O(\log n) (36)
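Equation (36) can be verified directly by evaluating the sum in equation (34) with log-gamma functions; a short sketch (with arbitrarily chosen values of $n$) is:

```python
import numpy as np
from scipy.special import gammaln

def log_mu_2n(n):
    """log of the leading-large-N moment, equation (34), via a log-sum-exp of log-gammas."""
    m = np.arange(n + 1)
    terms = (gammaln(2 * n + 1) - gammaln(2 * m + 1) - gammaln(2 * (n - m) + 1)
             + gammaln(m + 0.5) + gammaln(n - m + 0.5)
             - gammaln(m + 2) - gammaln(n - m + 2))
    tmax = terms.max()
    return np.log(2 / np.pi) + tmax + np.log(np.exp(terms - tmax).sum())

for n in [10, 50, 200, 1000]:
    print(n, log_mu_2n(n) / n)   # approaches 2*log(2) ~ 1.386 as n grows, cf. equation (36)
```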

Next, we notice that this exponential behavior of $\mu_{2n}$ translates into constant behavior $b_{n}\equiv b$ at large $n$. From equation (11), we see that if $b_{n}$ is constant, we only need to count the number of paths:

\mu_{2n}=C_{n}b^{2n}=\frac{(2n)!}{(n+1)!\,n!}b^{2n} (37)

where $C_{n}$ is the Catalan number counting the number of paths Parker2019 . In order to compare with the scaling behavior of $\mu_{2n}$ at large but finite $n$, we again appeal to the Stirling formula:

\log\mu_{2n}\approx 2n\log 2+2n\log b+O(\log n) (38)
\Longrightarrow b=1 (39)

This suggests that $b_{n}\approx 1$ approaches a constant value when $n$ is large!

We confirm this analytical prediction by direct numerical simulation of GUE random matrices, as presented in the left panel of figure 4. Numerically, $b_{n}$ is already close to 1 even at $n=10$. Since there is no scale at all in RMT at large $N$ and infinite temperature, we may well represent $b_{n}=1$ for $n$ larger than some order-one value. This is why we draw the behavior of $b_{n}$ as a horizontal straight line in the schematic plot of figure 1(a).

As we have already pointed out in section 2, a constant $b_{n}$ indicates that in the large $N$ limit, $\mathcal{K}(t)$ in RMT grows linearly in time, in sharp contrast to chaotic quantum systems in the thermodynamic limit, whose K-complexity grows exponentially in time, as mentioned in section 1. We remark that this does not mean the K-complexity in RMT is smaller than that of chaotic local systems. This is because in RMT we usually scale the Hamiltonian such that its eigenenergies are of order $O(1)$, whereas in chaotic local systems the energy is extensive, of order $O(S)$, where $S$ is the number of degrees of freedom. Since the energy scale is conjugate to the time scale, we should properly rescale our RMT result before comparing with chaotic local systems.

We also remark that only the approach to a constant is important, not the specific value of that constant. One can show directly from the Lanczos algorithm with the infinite-temperature inner product that $b_{n}$ is proportional to the overall scaling of the Hamiltonian. So, obtaining $b_{n}\rightarrow 1$ here is simply a consequence of our scaling the radius of the semicircle to unity.

As a side remark, for the complexity to grow linearly on the constant-$b$ plateau, we need the fluctuations of $b_{n}$ to be small enough to avoid Anderson localization on the Krylov chain Rabinovici20221 ; Rabinovici20222 . Luckily, this is satisfied in the large $N$ limit: in the right panel of figure 2 we show that the statistical variance of $\{b_{n}\}$ is suppressed by a positive power of $N^{-1}$.

3.2 Numerically comparing with generic chaotic local systems.

Figure 3: Numerical simulation of the Lanczos coefficients $\{b_{n}\}$ in various chaotic local systems. Within each panel, different colored lines correspond to different initial operators $O$ (normalized and traceless). $b_{n}$ is shown in units of $(E_{\max}-E_{\min})/2$, where $E_{\max}$ and $E_{\min}$ are the maximal and minimal eigenvalues of the Hamiltonian of each panel. For the SYK model in equation (41), we take $J=1$ and $N_{\text{majorana}}=20$; for the PXP model in equation (43), we take $\Omega=1,U=-0.5$ on 16 sites; for the chaotic fermion model in equation (42), we take $t_{1}=1,t_{2}=0.64,V_{1}=0.88,V_{2}=0.76$ on 10 sites; for the Ising model in equation (40), we take $g=1.87,h=1.92$ with 10 sites in 1D and $3\times 3$ sites in 2D.

As explained in section 1, one motivation for studying Krylov complexity in RMT is that for a generic chaotic Hamiltonian with locality, the complexity growth after the scrambling time $t_{*}$ is governed by random matrix behavior. This is because at $t_{*}$, $O(t)$ has just finished its 'scrambling in real space' and begun its 'scrambling in Hilbert space'. In stage two, $O(t)$ has completely abandoned any spatial locality structure, or equivalently, the local Hamiltonian $H$ no longer looks local from the perspective of $O(t)$. Therefore, in this stage, $H$ acts like a chaotic Hamiltonian without spatial locality, resembling a typical random matrix.

In this section, we numerically study the Lanczos coefficients $\{b_{n}\}$ of various initial operators $O$ in several chaotic local systems in various dimensions, including the Ising model in one and two dimensions with both transverse and longitudinal fields, the SYK model Maldecena2016 ; Sachdev2015 , a one-dimensional spinless fermion model with nearest-neighbor and next-nearest-neighbor hopping and density-density interactions, and the PXP model as an effective model describing a blockaded Rydberg atom array Bernien2017 . Their Hamiltonians are explicitly given by:

H_{\text{Ising}}=-\sum\limits_{\langle ij\rangle}\sigma_{i}^{x}\sigma_{j}^{x}+\sum\limits_{i}\left(g\sigma^{z}_{i}+h\sigma_{i}^{x}\right) (40)
H_{\text{SYK}}=\sum\limits_{i<j<k<l}J_{ijkl}\chi_{i}\chi_{j}\chi_{k}\chi_{l},\ \overline{J^{2}_{ijkl}}=\frac{(4-1)!}{N_{m}^{4-1}}J^{2} (41)
H_{\text{fermion}}=-\sum\limits_{i}\left(t_{1}c_{i+1}^{\dagger}c_{i}+t_{2}c_{i+2}^{\dagger}c_{i}+\text{h.c.}\right)+\sum\limits_{i}\left(V_{1}n_{i+1}n_{i}+V_{2}n_{i+2}n_{i}\right) (42)
H_{\text{PXP}}=P\left[\sum\limits_{i}\left(\Omega\sigma_{i}^{x}+U\sigma_{i}^{z}\right)\right]P (43)

In the SYK model, $N_{m}$ denotes the number of Majorana fermions. In the PXP model, $P$ is the projection operator onto the subspace in which no two nearest-neighbor sites are both spin up.

To compare with the RMT result, we should rescale $H_{\text{RMT}}$ to $\eta S\cdot H_{\text{RMT}}$ such that $\eta S\sim O(S)$. The plateau value of $b_{n}$ then rescales to $\eta S$, the same order as the $b_{n}$ from $H_{\text{chaotic}}$. This rescaling also changes the radius of the Wigner semicircle from unity to $\eta S$, the same order as the extensive energy of $H_{\text{chaotic}}$. However, the order-one coefficient $\eta$ is model-dependent and therefore ambiguous. To obtain a model-independent rescaling ansatz, we choose the rescaling factor such that the diameter of the Wigner semicircle equals the width of the spectrum of $H_{\text{chaotic}}$. Therefore, the plateau value of RMT is rescaled to $b=(E_{\max}-E_{\min})/2$.
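As an illustration of this rescaling unit, the following sketch builds the one-dimensional Ising Hamiltonian of equation (40) with the parameters quoted in figure 3 and prints $(E_{\max}-E_{\min})/2$; the 10-site size matches the caption, while open boundary conditions are an assumption made here for concreteness:

```python
import numpy as np

def ising_chain(L, g, h):
    """H = -sum_<ij> sx_i sx_j + sum_i (g*sz_i + h*sx_i), open boundary, equation (40)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    def site_op(op, i):
        ops = [np.eye(2)] * L
        ops[i] = op
        out = ops[0]
        for o in ops[1:]:
            out = np.kron(out, o)
        return out
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H -= site_op(sx, i) @ site_op(sx, i + 1)
    for i in range(L):
        H += g * site_op(sz, i) + h * site_op(sx, i)
    return H

E = np.linalg.eigvalsh(ising_chain(10, g=1.87, h=1.92))
print("rescaled RMT plateau b = (E_max - E_min)/2 =", (E[-1] - E[0]) / 2)
```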

The numerical results in figure 3 show that, up to some erratic fluctuations, $b=(E_{\max}-E_{\min})/2$ is indeed slightly larger than the $\{b_{n}\}$ from $H_{\text{chaotic}}$, with the SYK model almost saturating the RMT result. Therefore, we cautiously conjecture that in any chaotic local quantum system, the infinite-temperature Lanczos coefficients $\{b_{n}\}$ are bounded by $b_{n}\leq\eta_{0}(E_{\max}-E_{\min})/2$, with $\eta_{0}\sim O(1)$ a model-independent constant.

3.3 Finite temperature

At finite temperature, recall that the inner product is defined by the Wightman function:

(A|B)=\operatorname{tr}\left[e^{-\beta H/2}A^{\dagger}e^{-\beta H/2}B\right] (44)

A similar calculation gives the expression for $\mu_{2n}$:

\mu_{2n}=N^{-2}\sum\limits_{i,j}(\lambda_{i}-\lambda_{j})^{2n}e^{-\beta(\lambda_{i}+\lambda_{j})/2} (45)
Figure 4: Left: the $n$-dependence of $\mu_{2n}$, calculated in the infinite $N$ limit by numerically integrating the first line of equation (46). Right: numerical simulation of $b_{n}$ in RMT ($N=1000$ is sufficient to represent the infinite $N$ limit for our purposes), showing that an overall rescaling of the Hamiltonian only proportionally changes the plateau value of $b_{n}$, while keeping the slope of the initial linear-in-$n$ region intact.

Using the same RMT techniques in the large $N$ limit:

\begin{aligned}\mu_{2n}&=4\pi^{-2}\int_{-1}^{1}d\lambda\, d\lambda^{\prime}\sqrt{1-\lambda^{2}}\sqrt{1-\lambda^{\prime 2}}(\lambda-\lambda^{\prime})^{2n}e^{-\beta(\lambda+\lambda^{\prime})/2}\\ &=4\pi^{-2}\sum\limits_{k=0}^{2n}C_{2n}^{k}(-1)^{k}\left[F_{k}-F_{k+2}\right]\left[F_{2n-k}-F_{2n-k+2}\right]\end{aligned} (46)

where the integral $F_{k}$ is now given by derivatives of the modified Bessel function of the first kind:

F_{k}=\int_{0}^{\pi}d\theta\cos^{k}\theta\, e^{-\beta\cos\theta/2}=\pi(-1)^{k}I_{0}^{(k)}(\beta/2) (47)

where $I_{0}(z)=\pi^{-1}\int_{0}^{\pi}d\theta\, e^{-z\cos\theta}$ is the zeroth modified Bessel function of the first kind, and $I_{0}^{(k)}(z)$ denotes its $k$-th derivative. Using the property $I_{0}(z)=I_{0}(-z)$ and the identity $\partial^{n}(fg)=\sum C_{n}^{k}f^{(k)}g^{(n-k)}$, we can perform the summation over $k$ and arrive at a simplified result:

\mu_{2n}=4\partial_{x}^{2n}\left[\left(I_{0}(x)-I_{0}^{(2)}(x)\right)^{2}\right]\bigg|_{x=\beta/2} (48)

Of course, one can use the expansion $I_{0}(z)=\sum\frac{1}{(k!)^{2}}(z/2)^{2k}$ to study the behavior of $\mu_{2n}$ at general temperature, but to get a feeling we study the low-temperature region where $\beta$ is large. We use the asymptotic behavior of the Bessel function:

I_{0}(x)\sim\frac{e^{x}}{\sqrt{2\pi x}},\ x\gg 1 (49)

So we have $(I_{0}(x)-I_{0}^{(2)}(x))^{2}\propto e^{2x}x^{-3}\sim e^{2x}$, where we ignore the $\log\beta$ correction in the exponent. Therefore:

\mu_{2n}\sim\partial_{x}^{2n}\left[e^{2x}\right]\bigg|_{x=\beta/2}=2^{2n}e^{\beta}\Longrightarrow\mu_{2n}\sim\exp(n\cdot 2\log 2) (50)

Interestingly, under this simple approximation, one observes that finite temperature seemingly does not affect the scaling behavior in $n$ compared with infinite temperature. This also implies $b_{n}=1=\text{Const}$, following the same argument as in equations (36) to (39).

Although the above calculation is in the low-temperature limit, we check numerically that the exponential scaling behavior of $\mu_{2n}$ actually appears for finite $\beta$ and large $n$, i.e., when we first take $n$ to infinity while keeping $\beta$ finite; see figure 5.
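This check amounts to evaluating the first line of equation (46) at fixed $\beta$ for increasing $n$ (the same integral behind the left panel of figure 4). A rough grid-quadrature sketch, with an arbitrary grid size and $\beta$, is:

```python
import numpy as np

def mu_2n(n, beta, num=1201):
    """First line of equation (46), evaluated with a simple rectangle rule."""
    lam = np.linspace(-1.0, 1.0, num)
    w = np.sqrt(1.0 - lam**2) * np.exp(-beta * lam / 2)   # semicircle times thermal factor
    diff = lam[:, None] - lam[None, :]
    dlam = lam[1] - lam[0]
    return 4 / np.pi**2 * np.sum(np.outer(w, w) * diff ** (2 * n)) * dlam**2

beta = 10.0
logs = np.array([np.log(mu_2n(n, beta)) for n in range(1, 16)])
print(np.diff(logs))   # creeps up toward 2*log(2) ~ 1.386 once n is large at fixed beta
```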

Figure 5: Numerical simulation of random matrices from the GUE using 1000 samples. Left: we fix the temperature and examine how $b_{n}$ behaves as a function of $N$. In the large $N$ limit, $b_{n}$ first increases linearly with $n$ and then saturates to the plateau $b_{n}=1$. Right: $N=1000$ is sufficiently large to represent the infinite $N$ result for our purposes. We find that in the low-temperature limit, the slope of the initial linear-in-$n$ growth is precisely bounded by, and saturates, the chaos bound.

We also numerically calculate $b_{n}$ by direct simulation of RMT. The result is shown in figure 5. We confirm that the $b_{n}=1$ region appears at large $n$ and finite $\beta$. For finite $n$ but large $\beta$, there is a region where $b_{n}$ increases linearly with $n$.

In order to explain this linearly growing region of $b_{n}$, we directly consider the integral for $\mu_{2n}$:

\mu_{2n}=4\pi^{-2}\int_{0}^{\pi}d\theta\, d\theta^{\prime}\,\sin^{2}\theta\sin^{2}\theta^{\prime}(\cos\theta-\cos\theta^{\prime})^{2n}e^{-\beta(\cos\theta+\cos\theta^{\prime})/2} (51)

We want to evaluate this integral at low temperature ($\beta\rightarrow\infty$). To this end, we change dummy variables to $\varphi=\pi-\theta,\ \varphi^{\prime}=\pi-\theta^{\prime}$ and expand the integrand for small $\varphi,\varphi^{\prime}$:

\mu_{2n}\approx 2\pi^{-2}e^{\beta}2^{-2n}\int_{-\pi}^{\pi}d\varphi\, d\varphi^{\prime}\,\varphi^{2}\varphi^{\prime 2}(\varphi^{2}-\varphi^{\prime 2})^{2n}e^{-\beta(\varphi^{2}+\varphi^{\prime 2})/4} (52)

where we have also extended the integration range from $[0,\pi]$ to $[-\pi,\pi]$ using the inversion symmetry of the integrand. At low temperature, this Gaussian weight is centered at zero with width $O(\beta^{-1/2})$, parametrically small compared to the integration range $[-\pi,\pi]$, so we can safely extend the range to infinity. Changing variables to $\overline{\varphi}\equiv\varphi+\varphi^{\prime},\ \delta\varphi\equiv\varphi-\varphi^{\prime}$, the Gaussian integral evaluates to:

\begin{aligned}\mu_{2n}&\approx\pi^{-1}e^{\beta}2^{-2n+2}\beta^{-1}\left(\left\langle\overline{\varphi}^{2n+4}\right\rangle_{\sigma}\left\langle\delta\varphi^{2n}\right\rangle_{\sigma}+\left\langle\overline{\varphi}^{2n}\right\rangle_{\sigma}\left\langle\delta\varphi^{2n+4}\right\rangle_{\sigma}-2\left\langle\overline{\varphi}^{2n+2}\right\rangle_{\sigma}\left\langle\delta\varphi^{2n+2}\right\rangle_{\sigma}\right)\\ &=\pi^{-1}\beta^{-2n-3}2^{3}e^{\beta}\left(\frac{(2n+2)!}{(n+1)!}\right)^{2}\frac{1}{2n+1}\end{aligned} (53)

where $\langle\cdot\rangle_{\sigma}$ denotes the Gaussian average with variance $\sigma=4\beta^{-1}$. Going from the first line to the second, we use $\langle z^{2n}\rangle_{\sigma}=\sigma^{n}\frac{(2n)!}{2^{n}n!}$.

Using the Stirling formula for large but finite $n$, we find the scaling behavior of $\log\mu_{2n}$ with respect to $n$:

\log\mu_{2n}\approx 2n\log n+(4\log 2-2+2\log\beta^{-1})n+O(\log n) (54)

Assuming the ansatz $b_{n}=\alpha n^{\delta}$ and using inequality (14), we can fix $\delta=1$ and obtain a bound on $\alpha$:

4\beta^{-1}>\alpha>2\beta^{-1} (55)

This is indeed consistent with the numerical simulation in the right panel of figure 5, where we find $b_{n}=\pi\beta^{-1}n$, i.e., $\alpha=\pi\beta^{-1}$, which lies within the bound.
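For completeness, the step from equation (53) to (54) can be checked numerically: evaluating the logarithm of the second line of (53) with log-gamma functions and subtracting the asymptotic form (54) should leave only an $O(\log n)$ remainder, up to a $\beta$-dependent constant. A short sketch, with an arbitrary $\beta$, is:

```python
import numpy as np
from scipy.special import gammaln

def log_mu_lowT(n, beta):
    """log of the low-temperature Gaussian-integral result, second line of equation (53)."""
    return (np.log(8 / np.pi) - (2 * n + 3) * np.log(beta) + beta
            + 2 * (gammaln(2 * n + 3) - gammaln(n + 2)) - np.log(2 * n + 1))

beta = 50.0
for n in [10, 100, 1000, 10000]:
    asym = 2 * n * np.log(n) + (4 * np.log(2) - 2 - 2 * np.log(beta)) * n   # equation (54)
    print(n, log_mu_lowT(n, beta) - asym)   # remainder stays O(log n) plus a beta-dependent constant
```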

From the continuum limit of the operator wave function on the Krylov chain, as analyzed in section 2, we immediately conclude that the Krylov complexity $\mathcal{K}(t)$ at low temperature exhibits the following two-stage growth:

  • 1.

    Exponential growth before the scrambling time: $\mathcal{K}(t)\sim e^{\frac{2\pi}{\beta}t}$. Since $\mathcal{K}(t)$ is interpreted as the average position of the wave function on the semi-infinite Krylov chain, this exponential growth proceeds until the average position reaches the plateau where the linear growth of $b_{n}$ ceases, namely when $\mathcal{K}(t_{*})\sim O(\beta)$. This shows that the scrambling time $t_{*}$ is of order $O(\beta\log\beta)$.

  • 2.

    Linear growth after the scrambling time: $\mathcal{K}(t)\sim t$. When $t\gtrsim t_{*}$, the wave packet travels into the region where $b_{n}$ is constant. It therefore becomes an ordinary wave packet traveling at constant velocity.

We also notice that at low temperature and before the scrambling time, the exponential growth of Krylov complexity saturates the chaos bound on the Lyapunov exponent, $\lambda_{L}=\frac{2\pi}{\beta}$ Maldacena2016bound .

On the one hand, this is natural since the original literature on Krylov complexity Parker2019 has proved that K-complexity can bound many other complexity measures like OTOC and operator size at infinite temperature, and for finite temperature the authors there conjectured the same bound to be true and verified on SYK model.

On the other hand, the initial linear grow of bnb_{n} in RMT should come as a surprise, since the linear growing behavior of bnb_{n}, though generic in chaotic many body systems, actually originates from the locality of Hamiltonian Parker2019 . But clearly a random matrix Hamiltonian lacks locality. One interpretation is that the spectrum of RMT at low energy, which scales as ρ(E)E\rho(E)\propto\sqrt{E} agrees with the spectrum of a quantum black hole in JT gravity (see reference Mertens2023 for a recent review) or SYK model Maldecena2016 in low energy, which scales as ρJT(E)sinh(2π2CE)E\rho_{JT}(E)\propto\sinh(2\pi\sqrt{2CE})\sim\sqrt{E} Mertens2023 . Matching of low energy density of state suggests that RMT has similar behavior with black hole in the view of fast scrambler YasuhiroSekino2008 .

To justify that this low temperature chaos bound is universal, one necessary condition is that the coefficient of exponential growth in time, equivalently the slope of bnb_{n} wrt to nn, should not change when we vary the overall scaling of Hamiltonian while keeping temperature to be fixed. We notice that this is not possible in infinite temperature case. We recall the fact that {bn}\{b_{n}\} is simply the matrix elements of Liouville superoperator =[H,]\mathcal{L}=[H,\cdot] writing in an orthonormal Krylov basis {|On)}\{|O_{n})\}. Therefore, naively we should expect {bn}\{b_{n}\} to be proportional to the overall scaling of HH. This is indeed true in infinite temperature, since the inner product (A|B)=tr[AB](A|B)=\operatorname{tr}[A^{\dagger}B] used to normalize the basis {|On)}\{|O_{n})\} do not depend on HH, therefore one can show by induction that {|On)}\{|O_{n})\} do not depend on the overall scaling of HH. However, this is not the case for finite temperature, where we choose Wightman function to define the inner product (A|B)β=tr[eβH/2AeβH/2B](A|B)_{\beta}=\operatorname{tr}[e^{-\beta H/2}A^{\dagger}e^{-\beta H/2}B], which explicitly depend on HH. As a result the basis {|On)}\{|O_{n})\} depends on the overall scaling of HH in a complicated way and {bn}\{b_{n}\} no longer proportional to it.

From right panel of figure 4 we find that for finite temperature, the constant plateau part of {bn}\{b_{n}\} is indeed affected by overall scaling, resembling infinite temperature behavior, while the initial slope remains intact from overall scaling! This perfectly justified the fact that the saturation of RMT to chaos bound is not a coincidence.

One interesting thing is that for RMT in NN\rightarrow\infty, we obtain a finite scrambling time tRMTO(βlogβ)t_{*}^{\text{RMT}}\sim O(\beta\log\beta), which is in contrast with chaotic local quantum systems tchaoticO(βlogS)t_{*}^{\text{chaotic}}\sim O(\beta\log S), diverging in thermodynamic limit SS\rightarrow\infty. Similar to the situation discussed in section 3.2, the reason is merely the overall scaling of Hamiltonian. From the right panel of figure 5, we can of course prolong the scrambling time by scaling HRMTO(S)HRMTH_{\text{RMT}}\rightarrow O(S)\cdot H_{\text{RMT}}, in order to match the scrambling time. Since SlogNS\sim\log N, this rescaling is equivalent to changing the variance of RMT probability distribution: logP(H)2NtrH2logP(H)2Nlog2NtrH2\log P(H)\sim-2N\operatorname{tr}H^{2}\longrightarrow\log P(H)\sim-\frac{2N}{\log^{2}N}\operatorname{tr}H^{2}, which is strange.

In RMT with the scale controlled by the probability distribution P(H)exp(2NtrH2)P(H)\propto\exp(-2N\operatorname{tr}H^{2}), the energy spectrum is of order O(1)O(1), which remains finite as NN\rightarrow\infty. For chaotic local quantum systems, the width of the energy spectrum diverges, since energy is extensive.

4 Remarks on finite NN corrections

A general remark is that the level repulsion in the connected part has more impact on μ2n\mu_{2n} for larger nn. This is because level repulsion makes the typical |λiλj|\lambda_{i}-\lambda_{j}| larger, and, as nn grows, x2nx^{2n} puts increasingly more weight on the large-xx region.

To observe this, we first collect the exact results for R1(λ)R_{1}(\lambda) and T2(λ,λ)T_{2}(\lambda,\lambda^{\prime}) at finite NN, without any large-NN approximation:

R_{1}(\lambda)=\sqrt{2N}\,K_{N}\left(\sqrt{2N}\lambda,\sqrt{2N}\lambda\right) (56)
T_{2}(\lambda,\lambda^{\prime})=2N\left[K_{N}\left(\sqrt{2N}\lambda,\sqrt{2N}\lambda^{\prime}\right)\right]^{2} (57)

with the kernel given in terms of the normalized harmonic-oscillator wavefunctions φj(x)=(2jj!π)1/2exp(x2/2)Hj(x)\varphi_{j}(x)=(2^{j}j!\sqrt{\pi})^{-1/2}\exp(-x^{2}/2)H_{j}(x):

KN(x,y)=j=0N1φj(x)φj(y)K_{N}(x,y)=\sum\limits_{j=0}^{N-1}\varphi_{j}(x)\varphi_{j}(y) (58)
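As an aside (a standard identity for this kernel, not used directly in what follows), the Christoffel–Darboux formula collapses the sum, which is convenient both for numerics and for extracting the large-N limit:

K_{N}(x,y)=\sqrt{\frac{N}{2}}\,\frac{\varphi_{N}(x)\varphi_{N-1}(y)-\varphi_{N-1}(x)\varphi_{N}(y)}{x-y}.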
Figure 6: Left: the nn-dependence of μ2n\mu_{2n} at finite NN, obtained by numerically integrating equation (59). Right: the same calculation as in the left panel, shown over a larger range of nn.

Then from the exact expression of μ2n\mu_{2n}:

\mu_{2n}=N^{-2}\int d\lambda\,d\lambda^{\prime}\,(\lambda-\lambda^{\prime})^{2n}\left[R_{1}(\lambda)R_{1}(\lambda^{\prime})-T_{2}(\lambda,\lambda^{\prime})\right], (59)

we obtain the numerical results shown in figure 6. We indeed see that as NN becomes large, the result approaches the infinite-NN limit. In the right panel of figure 6, we see that even at finite NN, the large-nn behavior of μ2n\mu_{2n} is still exponential in nn. From equation (11), a simple interpretation is that μ2n\mu_{2n} is dominated by the largest eigenvalue of \mathcal{L}: it grows like that eigenvalue raised to the power 2n, hence exponentially in nn.
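For illustration, here is a minimal numerical sketch of this computation in Python (not the code used to produce figure 6). The GUE kernel and normalization follow equations (56)–(58); the grid size, the cutoff lam_max and the plain Riemann sum are illustrative choices:

import numpy as np
from scipy.special import eval_hermite, factorial

def phi(j, x):
    # Normalized harmonic-oscillator wavefunction phi_j(x), cf. the text above eq. (58).
    norm = (2.0**j * factorial(j) * np.sqrt(np.pi)) ** (-0.5)
    return norm * np.exp(-x**2 / 2.0) * eval_hermite(j, x)

def kernel(N, x, y):
    # K_N(x, y) = sum_{j=0}^{N-1} phi_j(x) phi_j(y), eq. (58), evaluated on grids x and y.
    K = np.zeros((len(x), len(y)))
    for j in range(N):
        K += np.outer(phi(j, x), phi(j, y))
    return K

def moments(N, n_max, lam_max=1.5, num=400):
    # mu_{2n} for n = 1..n_max from eq. (59), via a Riemann sum on a uniform grid.
    lam = np.linspace(-lam_max, lam_max, num)
    dlam = lam[1] - lam[0]
    s = np.sqrt(2.0 * N)
    K = kernel(N, s * lam, s * lam)
    R1 = s * np.diag(K)            # eq. (56)
    T2 = 2.0 * N * K**2            # eq. (57)
    connected = np.outer(R1, R1) - T2
    diff = lam[:, None] - lam[None, :]
    return np.array([np.sum(diff**(2 * n) * connected) * dlam**2 / N**2
                     for n in range(1, n_max + 1)])

print(moments(N=20, n_max=6))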

We can also infer information about {bn}\{b_{n}\} from this exponential growth. Note that this does not contradict the fact that bnb_{n} at large but finite nn, namely nN2n\leq N^{2}, eventually vanishes due to the termination of the Krylov subspace. For example, for N=20N=20 we necessarily have b400=0b_{400}=0, and from equation (12) we only need the moments μ2n\mu_{2n} at most up to μ400\mu_{400}; so the long exponential growth (μ2n\mu_{2n} with 2n>4002n>400) in figure 6 does not provide independent information about {bn}\{b_{n}\}, but only information about bmaxb_{\max}:

\max\limits_{0\leq n\leq N^{2}}\left[\log(b_{n})\right]\sim\lim\limits_{n\rightarrow+\infty}\left[\frac{\log(\mu_{2n})}{2n}\right] (60)

So the decrease and eventual termination Rabinovici2021 at bN2=0b_{N^{2}}=0 are actually hidden in the subleading terms of this exponential growth.

The argument for equation (60) is as follows. For large nn, the dependence of μ2n\mu_{2n} on {bn}\{b_{n}\} takes the form of a path integral:

\mu_{2n}=\sum\limits_{\text{path}}e^{S(\text{path})},\qquad \text{path}=(i_{0},i_{1},\dots,i_{2n}),\ \text{with}\ i_{0}=i_{2n}=0,\ |i_{k}-i_{k-1}|=1 (61)
S(\text{path})=\sum\limits_{k=1}^{2n}\log b_{i_{k}}\approx 2n\int_{0}^{1}dt\,\log\big(b(x(t))\big),\qquad \text{where}\ x(t)\ \text{is the coarse-grained path and}\ t=k/(2n) (62)
\Longrightarrow\ \mu_{2n}\approx\int\mathcal{D}[x(t)]\,\exp\left[2n\int_{0}^{1}dt\,\log\big(b(x(t))\big)\right]\bigg{|}_{x(0)=x(1)=0,\ |\dot{x}(t)|<\epsilon} (63)

For large nn, this path integral is dominated by its saddle point, i.e., the path with the largest action. At finite NN, bnb_{n} first grows and then decreases to zero Parker2019, so there is necessarily a maximum value bmaxb_{\text{max}}; the saddle path therefore 'wanders around the site with bmaxb_{\text{max}}', which gives equation (60).
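Schematically, evaluating the action on a saddle path that stays near the site x_{\max} where b(x_{\max})=b_{\text{max}} gives

\log\mu_{2n}\approx 2n\int_{0}^{1}dt\,\log b(x_{\max})+(\text{subleading})=2n\log b_{\text{max}}+\dots,

which is the content of equation (60).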

In the limit NN\rightarrow\infty, the decreasing slope of bnb_{n} goes to zero as N2N^{-2} Parker2019; then many paths have nearly the same action, and the saddle-point approximation may no longer be valid.

In conclusion, although bnb_{n} and μ2n\mu_{2n} are related by an exact one-to-one mapping, this mapping is complicated. The large-scale structure of bnb_{n} is hidden in the subleading terms and the small-nn behavior of μ2n\mu_{2n}, which is relatively hard to extract analytically.

Averaged Krylov complexity and Spectral Form Factor

As an aside, an interesting observation is that μ2n\mu_{2n} in RMT is just the 2n-th moment of the Liouvillian superoperator. At infinite temperature, from equation (21) we observe:

μ2n=N2i,j(λiλj)2n=N2tr(2n)=N2(1)n(t)2ntr(eit)|t=0\mu_{2n}=N^{-2}\sum\limits_{i,j}(\lambda_{i}-\lambda_{j})^{2n}=N^{-2}\operatorname{tr}(\mathcal{L}^{2n})=N^{-2}(-1)^{n}\left(\frac{\partial}{\partial t}\right)^{2n}\operatorname{tr}(e^{i\mathcal{L}t})\bigg{|}_{t=0} (64)
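As a sanity check of equation (64) (a minimal Monte Carlo sketch, not part of the analysis above), one can also estimate these moments by sampling the GUE with the weight P(H)\propto\exp(-2N\operatorname{tr}H^{2}) used throughout; the matrix size and sample count below are illustrative:

import numpy as np

def sample_gue(N, rng):
    # Draw H from P(H) ~ exp(-2N tr H^2): <H_ii^2> = <|H_ij|^2> = 1/(4N),
    # so the eigenvalue support approaches [-1, 1] as N -> infinity.
    s = 1.0 / (2.0 * np.sqrt(N))
    A = rng.normal(scale=s, size=(N, N)) + 1j * rng.normal(scale=s, size=(N, N))
    return (A + A.conj().T) / 2.0

def mu_2n(eigs, n):
    # mu_{2n} = N^{-2} sum_{i,j} (lambda_i - lambda_j)^{2n}, as in eq. (64).
    diff = eigs[:, None] - eigs[None, :]
    return np.sum(diff ** (2 * n)) / len(eigs) ** 2

rng = np.random.default_rng(0)
N, samples, n_max = 200, 20, 6
acc = np.zeros(n_max)
for _ in range(samples):
    eigs = np.linalg.eigvalsh(sample_gue(N, rng))
    acc += np.array([mu_2n(eigs, n) for n in range(1, n_max + 1)])
print(acc / samples)   # ensemble-averaged mu_2, mu_4, ..., mu_{2 n_max}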

which means that it is related to higher derivatives of the SFF (spectral form factor), or equivalently of the analytically continued partition function. At finite temperature, from equation (45) we obtain:

\mu_{2n}=(-1)^{n}\left(\frac{\partial}{\partial t}\right)^{2n}\left[\sum_{i,j}e^{-\beta(\lambda_{i}+\lambda_{j})/2+it(\lambda_{i}-\lambda_{j})}\right]\bigg{|}_{t=0}=(-1)^{n}\left(\frac{\partial}{\partial t}\right)^{2n}\big{|}Z(\beta/2,t)\big{|}^{2}\bigg{|}_{t=0} (65)
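Here Z(\beta/2,t) is the analytically continued partition function; spelling out the (standard) convention assumed here,

Z(\beta/2,t)=\operatorname{tr}\!\left[e^{-\beta H/2+itH}\right]=\sum_{i}e^{-\beta\lambda_{i}/2+it\lambda_{i}},\qquad \big|Z(\beta/2,t)\big|^{2}=\sum_{i,j}e^{-\beta(\lambda_{i}+\lambda_{j})/2+it(\lambda_{i}-\lambda_{j})}.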

This observation also shows that the following quantities contain the same spectral information in random matrix theory:

𝒦(t){bn}{μ2n}Z(t)\mathcal{K}(t)\longleftrightarrow\{b_{n}\}\longleftrightarrow\{\mu_{2n}\}\longleftrightarrow Z(t) (66)

Therefore, we may expect that the decay-dip-ramp-plateau structure Cotler2017 of the SFF translates into corresponding features in the time dependence of the complexity 𝒦(t)\mathcal{K}(t). A possible obstacle is that the non-smooth behavior of Z(t)Z(t) at the dip time and plateau time, which is highly non-perturbative physics, is hard to extract from its derivatives at t=0t=0, which are perturbative.

5 Conclusions

This study delves into the realm of Random Matrix Theory (RMT) to explore Krylov complexity 𝒦(t)\mathcal{K}(t). In the large NN limit at infinite temperature, we analytically demonstrate the saturation of the Lanczos coefficient bn{b_{n}} to a constant plateau limnbn=b\lim\limits_{n\rightarrow\infty}b_{n}=b, resulting in a linear increase in complexity 𝒦(t)t\mathcal{K}(t)\sim t. Comparison of this plateau value bb with a diverse set of chaotic local quantum systems suggests its bounding effect on bn{b_{n}} in such systems. Hence, we conjecture that, after scrambling time, the linear growth rate of Krylov complexity in chaotic local systems cannot exceed that in RMT.

At low temperature, we analytically establish that bnb_{n} initially grows linearly with nn, with a slope that saturates the renowned chaos bound. Upon reaching the same plateau bb, bnb_{n} stabilizes, indicating 𝒦(t)e2πt/β\mathcal{K}(t)\sim e^{2\pi t/\beta} before the scrambling time tO(βlogβ)t_{*}\sim O(\beta\log\beta). Subsequently, a linear growth phase ensues, mirroring the behavior at infinite temperature.

We conclude by addressing the influence of finite-NN corrections and the relation between K-complexity and the spectral form factor. Our results confirm and strengthen the previous understanding of RMT as a fast scrambler from the viewpoint of complexity growth.

Acknowledgements.
I would like to thank Chang Liu and Hui Zhai for previous collaboration on related topics. I would also like to thank Yiming Chen, Yingfei Gu, Chang Liu, Xiao-Liang Qi, Hui Zhai, Pengfei Zhang, Tian-Gang Zhou, and Yi-Neng Zhou for helpful discussions. I would especially like to thank Professor Hui Zhai, not only for leading me to the topic of Krylov complexity but also for his encouragement and support throughout this project. The majority of this work was completed while I was studying at the Institute for Advanced Study, Tsinghua University, and I thank them for their hospitality.

References