
Can Direct Latent Model Learning Solve
Linear Quadratic Gaussian Control?

Yi Tian ([email protected]), Massachusetts Institute of Technology
Kaiqing Zhang ([email protected]), University of Maryland, College Park
Russ Tedrake ([email protected]), Massachusetts Institute of Technology
Suvrit Sra ([email protected]), Massachusetts Institute of Technology
Abstract

We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model in some latent state space is learned by predicting quantities directly related to planning (e.g., costs) without reconstructing the observations. In particular, we focus on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems. As our main results, we establish finite-sample guarantees of finding a near-optimal state representation function and a near-optimal controller using the directly learned latent model. To the best of our knowledge, despite various empirical successes, prior to this work it was unclear if such a cost-driven latent model learner enjoys finite-sample guarantees. Our work underscores the value of predicting multi-step costs, an idea that is key to our theory, and notably also an idea that is known to be empirically valuable for learning state representations.

1 Introduction

We consider state representation learning for control in partially observable systems, inspired by the recent successes of control from pixels (Hafner et al., 2019b, a). Control from pixels is an everyday task for human beings, but it remains challenging for learning agents. Methods to achieve it generally fall into two main categories: model-free and model-based ones. Model-free methods directly learn a visuomotor policy, also known as direct reinforcement learning (RL) (Sutton and Barto, 2018). On the other hand, model-based methods, also known as indirect RL (Sutton and Barto, 2018), attempt to learn a latent model that is a compact representation of the system, and to synthesize a policy in the latent model. Compared with model-free methods, model-based ones facilitate generalization across tasks and enable efficient planning (Hafner et al., 2020), and are sometimes more sample efficient (Tu and Recht, 2019; Sun et al., 2019; Zhang et al., 2019) than the model-free ones.

In latent-model-based control, the state of the latent model is also referred to as a state representation in the deep RL literature, and the mapping from an observed history to a latent state is referred to as the (state) representation function. Reconstructing the observation often serves as a supervision signal for representation learning for control in the empirical RL literature (Hafner et al., 2019b, a, 2020; Fu et al., 2021; Wang et al., 2022). This is in sharp contrast to model-free methods, where the policy improvement step is completely cost-driven. Reconstructing observations provides a rich supervision signal for learning a task-agnostic world model, but observations are high-dimensional and noisy, so the reconstruction requires an expressive reconstruction function; moreover, latent states learned by reconstruction contain information irrelevant to control, which can distract RL algorithms (Zhang et al., 2020; Fu et al., 2021; Wang et al., 2022). This is especially the case for practical visuomotor control tasks, e.g., robotic manipulation and self-driving cars, where the visual images contain predominantly task-irrelevant objects and backgrounds.

Various empirical attempts (Schrittwieser et al., 2020; Zhang et al., 2020; Okada and Taniguchi, 2021; Deng et al., 2021; Yang et al., 2022) have been made to bypass observation reconstruction. Apart from observation, the interaction involves two other variables: actions (control inputs) and costs. Inverse model methods (Lamb et al., 2022) reconstruct actions; while other methods rely on costs. We argue that since neither the reconstruction function nor the inverse model is used for policy learning, cost-driven state representation learning is the most direct one, in that costs are directly relevant for control purposes. In this paper, we aim to examine the soundness of this methodology in linear quadratic Gaussian (LQG) control, one of the most fundamental partially observable control models.

Parallel to the empirical advances in learning for control from pixels, partially observable linear systems have been extensively studied in the context of learning for dynamic control (Oymak and Ozay, 2019; Simchowitz et al., 2020; Lale et al., 2020, 2021; Zheng et al., 2021; Minasyan et al., 2021; Umenberger et al., 2022). In this context, the representation function is more formally referred to as a filter, the optimal one being the Kalman filter. Most existing model-based learning approaches for LQG control focus on the linear time-invariant (LTI) case, and are based on the idea of learning Markov parameters (Ljung, 1998), the mapping from control inputs to observations. Hence, they need to predict observations by definition. Motivated by the empirical successes in control from pixels, we take a different, cost-driven route, in the hope of avoiding the reconstruction of observations or control inputs.

We focus on finite-horizon time-varying LQG control and address the following question:

Can direct, cost-driven state representation learning provably solve LQG control?

This work answers the question in the affirmative, by establishing finite-sample guarantees for a cost-driven state representation learning method.

Challenges & Our techniques. Overall, to establish finite-sample guarantees, a major technical challenge is to deal with the quadratic regression problem in cost prediction, arising from the inherent quadratic form of the cost function in LQG. Directly solving the problem for the representation function involves quartic optimization; instead, we propose to solve a quadratic regression problem, followed by low-rank approximate factorization. The quadratic regression problem also appears in identifying the cost matrices, which involves concentration for random variables that are fourth powers of Gaussians. We believe these techniques might be of independent interest.
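To make the factorization step concrete, here is a minimal numpy sketch (our own illustrative helper, not the paper's implementation) that recovers a rank-$d_x$ factor $M$ with $M^\top M \approx \hat{N}$ from a noisy symmetric estimate, by truncating the eigendecomposition and dropping negative directions:

```python
import numpy as np

def factor_psd_approx(N_hat, rank):
    """Given a symmetric matrix N_hat (a noisy estimate of M^T M),
    return M of shape (rank, dim) such that M^T M approximates N_hat."""
    eigvals, eigvecs = np.linalg.eigh(N_hat)      # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:rank]        # keep the top-`rank` eigenvalues
    lam = np.clip(eigvals[idx], 0.0, None)        # drop negative directions
    return (eigvecs[:, idx] * np.sqrt(lam)).T     # columns scaled by sqrt(lambda)

rng = np.random.default_rng(0)
M_true = rng.standard_normal((3, 8))              # ground-truth rank-3 factor
N_noisy = M_true.T @ M_true + 1e-3 * rng.standard_normal((8, 8))
N_noisy = (N_noisy + N_noisy.T) / 2               # symmetrize the noise
M_hat = factor_psd_approx(N_noisy, rank=3)
print(np.linalg.norm(M_hat.T @ M_hat - M_true.T @ M_true))  # small residual
```

The recovered factor is only determined up to an orthogonal transformation on the left, which is exactly the ambiguity inherent in a latent state space.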

Moreover, the latent states in the first $\ell$ steps may not be adequately excited (i.e., may not have full-rank covariance), which invalidates the use of most system identification techniques. We instead identify only the relevant directions of the system parameters, and prove that this is sufficient for learning a near-optimal controller by analyzing the state covariance mismatch. This fact is reflected in the separation in the statement of Theorem 1; developing a finite-sample analysis in this case is technically challenging.

Implications. For practitioners, one takeaway from our work is the benefit of predicting multi-step cumulative costs in cost-driven state representation learning. Whereas the cost at a single time step may not be revealing enough of the latent state, the cumulative cost across multiple steps can be. This is an intuitive idea for the control community, given the multi-step nature in the classical definitions of controllability and observability. Its effectiveness has also been empirically observed in MuZero (Schrittwieser et al., 2020) in state representation learning for control, and our work can be viewed as a formal understanding of it in the LQG control setting.

Notation. We use $0$ (resp. $1$) to denote either the scalar or a matrix consisting of all zeros (resp. all ones); we use $I$ to denote an identity matrix. The dimension, when emphasized, is specified in subscripts, e.g., $0_{d_x \times d_x}, 1_{d_x}, I_{d_x}$. Let $\mathbb{I}_S$ denote the indicator function for set $S$, and let $\mathbb{I}_S(A)$ apply it to matrix $A$ elementwise. For a positive semidefinite matrix $P$, we define $\|v\|_P := (v^\top P v)^{1/2}$. Semicolon ";" denotes stacking vectors or matrices vertically. For a collection of $d$-dimensional vectors $(v_t)_{t=i}^j$, let $v_{i:j} := [v_i; v_{i+1}; \ldots; v_j] \in \mathbb{R}^{d(j-i+1)}$ denote their concatenation along the column. For a random variable $\eta$, let $\|\eta\|_{\psi_\beta}$ denote its $\beta$-sub-Weibull norm, a special case of Orlicz norms (Zhang and Wei, 2022), with $\beta = 1, 2$ corresponding to the subexponential and sub-Gaussian norms. $\sigma_i(A), \sigma_{\min}(A), \sigma_{\min}^+(A), \sigma_{\max}(A)$ denote the $i$th largest, minimum, minimum positive, and maximum singular values of matrix $A$, respectively. $\|A\|_2, \|A\|_F, \|A\|_\ast$ denote the operator (induced by vector $2$-norms), Frobenius, and nuclear norms of matrix $A$, respectively. $\langle \cdot, \cdot \rangle_F$ denotes the Frobenius inner product between matrices. The Kronecker, symmetric Kronecker, and Hadamard products between matrices are denoted by "$\otimes$", "$\otimes_s$", and "$\odot$", respectively. $\mathrm{vec}(\cdot)$ and $\mathrm{svec}(\cdot)$ denote flattening a matrix and a symmetric matrix by stacking their columns; $\mathrm{svec}(\cdot)$ does not repeat the off-diagonal elements, but scales them by $\sqrt{2}$ (Schacke, 2004). Let $\mathrm{diag}(\cdot)$ denote the block diagonal matrix formed by the matrices inside the parentheses.
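As a concrete instance of the $\mathrm{svec}(\cdot)$ convention above, a small sketch (hypothetical helper name) verifies that the $\sqrt{2}$ scaling on off-diagonal entries turns the Frobenius inner product of symmetric matrices into an ordinary dot product:

```python
import numpy as np

def svec(S):
    """Flatten symmetric S: keep the diagonal, scale off-diagonals by sqrt(2),
    so that <A, B>_F = svec(A) . svec(B) for symmetric A, B."""
    d = S.shape[0]
    i, j = np.tril_indices(d)          # each off-diagonal pair appears once
    out = S[i, j].copy()
    out[i != j] *= np.sqrt(2.0)
    return out

A = np.array([[1.0, 2.0], [2.0, 5.0]])
B = np.array([[3.0, 0.5], [0.5, 1.0]])
print(svec(A) @ svec(B), np.trace(A @ B))  # the two inner products agree
```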

2 Problem setup

We study a partially observable linear time-varying (LTV) dynamical system

x_{t+1} = A^\ast_t x_t + B^\ast_t u_t + w_t, \qquad y_t = C^\ast_t x_t + v_t, \qquad (2.1)

for $t = 0, 1, \ldots, T-1$, and $y_T = C^\ast_T x_T + v_T$. For all $t \geq 0$, we denote by $x_t \in \mathbb{R}^{d_x}$ the state, $y_t \in \mathbb{R}^{d_y}$ the observation, and $u_t \in \mathbb{R}^{d_u}$ the control input. Process noises $(w_t)_{t=0}^{T-1}$ and observation noises $(v_t)_{t=0}^T$ are sampled independently from $\mathcal{N}(0, \Sigma_{w_t})$ and $\mathcal{N}(0, \Sigma_{v_t})$, respectively. The initial state $x_0$ is sampled from $\mathcal{N}(0, \Sigma_0)$.

Let $\Phi_{t,t_0} = A^\ast_{t-1} A^\ast_{t-2} \cdots A^\ast_{t_0}$ for $t > t_0$ and $\Phi_{t,t} = I$. Then $x_t = \Phi_{t,t_0} x_{t_0} + \sum_{\tau=t_0}^{t-1} \Phi_{t,\tau+1} w_\tau$ under zero control input. To ensure that the state and the cumulative noise do not grow with time, we make the following uniform exponential stability assumption.
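The transition matrices $\Phi_{t,t_0}$ and the zero-control rollout formula above can be checked numerically as follows (an illustrative numpy sketch with made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
T, dx = 6, 3
A = [0.5 * rng.standard_normal((dx, dx)) for _ in range(T)]   # A_t for t = 0..T-1

def Phi(A, t, t0):
    """State transition matrix Phi_{t,t0} = A_{t-1} ... A_{t0}; Phi_{t,t} = I."""
    out = np.eye(A[0].shape[0])
    for tau in range(t0, t):
        out = A[tau] @ out
    return out

# zero-input rollout: x_T = Phi_{T,0} x_0 + sum_tau Phi_{T,tau+1} w_tau
x0 = rng.standard_normal(dx)
w = [0.1 * rng.standard_normal(dx) for _ in range(T)]
x = x0.copy()
for t in range(T):
    x = A[t] @ x + w[t]                                       # step the dynamics
x_closed_form = Phi(A, T, 0) @ x0 + sum(Phi(A, T, tau + 1) @ w[tau] for tau in range(T))
print(np.allclose(x, x_closed_form))  # True
```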

Assumption 1 (Uniform exponential stability).

The system is uniformly exponentially stable, i.e., there exist $\alpha > 0$ and $\rho \in (0, 1)$ such that for any $0 \leq t_0 < t \leq T$, $\|\Phi_{t,t_0}\|_2 \leq \alpha \rho^{t - t_0}$.

Assumption 1 is standard in controlling LTV systems (Zhou and Zhao, 2017; Minasyan et al., 2021), satisfied by a stable LTI system. It essentially says that zero control is a stabilizing policy, and can be potentially relaxed to the assumption of being given a stabilizing policy (Lale et al., 2020; Simchowitz et al., 2020). Specifically, one can excite the system using the stabilizing policy plus Gaussian random noises.

Define the \ell-step controllability matrix

\Phi_{t,\ell}^c := [B^\ast_t, A^\ast_t B^\ast_{t-1}, \ldots, A^\ast_t A^\ast_{t-1} \cdots A^\ast_{t-\ell+2} B^\ast_{t-\ell+1}] \in \mathbb{R}^{d_x \times \ell d_u}

for $\ell - 1 \leq t \leq T-1$, which reduces to the standard controllability matrix $[B, \ldots, A^{\ell-1} B]$ in the LTI setting. We make the following controllability assumption.

Assumption 2 (Controllability).

For all $\ell - 1 \leq t \leq T-1$, $\mathrm{rank}(\Phi_{t,\ell}^c) = d_x$ and $\sigma_{\min}(\Phi_{t,\ell}^c) \geq \nu > 0$.

Under zero noise, we have

x_{t+\ell} = \Phi_{t+\ell,t} x_t + \Phi_{t+\ell-1,\ell}^c [u_{t+\ell-1}; \ldots; u_t],

so Assumption 2 ensures that from any state $x$, there exist control inputs that drive the state to $0$ in $\ell$ steps, and the lower bound $\nu$ ensures that the linear system determining them is well conditioned. We do not assume controllability for $0 \leq t < \ell - 1$, since we do not want to impose the constraint that $d_u > d_x$. This turns out to present a significant challenge for state representation learning, as seen from the separation of the results before and after the first $\ell$ steps in Theorem 1.
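To illustrate, under zero noise one can recover inputs that drive the state to zero by solving the linear system above in the least-squares sense. The sketch below (illustrative numpy code; dimensions are made up) builds the $\ell$-step controllability matrix block by block and solves for the stacked inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
dx, du, ell = 3, 2, 4
A = [rng.standard_normal((dx, dx)) for _ in range(ell)]   # A_{t+j}, j = 0..ell-1
B = [rng.standard_normal((dx, du)) for _ in range(ell)]   # B_{t+j}

# ell-step controllability matrix; its columns multiply the stacked
# inputs [u_{t+ell-1}; ...; u_t], most recent input first
blocks, prod = [], np.eye(dx)
for j in range(ell - 1, -1, -1):          # j indexes the input u_{t+j}
    blocks.append(prod @ B[j])            # coefficient of u_{t+j}
    prod = prod @ A[j]                    # suffix product A_{t+ell-1} ... A_{t+j}
Phi_c = np.hstack(blocks)                 # shape (dx, ell * du)
Phi_state = prod                          # Phi_{t+ell,t} = A_{t+ell-1} ... A_t

assert np.linalg.matrix_rank(Phi_c) == dx  # Assumption 2 holds generically
x_t = rng.standard_normal(dx)
# solve Phi_c [u_{t+ell-1}; ...; u_t] = -Phi_{t+ell,t} x_t
u_stack, *_ = np.linalg.lstsq(Phi_c, -Phi_state @ x_t, rcond=None)
print(np.allclose(Phi_state @ x_t + Phi_c @ u_stack, 0.0))  # state reaches 0
```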

The quadratic cost functions are given by

c_t(x, u) = \|x\|_{Q^\ast_t}^2 + \|u\|_{R^\ast_t}^2, \quad 0 \leq t \leq T-1, \qquad (2.2)

and $c_T(x) = \|x\|_{Q^\ast_T}^2$, where $(Q^\ast_t)_{t=0}^T$ are positive semidefinite matrices and $(R^\ast_t)_{t=0}^{T-1}$ are positive definite matrices. Sometimes the cost is defined as a function of the observation $y$. Since the quadratic form satisfies $y^\top Q^\ast_t y = x^\top (C^\ast_t)^\top Q^\ast_t C^\ast_t x$ in the noiseless case, our analysis still applies if the assumptions on $(Q^\ast_t)_{t=0}^T$ hold for $((C^\ast_t)^\top Q^\ast_t C^\ast_t)_{t=0}^T$ instead.

$(A, C)$ and $(A, Q^{1/2})$ observabilities are standard assumptions in controlling LTI systems. To differentiate from the former, we call the latter cost observability, since it implies that the states are observable through costs. Whereas Markov-parameter-based approaches need to assume $(A, C)$ observability to identify the system, our cost-driven approach does not. On the other hand, we deal with the more difficult problem of having only the scalar cost as the supervision signal (instead of the concatenation of all observations, as in Markov-parameter-based approaches). Nevertheless, the notion of cost observability is still important for our approach, and it is formally defined as follows.

Assumption 3 (Cost observability).

For all $0 \leq t \leq \ell - 1$, $Q^\ast_t \succcurlyeq \mu^2 I$. For all $\ell \leq t \leq T$, there exists $m > 0$ such that the cost observability Gramian (Kailath, 1980) satisfies

\sum_{\tau=t}^{t+k-1} \Phi_{\tau,t}^\top Q^\ast_\tau \Phi_{\tau,t} \succcurlyeq \mu^2 I,

where $k = m \wedge (T - t + 1)$.

This assumption ensures that, in the absence of noise, if we start from a nonzero state, the cumulative cost becomes positive within $m$ steps. The special requirement for $0 \leq t \leq \ell - 1$ results from the lack of controllability at these time steps. The following is a regularity assumption on the system parameters.
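Numerically, the Gramian condition can be checked by accumulating $\Phi_{\tau,t}^\top Q^\ast_\tau \Phi_{\tau,t}$ over the window and inspecting the smallest eigenvalue. The sketch below (our own helper names) also shows why multiple steps matter: a cost matrix that penalizes only one coordinate becomes observable once the dynamics rotate the state into the costed direction:

```python
import numpy as np

def cost_observability_margin(A_list, Q_list, t, k):
    """Smallest eigenvalue of sum_{tau=t}^{t+k-1} Phi_{tau,t}^T Q_tau Phi_{tau,t},
    where Phi_{tau,t} = A_{tau-1} ... A_t and Phi_{t,t} = I."""
    dx = Q_list[0].shape[0]
    gram = np.zeros((dx, dx))
    Phi = np.eye(dx)
    for tau in range(t, t + k):
        gram += Phi.T @ Q_list[tau] @ Phi
        Phi = A_list[tau] @ Phi          # advance to Phi_{tau+1,t}
    return np.linalg.eigvalsh(gram)[0]   # the Gramian's smallest eigenvalue

theta = np.pi / 2                        # a 90-degree rotation
A = [np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])] * 2
Q = [np.diag([1.0, 0.0])] * 2            # only the first coordinate is costed
m1 = cost_observability_margin(A, Q, t=0, k=1)
m2 = cost_observability_margin(A, Q, t=0, k=2)
print(m1, m2)  # ~0 (not cost observable in 1 step), ~1 (observable in 2 steps)
```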

Assumption 4.

$(\sigma_{\min}(\Sigma_{v_t}))_{t=0}^T$ are uniformly lower bounded by some $\sigma_v > 0$. The operator norms of all matrices in the problem definition are uniformly upper bounded, including $(A^\ast_t, B^\ast_t, R^\ast_t, \Sigma_{w_t})_{t=0}^{T-1}$ and $(C^\ast_t, Q^\ast_t, \Sigma_{v_t})_{t=0}^T$. In other words, they are all $\mathcal{O}(1)$.

Let $h_t := [y_{0:t}; u_{0:(t-1)}] \in \mathbb{R}^{(t+1)d_y + t d_u}$ denote the available history before deciding control $u_t$. A policy $\pi = (\pi_t : h_t \mapsto u_t)_{t=0}^{T-1}$ determines at time $t$ a control input $u_t$ based on history $h_t$. With a slight abuse of notation, let $c_t := c_t(x_t, u_t)$ for $0 \leq t \leq T-1$ and $c_T := c_T(x_T)$ denote the cost at each time step. Then, $J^\pi := \mathbb{E}^\pi[\sum_{t=0}^T c_t]$ is the expected cumulative cost under policy $\pi$, where the expectation is taken over the randomness in the process noises, observation noises, and controls (if policy $\pi$ is stochastic). The objective of LQG control is to find a policy $\pi$ such that $J^\pi$ is minimized.

If the system parameters $((A^\ast_t, B^\ast_t, R^\ast_t)_{t=0}^{T-1}, (C^\ast_t, Q^\ast_t)_{t=0}^T)$ are known, the optimal control is obtained by combining the Kalman filter $z^\ast_0 = L^\ast_0 y_0$,

z^\ast_{t+1} = A^\ast_t z^\ast_t + B^\ast_t u_t + L^\ast_{t+1}\big(y_{t+1} - C^\ast_{t+1}(A^\ast_t z^\ast_t + B^\ast_t u_t)\big)

for $0 \leq t \leq T-1$, with the optimal feedback control gains $(K^\ast_t)_{t=0}^{T-1}$ of the linear quadratic regulator (LQR), such that $u^\ast_t = K^\ast_t z^\ast_t$, where $(L^\ast_t)_{t=0}^T$ are the Kalman gains; this is known as the separation principle (Åström, 2012). The Kalman gains and optimal feedback control gains are given by

L^\ast_t = S^\ast_t (C^\ast_t)^\top \big(C^\ast_t S^\ast_t (C^\ast_t)^\top + \Sigma_{v_t}\big)^{-1},
K^\ast_t = -\big((B^\ast_t)^\top P^\ast_{t+1} B^\ast_t + R^\ast_t\big)^{-1} (B^\ast_t)^\top P^\ast_{t+1} A^\ast_t,

where $S^\ast_t$ and $P^\ast_t$ are determined by their corresponding Riccati difference equations (RDEs):

S^\ast_{t+1} = A^\ast_t \big(S^\ast_t - S^\ast_t (C^\ast_t)^\top (C^\ast_t S^\ast_t (C^\ast_t)^\top + \Sigma_{v_t})^{-1} C^\ast_t S^\ast_t\big)(A^\ast_t)^\top + \Sigma_{w_t}, \qquad (2.3)
P^\ast_t = (A^\ast_t)^\top \big(P^\ast_{t+1} - P^\ast_{t+1} B^\ast_t ((B^\ast_t)^\top P^\ast_{t+1} B^\ast_t + R^\ast_t)^{-1} (B^\ast_t)^\top P^\ast_{t+1}\big) A^\ast_t + Q^\ast_t, \qquad (2.4)

with $S^\ast_0 = \Sigma_0$ and $P^\ast_T = Q^\ast_T$.
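The recursions (2.3)-(2.4) translate directly into code. Below is a self-contained sketch for a time-invariant instance (the dimensions and matrices are our own assumptions, not from the paper): the filter RDE runs forward from $\Sigma_0$ and the control RDE runs backward from $Q$.

```python
import numpy as np

def lqg_gains(A, B, C, Q, R, Sigma_w, Sigma_v, Sigma_0, T):
    """Time-invariant matrices for simplicity; returns Kalman gains L_0..L_T
    (forward RDE (2.3)) and feedback gains K_0..K_{T-1} (backward RDE (2.4))."""
    S, L = Sigma_0, []
    for t in range(T + 1):
        innov_cov = C @ S @ C.T + Sigma_v
        L.append(S @ C.T @ np.linalg.inv(innov_cov))          # Kalman gain L_t
        if t < T:                                             # forward Riccati step
            S = A @ (S - S @ C.T @ np.linalg.inv(innov_cov) @ C @ S) @ A.T + Sigma_w
    P, K = Q, [None] * T
    for t in range(T - 1, -1, -1):                            # backward Riccati step
        G = np.linalg.inv(B.T @ P @ B + R)
        K[t] = -G @ B.T @ P @ A                               # feedback gain K_t
        P = A.T @ (P - P @ B @ G @ B.T @ P) @ A + Q
    return L, K

dx, du, dy = 2, 1, 1
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]); Q = np.eye(dx); R = np.eye(du)
L, K = lqg_gains(A, B, C, Q, R, 0.1 * np.eye(dx), 0.1 * np.eye(dy), np.eye(dx), T=10)
print(len(L), len(K))  # 11 10: gains L_0..L_10 and K_0..K_9
```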

We consider data-driven control in the partially observable LTV system (2.1) with unknown cost matrices $(Q^\ast_t)_{t=0}^T$. For simplicity, we assume $(R^\ast_t)_{t=0}^{T-1}$ are known, though our approach can be readily generalized to the case without knowing them; one can identify them in the quadratic regression (3.3).

2.1 Latent model of LQG

Under the Kalman filter, the observation prediction error $i_{t+1} := y_{t+1} - C^\ast_{t+1}(A^\ast_t z^\ast_t + B^\ast_t u_t)$ is called an innovation. It is known that $i_{t+1}$ is independent of the history $h_t$ and that $(i_t)_{t=1}^T$ are mutually independent (Bertsekas, 2012).

We are now ready to present the following proposition, which represents the system in terms of the state estimates produced by the Kalman filter; we shall refer to this representation as the latent model.

Proposition 1.

Let $(z^\ast_t)_{t=0}^T$ be the state estimates given by the Kalman filter. Then,

z^\ast_{t+1} = A^\ast_t z^\ast_t + B^\ast_t u_t + L^\ast_{t+1} i_{t+1},

where $L^\ast_{t+1} i_{t+1}$ is independent of $z^\ast_t$ and $u_t$; i.e., the state estimates follow the same linear dynamics as the underlying state, with noises $L^\ast_{t+1} i_{t+1}$. The cost at step $t$ can then be reformulated as a function of the state estimates:

c_t = \|z^\ast_t\|_{Q^\ast_t}^2 + \|u_t\|_{R^\ast_t}^2 + b_t + \gamma_t + \eta_t,

where $b_t > 0$ is a problem-dependent constant, and $\gamma_t = \|z^\ast_t - x_t\|_{Q^\ast_t}^2 - b_t$ and $\eta_t = 2\langle z^\ast_t, x_t - z^\ast_t \rangle_{Q^\ast_t}$ are both zero-mean subexponential random variables. Under Assumptions 1 and 4, $b_t = \mathcal{O}(1)$ and $\|\gamma_t\|_{\psi_1} = \mathcal{O}(d_x^{1/2})$; moreover, if control $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ for $0 \leq t \leq T-1$, then $\|\eta_t\|_{\psi_1} = \mathcal{O}(d_x^{1/2})$.

Proof.

By the property of the Kalman filter, $z^\ast_t = \mathbb{E}[x_t \mid y_{0:t}, u_{0:(t-1)}]$ is a function of the past history $(y_{0:t}, u_{0:(t-1)})$. The control $u_t$ is a function of the past history $(y_{0:t}, u_{0:(t-1)})$ and some independent random variables. Since $i_{t+1}$ is independent of the past history $(y_{0:t}, u_{0:(t-1)})$, it is independent of $z^\ast_t$ and $u_t$. For the cost function,

c_t = \|z^\ast_t\|_{Q^\ast_t}^2 + \|u_t\|_{R^\ast_t}^2 + \|z^\ast_t - x_t\|_{Q^\ast_t}^2 + 2\langle z^\ast_t, x_t - z^\ast_t \rangle_{Q^\ast_t}.

Let $b_t = \mathbb{E}[\|x_t - z^\ast_t\|_{Q^\ast_t}^2]$ be a constant that depends on the system parameters $(A^\ast_t, B^\ast_t, \Sigma_{w_t})_{t=0}^{T-1}$, $(C^\ast_t, \Sigma_{v_t})_{t=0}^T$, and $\Sigma_0$. Then the random variable $\gamma_t := \|z^\ast_t - x_t\|_{Q^\ast_t}^2 - b_t$ has zero mean. Since $x_t - z^\ast_t$ is Gaussian, its squared norm is subexponential. Since $z^\ast_t$ and $x_t - z^\ast_t$ are independent zero-mean Gaussian random vectors (Bertsekas, 2012), their inner product $\eta_t$ is a zero-mean subexponential random variable.

If the system is uniformly exponentially stable (Assumption 1) and the system parameters are regular (cf. Assumption 4), then $(S^\ast_t)_{t=0}^T$ given by RDE (2.3) have operator norms bounded in terms of the system parameters $(A^\ast_t, B^\ast_t, C^\ast_t, \Sigma_{w_t})_{t=0}^{T-1}$, $(\Sigma_{v_t})_{t=0}^T$, and $\Sigma_0$ (Zhang and Zhang, 2021). Since $S^\ast_t = \mathrm{Cov}(x_t - z^\ast_t)$, we have $\|\gamma_t\|_{\psi_1} = \mathcal{O}(d_x^{1/2})$ by Lemma 7. By Assumption 1, if we apply zero control to the system, then $\|\mathrm{Cov}(z^\ast_t)\|_2 = \mathcal{O}(1)$. Again by Lemma 7, $\eta_t = 2\langle z^\ast_t, x_t - z^\ast_t \rangle_{Q^\ast_t}$ satisfies $\|\eta_t\|_{\psi_1} = \mathcal{O}(d_x^{1/2})$. ∎

Proposition 1 states that: 1) the dynamics of the state estimates produced by the Kalman filter remain the same as those of the original system up to noises, determined by $(A^\ast_t, B^\ast_t)_{t=0}^{T-1}$; 2) the costs (of the latent model) are still determined by $(Q^\ast_t)_{t=0}^T$ and $(R^\ast_t)_{t=0}^{T-1}$, up to constants and noises. Hence, a latent model can be parameterized by $((A_t, B_t)_{t=0}^{T-1}, (Q_t)_{t=0}^T)$ (recall that we assume $(R^\ast_t)_{t=0}^{T-1}$ are known for convenience). Note that the observation matrices $(C^\ast_t)_{t=0}^T$ are not involved.

Now let us take a closer look at the state representation function. The Kalman filter can be written as $z^\ast_{t+1} = \overline{A}^\ast_t z^\ast_t + \overline{B}^\ast_t u_t + L^\ast_{t+1} y_{t+1}$, where $\overline{A}^\ast_t = (I - L^\ast_{t+1} C^\ast_{t+1}) A^\ast_t$ and $\overline{B}^\ast_t = (I - L^\ast_{t+1} C^\ast_{t+1}) B^\ast_t$. For $0 \leq t \leq T$, unrolling the recursion gives

z^\ast_t = \overline{A}^\ast_{t-1} z^\ast_{t-1} + \overline{B}^\ast_{t-1} u_{t-1} + L^\ast_t y_t
        = [\overline{A}^\ast_{t-1} \overline{A}^\ast_{t-2} \cdots \overline{A}^\ast_0 L^\ast_0, \ldots, L^\ast_t][y_0; \ldots; y_t] + [\overline{A}^\ast_{t-1} \overline{A}^\ast_{t-2} \cdots \overline{A}^\ast_1 \overline{B}^\ast_0, \ldots, \overline{B}^\ast_{t-1}][u_0; \ldots; u_{t-1}]
        =: M^\ast_t [y_{0:t}; u_{0:(t-1)}],

where $M^\ast_t \in \mathbb{R}^{d_x \times ((t+1)d_y + t d_u)}$. This means the optimal state representation function is linear in the history of observations and controls. A state representation function can then be parameterized by matrices $(M_t)_{t=0}^T$, and the latent state at step $t$ is given by $z_t = M_t h_t$.
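The unrolling above can be implemented recursively: given $M^\ast_{t-1}$, the coefficients on past observations and controls are left-multiplied by $\overline{A}^\ast_{t-1}$, and the new blocks $L^\ast_t$ and $\overline{B}^\ast_{t-1}$ are appended. An illustrative sketch with time-invariant $\overline{A}, \overline{B}, L$ (made-up values):

```python
import numpy as np

def build_M(Abar, Bbar, L, T):
    """M_t with z_t = M_t [y_0; ...; y_t; u_0; ...; u_{t-1}],
    from the recursion z_{t+1} = Abar z_t + Bbar u_t + L y_{t+1}, z_0 = L y_0."""
    dx, dy = L.shape
    Ms = [L.copy()]                                  # M_0 = L
    for t in range(1, T + 1):
        prev = Ms[-1]
        Y, U = prev[:, : t * dy], prev[:, t * dy :]  # y-part / u-part of M_{t-1}
        Ms.append(np.hstack([Abar @ Y, L, Abar @ U, Bbar]))
    return Ms

dx, dy, du, T = 2, 1, 1, 3
rng = np.random.default_rng(3)
Abar = 0.5 * rng.standard_normal((dx, dx))
Bbar, L = rng.standard_normal((dx, du)), rng.standard_normal((dx, dy))
M = build_M(Abar, Bbar, L, T)
print([m.shape for m in M])  # [(2, 1), (2, 3), (2, 5), (2, 7)]
```

The $t$-th matrix has $(t+1)d_y + t d_u$ columns, matching the dimension of $h_t$.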

Overall, a policy $\pi$ is a combination of a state representation function $(M_t)_{t=0}^{T-1}$ ($M_T$ is not needed) and feedback gains $(K_t)_{t=0}^{T-1}$ in the latent model; in this case, we write $\pi = (M_t, K_t)_{t=0}^{T-1}$ as the composition of the two.

3 Methodology: cost-driven state representation learning

State representation learning involves history data that contain samples of three variables: observations, control inputs, and costs. Each of them can potentially serve as a supervision signal, and each defines a type of state representation learning algorithm. We summarize our categorization of the methods in the literature as follows.

  • Predicting observations defines the class of observation-reconstruction-based methods, including methods based on Markov parameters (mapping from control actions to observations) in linear systems (Lale et al., 2021; Zheng et al., 2021) and methods that learn a mapping from states to observations in more complex systems (Ha and Schmidhuber, 2018; Hafner et al., 2019b, a). This type of method tends to recover all state components.

  • Predicting actions defines the class of inverse model methods, where the control is predicted from states across different time steps (Mhammedi et al., 2020; Frandsen et al., 2022; Lamb et al., 2022). This type of method tends to recover the control-relevant state components.

  • Predicting (cumulative) costs defines the class of cost-driven state representation learning methods (Zhang et al., 2020; Schrittwieser et al., 2020; Yang et al., 2022). This type of method tends to recover the state components relevant to the cost.

Our method falls into the cost-driven category, which is more direct than the other two types, in the sense that the cost is directly relevant to planning with a dynamic model, whereas the observation reconstruction functions and inverse models are not. Another reason why we call our method direct latent model learning is that compared with Markov parameter-based approaches for linear systems, our approach directly parameterizes the state representation function, without exploiting the structure of the Kalman filter, making our approach closer to empirical practice that was designed for general RL settings.

Subramanian et al. (2020) propose to optimize a simple combination of cost and transition prediction errors to learn an approximate information state. Instantiated in LQG, this amounts to parameterizing a state representation function by matrices $(M_t)_{t=0}^T$ and a latent model by matrices $((A_t, B_t)_{t=0}^{T-1}, (Q_t)_{t=0}^T)$, and then solving

\min_{(M_t, Q_t, b_t)_{t=0}^T, (A_t, B_t)_{t=0}^{T-1}} \sum_{t=0}^T \sum_{i=1}^n l_t^{(i)}, \qquad (3.1)

where $(b_t)_{t=0}^T$ are additional scalar parameters to account for the noises, and the loss at step $t$ for trajectory $i$ is defined by

l_t^{(i)} = \big( \|M_t h_t^{(i)}\|_{Q_t}^2 + \|u_t^{(i)}\|_{R^\ast_t}^2 + b_t - c_t^{(i)} \big)^2 + \big\| M_{t+1} h_{t+1}^{(i)} - A_t M_t h_t^{(i)} - B_t u_t^{(i)} \big\|^2, \qquad (3.2)

for $0 \leq t \leq T-1$, and $l_T^{(i)} = \big( \|M_T h_T^{(i)}\|_{Q_T}^2 + b_T - c_T^{(i)} \big)^2$. The optimization problem (3.1) is nonconvex; even if we could find a global minimizer, it is unclear how to establish finite-sample guarantees for it. A main finding of this work is that for LQG, we can solve the cost and transition loss optimization problems sequentially, with the caveat of using cumulative costs.
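For concreteness, the per-step loss (3.2) can be evaluated as follows (an illustrative numpy sketch with hypothetical parameter values): it is the squared cost prediction error plus the squared latent transition prediction error.

```python
import numpy as np

def joint_loss_step(M_t, M_t1, A_t, B_t, Q_t, R_t, b_t, h_t, h_t1, u_t, c_t):
    """Per-step loss (3.2): squared cost prediction error plus squared
    latent transition prediction error."""
    z_t, z_t1 = M_t @ h_t, M_t1 @ h_t1                  # latent states
    cost_err = z_t @ Q_t @ z_t + u_t @ R_t @ u_t + b_t - c_t
    trans_err = z_t1 - A_t @ z_t - B_t @ u_t
    return cost_err ** 2 + trans_err @ trans_err

rng = np.random.default_rng(4)
dx, du, dh, dh1 = 2, 1, 4, 6
M_t, M_t1 = rng.standard_normal((dx, dh)), rng.standard_normal((dx, dh1))
A_t, B_t = rng.standard_normal((dx, dx)), rng.standard_normal((dx, du))
Q_t, R_t = np.eye(dx), np.eye(du)
h_t, h_t1 = rng.standard_normal(dh), rng.standard_normal(dh1)
u_t, c_t = rng.standard_normal(du), 1.0
val = joint_loss_step(M_t, M_t1, A_t, B_t, Q_t, R_t, 0.0, h_t, h_t1, u_t, c_t)
print(val >= 0.0)  # True: a sum of two squared errors
```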

Algorithm 1 Direct latent model learning
1: Input: sample size $n$, input noise magnitude $\sigma_u$, singular value threshold $\theta = \Theta(n^{-1/4})$
2: Collect $n$ trajectories using $u_t \sim \mathcal{N}(0, \sigma_u^2 I)$ for $0 \leq t \leq T-1$, to obtain data in the form of
\mathcal{D}_{\textnormal{raw}} = (y_0^{(i)}, u_0^{(i)}, c_0^{(i)}, \ldots, y_{T-1}^{(i)}, u_{T-1}^{(i)}, c_{T-1}^{(i)}, y_T^{(i)}, c_T^{(i)})_{i=1}^n
3: Run CoReL($\mathcal{D}_{\textnormal{raw}}$, $\theta$) (Algorithm 2) to obtain the state representation function estimate $(\hat{M}_t)_{t=0}^T$ and latent state estimates $(\hat{z}_t^{(i)})_{t=0,i=1}^{T,n}$, so that the data are converted to
\mathcal{D}_{\textnormal{state}} = (z_0^{(i)}, u_0^{(i)}, c_0^{(i)}, \ldots, z_{T-1}^{(i)}, u_{T-1}^{(i)}, c_{T-1}^{(i)}, z_T^{(i)}, c_T^{(i)})_{i=1}^n
4: Run SysId($\mathcal{D}_{\textnormal{state}}$) (Algorithm 3) to obtain system parameter estimates $((\hat{A}_t, \hat{B}_t)_{t=0}^{T-1}, (\hat{Q}_t)_{t=0}^T)$
5: Find feedback gains $(\hat{K}_t)_{t=0}^{T-1}$ from $((\hat{A}_t, \hat{B}_t, R^\ast_t)_{t=0}^{T-1}, (\hat{Q}_t)_{t=0}^T)$ by RDE (2.4)
6: Return: policy $\hat{\pi} = (\hat{M}_t, \hat{K}_t)_{t=0}^{T-1}$

Our method is summarized in Algorithm 1. It has three steps: cost-driven state representation function learning (CoReL, Algorithm 2), latent system identification (SysId, Algorithm 3), and planning by RDE (2.4).

This three-step approach is very similar to the World Model approach (Ha and Schmidhuber, 2018) used in empirical RL, except that in the first step, instead of using an autoencoder to learn the state representation function, we use cost values to supervise the representation learning. Most empirical state representation learning methods (Hafner et al., 2019b, a; Schrittwieser et al., 2020) use cost supervision as one loss term; the special structure of LQG allows us to use it alone and have theoretical guarantees.

CoReL (Algorithm 2) is the core of our algorithm. Once the state representation function $(\hat{M}_t)_{t=0}^T$ is obtained, SysId (Algorithm 3) identifies the latent system using linear and quadratic regression, followed by planning using RDE (2.4) to obtain the controller $(\hat{K}_t)_{t=0}^{T-1}$ from $((\hat{A}_t, \hat{B}_t, R^\ast_t)_{t=0}^{T-1}, (\hat{Q}_t)_{t=0}^T)$. SysId consists of standard regression procedures; the full algorithmic details are shown in Algorithm 3. Below we explain the cost-driven state representation learning algorithm (CoReL, Algorithm 2) in detail.

3.1 Learning the state representation function

Algorithm 2 CoReL: cost-driven state representation learning
1: Input: raw data $\mathcal{D}_{\textnormal{raw}}$, singular value threshold $\theta = \Theta((\ell(d_y + d_u))^{1/2} d_x^{3/4} n^{-1/4})$
2: Estimate the state representation function and cost constants by solving $(\hat{N}_t, \hat{b}_t)_{t=0}^T \in$
\operatorname{argmin}_{(N_t = N_t^\top, b_t)_{t=0}^T} \sum_{t=0}^T \sum_{i=1}^n \Big( \big\|[y_{0:t}^{(i)}; u_{0:(t-1)}^{(i)}]\big\|_{N_t}^2 + \sum_{\tau=t}^{t+k-1} \|u_\tau^{(i)}\|_{R^\ast_\tau}^2 + b_t - \overline{c}_t^{(i)} \Big)^2, \qquad (3.3)
where $k = 1$ for $0 \leq t \leq \ell - 1$ and $k = m \wedge (T - t + 1)$ for $\ell \leq t \leq T$
3: Find $\widetilde{M}_t \in \mathbb{R}^{d_x \times ((t+1)d_y + t d_u)}$ such that $\widetilde{M}_t^\top \widetilde{M}_t$ is an approximation of $\hat{N}_t$
4: For all $0 \leq t \leq \ell - 1$, set $\hat{M}_t = \textsc{TruncSV}(\widetilde{M}_t, \theta)$; for all $\ell \leq t \leq T$, set $\hat{M}_t = \widetilde{M}_t$
5: Compute $\hat{z}_t^{(i)} = \hat{M}_t [y_{0:t}^{(i)}; u_{0:(t-1)}^{(i)}]$ for all $t = 0, \ldots, T$ and $i = 1, \ldots, n$
6: Return: state representation estimate $(\hat{M}_t)_{t=0}^T$ and latent state estimates $(\hat{z}_t^{(i)})_{t=0,i=1}^{T,n}$
Algorithm 3 SysId: system identification
1:Input: data in the form of (z0(i),u0(i),c0(i),,zT1(i),uT1(i),cT1(i),zT(i),cT(i))i=1n(z_{0}^{(i)},u_{0}^{(i)},c_{0}^{(i)},\ldots,z_{T-1}^{(i)},u_{T-1}^{(i)},c_{T-1}^{(i)},z_{T}^{(i)},c_{T}^{(i)})_{i=1}^{n}
2:Estimate the system dynamics by (A^t,B^t)t=0T1(\hat{A}_{t},\hat{B}_{t})_{t=0}^{T-1}\in \triangleright pick min. Frobenius norm solution by pseudoinverse
argmin(At,Bt)t=0T1t=0T1i=1nAtzt(i)+Btut(i)zt+1(i)2\displaystyle\operatorname*{argmin}_{(A_{t},B_{t})_{t=0}^{T-1}}\sum_{t=0}^{T-1}\sum_{i=1}^{n}\|A_{t}z_{t}^{(i)}+B_{t}u_{t}^{(i)}-z_{t+1}^{(i)}\|^{2} (3.4)
3:For all 0t10\leq t\leq\ell-1 and t=Tt=T, set Q^t=Idx\hat{Q}_{t}=I_{d_{x}}
4:For all tT1\ell\leq t\leq T-1, obtain Q~t\widetilde{Q}_{t} by Q~t,b^t\widetilde{Q}_{t},\hat{b}_{t}\in
argminQt=Qt,bti=1n(zt(i)Qt2+ut(i)Rt2+btct(i))2,\displaystyle\operatorname*{argmin}_{Q_{t}=Q_{t}^{\top},b_{t}}\sum_{i=1}^{n}(\|z_{t}^{(i)}\|_{Q_{t}}^{2}+\|u_{t}^{(i)}\|_{R^{\ast}_{t}}^{2}+b_{t}-c_{t}^{(i)})^{2}, (3.5)
and set Q^t=Umax(Λ,0)U\hat{Q}_{t}=U\max(\Lambda,0)U^{\top}, where Q~t=UΛU\widetilde{Q}_{t}=U\Lambda U^{\top} is its eigenvalue decomposition
5:Return: system parameters ((A^t,B^t)t=0T1,(Q^t)t=0T)((\hat{A}_{t},\hat{B}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T})
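As a concrete illustration of the dynamics regression (3.4) in SysId, each time step reduces to a single least-squares solve. The following minimal numpy sketch (with hypothetical names and synthetic data, not the paper's implementation) uses the pseudoinverse to pick the minimum-Frobenius-norm solution, as noted in step 2:

```python
import numpy as np

def sysid_step(Z_t, U_t, Z_next):
    """One step of the dynamics regression (3.4).

    Z_t, Z_next: (n, d_x) latent states at steps t and t+1; U_t: (n, d_u).
    Minimizes ||A z + B u - z'||^2 over (A, B); the pseudoinverse picks the
    minimum-Frobenius-norm solution when the regressors are rank-deficient.
    """
    X = np.hstack([Z_t, U_t])          # (n, d_x + d_u) regressors [z; u]
    W = np.linalg.pinv(X) @ Z_next     # (d_x + d_u, d_x), rows [A^T; B^T]
    d_x = Z_t.shape[1]
    AB = W.T                           # (d_x, d_x + d_u) = [A, B]
    return AB[:, :d_x], AB[:, d_x:]
```

On noiseless synthetic data with excited latent states, this recovers the true (A, B) exactly.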

The state representation function is learned via CoReL (Algorithm 2). Given the raw data consisting of nn trajectories, CoReL first solves the regression problem (3.3) to recover the symmetric matrix N^t\hat{N}_{t}. The target c¯t\overline{c}_{t} of regression (3.3) is defined by

c¯t:=ct+ct+1++ct+k1,\displaystyle\overline{c}_{t}:=c_{t}+c_{t+1}+\ldots+c_{t+k-1},

where k=1k=1 for 0t10\leq t\leq\ell-1 and k=m(Tt+1)k=m\wedge(T-t+1) for tT\ell\leq t\leq T. The superscript in c¯t(i)\overline{c}_{t}^{(i)} denotes the observed c¯t\overline{c}_{t} in the iith trajectory. The quadratic regression has a closed-form solution, by converting it to linear regression using vP2=vv,PF=svec(vv),svec(P)\|v\|_{P}^{2}=\left\langle vv^{\top},P\right\rangle_{F}=\left\langle\mathrm{svec}(vv^{\top}),\mathrm{svec}(P)\right\rangle.
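To illustrate this reduction, the sketch below (hypothetical names, a simplification of (3.3) without the control-cost terms) fits targets of the form ‖h‖_N² + b by linear least squares in (svec(N), b). Here svec scales off-diagonal entries by √2, so that the Frobenius and Euclidean inner products agree:

```python
import numpy as np

def svec(X):
    """Vectorize a symmetric matrix so that <svec(X), svec(Y)> = <X, Y>_F."""
    iu = np.triu_indices(X.shape[0])
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))  # sqrt(2) off-diagonal
    return scale * X[iu]

def fit_quadratic(H, c):
    """Fit c_i = ||h_i||_N^2 + b by linear least squares in (svec(N), b)."""
    n, d = H.shape
    Phi = np.stack([svec(np.outer(h, h)) for h in H])  # quadratic features
    Phi = np.hstack([Phi, np.ones((n, 1))])            # constant feature for b
    theta, *_ = np.linalg.lstsq(Phi, c, rcond=None)
    # unpack svec(N) back into a symmetric matrix
    iu = np.triu_indices(d)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    N = np.zeros((d, d))
    N[iu] = theta[:-1] / scale
    N = N + N.T - np.diag(np.diag(N))
    return N, theta[-1]
```

On noiseless data the least-squares solution recovers the symmetric matrix and the constant exactly, mirroring the closed form used in the analysis.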

Why cumulative cost? The state representation function is parameterized by (Mt)t=0T(M_{t})_{t=0}^{T} and the latent state at step tt is given by zt=Mthtz_{t}=M_{t}h_{t}. The single-step cost prediction (neglecting control cost utRt2\|u_{t}\|_{R^{\ast}_{t}}^{2} and constant btb_{t}) is given by ztQt2=htMtQtMtht\|z_{t}\|_{Q_{t}}^{2}=h_{t}^{\top}M_{t}^{\top}Q_{t}M_{t}h_{t}. The regression recovers (Mt)QtMt(M^{\ast}_{t})^{\top}Q^{\ast}_{t}M^{\ast}_{t} as a whole, from which we can recover (Qt)1/2Mt(Q^{\ast}_{t})^{1/2}M^{\ast}_{t} up to an orthogonal transformation. If QtQ^{\ast}_{t} is positive definite and known, then we can further recover MtM^{\ast}_{t} from it. However, if QtQ^{\ast}_{t} does not have full rank, information about MtM^{\ast}_{t} is partially lost, and there is no way to fully recover MtM^{\ast}_{t} even if QtQ^{\ast}_{t} is known. To see why multi-step cumulative cost helps, define Q¯t:=τ=tt+k1Φτ,tQτΦτ,t\overline{Q}^{\ast}_{t}:=\sum\nolimits_{\tau=t}^{t+k-1}\Phi_{\tau,t}^{\top}Q^{\ast}_{\tau}\Phi_{\tau,t} for the same kk above. Under zero control and zero noise, starting from xtx_{t} at step tt, the kk-step cumulative cost is precisely xtQ¯t2\|x_{t}\|_{\overline{Q}^{\ast}_{t}}^{2}. Under the cost observability assumption (Assumption 3), (Q¯t)t=0T(\overline{Q}^{\ast}_{t})_{t=0}^{T} are positive definite.
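As a toy illustration of this point (a time-invariant simplification, not the paper's time-varying setting), a rank-deficient single-step cost matrix can still induce a positive definite multi-step cost matrix:

```python
import numpy as np

# Q is rank-deficient (only the first coordinate is penalized), yet the
# 2-step cost matrix Qbar = Q + A^T Q A is positive definite, because the
# dynamics rotate the unpenalized coordinate into the penalized one.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Q = np.diag([1.0, 0.0])
k = 2
Qbar = sum(np.linalg.matrix_power(A, j).T @ Q @ np.linalg.matrix_power(A, j)
           for j in range(k))
```

This is exactly the mechanism behind cost observability: directions that incur no cost now must incur cost within the next k steps.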

The normalized parameterization. Still, since Q¯t\overline{Q}^{\ast}_{t} is unknown, even if we recover (Mt)Q¯tMt(M^{\ast}_{t})^{\top}\overline{Q}^{\ast}_{t}M^{\ast}_{t} as a whole, it is not viable to extract MtM^{\ast}_{t} and Q¯t\overline{Q}^{\ast}_{t}. Such ambiguity is unavoidable; in fact, for every Q¯t\overline{Q}^{\ast}_{t} we choose, there is an equivalent parameterization of the system such that the system response is exactly the same. In partially observable LTI systems, it is well-known that the system parameters can only be recovered up to a similarity transform (Oymak and Ozay, 2019). Since every parameterization is correct, we simply choose Q¯t=I\overline{Q}^{\ast}_{t}=I, which we refer to as the normalized parameterization. Concretely, let us define xt=(Q¯t)1/2xtx^{\prime}_{t}=(\overline{Q}^{\ast}_{t})^{1/2}x_{t}. Then, the new parameterization is given by

xt+1=Atxt+Btut+wt,yt=Ctxt+vt,ct(x,u)=xQt2+uRt2,\displaystyle x^{\prime}_{t+1}=A^{\ast\prime}_{t}x^{\prime}_{t}+B^{\ast\prime}_{t}u_{t}+w^{\prime}_{t},\quad y_{t}=C^{\ast\prime}_{t}x^{\prime}_{t}+v_{t},\quad c^{\prime}_{t}(x^{\prime},u)=\|x^{\prime}\|_{Q^{\ast\prime}_{t}}^{2}+\|u\|_{R^{\ast}_{t}}^{2},

and cT(x)=x(QT)2c^{\prime}_{T}(x^{\prime})=\|x^{\prime}\|_{(Q^{\ast}_{T})^{\prime}}^{2}, where for all t0t\geq 0,

At=(Q¯t+1)1/2At(Q¯t)1/2,Bt=(Q¯t+1)1/2Bt,Ct=Ct(Q¯t)1/2,\displaystyle A^{\ast\prime}_{t}=(\overline{Q}^{\ast}_{t+1})^{1/2}A^{\ast}_{t}(\overline{Q}^{\ast}_{t})^{-1/2},\quad B^{\ast\prime}_{t}=(\overline{Q}^{\ast}_{t+1})^{1/2}B^{\ast}_{t},\quad C^{\ast\prime}_{t}=C^{\ast}_{t}(\overline{Q}^{\ast}_{t})^{-1/2},
wt=(Q¯t+1)1/2wt,(Qt)=(Q¯t)1/2Qt(Q¯t)1/2.\displaystyle w^{\prime}_{t}=(\overline{Q}^{\ast}_{t+1})^{1/2}w_{t},\quad(Q^{\ast}_{t})^{\prime}=(\overline{Q}^{\ast}_{t})^{-1/2}Q^{\ast}_{t}(\overline{Q}^{\ast}_{t})^{-1/2}.

It is easy to verify that under the normalized parameterization the system satisfies Assumptions 123, and 4, up to a change of some constants in the bounds. Without loss of generality, we assume system (2.1) is in the normalized parameterization; if not, the recovered state representation function and latent system are with respect to the normalized parameterization.
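The change of variables above can be sketched in a few lines of numpy (a one-step illustration with hypothetical names; Q̄_t and Q̄_{t+1} are assumed positive definite so that their square roots are invertible):

```python
import numpy as np

def psd_sqrt(Q):
    """Symmetric square root of a positive definite matrix."""
    lam, U = np.linalg.eigh(Q)
    return U @ np.diag(np.sqrt(lam)) @ U.T

def normalize_step(A, B, C, Q, Qbar, Qbar_next):
    """One step of the change of variables x' = Qbar^{1/2} x."""
    S, S_next = psd_sqrt(Qbar), psd_sqrt(Qbar_next)
    S_inv = np.linalg.inv(S)
    return (S_next @ A @ S_inv,   # A'
            S_next @ B,           # B'
            C @ S_inv,            # C'
            S_inv @ Q @ S_inv)    # Q'
```

One can check numerically that the primed system propagates x' = Q̄^{1/2}x consistently and leaves the state cost unchanged.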

Low-rank approximate factorization. Regression (3.3) has a closed-form solution; solving it gives (N^t,b^t)t=0T(\hat{N}_{t},\hat{b}_{t})_{t=0}^{T}. Constants (b^t)t=0T(\hat{b}_{t})_{t=0}^{T} account for the variance of the state estimation error, and are not part of the state representation function; dh×dhd_{h}\times d_{h} symmetric matrices (N^t)t=0T(\hat{N}_{t})_{t=0}^{T} are estimates of (Mt)Mt(M^{\ast}_{t})^{\top}M^{\ast}_{t} under the normalized parameterization, where dh=(t+1)dy+tdud_{h}=(t+1)d_{y}+td_{u}. MtM^{\ast}_{t} can only be recovered up to an orthogonal transformation, since for any orthogonal Sdx×dxS\in\mathbb{R}^{d_{x}\times d_{x}}, (SMt)SMt=(Mt)Mt(SM^{\ast}_{t})^{\top}SM^{\ast}_{t}=(M^{\ast}_{t})^{\top}M^{\ast}_{t}.

We want to recover M~t\widetilde{M}_{t} from N^t\hat{N}_{t} such that N^t=M~tM~t\hat{N}_{t}=\widetilde{M}_{t}^{\top}\widetilde{M}_{t}. Let UΛU=N^tU\Lambda U^{\top}=\hat{N}_{t} be its eigenvalue decomposition. Let Σ:=max(Λ,0)\Sigma:=\max(\Lambda,0) be the positive semidefinite diagonal matrix containing the nonnegative eigenvalues, where “max\max” applies elementwise. If dhdxd_{h}\leq d_{x}, we can construct M~t=[Σ1/2U;0(dxdh)×dh]\widetilde{M}_{t}=[\Sigma^{1/2}U^{\top};0_{(d_{x}-d_{h})\times d_{h}}] by padding zeros. If dh>dxd_{h}>d_{x}, however, rank(N^t)\mathrm{rank}(\hat{N}_{t}) may exceed dxd_{x}. Without loss of generality, assume that the diagonal elements of Σ\Sigma are in descending order. Let Σdx\Sigma_{d_{x}} be the top-left dx×dxd_{x}\times d_{x} block of Σ\Sigma and UdxU_{d_{x}} be the leftmost dxd_{x} columns of UU. By the Eckart-Young-Mirsky theorem, M~t=Σdx1/2Udx\widetilde{M}_{t}=\Sigma_{d_{x}}^{1/2}U_{d_{x}}^{\top} makes M~tM~t\widetilde{M}_{t}^{\top}\widetilde{M}_{t} the best Frobenius-norm approximation of N^t\hat{N}_{t} achievable by any dx×dhd_{x}\times d_{h} matrix M~t\widetilde{M}_{t}.
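Both cases of this construction can be sketched as follows (a minimal numpy illustration with hypothetical names; `eigh` returns eigenvalues in ascending order, so we reverse them):

```python
import numpy as np

def factor_rank_dx(N_hat, d_x):
    """Find M (d_x x d_h) with M^T M the best rank-d_x approximation of N_hat."""
    lam, U = np.linalg.eigh(N_hat)
    order = np.argsort(lam)[::-1]          # descending eigenvalues
    lam, U = lam[order], U[:, order]
    lam = np.maximum(lam, 0.0)             # clip negatives: PSD projection
    d_h = N_hat.shape[0]
    if d_h <= d_x:                         # pad with zero rows
        return np.vstack([np.sqrt(lam)[:, None] * U.T,
                          np.zeros((d_x - d_h, d_h))])
    # keep the top d_x eigenpairs (Eckart-Young-Mirsky)
    return np.sqrt(lam[:d_x])[:, None] * U[:, :d_x].T
```

When N̂ is exactly rank d_x and PSD, the factorization is exact; otherwise it returns the best rank-d_x surrogate.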

Why singular value truncation in the first \ell steps? The latent states are used to identify the latent system dynamics, so whether they are sufficiently excited, namely having full-rank covariance, makes a big difference: if not, the system matrices can only be identified partially. Proposition 2 below confirms that the optimal latent state zt=Mthtz^{\ast}_{t}=M^{\ast}_{t}h_{t} indeed has full-rank covariance for tt\geq\ell.

Proposition 2.

If system (2.1) satisfies Assumptions 2 (controllability) and 4 (regularity), then under control (ut)t=0T1(u_{t})_{t=0}^{T-1}, where ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I), σmin(Cov(zt))=Ω(ν2)\sigma_{\min}(\mathrm{Cov}(z^{\ast}_{t}))=\Omega(\nu^{2}), MtM^{\ast}_{t} has rank dxd_{x} and σmin(Mt)=Ω(νt1/2)\sigma_{\min}(M^{\ast}_{t})=\Omega(\nu t^{-1/2}) for all tT\ell\leq t\leq T.

Proof.

For tT\ell\leq t\leq T, unrolling the Kalman filter gives

zt=\displaystyle z^{\ast}_{t}=\; At1zt1+Bt1ut1+Ltit\displaystyle A^{\ast}_{t-1}z^{\ast}_{t-1}+B^{\ast}_{t-1}u_{t-1}+L^{\ast}_{t}i_{t}
=\displaystyle=\; At1(At2zt2+Bt2ut2+Lt1it1)+Bt1ut1+Ltit\displaystyle A^{\ast}_{t-1}(A^{\ast}_{t-2}z^{\ast}_{t-2}+B^{\ast}_{t-2}u_{t-2}+L^{\ast}_{t-1}i_{t-1})+B^{\ast}_{t-1}u_{t-1}+L^{\ast}_{t}i_{t}
=\displaystyle=\; [Bt1,,At1At2At+1Bt][ut1;;ut]+At1At2Atzt\displaystyle[B^{\ast}_{t-1},\ldots,A^{\ast}_{t-1}A^{\ast}_{t-2}\cdots A^{\ast}_{t-\ell+1}B^{\ast}_{t-\ell}][u_{t-1};\ldots;u_{t-\ell}]+A^{\ast}_{t-1}A^{\ast}_{t-2}\cdots A^{\ast}_{t-\ell}z^{\ast}_{t-\ell}
+[Lt,,At1At2At+1Lt+1][it;;it+1],\displaystyle\quad+[L^{\ast}_{t},\ldots,A^{\ast}_{t-1}A^{\ast}_{t-2}\cdots A^{\ast}_{t-\ell+1}L^{\ast}_{t-\ell+1}][i_{t};\ldots;i_{t-\ell+1}],

where (uτ)τ=tt1(u_{\tau})_{\tau=t-\ell}^{t-1}, ztz^{\ast}_{t-\ell} and (iτ)τ=t+1t(i_{\tau})_{\tau=t-\ell+1}^{t} are independent. The matrix multiplied by [ut1;;ut][u_{t-1};\ldots;u_{t-\ell}] is precisely the controllability matrix Φt1,c\Phi_{t-1,\ell}^{c}. Then

Cov(zt)=𝔼[zt(zt)]\displaystyle\mathrm{Cov}(z^{\ast}_{t})=\mathbb{E}[z^{\ast}_{t}(z^{\ast}_{t})^{\top}]\succcurlyeq\; Φt1,c𝔼[[ut1;;ut][ut1;;ut]](Φt1,c)\displaystyle\Phi_{t-1,\ell}^{c}\mathbb{E}[[u_{t-1};\ldots;u_{t-\ell}][u_{t-1};\ldots;u_{t-\ell}]^{\top}](\Phi_{t-1,\ell}^{c})^{\top}
=\displaystyle=\; σu2Φt1,c(Φt1,c).\displaystyle\sigma_{u}^{2}\Phi_{t-1,\ell}^{c}(\Phi_{t-1,\ell}^{c})^{\top}.

By the controllability assumption (Assumption 2), Cov(zt)\mathrm{Cov}(z^{\ast}_{t}) has full rank and

σmin(Cov(zt))σu2ν2.\displaystyle\sigma_{\min}(\mathrm{Cov}(z^{\ast}_{t}))\geq\sigma_{u}^{2}\nu^{2}.

On the other hand, since zt=Mthtz^{\ast}_{t}=M^{\ast}_{t}h_{t},

Cov(zt)=𝔼[Mththt(Mt)]σmax(𝔼[htht])Mt(Mt).\displaystyle\mathrm{Cov}(z^{\ast}_{t})=\mathbb{E}[M^{\ast}_{t}h_{t}h_{t}^{\top}(M^{\ast}_{t})^{\top}]\preccurlyeq\sigma_{\max}(\mathbb{E}[h_{t}h_{t}^{\top}])M^{\ast}_{t}(M^{\ast}_{t})^{\top}.

Since ht=[y0:t;u0:(t1)]h_{t}=[y_{0:t};u_{0:(t-1)}] and (Cov(yt))t=0T,(Cov(ut))t=0T1(\mathrm{Cov}(y_{t}))_{t=0}^{T},(\mathrm{Cov}(u_{t}))_{t=0}^{T-1} have 𝒪(1)\mathcal{O}(1) operator norms, by Lemma 8, Cov(ht)=𝔼[htht]=𝒪(t)\|\mathrm{Cov}(h_{t})\|=\|\mathbb{E}[h_{t}h_{t}^{\top}]\|=\mathcal{O}(t). Hence,

0<σu2ν2σmin(Cov(zt))𝒪(t)σdx2(Mt).0<\sigma_{u}^{2}\nu^{2}\leq\sigma_{\min}(\mathrm{Cov}(z^{\ast}_{t}))\leq\mathcal{O}(t)\sigma_{d_{x}}^{2}(M^{\ast}_{t}).

This implies that rank(Mt)=dx\mathrm{rank}(M^{\ast}_{t})=d_{x} and σmin(Mt)=Ω(νt1/2)\sigma_{\min}(M^{\ast}_{t})=\Omega(\nu t^{-1/2}). ∎
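The key inequality in this proof, Cov(z_t) ⪰ σ_u² Φ Φ^⊤, can be checked numerically. The sketch below uses a time-invariant system for simplicity (an assumption for illustration; W stands in for the innovation covariance L i): the covariance after ℓ steps minus σ_u² times the controllability Gramian equals the accumulated noise term, which is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_u, ell, sig_u2 = 3, 1, 3, 1.0
A = 0.5 * rng.standard_normal((d_x, d_x))
B = rng.standard_normal((d_x, d_u))
W = 0.1 * np.eye(d_x)   # covariance of the innovation term L i

# exact covariance propagation under u_t ~ N(0, sig_u2 * I), starting from 0
Cov = np.zeros((d_x, d_x))
for _ in range(ell):
    Cov = A @ Cov @ A.T + sig_u2 * (B @ B.T) + W

# ell-step controllability matrix [B, AB, ..., A^{ell-1} B]
Phi = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(ell)])

# Cov(z_ell) - sig_u2 * Phi Phi^T equals the accumulated noise term, hence PSD
gap = np.linalg.eigvalsh(Cov - sig_u2 * Phi @ Phi.T).min()
```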

Proposition 2 implies that for all tT\ell\leq t\leq T, NtN^{\ast}_{t} has rank dxd_{x}, so if dxd_{x} is not provided, this gives a way to discover it. For tT\ell\leq t\leq T, Proposition 2 guarantees that as long as M~t\widetilde{M}_{t} is close enough to MtM^{\ast}_{t}, it also has full rank, and so does Cov(M~tht)\mathrm{Cov}(\widetilde{M}_{t}h_{t}). Hence, we simply take the final estimate M^t=M~t\hat{M}_{t}=\widetilde{M}_{t}. Without further assumptions, however, there is no such full-rank guarantee for (Cov(zt))t=01(\mathrm{Cov}(z^{\ast}_{t}))_{t=0}^{\ell-1} and (Mt)t=01(M^{\ast}_{t})_{t=0}^{\ell-1}. We make the following minimal assumption to ensure that the minimum positive singular values (σmin+(Cov(zt)))t=01(\sigma_{\min}^{+}(\mathrm{Cov}(z^{\ast}_{t})))_{t=0}^{\ell-1} are uniformly lower bounded. Note that (Cov(zt))t=01(\mathrm{Cov}(z^{\ast}_{t}))_{t=0}^{\ell-1} are not required to have full rank.

Assumption 5.

For 0t10\leq t\leq\ell-1, σmin+(Mt)β>0\sigma_{\min}^{+}(M^{\ast}_{t})\geq\beta>0.

Still, for 0t10\leq t\leq\ell-1, Assumption 5 does not guarantee the full-rankness of Cov(M~tht)\mathrm{Cov}(\widetilde{M}_{t}h_{t}), or even a lower bound on its minimum positive singular value; this is why we introduce TruncSV, which truncates the singular values of M~t\widetilde{M}_{t} at a threshold θ>0\theta>0. Concretely, we take M^t=(𝕀[θ,+)(Σdx1/2)Σdx1/2)Udx\hat{M}_{t}=(\mathbb{I}_{[\theta,+\infty)}(\Sigma_{d_{x}}^{1/2})\odot\Sigma_{d_{x}}^{1/2})U_{d_{x}}^{\top}. Then, M^t\hat{M}_{t} has the same singular values as M~t\widetilde{M}_{t}, except that those below θ\theta are zeroed. We take θ=Θ((dy+du)dx3/4n1/4log1/4(1/p))\theta=\Theta(\ell(d_{y}+d_{u})d_{x}^{3/4}n^{-1/4}\log^{1/4}(1/p)) to ensure a sufficient lower bound on the minimum positive singular value of M^t\hat{M}_{t}, without increasing the statistical errors.
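TruncSV itself is a one-liner around the SVD; a minimal numpy sketch (hypothetical name):

```python
import numpy as np

def trunc_sv(M, theta):
    """Zero out the singular values of M that fall below the threshold theta."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.where(s >= theta, s, 0.0)) @ Vt
```

After truncation, every remaining positive singular value of the output is at least θ, which is exactly the lower bound the analysis needs.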

4 Theoretical guarantees

Theorem 1 below offers a finite-sample guarantee for our approach. Overall, it confirms cost-driven latent model learning (Algorithm 1) as a viable path to solving LQG control.

Theorem 1.

Given an unknown LQG system (2.1), under Assumptions 1234 and 5, if we run Algorithm 1 with npoly(T,dx,dy,du,log(1/p))n\geq\mathrm{poly}(T,d_{x},d_{y},d_{u},\log(1/p)), then with probability at least 1p1-p, state representation function (M^t)t=0T(\hat{M}_{t})_{t=0}^{T} is poly(,dx,dy,du)n1/4\mathrm{poly}(\ell,d_{x},d_{y},d_{u})n^{-1/4} optimal in the first \ell steps, and poly(ν1,T,dx,dy,du)n1/2\mathrm{poly}(\nu^{-1},T,d_{x},d_{y},d_{u})n^{-1/2} optimal in the next (T)(T-\ell) steps. Also, the learned controller (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} is poly(,m,dx,dy,du)cn1/4\mathrm{poly}(\ell,m,d_{x},d_{y},d_{u})c^{\ell}n^{-1/4} optimal for some dimension-free constant c>0c>0 depending on system parameters in the first \ell steps, and poly(T,m,dx,dy,du,log(1/p))n1\mathrm{poly}(T,m,d_{x},d_{y},d_{u},\log(1/p))n^{-1} optimal in the last (T)(T-\ell) steps.

From Theorem 1, we observe a separation of the sample complexities before and after time step \ell, resulting from the loss of the full-rankness of (Cov(zt))t=01(\mathrm{Cov}(z^{\ast}_{t}))_{t=0}^{\ell-1} and (Mt)t=01(M^{\ast}_{t})_{t=0}^{\ell-1}. In more detail, a proof sketch goes as follows. Quadratic regression guarantees that N^t\hat{N}_{t} converges to NtN^{\ast}_{t} at a rate of n1/2n^{-1/2} for all 0tT0\leq t\leq T. Before step \ell, M^t\hat{M}_{t} suffers a square-root decay of the rate to n1/4n^{-1/4} because MtM^{\ast}_{t} may not have rank dxd_{x}. Since (z^t)t=01(\hat{z}_{t})_{t=0}^{\ell-1} may not have full-rank covariances, (At)t=01(A^{\ast}_{t})_{t=0}^{\ell-1} are only partially recovered. As a result, (K^t)t=01(\hat{K}_{t})_{t=0}^{\ell-1} may not stabilize (At,Bt)t=01(A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{\ell-1}, causing the exponential dependence on \ell. This means that if nn is not large enough, this controller may be inferior to zero control, since the system (At,Bt)t=01(A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{\ell-1} is uniformly exponentially stable (Assumption 1) and zero control has a suboptimality gap linear in \ell. After step \ell, M^t\hat{M}_{t} retains the n1/2n^{-1/2} sample complexity, and so do (A^t,B^t)(\hat{A}_{t},\hat{B}_{t}); the certainty equivalent controller then has an order of n1n^{-1} suboptimality gap for linear quadratic control (Mania et al., 2019).

Theorem 1 states the guarantees for the state representation function (M^t)t=0T(\hat{M}_{t})_{t=0}^{T} and the controller (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} separately. One may wonder about the suboptimality gap of π^=(M^t,K^t)t=0T1\hat{\pi}=(\hat{M}_{t},\hat{K}_{t})_{t=0}^{T-1} in combination; after all, this is the output policy. The new challenge is that a suboptimal controller is applied to a suboptimal state estimation. An exact analysis requires more effort, but a reasonable conjecture is that (M^t,K^t)t=0T1(\hat{M}_{t},\hat{K}_{t})_{t=0}^{T-1} has the same order of suboptimality gap as (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1}: before step \ell, the extra suboptimality gap resulting from (M^t)t=01(\hat{M}_{t})_{t=0}^{\ell-1} can be analyzed by considering perturbation K^t(M^tMt)ht\hat{K}_{t}(\hat{M}_{t}-M^{\ast}_{t})h_{t} on controls; after step \ell, similar to the analysis of the LQG suboptimality gap in (Mania et al., 2019), the overall suboptimality gap can be analyzed by a Taylor expansion of the value function at (Mt,Kt)t=T1(M^{\ast}_{t},K^{\ast}_{t})_{t=\ell}^{T-1}, with (K^tM^tKtMt)t=T1(\hat{K}_{t}\hat{M}_{t}-K^{\ast}_{t}M^{\ast}_{t})_{t=\ell}^{T-1} being perturbations.

4.1 Proof of Theorem 1

Proof.

Recall that Algorithm 1 has three main steps: state representation learning (CoReL, Algorithm 2), latent system identification (SysId, Algorithm 3), and planning by RDE (2.4). Correspondingly, the analysis below is organized around these three steps.

Recovery of the state representation function. By Proposition 3, with ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I) applied to system (2.1) for all 0tT10\leq t\leq T-1, the kk-step cumulative cost starting from step tt, where k=1k=1 for 0t10\leq t\leq\ell-1 and k=m(Tt+1)k=m\wedge(T-t+1) for tT\ell\leq t\leq T, is given under the normalized parameterization by

c¯t:=ct+ct+1++ct+k1=zt2+τ=tt+k1uτRτ2+b¯t+e¯t,\displaystyle\overline{c}_{t}:=c_{t}+c_{t+1}+\ldots+c_{t+k-1}=\|z^{\ast\prime}_{t}\|^{2}+\sum\nolimits_{\tau=t}^{t+k-1}\|u_{\tau}\|_{R^{\ast}_{\tau}}^{2}+\overline{b}_{t}+\overline{e}_{t},

where b¯t=𝒪(k)\overline{b}_{t}=\mathcal{O}(k), and e¯t\overline{e}_{t} is a zero-mean subexponential random variable with e¯tψ1=𝒪(kdx1/2)\|\overline{e}_{t}\|_{\psi_{1}}=\mathcal{O}(kd_{x}^{1/2}). Then, it is clear that Algorithm 2 recovers latent states zt=Mthtz^{\ast\prime}_{t}=M^{\ast\prime}_{t}h_{t}, where 0tT0\leq t\leq T, by a combination of quadratic regression and low-rank approximate factorization. Below we drop the superscript prime for notational simplicity, but keep in mind that the optimal state representation function (Mt)t=0T(M^{\ast}_{t})_{t=0}^{T}, the corresponding latent states (zt)t=0T(z^{\ast}_{t})_{t=0}^{T}, and the true latent system parameters ((At,Bt)t=0T1,(Qt)t=0T)((A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T}) are all with respect to the normalized parameterization.

Let Nt:=(Mt)MtN^{\ast}_{t}:=(M^{\ast}_{t})^{\top}M^{\ast}_{t} for all 0tT0\leq t\leq T. For quadratic regression, Lemma 2 (detailed later) and the union bound over all time steps guarantee that as long as naT4(dy+du)4log(aT3(dy+du)2/p)log(T/p)n\geq aT^{4}(d_{y}+d_{u})^{4}\log(aT^{3}(d_{y}+d_{u})^{2}/p)\log(T/p) for an absolute constant a>0a>0, with probability at least 1p1-p, for all 0tT0\leq t\leq T,

N^tNtF=𝒪(kt2(dy+du)2dx1/2n1/2log1/2(1/p)).\displaystyle\|\hat{N}_{t}-N^{\ast}_{t}\|_{F}=\mathcal{O}(kt^{2}(d_{y}+d_{u})^{2}d_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p)).

Now let us bound the distance between M^t\hat{M}_{t} and MtM^{\ast}_{t}. Recall that we use dh=(t+1)dy+tdud_{h}=(t+1)d_{y}+td_{u} as a shorthand. The estimate N^t\hat{N}_{t} may not be positive semidefinite. Let N^t=UΛU\hat{N}_{t}=U\Lambda U^{\top} be its eigenvalue decomposition, with the dh×dhd_{h}\times d_{h} matrix Λ\Lambda having descending diagonal elements. Let Σ:=max(Λ,0)\Sigma:=\max(\Lambda,0). Then N~t:=UΣU\widetilde{N}_{t}:=U\Sigma U^{\top} is the projection of N^t\hat{N}_{t} onto the positive semidefinite cone (Boyd and Vandenberghe, 2004, Section 8.1.1) with respect to the Frobenius norm. Since Nt0N^{\ast}_{t}\succcurlyeq 0,

N~tNtFN^tNtF.\displaystyle\|\widetilde{N}_{t}-N^{\ast}_{t}\|_{F}\leq\|\hat{N}_{t}-N^{\ast}_{t}\|_{F}.

The low-rank factorization is essentially a combination of low-rank approximation and matrix factorization. For dh<dxd_{h}<d_{x}, M~t:=[Σ1/2U;0(dxdh)×dh]\widetilde{M}_{t}:=[\Sigma^{1/2}U^{\top};0_{(d_{x}-d_{h})\times d_{h}}] constructed by padding zeros satisfies M~tM~t=N~t\widetilde{M}_{t}^{\top}\widetilde{M}_{t}=\widetilde{N}_{t}. For dhdxd_{h}\geq d_{x}, construct M~t:=Σdx1/2Udx\widetilde{M}_{t}:=\Sigma_{d_{x}}^{1/2}U_{d_{x}}^{\top}, where Σdx\Sigma_{d_{x}} is the top-left dx×dxd_{x}\times d_{x} block of Σ\Sigma and UdxU_{d_{x}} consists of the leftmost dxd_{x} columns of UU. By the Eckart-Young-Mirsky theorem, M~tM~t=UdxΣdxUdx\widetilde{M}_{t}^{\top}\widetilde{M}_{t}=U_{d_{x}}\Sigma_{d_{x}}U_{d_{x}}^{\top} satisfies

M~tM~tN~tF=minNdh×dh,rank(N)dxNN~tF.\displaystyle\|\widetilde{M}_{t}^{\top}\widetilde{M}_{t}-\widetilde{N}_{t}\|_{F}=\min_{N\in\mathbb{R}^{d_{h}\times d_{h}},\mathrm{rank}(N)\leq d_{x}}\|N-\widetilde{N}_{t}\|_{F}.

Hence,

M~tM~tNtF\displaystyle\|\widetilde{M}_{t}^{\top}\widetilde{M}_{t}-N^{\ast}_{t}\|_{F}\leq\; M~tM~tN~tF+N~tNtF\displaystyle\|\widetilde{M}_{t}^{\top}\widetilde{M}_{t}-\widetilde{N}_{t}\|_{F}+\|\widetilde{N}_{t}-N^{\ast}_{t}\|_{F}
\displaystyle\leq\; 2N~tNtF2N^tNtF\displaystyle 2\|\widetilde{N}_{t}-N^{\ast}_{t}\|_{F}\leq 2\|\hat{N}_{t}-N^{\ast}_{t}\|_{F}
=\displaystyle=\; 𝒪(kt2(dy+du)2dx1/2n1/2log1/2(1/p)).\displaystyle\mathcal{O}(kt^{2}(d_{y}+d_{u})^{2}d_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p)).

From now on, we consider 0t10\leq t\leq\ell-1 and tT\ell\leq t\leq T separately, since, as we will show, in the latter case we have the additional condition that rank(Mt)=dx\mathrm{rank}(M^{\ast}_{t})=d_{x}.

For 0t10\leq t\leq\ell-1, k=1k=1. By Lemma 4 to be proved later, there exists a dx×dxd_{x}\times d_{x} orthogonal matrix StS_{t}, such that M~tStMt2M~tStMtF=𝒪(t(dy+du)dx3/4n1/4log1/4(1/p))\|\widetilde{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}\leq\|\widetilde{M}_{t}-S_{t}M^{\ast}_{t}\|_{F}=\mathcal{O}(t(d_{y}+d_{u})d_{x}^{3/4}n^{-1/4}\log^{1/4}(1/p)). Recall that M^t=TruncSV(M~t,θ)\hat{M}_{t}=\textsc{TruncSV}{}(\widetilde{M}_{t},\theta); that is, M^t=(𝕀[θ,+)(Σdx1/2)Σdx1/2)Udx\hat{M}_{t}=(\mathbb{I}_{[\theta,+\infty)}(\Sigma_{d_{x}}^{1/2})\odot\Sigma_{d_{x}}^{1/2})U_{d_{x}}^{\top}. Then

M^tM~t2=\displaystyle\|\hat{M}_{t}-\widetilde{M}_{t}\|_{2}=\; (𝕀[θ,+)(Σdx1/2)Σdx1/2Σdx1/2)Udx2\displaystyle\|(\mathbb{I}_{[\theta,+\infty)}(\Sigma_{d_{x}}^{1/2})\odot\Sigma_{d_{x}}^{1/2}-\Sigma_{d_{x}}^{1/2})U_{d_{x}}^{\top}\|_{2}
\displaystyle\leq\; 𝕀[θ,+)(Σdx1/2)Σdx1/2Σdx1/22θ.\displaystyle\|\mathbb{I}_{[\theta,+\infty)}(\Sigma_{d_{x}}^{1/2})\odot\Sigma_{d_{x}}^{1/2}-\Sigma_{d_{x}}^{1/2}\|_{2}\leq\theta.

Hence, the distance between M^t\hat{M}_{t} and MtM^{\ast}_{t} satisfies

M^tStMt2=\displaystyle\|\hat{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}=\; M^tM~t+M~tStMt2\displaystyle\|\hat{M}_{t}-\widetilde{M}_{t}+\widetilde{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}
\displaystyle\leq\; M^tM~t2+M~tStMt2\displaystyle\|\hat{M}_{t}-\widetilde{M}_{t}\|_{2}+\|\widetilde{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}
\displaystyle\leq\; θ+𝒪(t(dy+du)dx3/4n1/4log1/4(1/p)).\displaystyle\theta+\mathcal{O}(t(d_{y}+d_{u})d_{x}^{3/4}n^{-1/4}\log^{1/4}(1/p)).

The threshold θ\theta should be chosen so that it keeps the error at the same order; that is, θ=𝒪(t(dy+du)dx3/4n1/4log1/4(1/p))\theta=\mathcal{O}(t(d_{y}+d_{u})d_{x}^{3/4}n^{-1/4}\log^{1/4}(1/p)). As a result, since z^t=M^tht\hat{z}_{t}=\hat{M}_{t}h_{t} and zt=Mthtz^{\ast}_{t}=M^{\ast}_{t}h_{t},

z^tStzt=\displaystyle\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\|=\; (M^tStMt)htM^tStMt2ht.\displaystyle\|(\hat{M}_{t}-S_{t}M^{\ast}_{t})h_{t}\|\leq\|\hat{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}\|h_{t}\|.

Since ht=[y0:t;u0:(t1)]h_{t}=[y_{0:t};u_{0:(t-1)}] whose 2\ell_{2}-norm is sub-Gaussian with its mean and sub-Gaussian norm bounded by 𝒪((t(dy+du))1/2)\mathcal{O}((t(d_{y}+d_{u}))^{1/2}), z^tStzt\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\| is sub-Gaussian with its mean and sub-Gaussian norm bounded by

𝒪((t(dy+du))3/2dx3/4n1/4log1/4(1/p)).\displaystyle\mathcal{O}((t(d_{y}+d_{u}))^{3/2}d_{x}^{3/4}n^{-1/4}\log^{1/4}(1/p)).

On the other hand, threshold θ\theta ensures a lower bound on σmin+(i=1nzt(i)(zt(i)))\sigma_{\min}^{+}(\sum\nolimits_{i=1}^{n}z_{t}^{(i)}(z_{t}^{(i)})^{\top}). As shown in the proof of Lemma 5, this property is important for ensuring the system identification outputs A^t\hat{A}_{t} and B^t\hat{B}_{t} have bounded norms. Specifically,

σmin+(i=1nzt(i)(zt(i)))=\displaystyle\sigma_{\min}^{+}(\sum\nolimits_{i=1}^{n}z_{t}^{(i)}(z_{t}^{(i)})^{\top})=\; σmin+(i=1nM^tht(i)(ht(i))M^t)\displaystyle\sigma_{\min}^{+}(\sum\nolimits_{i=1}^{n}\hat{M}_{t}h_{t}^{(i)}(h_{t}^{(i)})^{\top}\hat{M}_{t}^{\top})
\displaystyle\geq\; (σmin+(M^t))2σmin(i=1nht(i)(ht(i)))(\sigma_{\min}^{+}(\hat{M}_{t}))^{2}\sigma_{\min}\big{(}\sum\nolimits_{i=1}^{n}h_{t}^{(i)}(h_{t}^{(i)})^{\top}\big{)}
(i)\displaystyle\overset{(i)}{\geq}\; θ2Ω(n)=Ω(θ2n),\theta^{2}\cdot\Omega(n)=\Omega(\theta^{2}n),

where (i)(i) holds with probability at least 1p1-p, as long as nalog(1/p)n\geq a\log(1/p) for some absolute constant a>0a>0. This concentration is standard in analyzing linear regression (Wainwright, 2019).

For tT\ell\leq t\leq T, kmk\leq m. By Proposition 2, Cov(zt)\mathrm{Cov}(z^{\ast}_{t}) has full rank, σmin(Cov(zt))=Ω(ν2)\sigma_{\min}(\mathrm{Cov}(z^{\ast}_{t}))=\Omega(\nu^{2}) and σmin(Mt)=Ω(νt1/2)\sigma_{\min}(M^{\ast}_{t})=\Omega(\nu t^{-1/2}). Recall that for tT\ell\leq t\leq T, we simply set M^t=M~t\hat{M}_{t}=\widetilde{M}_{t}. Then, by Lemma 3, there exists a dx×dxd_{x}\times d_{x} orthogonal matrix StS_{t}, such that

M^tStMtF=\displaystyle\|\hat{M}_{t}-S_{t}M^{\ast}_{t}\|_{F}=\; 𝒪(σmin1(Mt))M~tM~tNtF\displaystyle\mathcal{O}(\sigma_{\min}^{-1}(M^{\ast}_{t}))\|\widetilde{M}_{t}^{\top}\widetilde{M}_{t}-N^{\ast}_{t}\|_{F}
=\displaystyle=\; 𝒪(ν1mt5/2(dy+du)2dx1/2n1/2log1/2(1/p)),\displaystyle\mathcal{O}(\nu^{-1}mt^{5/2}(d_{y}+d_{u})^{2}d_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p)),

which is also an upper bound on M^tStMt2\|\hat{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}. As a result,

z^tStzt2\displaystyle\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\|_{2}\leq\; M^tStMt2ht,\displaystyle\|\hat{M}_{t}-S_{t}M^{\ast}_{t}\|_{2}\|h_{t}\|,

from which we have that z^tStzt2\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\|_{2} is sub-Gaussian with its mean and sub-Gaussian norm bounded by

𝒪(ν1mt3(dy+du)5/2dx1/2n1/2log1/2(1/p)).\displaystyle\mathcal{O}(\nu^{-1}mt^{3}(d_{y}+d_{u})^{5/2}d_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p)).

Consider

i=1nz^t(i)(z^t(i))St(zt)(i)((zt)(i))St2\big{\|}\sum\nolimits_{i=1}^{n}\hat{z}_{t}^{(i)}(\hat{z}_{t}^{(i)})^{\top}-S_{t}(z^{\ast}_{t})^{(i)}((z^{\ast}_{t})^{(i)})^{\top}S_{t}^{\top}\big{\|}_{2}\leq\; i=1n(z^t(i)+(zt)(i))z^t(i)St(zt)(i)\sum\nolimits_{i=1}^{n}\big{(}\big{\|}\hat{z}_{t}^{(i)}\big{\|}+\big{\|}(z^{\ast}_{t})^{(i)}\big{\|}\big{)}\cdot\big{\|}\hat{z}_{t}^{(i)}-S_{t}(z^{\ast}_{t})^{(i)}\big{\|}
(i)\displaystyle\overset{(i)}{\leq}\; 𝒪(ndx1/2log1/2(n/p)maxiz^t(i)St(zt)(i)),\mathcal{O}\big{(}nd_{x}^{1/2}\log^{1/2}(n/p)\cdot\max_{i}\big{\|}\hat{z}_{t}^{(i)}-S_{t}(z^{\ast}_{t})^{(i)}\big{\|}\big{)},

where (i)(i) holds with probability 1p1-p. Hence, there exists an absolute constant c>0c>0, such that if ncν6m2T6(dy+du)5dx2log2(n/p)n\geq c\nu^{-6}m^{2}T^{6}(d_{y}+d_{u})^{5}d_{x}^{2}\log^{2}(n/p),

σmin(i=1nz^t(i)(z^t(i)))\displaystyle\sigma_{\min}\big{(}\sum\nolimits_{i=1}^{n}\hat{z}_{t}^{(i)}(\hat{z}_{t}^{(i)})^{\top}\big{)}\geq\; σmin(i=1n(zt)(i)((zt)(i)))\displaystyle\sigma_{\min}\big{(}\sum\nolimits_{i=1}^{n}(z^{\ast}_{t})^{(i)}((z^{\ast}_{t})^{(i)})^{\top}\big{)}
i=1nz^t(i)(z^t(i))St(zt)(i)((zt)(i))St\displaystyle\quad-\big{\|}\sum\nolimits_{i=1}^{n}\hat{z}_{t}^{(i)}(\hat{z}_{t}^{(i)})^{\top}-S_{t}(z^{\ast}_{t})^{(i)}((z^{\ast}_{t})^{(i)})^{\top}S_{t}^{\top}\big{\|}
\displaystyle\geq\; σmin(i=1n(zt)(i)((zt)(i)))/2=Ω(ν2n).\displaystyle\sigma_{\min}\big{(}\sum\nolimits_{i=1}^{n}(z^{\ast}_{t})^{(i)}((z^{\ast}_{t})^{(i)})^{\top}\big{)}/2=\Omega(\nu^{2}n).

This is needed for the analysis of latent system identification in the next step.

Recovery of the latent dynamics. The latent system (At,Bt)t=0T1(A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{T-1} is identified in Algorithm 3, using (z^t(i))i=1,t=0n,T(\hat{z}_{t}^{(i)})_{i=1,t=0}^{n,T} produced by Algorithm 2, by ordinary least squares. Recall from Proposition 1 that zt+1=Atzt+Btut+Lt+1it+1z^{\ast}_{t+1}=A^{\ast}_{t}z^{\ast}_{t}+B^{\ast}_{t}u_{t}+L_{t+1}i_{t+1}. With the transforms on ztz^{\ast}_{t} and zt+1z^{\ast}_{t+1}, we have

St+1zt+1=(St+1AtSt)Stzt+St+1Btut+St+1Lt+1it+1,\displaystyle S_{t+1}z^{\ast}_{t+1}=(S_{t+1}A^{\ast}_{t}S_{t}^{\top})S_{t}z^{\ast}_{t}+S_{t+1}B^{\ast}_{t}u_{t}+S_{t+1}L_{t+1}i_{t+1},

and (zt)Qtzt=(Stzt)StQtStStzt(z^{\ast}_{t})^{\top}Q^{\ast}_{t}z^{\ast}_{t}=(S_{t}z^{\ast}_{t})^{\top}S_{t}Q^{\ast}_{t}S_{t}^{\top}S_{t}z^{\ast}_{t}. Under control ut𝒩(0,σu2Idu)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I_{d_{u}}) for 0tT10\leq t\leq T-1, we know that ztz^{\ast}_{t} is a zero-mean Gaussian random vector; so is StztS_{t}z^{\ast}_{t}. Let Σt=𝔼[Stzt(zt)St]\Sigma^{\ast}_{t}=\mathbb{E}[S_{t}z^{\ast}_{t}(z^{\ast}_{t})^{\top}S_{t}^{\top}] be its covariance.

For 0t10\leq t\leq\ell-1, we need a bound for the estimation error of rank-deficient and noisy linear regression. By Lemma 5, there exists an absolute constant c>0c>0, such that as long as nc(dx+du+log(1/p))n\geq c(d_{x}+d_{u}+\log(1/p)), with probability at least 1p1-p,

([A^t,B^t][St+1AtSt,St+1Bt])diag((Σt)1/2,σuIdu)2\displaystyle\|([\hat{A}_{t},\hat{B}_{t}]-[S_{t+1}A^{\ast}_{t}S_{t}^{\top},S_{t+1}B^{\ast}_{t}])\textnormal{diag}((\Sigma^{\ast}_{t})^{1/2},\sigma_{u}I_{d_{u}})\|_{2}
=\displaystyle=\; 𝒪((1+β1)z^tStztlog1/2(n/p)+n1/2(dx+du+log(1/p))1/2),\mathcal{O}\big{(}(1+\beta^{-1})\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\|\log^{1/2}(n/p)+n^{-1/2}(d_{x}+d_{u}+\log(1/p))^{1/2}\big{)},

which implies

(A^tSt+1AtSt)(Σt)1/22=𝒪((1+β1)(t(dy+du))3/2dx3/4n1/4log3/4(n/p)),\displaystyle\|(\hat{A}_{t}-S_{t+1}A^{\ast}_{t}S_{t}^{\top})(\Sigma^{\ast}_{t})^{1/2}\|_{2}=\mathcal{O}((1+\beta^{-1})(t(d_{y}+d_{u}))^{3/2}d_{x}^{3/4}n^{-1/4}\log^{3/4}(n/p)),

which is also a bound on B^tSt+1Bt2\|\hat{B}_{t}-S_{t+1}B^{\ast}_{t}\|_{2}.

For tT1\ell\leq t\leq T-1, by Lemma 6, with probability at least 1p1-p,

[A^t,B^t][St+1AtSt,St+1Bt]2\displaystyle\|[\hat{A}_{t},\hat{B}_{t}]-[S_{t+1}A^{\ast}_{t}S_{t}^{\top},S_{t+1}B^{\ast}_{t}]\|_{2}
=\displaystyle=\; 𝒪((ν1+σu1)(z^tStztlog1/2(n/p)+n1/2(dx+du+log(1/p))1/2)),\displaystyle\mathcal{O}\big{(}(\nu^{-1}+\sigma_{u}^{-1})\big{(}\|\hat{z}_{t}-S_{t}z^{\ast}_{t}\|\log^{1/2}(n/p)+n^{-1/2}(d_{x}+d_{u}+\log(1/p))^{1/2}\big{)}\big{)},

which implies

A^tSt+1AtSt2=𝒪((1+ν2)(mt3(dy+du)5/2dx1/2log1/2(1/p))n1/2),\displaystyle\|\hat{A}_{t}-S_{t+1}A^{\ast}_{t}S_{t}^{\top}\|_{2}=\mathcal{O}\Big{(}(1+\nu^{-2})\big{(}mt^{3}(d_{y}+d_{u})^{5/2}d_{x}^{1/2}\log^{1/2}(1/p)\big{)}n^{-1/2}\Big{)},

which is also a bound on B^tSt+1Bt2\|\hat{B}_{t}-S_{t+1}B^{\ast}_{t}\|_{2}. We note that since the observation of utu_{t} is exact, a tighter bound on B^tSt+1Bt2\|\hat{B}_{t}-S_{t+1}B^{\ast}_{t}\|_{2} is possible; we keep the current bound since it does not affect the final bounds in Theorem 1.

By Assumption 3, (Qt)t=01(Q^{\ast}_{t})_{t=0}^{\ell-1} and QTQ^{\ast}_{T} are positive definite; they are identity matrices under the normalized parameterization. For (Qt)t=T1(Q^{\ast}_{t})_{t=\ell}^{T-1}, which may not be positive definite, we identify them in SysId (Algorithm 3) by (3.5). By Lemma 2,

Q~tStQtStF=\displaystyle\|\widetilde{Q}_{t}-S_{t}Q^{\ast}_{t}S_{t}^{\top}\|_{F}=\; 𝒪(mdx2log(n/p)mt3(dy+du)5/2dx1/2n1/2log1/2(1/p)\displaystyle\mathcal{O}(md_{x}^{2}\log(n/p)\cdot mt^{3}(d_{y}+d_{u})^{5/2}d_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p)
+dx2mdx1/2n1/2log1/2(1/p))\displaystyle\quad+d_{x}^{2}md_{x}^{1/2}n^{-1/2}\log^{1/2}(1/p))
=\displaystyle=\; 𝒪(m2t3(dy+du)5/2dx5/2n1/2log1/2(1/p)),\displaystyle\mathcal{O}(m^{2}t^{3}(d_{y}+d_{u})^{5/2}d_{x}^{5/2}n^{-1/2}\log^{1/2}(1/p)),

where we have used ν=Ω(1)\nu=\Omega(1). By construction, Q^t\hat{Q}_{t} is the projection of Q~t\widetilde{Q}_{t} onto the positive semidefinite cone with respect to the Frobenius norm (Boyd and Vandenberghe, 2004, Section 8.1.1). Since StQtSt0S_{t}Q^{\ast}_{t}S_{t}^{\top}\succcurlyeq 0,

Q^tStQtStFQ~tStQtStF.\displaystyle\|\hat{Q}_{t}-S_{t}Q^{\ast}_{t}S_{t}^{\top}\|_{F}\leq\|\widetilde{Q}_{t}-S_{t}Q^{\ast}_{t}S_{t}^{\top}\|_{F}.
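This projection step is a one-line eigendecomposition, clipping negative eigenvalues at zero; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone in Frobenius norm."""
    S = (M + M.T) / 2              # symmetrize against numerical asymmetry
    w, V = np.linalg.eigh(S)       # real eigendecomposition
    return (V * np.clip(w, 0.0, None)) @ V.T   # V diag(max(w,0)) V^T

# the projection never increases the Frobenius distance to any PSD matrix
M = np.diag([1.0, -2.0])
P = project_psd(M)
assert np.allclose(P, np.diag([1.0, 0.0]))
```

The displayed inequality above is exactly the non-expansiveness of this projection toward any point of the PSD cone.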

Certainty equivalent linear quadratic control. The last step of Algorithm 1 is to compute the optimal controller in the estimated system ((A^t,B^t,Rt)t=0T1,(Q^t)t=0T)((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T}) by RDE (2.4).
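As a concrete illustration, a finite-horizon backward Riccati recursion of this form can be sketched in a few lines of numpy (the function and the scalar sanity check are ours, not the paper's notation):

```python
import numpy as np

def riccati_backward(As, Bs, Qs, Rs, Q_T):
    """Backward Riccati recursion; returns gains K_t for u_t = -K_t z_t."""
    P = Q_T
    gains = []
    for A, B, Q, R in zip(reversed(As), reversed(Bs), reversed(Qs), reversed(Rs)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain at step t
        P = Q + A.T @ P @ A - A.T @ P @ B @ K              # cost-to-go update
        gains.append(K)
    return gains[::-1], P

# scalar sanity check: A = B = Q = R = 1 over one step
I = np.eye(1)
Ks, P0 = riccati_backward([I], [I], [I], [I], I)
assert np.allclose(Ks[0], 0.5 * I) and np.allclose(P0, 1.5 * I)
```

In the notation above, the recursion would be run on the estimated matrices (\hat{A}_{t},\hat{B}_{t},\hat{Q}_{t},R^{\ast}_{t}).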

RDE (2.4) proceeds backward. For tT1\ell\leq t\leq T-1, A^tSt+1AtSt2\|\hat{A}_{t}-S_{t+1}A^{\ast}_{t}S_{t}^{\top}\|_{2}, B^tSt+1Bt2\|\hat{B}_{t}-S_{t+1}B^{\ast}_{t}\|_{2} and Q^tStQtSt2\|\hat{Q}_{t}-S_{t}Q^{\ast}_{t}S_{t}^{\top}\|_{2} are all bounded by

\displaystyle\mathcal{O}(m^{2}T^{3}(d_{y}+d_{u})^{5/2}d_{x}^{5/2}n^{-1/2}\log^{1/2}(1/p)).

By Lemma 10 on certainty equivalent linear quadratic control, for npoly(T,dx,dy,du,log(1/p))n\geq\mathrm{poly}(T,d_{x},d_{y},d_{u},\log(1/p)), where hidden constants are dimension-free and depend polynomially on system parameters, the controller (K^t)t=T1(\hat{K}_{t})_{t=\ell}^{T-1} is ϵ\epsilon-optimal in system ((St+1AtSt1,St+1Bt,Rt)t=T1,(StQtSt)t=T)((S_{t+1}A^{\ast}_{t}S_{t}^{-1},S_{t+1}B^{\ast}_{t},R^{\ast}_{t})_{t=\ell}^{T-1},(S_{t}Q^{\ast}_{t}S_{t}^{\top})_{t=\ell}^{T}), for

ϵ=\displaystyle\epsilon= 𝒪((dxdu)(dy+du)5dx5m4T7log(1/p)n1),\displaystyle\mathcal{O}((d_{x}\wedge d_{u})(d_{y}+d_{u})^{5}d_{x}^{5}m^{4}T^{7}\log(1/p)n^{-1}), (4.1)

that is, if

ncm4(dy+du)5dx6T7log(1/p)ϵ1,\displaystyle n\geq cm^{4}(d_{y}+d_{u})^{5}d_{x}^{6}T^{7}\log(1/p)\epsilon^{-1},

for a dimension-free constant c>0c>0 that depends on system parameters.

For 0t10\leq t\leq\ell-1, (K^t)t=01(\hat{K}_{t})_{t=0}^{\ell-1} and (Kt)t=01(K^{\ast}_{t})_{t=0}^{\ell-1} are optimal controllers in the \ell-step systems with terminal costs given by P^\hat{P}_{\ell} and PP^{\ast}_{\ell}, respectively, where P^\hat{P}_{\ell} and PP^{\ast}_{\ell} are the solutions to RDE (2.4) in systems ((A^t,B^t,Rt)t=0T1,(Q^t)t=0T)((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T}) and ((St+1AtSt1,St+1Bt,Rt)t=0T1,(StQtSt)t=0T)((S_{t+1}A^{\ast}_{t}S_{t}^{-1},S_{t+1}B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(S_{t}Q^{\ast}_{t}S_{t}^{\top})_{t=0}^{T}) at step \ell, respectively. By Lemma 10,

\displaystyle\|\hat{P}_{\ell}-P^{\ast}_{\ell}\|_{2}=\mathcal{O}(m^{2}T^{3}(d_{y}+d_{u})^{5/2}d_{x}^{5/2}n^{-1/2}\log^{1/2}(1/p)),

which is dominated by the n^{-1/4} rate of \|(\hat{A}_{t}-S_{t+1}A^{\ast}_{t}S_{t}^{\top})(\Sigma^{\ast}_{t})^{1/2}\|_{2} and \|\hat{B}_{t}-S_{t+1}B^{\ast}_{t}\|_{2} for 0\leq t\leq\ell-1. Then, by Lemma 9, (\hat{K}_{t})_{t=0}^{\ell-1} is \epsilon-optimal in system ((S_{t+1}A^{\ast}_{t}S_{t}^{-1},S_{t+1}B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{\ell-1},(S_{t}Q^{\ast}_{t}S_{t}^{\top})_{t=0}^{\ell-1}) with terminal cost matrix P^{\ast}_{\ell}, for

\displaystyle\epsilon=\; \mathcal{O}(a^{\ell}d_{x}(\ell(d_{y}+d_{u}))^{3/2}d_{x}^{3/4}n^{-1/4}\log^{3/4}(n/p)+d_{x}(d_{x}+d_{u}+\log(1/p))^{1/2}n^{-1/2})
=\displaystyle=\; 𝒪(dx7/4((dy+du))3/2an1/4log3/4(n/p)),\displaystyle\mathcal{O}(d_{x}^{7/4}(\ell(d_{y}+d_{u}))^{3/2}a^{\ell}n^{-1/4}\log^{3/4}(n/p)), (4.2)

where the dimension-free constant a>0 depends on the system parameters; that is, if

na0dx7(dy+du)66a1log3(n/p)ϵ4,\displaystyle n\geq a_{0}d_{x}^{7}(d_{y}+d_{u})^{6}\ell^{6}a_{1}^{\ell}\log^{3}(n/p)\epsilon^{-4},

for some dimension-free constants a_{0},a_{1}>0 that depend on system parameters. The bound (4.2) is worse than (4.1) because, for 0\leq t\leq\ell-1, z^{\ast}_{t} does not have full-rank covariance, so S_{t+1}A^{\ast}_{t}S_{t}^{\top} is only partially recovered. Even with arbitrarily many samples, linear regression provides no guarantee that \|\hat{A}_{t}-S_{t+1}A^{\ast}_{t}S_{t}^{\top}\|_{2} is small; we do not know the controllability of (\hat{A}_{t},\hat{B}_{t})_{t=0}^{\ell-1}, nor even its stabilizability. ∎

Next, we provide several key technical lemmas regarding the regression and factorization procedures used in our algorithm.

4.2 Quadratic regression bound

As noted in Section 3.1, the quadratic regression can be converted to a linear regression using \|h\|_{P}^{2}=\left\langle hh^{\top},P\right\rangle_{F}=\left\langle\mathrm{svec}(hh^{\top}),\mathrm{svec}(P)\right\rangle. To analyze this linear regression with an intercept, we need the following lemma; a similar lemma without the intercept was proved in (Jadbabaie et al., 2021, Proposition).
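The svec identity underlying this conversion is easy to check numerically; a minimal sketch, assuming the standard convention that off-diagonal entries are scaled by \sqrt{2} so that svec preserves inner products:

```python
import numpy as np

def svec(X):
    """Stack the upper triangle of symmetric X, off-diagonals scaled by sqrt(2)."""
    d = X.shape[0]
    return np.array([X[i, j] * (1.0 if i == j else np.sqrt(2.0))
                     for i in range(d) for j in range(i, d)])

h = np.array([1.0, 2.0])
P = np.array([[2.0, 1.0], [1.0, 3.0]])
# ||h||_P^2 = <svec(h h^T), svec(P)>
assert np.isclose(h @ P @ h, svec(np.outer(h, h)) @ svec(P))
```

With this encoding, the quadratic regression (4.3) becomes an ordinary linear regression in svec(N) and b.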

Lemma 1.

Let (h_{0}^{(i)})_{i=1}^{n} be n independent observations of the d-dimensional random vector h_{0}\sim\mathcal{N}(0,I_{d}). Let f_{0}^{(i)}:=\mathrm{svec}(h_{0}^{(i)}(h_{0}^{(i)})^{\top}) and \overline{f}_{0}^{(i)}:=[f_{0}^{(i)};1]. There exists an absolute constant a>0, such that as long as n\geq ad^{4}\log(ad^{2}/p), with probability at least 1-p,

σmin(i=1nf¯0(i)(f¯0(i)))=Ω(d1n).\displaystyle\sigma_{\min}\Big{(}\sum\nolimits_{i=1}^{n}\overline{f}_{0}^{(i)}(\overline{f}_{0}^{(i)})^{\top}\Big{)}=\Omega(d^{-1}n).
Proof.

Let f_{0}=\mathrm{svec}(h_{0}h_{0}^{\top}) and \overline{f}_{0}=[f_{0};1]. We first show that \lambda_{\min}(\mathbb{E}[\overline{f}_{0}\,\overline{f}_{0}^{\top}]) is lower bounded. Consider

\displaystyle\overline{\Sigma}=\mathbb{E}[\overline{f}_{0}\,\overline{f}_{0}^{\top}]=\begin{bmatrix}2I_{d(d-1)/2}&0&0\\ 0&2I_{d}+1_{d}1_{d}^{\top}&1_{d}\\ 0&1_{d}^{\top}&1\end{bmatrix}.

To lower bound its smallest eigenvalue, let us compute its inverse. By the Sherman-Morrison formula,

(2Id+1d1d)1=(2Id)1(2Id)11d1d(2Id)11+1(2Id)11=12Id12d+41d1d.\displaystyle(2I_{d}+1_{d}1_{d}^{\top})^{-1}=(2I_{d})^{-1}-\frac{(2I_{d})^{-1}1_{d}1_{d}^{\top}(2I_{d})^{-1}}{1+1^{\top}(2I_{d})^{-1}1}=\frac{1}{2}I_{d}-\frac{1}{2d+4}1_{d}1_{d}^{\top}.

Then, by the inverse of a block matrix,

[2Id+1d1d1d1d1]1=12[Id1d1dd+2].\displaystyle\begin{bmatrix}2I_{d}+1_{d}1_{d}^{\top}&1_{d}\\ 1_{d}^{\top}&1\end{bmatrix}^{-1}=\frac{1}{2}\begin{bmatrix}I_{d}&-1_{d}\\ -1_{d}^{\top}&d+2\end{bmatrix}.

Therefore,

Σ¯12\displaystyle\|\overline{\Sigma}^{-1}\|_{2}\leq\; 12(Id(d1)/22+Id2+21d2+d+22)\displaystyle\frac{1}{2}(\|I_{d(d-1)/2}\|_{2}+\|I_{d}\|_{2}+2\|-1_{d}\|_{2}+\|d+2\|_{2})
=\displaystyle=\; 12d+d1/2+24d.\displaystyle\frac{1}{2}d+d^{1/2}+2\leq 4d.

Hence,

λmin(Σ¯)=Σ¯121(4d)1=Ω(d1).\displaystyle\lambda_{\min}(\overline{\Sigma})=\|\overline{\Sigma}^{-1}\|_{2}^{-1}\geq(4d)^{-1}=\Omega(d^{-1}).

Then, by concentration arguments similar to those in (Jadbabaie et al., 2021, Appendix A.1), we can show that with probability at least 1-p,

\displaystyle\sigma_{\min}\Big{(}\sum\nolimits_{i=1}^{n}\overline{f}_{0}^{(i)}(\overline{f}_{0}^{(i)})^{\top}\Big{)}=\Omega(d^{-1}n). ∎
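Both closed-form inverses derived in this proof can be verified numerically for a concrete d; a short sketch:

```python
import numpy as np

d = 5
I, ones = np.eye(d), np.ones((d, 1))
# Sherman-Morrison: (2I + 1 1^T)^{-1} = I/2 - 1 1^T / (2d + 4)
inv_sm = I / 2 - (ones @ ones.T) / (2 * d + 4)
assert np.allclose(inv_sm @ (2 * I + ones @ ones.T), I)
# block inverse of [[2I + 1 1^T, 1], [1^T, 1]] equals (1/2)[[I, -1], [-1^T, d+2]]
M = np.block([[2 * I + ones @ ones.T, ones], [ones.T, np.ones((1, 1))]])
M_inv = 0.5 * np.block([[I, -ones], [-ones.T, (d + 2) * np.ones((1, 1))]])
assert np.allclose(M @ M_inv, np.eye(d + 1))
```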

Lemma 1 lower bounds the minimum singular value of a matrix whose entries involve fourth-order products of the entries of standard Gaussian random vectors. The following lemma is the main result in Section 4.2.

Lemma 2 (Quadratic regression).

Define random variable c:=(h^{\ast})^{\top}N^{\ast}h^{\ast}+b^{\ast}+e, where h^{\ast}\sim\mathcal{N}(0,\Sigma_{\ast}) is a d-dimensional Gaussian random vector, N^{\ast}\in\mathbb{R}^{d\times d} is a positive semidefinite matrix, b^{\ast}\in\mathbb{R} is a constant and e is a zero-mean subexponential random variable with \|e\|_{\psi_{1}}\leq E. Assume that \|N^{\ast}\|_{2} and \|\Sigma_{\ast}\|_{2} are \mathcal{O}(1). Let \sigma_{\min}(\Sigma_{\ast}^{1/2})\geq\beta>0. Assume that \beta=\Omega(1). Define h:=h^{\ast}+\delta, where the noise vector \delta can be correlated with h^{\ast} and its \ell_{2} norm is sub-Gaussian with \mathbb{E}[\|\delta\|]\leq\epsilon, \|\|\delta\|\|_{\psi_{2}}\leq\epsilon. Assume that \epsilon\leq\min((d\|\Sigma_{\ast}\|)^{1/2},a(\beta\wedge 1)d^{-3/2}\|\Sigma_{\ast}\|_{2}^{-1/2}(\log(n/p))^{-1}) for some absolute constant a>0. Suppose we get n observations h^{(i)} and c^{(i)} of h and c, where ((h^{\ast})^{(i)})_{i=1}^{n} are independent and (\delta^{(i)})_{i=1}^{n} can be correlated. Consider the regression problem

N^,b^argminN=N,bi=1n(c(i)h(i)N2b)2.\displaystyle\hat{N},\hat{b}\in\operatorname*{argmin}_{N=N^{\top},b}\sum\nolimits_{i=1}^{n}(c^{(i)}-\|h^{(i)}\|_{N}^{2}-b)^{2}. (4.3)

There exists an absolute constant a0>0a_{0}>0, such that as long as na0d4log(a0d2/p)log(1/p)n\geq a_{0}d^{4}\log(a_{0}d^{2}/p)\log(1/p), with probability at least 1p1-p, N^NF\|\hat{N}-N^{\ast}\|_{F} and |b^b||\hat{b}-b^{\ast}| are bounded by

𝒪((d(1+Ed1/2)log(n/p)+b)ϵ+d2En1/2log1/2(1/p)).\displaystyle\mathcal{O}((d(1+Ed^{1/2})\log(n/p)+\|b^{\ast}\|)\epsilon+d^{2}En^{-1/2}\log^{1/2}(1/p)).
Proof.

Regression (4.3) can be written as

minsvec(N),bi=1n(c(i)(svec(h(i)(h(i)))svec(N)b)2.\displaystyle\min_{\mathrm{svec}(N),b}\sum\nolimits_{i=1}^{n}\big{(}c^{(i)}-(\mathrm{svec}(h^{(i)}(h^{(i)})^{\top})^{\top}\mathrm{svec}(N)-b\big{)}^{2}.

Let f(i):=svec(h(i)(h(i)))f^{(i)}:=\mathrm{svec}(h^{(i)}(h^{(i)})^{\top}) denote the covariates and f¯(i)=[f(i);1]\overline{f}^{(i)}=[f^{(i)};1] denote the extended covariates. Define (f)(i)(f^{\ast})^{(i)} and (f¯)(i)(\overline{f}^{\ast})^{(i)} similarly by replacing h(i)h^{(i)} with (h)(i)(h^{\ast})^{(i)}. Then, regression (4.3) can be written as

minsvec(N),bi=1n(c(i)(f(i))svec(N)b)2.\displaystyle\min_{\mathrm{svec}(N),b}\sum\nolimits_{i=1}^{n}\big{(}c^{(i)}-(f^{(i)})^{\top}\mathrm{svec}(N)-b\big{)}^{2}. (4.4)

Let F¯:=[f¯(1),,f¯(n)]\overline{F}:=[\overline{f}^{(1)},\ldots,\overline{f}^{(n)}]^{\top} be an n×d(d+3)2n\times\frac{d(d+3)}{2} matrix whose iith row is (f¯(i))(\overline{f}^{(i)})^{\top}. Define F¯\overline{F}^{\ast} similarly by replacing f¯(i)\overline{f}^{(i)} with (f¯)(i)(\overline{f}^{\ast})^{(i)}. Solving linear regression (4.4) gives

F¯F¯[svec(N^);b^]=i=1nf¯(i)c(i).\displaystyle\overline{F}^{\top}\overline{F}[\mathrm{svec}(\hat{N});\hat{b}]=\sum\nolimits_{i=1}^{n}\overline{f}^{(i)}c^{(i)}.

Substituting c(i)=((f¯)(i))[svec(N);b]+e(i)c^{(i)}=((\overline{f}^{\ast})^{(i)})^{\top}[\mathrm{svec}(N^{\ast});b^{\ast}]+e^{(i)} into the above equation yields

F¯F¯[svec(N^);b^]=F¯F¯[svec(N);b]+F¯ξ,\displaystyle\overline{F}^{\top}\overline{F}[\mathrm{svec}(\hat{N});\hat{b}]=\overline{F}^{\top}\overline{F}^{\ast}[\mathrm{svec}(N^{\ast});b^{\ast}]+\overline{F}^{\top}\xi, (4.5)

where ξ\xi denotes the vector whose iith element is e(i)e^{(i)}. Rearranging the terms, we have

F¯F¯[svec(N^N);b^b]=F¯(F¯F¯)[svec(N);b]+F¯ξ.\overline{F}^{\top}\overline{F}[\mathrm{svec}(\hat{N}-N^{\ast});\hat{b}-b^{\ast}]=\overline{F}^{\top}(\overline{F}^{\ast}-\overline{F})[\mathrm{svec}(N^{\ast});b^{\ast}]+\overline{F}^{\top}\xi. (4.6)

Next, we show that \overline{F}^{\top}\overline{F} is invertible with high probability. We can represent h^{\ast} by \Sigma_{\ast}^{1/2}h_{0}, where h_{0}\sim\mathcal{N}(0,I_{d}) is a d-dimensional standard Gaussian random vector. Correspondingly, an independent observation (h^{\ast})^{(i)} can be expressed as \Sigma_{\ast}^{1/2}h_{0}^{(i)}, where h_{0}^{(i)} is an independent observation of h_{0}. It follows that h^{\ast}(h^{\ast})^{\top}=\Sigma_{\ast}^{1/2}h_{0}h_{0}^{\top}\Sigma_{\ast}^{1/2} and (h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}=\Sigma_{\ast}^{1/2}h_{0}^{(i)}(h_{0}^{(i)})^{\top}\Sigma_{\ast}^{1/2}. Define f_{0}:=\mathrm{svec}(h_{0}h_{0}^{\top}), f_{0}^{(i)}:=\mathrm{svec}(h_{0}^{(i)}(h_{0}^{(i)})^{\top}), and let F_{0}:=[f_{0}^{(1)},\ldots,f_{0}^{(n)}]^{\top} be the n\times\frac{d(d+1)}{2} matrix whose ith row is (f^{(i)}_{0})^{\top}. Define \overline{f}_{0}, \overline{f}_{0}^{(i)} and \overline{F}_{0} as the extended counterparts. Then,

\displaystyle f^{\ast}=\mathrm{svec}(h^{\ast}(h^{\ast})^{\top})=\; \mathrm{svec}(\Sigma_{\ast}^{1/2}h_{0}h_{0}^{\top}\Sigma_{\ast}^{1/2})=(\Sigma_{\ast}^{1/2}\otimes_{s}\Sigma_{\ast}^{1/2})\mathrm{svec}(h_{0}h_{0}^{\top})=\Phi^{\ast}f_{0},

where Φ:=Σ1/2sΣ1/2\Phi^{\ast}:=\Sigma_{\ast}^{1/2}\otimes_{s}\Sigma_{\ast}^{1/2} is a d(d+1)2×d(d+1)2\frac{d(d+1)}{2}\times\frac{d(d+1)}{2} matrix. Then, F=F0(Φ)F^{\ast}=F_{0}(\Phi^{\ast})^{\top}. By the properties of the symmetric Kronecker product (Schacke, 2004),

σmin(Φ)=σmin(Σ1/2)2=σmin(Σ)=β>0.\displaystyle\sigma_{\min}(\Phi^{\ast})=\sigma_{\min}(\Sigma_{\ast}^{1/2})^{2}=\sigma_{\min}(\Sigma_{\ast})=\beta>0.
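The property of the symmetric Kronecker product invoked here can be checked numerically; a sketch that builds M\otimes_{s}M column by column from the svec basis (this explicit construction is ours, not from Schacke, 2004):

```python
import numpy as np

def svec(X):
    d = X.shape[0]
    return np.array([X[i, j] * (1.0 if i == j else np.sqrt(2.0))
                     for i in range(d) for j in range(i, d)])

def sym_kron(M):
    """Matrix of the map svec(X) -> svec(M X M^T) on symmetric X, i.e. M (x)_s M."""
    d = M.shape[0]
    cols = []
    for i in range(d):
        for j in range(i, d):
            E = np.zeros((d, d))                      # svec basis element for (i, j)
            E[i, j] = E[j, i] = 1.0 if i == j else 1.0 / np.sqrt(2.0)
            cols.append(svec(M @ E @ M.T))
    return np.stack(cols, axis=1)

M = np.diag([2.0, 3.0])                  # plays the role of Sigma^{1/2}
Phi = sym_kron(M)
svals = np.linalg.svd(Phi, compute_uv=False)
# singular values of M (x)_s M are pairwise products of eigenvalues of M: {4, 6, 9}
assert np.isclose(svals.min(), 4.0) and np.isclose(svals.max(), 9.0)
```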

By Lemma 1, there exist absolute constants a_{0},a_{1}>0, such that if n\geq a_{0}d^{4}\log(a_{0}d^{2}/p), with probability at least 1-p, \lambda_{\min}(\overline{F}_{0}^{\top}\overline{F}_{0})\geq a_{1}d^{-1}n. Since \overline{f}^{\ast}=\textnormal{diag}(\Phi^{\ast},1)\overline{f}_{0} and \overline{F}^{\ast}=\overline{F}_{0}\textnormal{diag}((\Phi^{\ast})^{\top},1),

λmin((F¯)F¯)\displaystyle\lambda_{\min}((\overline{F}^{\ast})^{\top}\overline{F}^{\ast})\geq\; σmin(diag(Φ,1))2a1d1n=(β21)a1d1n.\displaystyle\sigma_{\min}(\textnormal{diag}(\Phi^{\ast},1))^{2}a_{1}d^{-1}n=(\beta^{2}\wedge 1)a_{1}d^{-1}n.

By Weyl’s inequality,

|σmin(F¯)σmin(F¯)|F¯F¯2=FF2.\displaystyle|\sigma_{\min}(\overline{F})-\sigma_{\min}(\overline{F}^{\ast})|\leq\|\overline{F}-\overline{F}^{\ast}\|_{2}=\|F-F^{\ast}\|_{2}.

Hence, we want to bound FF2\|F^{\ast}-F\|_{2}, which satisfies

FF22FFF2=i=1n(h)(i)((h)(i))h(i)(h(i))F2.\displaystyle\|F^{\ast}-F\|_{2}^{2}\leq\|F^{\ast}-F\|_{F}^{2}=\sum\nolimits_{i=1}^{n}\|(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top}\|_{F}^{2}.

Since h(h)hhh^{\ast}(h^{\ast})^{\top}-hh^{\top} has at most rank two, we have

\displaystyle\|h^{\ast}(h^{\ast})^{\top}-hh^{\top}\|_{F}\leq\; \sqrt{2}\|h^{\ast}(h^{\ast})^{\top}-hh^{\top}\|_{2}
\displaystyle=\; \sqrt{2}\|h^{\ast}(h^{\ast}-h)^{\top}+(h^{\ast}-h)h^{\top}\|_{2}
\displaystyle\leq\; \sqrt{2}(\|h^{\ast}\|+\|h\|)\|\delta\|.

Since h𝒩(0,Σ)h^{\ast}\sim\mathcal{N}(0,\Sigma_{\ast}), h\|h^{\ast}\| is sub-Gaussian with its mean and sub-Gaussian norm bounded by 𝒪((dΣ)1/2)\mathcal{O}((d\|\Sigma_{\ast}\|)^{1/2}). Since δ\|\delta\| is sub-Gaussian with its mean and sub-Gaussian norm bounded by ϵ(dΣ)1/2\epsilon\leq(d\|\Sigma_{\ast}\|)^{1/2}, we conclude that h(h)hhF\|h^{\ast}(h^{\ast})^{\top}-hh^{\top}\|_{F} is subexponential with its mean and subexponential norm bounded by 𝒪(ϵ(dΣ)1/2)\mathcal{O}(\epsilon(d\|\Sigma_{\ast}\|)^{1/2}). Hence, with probability at least 1p1-p,

h(h)hhF2=𝒪(ϵ2dΣlog2(n/p)).\displaystyle\|h^{\ast}(h^{\ast})^{\top}-hh^{\top}\|_{F}^{2}=\mathcal{O}(\epsilon^{2}d\|\Sigma_{\ast}\|\log^{2}(n/p)).

Therefore,

FFF2=\displaystyle\|F^{\ast}-F\|_{F}^{2}=\; i=1n(h)(i)((h)(i))h(i)(h(i))F2=𝒪(ϵ2dΣ2nlog2(n/p)).\displaystyle\sum\nolimits_{i=1}^{n}\|(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top}\|_{F}^{2}=\mathcal{O}(\epsilon^{2}d\|\Sigma_{\ast}\|_{2}n\log^{2}(n/p)).

It follows that

FF2=𝒪(ϵ(dΣ2n)1/2log(n/p)).\displaystyle\|F^{\ast}-F\|_{2}=\mathcal{O}(\epsilon(d\|\Sigma_{\ast}\|_{2}n)^{1/2}\log(n/p)).

Hence, there exists some absolute constant a>0a>0, such that as long as

ϵa(β1)d3/2Σ21/2(log(n/p))1,\displaystyle\epsilon\leq a(\beta\wedge 1)d^{-3/2}\|\Sigma_{\ast}\|_{2}^{-1/2}(\log(n/p))^{-1},

we have

\displaystyle|\sigma_{\min}(\overline{F})-\sigma_{\min}(\overline{F}^{\ast})|\leq(\beta\wedge 1)a_{1}d^{-1/2}n^{1/2}/2.

It follows that

λmin(F¯F¯)=Ω((β21)d1n)=Ω(d1n).\displaystyle\lambda_{\min}(\overline{F}^{\top}\overline{F})=\Omega((\beta^{2}\wedge 1)d^{-1}n)=\Omega(d^{-1}n).

Now, we return to (4.6). By inverting F¯F¯\overline{F}^{\top}\overline{F}, we obtain

[svec(N^N);b^b]=\displaystyle\|[\mathrm{svec}(\hat{N}-N^{\ast});\hat{b}-b^{\ast}]\|= F¯(F¯F¯)[svec(N);b]+F¯ξ\displaystyle\|\overline{F}^{\dagger}(\overline{F}^{\ast}-\overline{F})[\mathrm{svec}(N^{\ast});b^{\ast}]+\overline{F}^{\dagger}\xi\| (4.7)
\displaystyle\leq F¯(F¯F¯)[svec(N);b](a)+F¯ξ(b).\displaystyle\underbrace{\|\overline{F}^{\dagger}(\overline{F}^{\ast}-\overline{F})[\mathrm{svec}(N^{\ast});b^{\ast}]\|}_{(a)}+\underbrace{\|\overline{F}^{\dagger}\xi\|}_{(b)}.

Term (a)(a) is upper bounded by

\displaystyle\sigma_{\min}(\overline{F})^{-1}\|(\overline{F}^{\ast}-\overline{F})[\mathrm{svec}(N^{\ast});b^{\ast}]\|=\; \mathcal{O}(\sigma_{\min}(\overline{F})^{-1})\|(F^{\ast}-F)\mathrm{svec}(N^{\ast})+1_{n}b^{\ast}\|
\displaystyle=\; \mathcal{O}(d^{1/2}n^{-1/2})(\|(F^{\ast}-F)\mathrm{svec}(N^{\ast})\|+\|1_{n}b^{\ast}\|).

Using arguments similar to those in (Mhammedi et al., 2020, Section B.2.13), we have

\displaystyle\|(F^{\ast}-F)\mathrm{svec}(N^{\ast})\|^{2}=\; \sum_{i=1}^{n}\left\langle\mathrm{svec}((h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top})-\mathrm{svec}(h^{(i)}(h^{(i)})^{\top}),\mathrm{svec}(N^{\ast})\right\rangle^{2}
\displaystyle\overset{(i)}{=}\; \sum\nolimits_{i=1}^{n}\left\langle(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top},N^{\ast}\right\rangle_{F}^{2}
(ii)\displaystyle\overset{(ii)}{\leq}\; N22i=1n(h)(i)((h)(i))h(i)(h(i))2\displaystyle\|N^{\ast}\|_{2}^{2}\sum\nolimits_{i=1}^{n}\|(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top}\|_{\ast}^{2}
(iii)\displaystyle\overset{(iii)}{\leq}\; 2N22i=1n(h)(i)((h)(i))h(i)(h(i))F2\displaystyle 2\|N^{\ast}\|_{2}^{2}\sum\nolimits_{i=1}^{n}\|(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top}\|_{F}^{2}
=\displaystyle=\; 2N22FFF2,\displaystyle 2\|N^{\ast}\|_{2}^{2}\|F^{\ast}-F\|_{F}^{2},

where ,F\left\langle\cdot,\cdot\right\rangle_{F} denotes the Frobenius product between matrices in (i)(i), \|\cdot\|_{\ast} denotes the nuclear norm in (ii)(ii), and (iii)(iii) follows from the fact that the matrix (h)(i)((h)(i))h(i)(h(i))(h^{\ast})^{(i)}((h^{\ast})^{(i)})^{\top}-h^{(i)}(h^{(i)})^{\top} has at most rank two. Combining with 1nb=n1/2b\|1_{n}b^{\ast}\|=n^{1/2}\|b^{\ast}\|, we obtain that the term (a)(a) in (4.7) is bounded by

𝒪(d1/2n1/2(N2ϵ(dΣ2)1/2n1/2log(n/p)+n1/2b))\displaystyle\mathcal{O}\big{(}d^{1/2}n^{-1/2}\cdot(\|N^{\ast}\|_{2}\cdot\epsilon(d\|\Sigma_{\ast}\|_{2})^{1/2}n^{1/2}\log(n/p)+n^{1/2}\|b^{\ast}\|)\big{)}
\displaystyle=\; \mathcal{O}((d\log(n/p)+\|b^{\ast}\|)\epsilon).

Now we consider term (b)(b) in (4.7):

(b)=F¯ξσmin(F¯F¯)1F¯ξ=𝒪(dn1)((F¯F¯)ξ+(F¯)ξ).\displaystyle(b)=\|\overline{F}^{\dagger}\xi\|\leq\sigma_{\min}(\overline{F}^{\top}\overline{F})^{-1}\|\overline{F}^{\top}\xi\|=\mathcal{O}(dn^{-1})(\|(\overline{F}-\overline{F}^{\ast})^{\top}\xi\|+\|(\overline{F}^{\ast})^{\top}\xi\|).

Since ξ\xi is a vector of zero-mean subexponential variables with subexponential norms bounded by EE, ξ=𝒪(En1/2log(n/p))\|\xi\|=\mathcal{O}(En^{1/2}\log(n/p)). Hence,

(F¯F¯)ξ\displaystyle\|(\overline{F}-\overline{F}^{\ast})^{\top}\xi\|\leq\; F¯F¯2ξ\displaystyle\|\overline{F}-\overline{F}^{\ast}\|_{2}\|\xi\|
=\displaystyle=\; 𝒪(ϵ(dΣ2n)1/2log(n/p))𝒪(En1/2log(n/p))\displaystyle\mathcal{O}(\epsilon(d\|\Sigma_{\ast}\|_{2}n)^{1/2}\log(n/p))\cdot\mathcal{O}(En^{1/2}\log(n/p))
=\displaystyle=\; 𝒪(d1/2Enlog2(n/p)ϵ).\displaystyle\mathcal{O}(d^{1/2}En\log^{2}(n/p)\epsilon).

To bound \|(\overline{F}^{\ast})^{\top}\xi\|=\|\textnormal{diag}(\Phi^{\ast},1)\overline{F}_{0}^{\top}\xi\|, note that \|\overline{F}_{0}^{\top}\xi\|=\|\sum_{i=1}^{n}\overline{f}_{0}^{(i)}e^{(i)}\|. Consider the jth component in the summation, \sum_{i=1}^{n}[\overline{f}_{0}^{(i)}]_{j}e^{(i)}. Recall that f_{0}=\mathrm{svec}(h_{0}h_{0}^{\top}), so [\overline{f}_{0}]_{j} is either the square of a standard Gaussian random variable, \sqrt{2} times the product of two independent standard Gaussian random variables, or one. Hence, [f_{0}]_{j} is subexponential with its mean and \|[f_{0}]_{j}\|_{\psi_{1}} both bounded by \mathcal{O}(1). As a result, the product [f_{0}]_{j}e is \frac{1}{2}-sub-Weibull, with the sub-Weibull norm being \mathcal{O}(E). By (Hao et al., 2019, Theorem 3.1),

i=1n[f0(i)]je(i)=𝒪(En1/2log1/2(1/p)).\displaystyle\sum\nolimits_{i=1}^{n}[f_{0}^{(i)}]_{j}e^{(i)}=\mathcal{O}(En^{1/2}\log^{1/2}(1/p)).

Hence, the norm of the d(d+1)/2d(d+1)/2-dimensional vector F0ξF_{0}^{\top}\xi is 𝒪(dEn1/2log1/2(1/p))\mathcal{O}(dEn^{1/2}\log^{1/2}(1/p)). By the properties of the symmetric Kronecker product (Schacke, 2004), Φ2=Σ1/222=Σ2=𝒪(1)\|\Phi^{\ast}\|_{2}=\|\Sigma_{\ast}^{1/2}\|_{2}^{2}=\|\Sigma_{\ast}\|_{2}=\mathcal{O}(1). Then,

(F¯)ξ=𝒪(dEn1/2log1/2(1/p)).\displaystyle\|(\overline{F}^{\ast})^{\top}\xi\|=\mathcal{O}(dEn^{1/2}\log^{1/2}(1/p)).

Eventually, term (b)(b) is bounded by

𝒪(dn1)𝒪(d1/2Enlog(n/p)ϵ+dEn1/2log1/2(1/p))\displaystyle\mathcal{O}(dn^{-1})\cdot\mathcal{O}(d^{1/2}En\log(n/p)\epsilon+dEn^{1/2}\log^{1/2}(1/p))
=\displaystyle=\; 𝒪(d3/2Elog(n/p)ϵ+d2En1/2log1/2(1/p)).\displaystyle\mathcal{O}\big{(}d^{3/2}E\log(n/p)\epsilon+d^{2}En^{-1/2}\log^{1/2}(1/p)\big{)}.

Combining the bounds on (a)(a) and (b)(b), we have

[svec(N^N);b^b]\displaystyle\|[\mathrm{svec}(\hat{N}-N^{\ast});\hat{b}-b^{\ast}]\|
=\displaystyle=\; 𝒪((dlog(n/p)+b+d3/2Elog(n/p))ϵ+d2En1/2log1/2(1/p))\displaystyle\mathcal{O}((d\log(n/p)+\|b^{\ast}\|+d^{3/2}E\log(n/p))\epsilon+d^{2}En^{-1/2}\log^{1/2}(1/p))
=\displaystyle=\; 𝒪((d(1+Ed1/2)log(n/p)+b)ϵ+d2En1/2log1/2(1/p)),\displaystyle\mathcal{O}((d(1+Ed^{1/2})\log(n/p)+\|b^{\ast}\|)\epsilon+d^{2}En^{-1/2}\log^{1/2}(1/p)),

which concludes the proof. ∎

4.3 Matrix factorization bound

Given two m\times n matrices A,B, we are interested in bounding \min_{S^{\top}S=I}\|SA-B\|_{F} using \|A^{\top}A-B^{\top}B\|_{F}. This minimization problem is known as the orthogonal Procrustes problem, solved in (Schoenemann, 1964; Schönemann, 1966). Specifically, the minimum is attained at S=UV^{\top}, where U\Sigma V^{\top}=BA^{\top} is the singular value decomposition of BA^{\top}.
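In numpy, the Procrustes minimizer is a one-liner; a minimal sketch, with a sampling-based optimality check that is ours:

```python
import numpy as np

def procrustes(A, B):
    """argmin_{S : S^T S = I} ||S A - B||_F, via the SVD of B A^T."""
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 5)), rng.standard_normal((3, 5))
S = procrustes(A, B)
assert np.allclose(S.T @ S, np.eye(3))   # S is orthogonal
best = np.linalg.norm(S @ A - B)
for _ in range(100):                     # no random orthogonal S' does better
    Sp, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    assert best <= np.linalg.norm(Sp @ A - B) + 1e-9
```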

If m\leq n and \mathrm{rank}(A)=m, then the following lemma from (Tu et al., 2016) establishes that the distance between A and B is of the same order as \|A^{\top}A-B^{\top}B\|_{F}.

Lemma 3 ((Tu et al., 2016, Lemma 5.4)).

For m×nm\times n matrices A,BA,B, let σm(A)\sigma_{m}(A) denote its mmth largest singular value. Then

minSS=ISABF(2(21))1/2σm(A)1AABBF.\displaystyle\min_{S^{\top}S=I}\|SA-B\|_{F}\leq(2(\sqrt{2}-1))^{-1/2}\sigma_{m}(A)^{-1}\|A^{\top}A-B^{\top}B\|_{F}.

If \sigma_{m}(A) is zero, the above bound becomes vacuous. In general, the following lemma shows that the squared distance is of the order of \|A^{\top}A-B^{\top}B\|_{F}, with a multiplicative \sqrt{d} factor, where d=\min(2m,n).

Lemma 4.

For m×nm\times n matrices A,BA,B, minSS=ISABF2dAABBF\min_{S^{\top}S=I}\|SA-B\|_{F}^{2}\leq\sqrt{d}\|A^{\top}A-B^{\top}B\|_{F}, where d=min(2m,n)d=\min(2m,n).

Proof.

Let U\Sigma V^{\top}=BA^{\top} be the singular value decomposition of BA^{\top}. By substituting the solution UV^{\top} of the orthogonal Procrustes problem, the square of the attained minimum equals

UVABF2\displaystyle\|UV^{\top}A-B\|_{F}^{2} =UVAB,UVABF\displaystyle=\left\langle UV^{\top}A-B,UV^{\top}A-B\right\rangle_{F}
=AF2+BF22UVA,BF\displaystyle=\|A\|_{F}^{2}+\|B\|_{F}^{2}-2\left\langle UV^{\top}A,B\right\rangle_{F}
=(i)AF2+BF22tr(Σ)\displaystyle\overset{(i)}{=}\|A\|_{F}^{2}+\|B\|_{F}^{2}-2\mathrm{tr}(\Sigma)
=AA+BB2BA,\displaystyle=\|A^{\top}A\|_{\ast}+\|B^{\top}B\|_{\ast}-2\|BA^{\top}\|_{\ast},

where (i) holds because \left\langle UV^{\top}A,B\right\rangle_{F}=\mathrm{tr}(VU^{\top}BA^{\top})=\mathrm{tr}(VU^{\top}U\Sigma V^{\top})=\mathrm{tr}(\Sigma).

To establish the relationship between AA+BB2BA\|A^{\top}A\|_{\ast}+\|B^{\top}B\|_{\ast}-2\|BA^{\top}\|_{\ast} and AABBF\|A^{\top}A-B^{\top}B\|_{F}, we need to operate in the space of singular values. For m×nm\times n matrix MM, let (σ1(M),,σd(M))(\sigma_{1}(M),\ldots,\sigma_{d^{\prime}}(M)) be its singular values in descending order, where d=mnd^{\prime}=m\wedge n.

In terms of singular values,

AA+BB2BA=(i)\displaystyle\|A^{\top}A\|_{\ast}+\|B^{\top}B\|_{\ast}-2\|BA^{\top}\|_{\ast}\overset{(i)}{=}\; AA+BB2BA\displaystyle\|A^{\top}A+B^{\top}B\|_{\ast}-2\|BA^{\top}\|_{\ast}
=(ii)\displaystyle\overset{(ii)}{=}\; i=1dσi(AA+BB)2i=1dσi(BA),\displaystyle\sum\nolimits_{i=1}^{d}\sigma_{i}(A^{\top}A+B^{\top}B)-2\sum\nolimits_{i=1}^{d^{\prime}}\sigma_{i}(BA^{\top}),

where (i)(i) holds since AAA^{\top}A and BBB^{\top}B are positive semidefinite matrices, and in (ii)(ii) d:=min(2m,n)d:=\min(2m,n) since rank(AA+BB)n\mathrm{rank}(A^{\top}A+B^{\top}B)\leq n and rank(AA+BB)rank(AA)+rank(BB)2m\mathrm{rank}(A^{\top}A+B^{\top}B)\leq\mathrm{rank}(A^{\top}A)+\mathrm{rank}(B^{\top}B)\leq 2m. If xy>0x\geq y>0, then xyx2y2x-y\leq\sqrt{x^{2}-y^{2}}. For all 1id1\leq i\leq d, 2σi(BA)σi(AA+BB)2\sigma_{i}(BA^{\top})\leq\sigma_{i}(A^{\top}A+B^{\top}B) (Bhatia and Kittaneh, 1990). Take σi(AA+BB)\sigma_{i}(A^{\top}A+B^{\top}B) as xx and 2σi(BA)2\sigma_{i}(BA^{\top}) as yy; it follows that

σi(AA+BB)2σi(BA)σi2(AA+BB)4σi2(BA).\displaystyle\sigma_{i}(A^{\top}A+B^{\top}B)-2\sigma_{i}(BA^{\top})\leq\sqrt{\sigma_{i}^{2}(A^{\top}A+B^{\top}B)-4\sigma_{i}^{2}(BA^{\top})}.

Let σi(BA):=0\sigma_{i}(BA^{\top}):=0 for d<idd^{\prime}<i\leq d. Combining the above yields

AA+BB2BA\displaystyle\|A^{\top}A\|_{\ast}+\|B^{\top}B\|_{\ast}-2\|BA^{\top}\|_{\ast}\leq\; i=1dσi2(AA+BB)4σi2(BA)\displaystyle\sum_{i=1}^{d}\sqrt{\sigma_{i}^{2}(A^{\top}A+B^{\top}B)-4\sigma_{i}^{2}(BA^{\top})}
(i)\displaystyle\overset{(i)}{\leq}\; di=1dσi2(AA+BB)4i=1dσi2(BA)\displaystyle\sqrt{d}\sqrt{\sum\nolimits_{i=1}^{d}\sigma_{i}^{2}(A^{\top}A+B^{\top}B)-4\sum\nolimits_{i=1}^{d^{\prime}}\sigma_{i}^{2}(BA^{\top})}
(ii)\displaystyle\overset{(ii)}{\leq}\; dAA+BBF24AA,BBF\displaystyle\sqrt{d}\sqrt{\|A^{\top}A+B^{\top}B\|_{F}^{2}-4\left\langle A^{\top}A,B^{\top}B\right\rangle_{F}}
\displaystyle\leq\; dAABBF,\displaystyle\sqrt{d}\|A^{\top}A-B^{\top}B\|_{F},

where (i)(i) is due to the Cauchy-Schwarz inequality, and (ii)(ii) uses

i=1dσi2(BA)=BAF2=tr(ABBA)=tr(AABB)=AA,BBF.\displaystyle\sum\nolimits_{i=1}^{d^{\prime}}\sigma_{i}^{2}(BA^{\top})=\|BA^{\top}\|_{F}^{2}=\mathrm{tr}(AB^{\top}BA^{\top})=\mathrm{tr}(A^{\top}AB^{\top}B)=\left\langle A^{\top}A,B^{\top}B\right\rangle_{F}.

This completes the proof. ∎
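As a quick numerical sanity check of Lemma 4 (the random instance below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
U, _, Vt = np.linalg.svd(B @ A.T)        # Procrustes minimizer S = U V^T
lhs = np.linalg.norm(U @ Vt @ A - B) ** 2
rhs = np.sqrt(min(2 * m, n)) * np.linalg.norm(A.T @ A - B.T @ B)
# min_S ||S A - B||_F^2 <= sqrt(d) ||A^T A - B^T B||_F with d = min(2m, n)
assert lhs <= rhs + 1e-9
```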

4.4 Linear regression bound

A standard assumption in analyzing the linear regression y=A^{\ast}x+e is that \mathrm{Cov}(x) has full rank. However, as discussed in Section 3.1, we need to handle rank-deficient \mathrm{Cov}(x) in the first \ell steps of system identification.
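In code, the minimum Frobenius norm least-squares solution used below is exactly what the pseudoinverse returns; a sketch on synthetic rank-deficient data (all data-generating choices are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d1, d2 = 200, 3, 2
X = rng.standard_normal((n, d1))
X[:, 2] = X[:, 0] + X[:, 1]                 # covariates have rank 2 < d1
A_true = rng.standard_normal((d2, d1))
Y = X @ A_true.T + 0.01 * rng.standard_normal((n, d2))
A_hat = (np.linalg.pinv(X) @ Y).T           # minimum Frobenius norm minimizer
# A_hat attains the least-squares optimum even though X^T X is singular
assert np.linalg.norm(Y - X @ A_hat.T) <= np.linalg.norm(Y - X @ A_true.T) + 1e-9
```

Only the action of the estimate on the row space of X is pinned down, which is why Lemma 5 bounds (\hat{A}-A^{\ast})\Sigma_{\ast}^{1/2} rather than \hat{A}-A^{\ast} itself.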

The following lemma is the main result in Section 4.4.

Lemma 5 (Noisy rank deficient linear regression).

Define random vector y^{\ast}:=A^{\ast}x^{\ast}+e, where x^{\ast}\sim\mathcal{N}(0,\Sigma_{\ast}) and e\sim\mathcal{N}(0,\Sigma_{e}) are d_{1} and d_{2} dimensional Gaussian random vectors, respectively. Define x:=x^{\ast}+\delta_{x} and y:=y^{\ast}+\delta_{y}, where the noise vectors \delta_{x} and \delta_{y} can be correlated with x^{\ast} and y^{\ast}, and their \ell_{2}-norms are sub-Gaussian with \mathbb{E}[\|\delta_{x}\|]\leq\epsilon_{x}, \|\|\delta_{x}\|\|_{\psi_{2}}\leq\epsilon_{x}, and \mathbb{E}[\|\delta_{y}\|]\leq\epsilon_{y}, \|\|\delta_{y}\|\|_{\psi_{2}}\leq\epsilon_{y}. Assume that \|A^{\ast}\|_{2}, \|\Sigma_{\ast}\|_{2} and \|\Sigma_{e}\|_{2} are \mathcal{O}(1). Let \sigma_{\min}^{+}(\Sigma_{\ast}^{1/2})\geq\beta>0. Suppose we get n observations x^{(i)} and y^{(i)} of x and y, where ((x^{\ast})^{(i)})_{i=1}^{n} are independent and (\delta_{x}^{(i)},\delta_{y}^{(i)})_{i=1}^{n} can be correlated. Assume that \sigma_{\min}^{+}(\sum_{i=1}^{n}x^{(i)}(x^{(i)})^{\top})\geq\theta^{2}n for some \theta>0 that satisfies \theta=\Omega(\epsilon_{x}) and \theta=\Omega(\epsilon_{y}) with absolute constants, and that decays no faster than n^{-1/4} in n. Consider the minimum Frobenius norm solution

A^argminAi=1ny(i)Ax(i)2.\displaystyle\hat{A}\in\operatorname*{argmin}_{A}\sum\nolimits_{i=1}^{n}\|y^{(i)}-Ax^{(i)}\|^{2}.

Then, there exists an absolute constant c>0c>0, such that if nc(d1+d2+log(1/p))n\geq c(d_{1}+d_{2}+\log(1/p)), with probability at least 1p1-p,

(A^A)Σ1/22=𝒪((β1ϵx+ϵy)log1/2(n/p)+n1/2(d1+d2+log(1/p))1/2).\displaystyle\|(\hat{A}-A^{\ast})\Sigma_{\ast}^{1/2}\|_{2}=\mathcal{O}((\beta^{-1}\epsilon_{x}+\epsilon_{y})\log^{1/2}(n/p)+n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}).
Proof.

Let r=rank(Σ)r=\mathrm{rank}(\Sigma_{\ast}) and Σ=DD\Sigma_{\ast}=DD^{\top} where Dd1×rD\in\mathbb{R}^{d_{1}\times r}. We can view xx^{\ast} as generated from an rr-dimensional standard Gaussian g𝒩(0,Ir)g\sim\mathcal{N}(0,I_{r}), by x=Dgx^{\ast}=Dg; x(i)x^{(i)} can then be viewed as Dg(i)+δx(i)Dg^{(i)}+\delta_{x}^{(i)}, where (g(i))i=1n(g^{(i)})_{i=1}^{n} are independent observations of gg. Let XX denote the matrix whose iith row is (x(i))(x^{(i)})^{\top}; X,Y,G,E,Δx,ΔyX^{\ast},Y,G,E,\Delta_{x},\Delta_{y} are defined similarly.

To solve the regression problem, we set its gradient to be zero and substitute Y=X(A)+E+ΔyY=X^{\ast}(A^{\ast})^{\top}+E+\Delta_{y} to obtain

A^XX=A(X)X+EX+ΔyX.\displaystyle\hat{A}X^{\top}X=A^{\ast}(X^{\ast})^{\top}X+E^{\top}X+\Delta_{y}^{\top}X.

Substituting XX by GD+ΔxGD^{\top}+\Delta_{x} gives

\displaystyle\hat{A}DG^{\top}GD^{\top}+\hat{A}(DG^{\top}\Delta_{x}+\Delta_{x}^{\top}GD^{\top}+\Delta_{x}^{\top}\Delta_{x})
=\displaystyle=\; ADGGD+ADGΔx+(E+Δy)(GD+Δx).\displaystyle A^{\ast}DG^{\top}GD^{\top}+A^{\ast}DG^{\top}\Delta_{x}+(E^{\top}+\Delta_{y}^{\top})(GD^{\top}+\Delta_{x}).

By rearranging the terms, we have

(A^A)DGGD\displaystyle(\hat{A}-A^{\ast})DG^{\top}GD^{\top}
=\displaystyle=\; ADGΔxA^(DGΔx+ΔxGD+ΔxΔx)+(E+Δy)(GD+Δx).\displaystyle A^{\ast}DG^{\top}\Delta_{x}-\hat{A}(DG^{\top}\Delta_{x}+\Delta_{x}^{\top}GD^{\top}+\Delta_{x}^{\top}\Delta_{x})+(E^{\top}+\Delta_{y}^{\top})(GD^{\top}+\Delta_{x}).

Hence,

(A^A)D2\displaystyle\|(\hat{A}-A^{\ast})D\|_{2}
=\displaystyle= (ADGΔxA^(DGΔx+ΔxGD+ΔxΔx)+(E+Δy)(GD+Δx))(D)(GG)12.\displaystyle\big{\|}\big{(}A^{\ast}DG^{\top}\Delta_{x}-\hat{A}(DG^{\top}\Delta_{x}+\Delta_{x}^{\top}GD^{\top}+\Delta_{x}^{\top}\Delta_{x})+(E^{\top}+\Delta_{y}^{\top})(GD^{\top}+\Delta_{x})\big{)}(D^{\top})^{\dagger}(G^{\top}G)^{-1}\big{\|}_{2}.

We claim that as long as n\geq 16(d_{1}+d_{2}+\log(1/p)), with probability at least 1-4p, \|\hat{A}\|_{2}=\mathcal{O}(\|A^{\ast}\|_{2}) (Claim 1). Then, since \|D^{\dagger}\|_{2}=(\sigma_{\min}^{+}(\Sigma_{\ast}^{1/2}))^{-1}\leq\beta^{-1},

(A^A)D2\displaystyle\|(\hat{A}-A^{\ast})D\|_{2}
\displaystyle=\; \mathcal{O}(\beta^{-1}\|A^{\ast}\|_{2})(\|D\|_{2}\|G\|_{2}\|\Delta_{x}\|_{2}+\|\Delta_{x}\|_{2}^{2})\|(G^{\top}G)^{-1}\|_{2}
+𝒪(1)(GE2+GΔy2+β1(E2+Δy2)Δx2(GG)12).\displaystyle\quad+\mathcal{O}(1)\big{(}\|G^{\dagger}E\|_{2}+\|G^{\dagger}\Delta_{y}\|_{2}+\beta^{-1}(\|E\|_{2}+\|\Delta_{y}\|_{2})\|\Delta_{x}\|_{2}\|(G^{\top}G)^{-1}\|_{2}\big{)}.

By (Wainwright, 2019, Theorem 6.1), the Gaussian ensemble GG satisfies that with probability at least 1p1-p,

G2\displaystyle\|G\|_{2}\leq\; (nIr2)1/2+(tr(Ir))1/2+(2Ir2log(1/p))1/2\displaystyle(n\|I_{r}\|_{2})^{1/2}+(\mathrm{tr}(I_{r}))^{1/2}+(2\|I_{r}\|_{2}\log(1/p))^{1/2}
\displaystyle\leq\; n1/2+d11/2+(2log(1/p))1/2,\displaystyle n^{1/2}+d_{1}^{1/2}+(2\log(1/p))^{1/2},
σmin(G)\displaystyle\sigma_{\min}(G)\geq\; (nλmin(Ir))1/2(tr(Ir))1/2(2λmin(Ir)log(1/p))1/2\displaystyle(n\lambda_{\min}(I_{r}))^{1/2}-(\mathrm{tr}(I_{r}))^{1/2}-(2\lambda_{\min}(I_{r})\log(1/p))^{1/2}
\displaystyle\geq\; n1/2d11/2(2log(1/p))1/2.n^{1/2}-d_{1}^{1/2}-(2\log(1/p))^{1/2}.

Since n8d1+16log(1/p)n\geq 8d_{1}+16\log(1/p), we have G2=𝒪(n1/2)\|G\|_{2}=\mathcal{O}(n^{1/2}) and σmin(G)=Ω(n1/2)\sigma_{\min}(G)=\Omega(n^{1/2}). It follows that (GG)12=𝒪(n1)\|(G^{\top}G)^{-1}\|_{2}=\mathcal{O}(n^{-1}) and G2=𝒪(n1/2)\|G^{\dagger}\|_{2}=\mathcal{O}(n^{-1/2}). Similarly, E2=𝒪((Σe2n)1/2)\|E\|_{2}=\mathcal{O}((\|\Sigma_{e}\|_{2}n)^{1/2}). Note that ΔxΔx=i=1nδx(i)(δx(i))\Delta_{x}^{\top}\Delta_{x}=\sum_{i=1}^{n}\delta_{x}^{(i)}(\delta_{x}^{(i)})^{\top}. Since δx(i)\|\delta_{x}^{(i)}\| is sub-Gaussian with 𝔼[δx(i)]ϵx\mathbb{E}[\|\delta_{x}^{(i)}\|]\leq\epsilon_{x} and δx(i)ψ2ϵx\|\|\delta_{x}^{(i)}\|\|_{\psi_{2}}\leq\epsilon_{x}, with probability at least 1p1-p, δx(i)=𝒪(ϵxlog1/2(n/p))\|\delta_{x}^{(i)}\|=\mathcal{O}(\epsilon_{x}\log^{1/2}(n/p)). Hence,

Δx2ΔxF=(i=1nδx(i)2)1/2=𝒪(ϵxn1/2log1/2(n/p)).\displaystyle\|\Delta_{x}\|_{2}\leq\|\Delta_{x}\|_{F}=\big{(}\sum\nolimits_{i=1}^{n}\|\delta_{x}^{(i)}\|^{2}\big{)}^{1/2}=\mathcal{O}(\epsilon_{x}n^{1/2}\log^{1/2}(n/p)).

Similarly, Δy2=𝒪(ϵyn1/2log1/2(n/p))\|\Delta_{y}\|_{2}=\mathcal{O}(\epsilon_{y}n^{1/2}\log^{1/2}(n/p)). Hence,

(A^A)D2=\displaystyle\|(\hat{A}-A^{\ast})D\|_{2}=\; 𝒪(β1A2)(D2ϵxlog1/2(n/p)+ϵx2log(n/p))\displaystyle\mathcal{O}(\beta^{-1}\|A^{\ast}\|_{2})(\|D\|_{2}\epsilon_{x}\log^{1/2}(n/p)+\epsilon_{x}^{2}\log(n/p))
+𝒪(1)(GE2+ϵylog1/2(n/p)\displaystyle\quad+\mathcal{O}(1)(\|G^{\dagger}E\|_{2}+\epsilon_{y}\log^{1/2}(n/p)
+β1(Σe1/22+ϵylog1/2(n/p))ϵxlog1/2(n/p))\displaystyle\qquad+\beta^{-1}(\|\Sigma_{e}^{1/2}\|_{2}+\epsilon_{y}\log^{1/2}(n/p))\epsilon_{x}\log^{1/2}(n/p))
=\displaystyle=\; 𝒪(β1A2ϵxlog1/2(n/p)+ϵylog1/2(n/p)+GE2),\displaystyle\mathcal{O}(\beta^{-1}\|A^{\ast}\|_{2}\epsilon_{x}\log^{1/2}(n/p)+\epsilon_{y}\log^{1/2}(n/p)+\|G^{\dagger}E\|_{2}),

where we consider ϵx\epsilon_{x} and ϵy\epsilon_{y} as quantities much smaller than one such that terms like ϵx2,ϵxϵy\epsilon_{x}^{2},\epsilon_{x}\epsilon_{y} are absorbed into ϵx,ϵy\epsilon_{x},\epsilon_{y}. It remains to control GE2\|G^{\dagger}E\|_{2}. (Mhammedi et al., 2020, Section B.2.11) proves via a covering number argument that with probability at least 1p1-p,

GE2=\displaystyle\|G^{\dagger}E\|_{2}=\; 𝒪(1)σmin(G)1Σe1/22(d1+d2+log(1/p))1/2\displaystyle\mathcal{O}(1)\sigma_{\min}(G)^{-1}\|\Sigma_{e}^{1/2}\|_{2}(d_{1}+d_{2}+\log(1/p))^{1/2}
=\displaystyle=\; 𝒪(n1/2(d1+d2+log(1/p))1/2).\displaystyle\mathcal{O}(n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}).

Overall, we obtain that

(A^A)Σ1/22=\displaystyle\|(\hat{A}-A^{\ast})\Sigma_{\ast}^{1/2}\|_{2}=\; (A^A)D2\displaystyle\|(\hat{A}-A^{\ast})D\|_{2}
=\displaystyle=\; 𝒪(β1A2ϵxlog1/2(n/p)+ϵylog1/2(n/p)\displaystyle\mathcal{O}(\beta^{-1}\|A^{\ast}\|_{2}\epsilon_{x}\log^{1/2}(n/p)+\epsilon_{y}\log^{1/2}(n/p)
+n1/2(d1+d2+log(1/p))1/2),\displaystyle\quad+n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}),

which completes the proof. ∎
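The √n ± √d₁ bounds on the extreme singular values of the Gaussian ensemble GG, which drive the rates above, are easy to check empirically. The sketch below is our own illustration (assuming NumPy; `gaussian_extreme_singvals` is a hypothetical helper, not from the paper) and averages the extreme singular values over independent draws:

```python
import numpy as np

def gaussian_extreme_singvals(n, d, trials=200, seed=0):
    """Empirically check the sqrt(n) +/- sqrt(d) behavior of the extreme
    singular values of an n x d standard Gaussian matrix
    (cf. Wainwright, 2019, Theorem 6.1, ignoring the log(1/p) slack)."""
    rng = np.random.default_rng(seed)
    smax, smin = [], []
    for _ in range(trials):
        G = rng.standard_normal((n, d))
        s = np.linalg.svd(G, compute_uv=False)
        smax.append(s[0])
        smin.append(s[-1])
    return np.mean(smax), np.mean(smin)

n, d = 4000, 50
hi, lo = gaussian_extreme_singvals(n, d)
# The averages should straddle sqrt(n), within roughly +/- sqrt(d) of it.
assert np.sqrt(n) < hi < np.sqrt(n) + 2 * np.sqrt(d)
assert np.sqrt(n) - 2 * np.sqrt(d) < lo < np.sqrt(n)
```

With n = 4000 and d = 50 the averages land close to √n + √d ≈ 70.3 and √n − √d ≈ 56.2, so ‖G‖₂ = 𝒪(n^{1/2}) and σ_min(G) = Ω(n^{1/2}) once n dominates d, matching the step in the proof.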

Claim 1.

Under the conditions in Lemma 5, as long as n16(d1+d2+log(1/p))n\geq 16(d_{1}+d_{2}+\log(1/p)), with probability at least 14p1-4p, A^2=𝒪(A2)\|\hat{A}\|_{2}=\mathcal{O}(\|A^{\ast}\|_{2}).

Proof.

The minimum Frobenius norm solution A^\hat{A} has the following closed form in terms of the Moore–Penrose pseudoinverse:

A^=\displaystyle\hat{A}=\; (A(X)+E+Δy)X(XX)\displaystyle(A^{\ast}(X^{\ast})^{\top}+E^{\top}+\Delta_{y}^{\top})X(X^{\top}X)^{\dagger}
=\displaystyle=\; A(XX)+(XE)+(XΔy)\displaystyle A^{\ast}(X^{\dagger}X^{\ast})^{\top}+(X^{\dagger}E)^{\top}+(X^{\dagger}\Delta_{y})^{\top}
=\displaystyle=\; A(XX)A(XΔx)+(XE)+(XΔy).\displaystyle A^{\ast}(X^{\dagger}X)^{\top}-A^{\ast}(X^{\dagger}\Delta_{x})^{\top}+(X^{\dagger}E)^{\top}+(X^{\dagger}\Delta_{y})^{\top}.

Then

A^2\displaystyle\|\hat{A}\|_{2}\leq\; A2+XE2+(A2Δx2+Δy2)(σmin+(X))1,\displaystyle\|A^{\ast}\|_{2}+\|X^{\dagger}E\|_{2}+(\|A^{\ast}\|_{2}\|\Delta_{x}\|_{2}+\|\Delta_{y}\|_{2})(\sigma_{\min}^{+}(X))^{-1},

where we note that X2=σmin+(X)1\|X^{\dagger}\|_{2}=\sigma_{\min}^{+}(X)^{-1} when X0X\neq 0. Since σmin+(X)=(σmin+(XX))1/2θn1/2\sigma_{\min}^{+}(X)=(\sigma_{\min}^{+}(X^{\top}X))^{1/2}\geq\theta n^{1/2},

(σmin+(X))1θ1n1/2.\displaystyle(\sigma_{\min}^{+}(X))^{-1}\leq\theta^{-1}n^{-1/2}.

We have shown in the proof of Lemma 5 that with probability at least 12p1-2p,

Δx2=𝒪(ϵxn1/2log1/2(n/p)),Δy2=𝒪(ϵyn1/2log1/2(n/p)).\displaystyle\|\Delta_{x}\|_{2}=\mathcal{O}(\epsilon_{x}n^{1/2}\log^{1/2}(n/p)),\quad\|\Delta_{y}\|_{2}=\mathcal{O}(\epsilon_{y}n^{1/2}\log^{1/2}(n/p)).

Similar to the proof of Lemma 5, by (Mhammedi et al., 2020, Section B.2.11), with probability at least 1p1-p,

XE2=\displaystyle\|X^{\dagger}E\|_{2}=\; 𝒪(1)σmin+(X)1Σe1/22(d1+d2+log(1/p))1/2\displaystyle\mathcal{O}(1)\sigma_{\min}^{+}(X)^{-1}\|\Sigma_{e}^{1/2}\|_{2}(d_{1}+d_{2}+\log(1/p))^{1/2}
=\displaystyle=\; 𝒪(θ1n1/2(d1+d2+log(1/p))1/2).\displaystyle\mathcal{O}(\theta^{-1}n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}).

Combining the bounds above, we obtain

A^2\displaystyle\|\hat{A}\|_{2}\leq\; A2+𝒪(θ1n1/2(d1+d2+log(1/p))1/2)\displaystyle\|A^{\ast}\|_{2}+\mathcal{O}(\theta^{-1}n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2})
+𝒪((A2ϵx+ϵy)n1/2log1/2(n/p)θ1n1/2)\displaystyle\quad+\mathcal{O}((\|A^{\ast}\|_{2}\epsilon_{x}+\epsilon_{y})n^{1/2}\log^{1/2}(n/p)\theta^{-1}n^{-1/2})
=\displaystyle=\; 𝒪(A2(1+θ1ϵxlog1/2(n/p))+θ1ϵylog1/2(n/p))\displaystyle\mathcal{O}(\|A^{\ast}\|_{2}(1+\theta^{-1}\epsilon_{x}\log^{1/2}(n/p))+\theta^{-1}\epsilon_{y}\log^{1/2}(n/p))
+𝒪(θ1n1/2(d1+d2+log(1/p))1/2).\displaystyle\quad+\mathcal{O}(\theta^{-1}n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}).

Hence, as long as θ=Ω(ϵx)\theta=\Omega(\epsilon_{x}) and θ=Ω(ϵy)\theta=\Omega(\epsilon_{y}) with absolute constants, and θ\theta decays no faster than n1/4n^{-1/4} as nn grows, A^2\|\hat{A}\|_{2} is bounded by cA2c\|A^{\ast}\|_{2} for an absolute constant c>0c>0. ∎
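The pseudoinverse closed form used at the start of this proof can be cross-checked against `numpy.linalg.lstsq`, which likewise returns the minimum Frobenius norm least-squares solution. A minimal sketch of our own (assuming NumPy), with deliberately rank-deficient inputs so that the pseudoinverse matters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out, r = 100, 8, 5, 3   # rank(X) = r < d_in: rank-deficient inputs
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d_in))  # rows x^(i)
Y = rng.standard_normal((n, d_out))                               # rows y^(i)

# Closed form from the proof: A_hat = Y^T X (X^T X)^dagger.
A_hat = Y.T @ X @ np.linalg.pinv(X.T @ X)

# lstsq solves min_B ||X B - Y||_F and returns the minimum-norm B = X^dagger Y;
# transposing gives the same minimum Frobenius norm solution for A.
A_lstsq = np.linalg.lstsq(X, Y, rcond=None)[0].T

assert np.allclose(A_hat, A_lstsq, atol=1e-8)
```

Both routes agree because X(X^⊤X)^† = (X^†)^⊤, which is exactly the identity the first line of the closed-form derivation uses.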

In Lemma 5, if Σ\Sigma_{\ast} has full rank and σmin(Σ)β>0\sigma_{\min}(\Sigma_{\ast})\geq\beta>0 and σmin(i=1nx(i)(x(i)))=Ω(β2n)\sigma_{\min}(\sum_{i=1}^{n}x^{(i)}(x^{(i)})^{\top})=\Omega(\beta^{2}n), then

A^A2=\displaystyle\|\hat{A}-A^{\ast}\|_{2}=\; 𝒪(β1((β1ϵx+ϵy)log1/2(n/p)+n1/2(d1+d2+log(1/p))1/2)).\displaystyle\mathcal{O}(\beta^{-1}((\beta^{-1}\epsilon_{x}+\epsilon_{y})\log^{1/2}(n/p)+n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2})).

The following lemma shows that we can strengthen the result by removing the β1\beta^{-1} factor before ϵx\epsilon_{x}.

Lemma 6 (Noisy linear regression).

Define the random variable y=Ax+ey^{\ast}=A^{\ast}x^{\ast}+e, where x𝒩(0,Σ)x^{\ast}\sim\mathcal{N}(0,\Sigma_{\ast}) and e𝒩(0,Σe)e\sim\mathcal{N}(0,\Sigma_{e}) are d1d_{1}- and d2d_{2}-dimensional random vectors. Assume that A2\|A^{\ast}\|_{2}, Σ2\|\Sigma_{\ast}\|_{2} and Σe2\|\Sigma_{e}\|_{2} are 𝒪(1)\mathcal{O}(1), and σmin(Σ1/2)β>0\sigma_{\min}(\Sigma_{\ast}^{1/2})\geq\beta>0. Define x:=x+δxx:=x^{\ast}+\delta_{x} and y:=y+δyy:=y^{\ast}+\delta_{y}, where the noise vectors δx\delta_{x} and δy\delta_{y} can be correlated with xx^{\ast} and yy^{\ast}, and their 2\ell_{2} norms are sub-Gaussian with 𝔼[δx]ϵx\mathbb{E}[\|\delta_{x}\|]\leq\epsilon_{x}, δxψ2ϵx\|\|\delta_{x}\|\|_{\psi_{2}}\leq\epsilon_{x} and 𝔼[δy]ϵy\mathbb{E}[\|\delta_{y}\|]\leq\epsilon_{y}, δyψ2ϵy\|\|\delta_{y}\|\|_{\psi_{2}}\leq\epsilon_{y}. Suppose we obtain nn independent observations x(i),y(i)x^{(i)},y^{(i)} of xx and yy. Assume that σmin(i=1nx(i)(x(i)))=Ω(β2n)\sigma_{\min}(\sum_{i=1}^{n}x^{(i)}(x^{(i)})^{\top})=\Omega(\beta^{2}n). Consider the minimum Frobenius norm solution

A^argminAi=1ny(i)Ax(i)2.\displaystyle\hat{A}\in\operatorname*{argmin}_{A}\sum\nolimits_{i=1}^{n}\|y^{(i)}-Ax^{(i)}\|^{2}.

Then, there exists an absolute constant c>0c>0, such that if nc(d1+d2+log(1/p))n\geq c(d_{1}+d_{2}+\log(1/p)), with probability at least 1p1-p,

A^A2=\displaystyle\|\hat{A}-A^{\ast}\|_{2}=\; 𝒪(β1((ϵx+ϵy)log1/2(n/p)+n1/2(d1+d2+log(1/p))1/2)).\displaystyle\mathcal{O}\big{(}\beta^{-1}\big{(}(\epsilon_{x}+\epsilon_{y})\log^{1/2}(n/p)+n^{-1/2}(d_{1}+d_{2}+\log(1/p))^{1/2}\big{)}\big{)}.
Proof.

Following the proof of Claim 1, we have

A^A2A2XΔx2+XE2+XΔy2.\displaystyle\|\hat{A}-A^{\ast}\|_{2}\leq\|A^{\ast}\|_{2}\|X^{\dagger}\Delta_{x}\|_{2}+\|X^{\dagger}E\|_{2}+\|X^{\dagger}\Delta_{y}\|_{2}.

Combining this with the bounds on XΔx2\|X^{\dagger}\Delta_{x}\|_{2}, XΔy2\|X^{\dagger}\Delta_{y}\|_{2} and XE2\|X^{\dagger}E\|_{2} established in the proof of Claim 1 concludes the proof. ∎
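The two error sources in Lemma 6 — the 𝒪(n^{-1/2}) statistical term and the 𝒪(ϵ_x + ϵ_y) perturbation term — can be seen in simulation. Below is a minimal sketch of our own (assuming NumPy; `fit_err` is an illustrative helper), taking Σ_* = I and independent perturbations for simplicity:

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2 = 6, 4
A_star = rng.standard_normal((d2, d1)) / np.sqrt(d1)

def fit_err(n, eps):
    """Spectral-norm error of min-norm OLS under input/output perturbations."""
    Xs = rng.standard_normal((n, d1))               # x* ~ N(0, I)
    E = 0.1 * rng.standard_normal((n, d2))          # e ~ N(0, 0.01 I)
    Ys = Xs @ A_star.T + E
    X = Xs + eps * rng.standard_normal((n, d1))     # perturbed inputs
    Y = Ys + eps * rng.standard_normal((n, d2))     # perturbed targets
    A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # min-norm least squares
    return np.linalg.norm(A_hat - A_star, 2)

# Statistical error decays roughly as n^{-1/2} when eps = 0 ...
assert fit_err(40000, 0.0) < fit_err(400, 0.0)
# ... while perturbation noise leaves an O(eps) floor that more data cannot remove.
assert fit_err(40000, 0.2) > fit_err(40000, 0.0)
```

With ε = 0 the error keeps shrinking as n grows; with ε > 0 it plateaus at an 𝒪(ε) floor, which is the behavior the lemma quantifies.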

5 Concluding remarks

We examined cost-driven state representation learning methods in time-varying LQG control. Through a finite-sample analysis, we showed that a direct, cost-driven state representation learning algorithm provably solves LQG. The analysis reveals the importance of using multi-step cumulative costs as the supervision signal, and a separation of the convergence rates before and after step \ell, the controllability index, due to insufficient excitation of the system in the early stage. A major limitation of our method is the use of history-based state representation functions; recovering the recursive Kalman filter would be ideal.

This work has opened up many opportunities for future research. An immediate question is how our cost-driven state representation learning approach performs in the infinite-horizon LTI setting. Moreover, one may wonder about the extent to which cost-driven state representation learning generalizes to nonlinear observations or systems. Investigating the connection with reduced-order control is also an interesting question, which may reveal the unique advantage of cost-driven state representation learning. Finally, one argument for favoring latent-model-based over model-free methods is their ability to generalize across different tasks; cost-driven state representation learning may offer a perspective to formalize this intuition.

Acknowledgement

YT and SS acknowledge partial support from the NSF BIGDATA grant (number 1741341). KZ’s work was mainly done while at MIT; he acknowledges partial support from a Simons-Berkeley Research Fellowship. The authors also thank Xiang Fu, Horia Mania, and Alexandre Megretski for helpful discussions.

References

  • Åström (2012) Karl J Åström. Introduction to Stochastic Control Theory. Courier Corporation, 2012.
  • Bertsekas (2012) Dimitri Bertsekas. Dynamic Programming and Optimal Control: Volume I, volume 1. Athena Scientific, 2012.
  • Bhatia and Kittaneh (1990) Rajendra Bhatia and Fuad Kittaneh. On the singular values of a product of operators. SIAM Journal on Matrix Analysis and Applications, 11(2):272–277, 1990.
  • Boyd and Vandenberghe (2004) Stephen P Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • Deng et al. (2021) Fei Deng, Ingook Jang, and Sungjin Ahn. DreamerPro: Reconstruction-free model-based reinforcement learning with prototypical representations. arXiv preprint arXiv:2110.14565, 2021.
  • Fazel et al. (2018) Maryam Fazel, Rong Ge, Sham Kakade, and Mehran Mesbahi. Global convergence of policy gradient methods for the linear quadratic regulator. In International Conference on Machine Learning, pages 1467–1476. PMLR, 2018.
  • Frandsen et al. (2022) Abraham Frandsen, Rong Ge, and Holden Lee. Extracting latent state representations with linear dynamics from rich observations. In International Conference on Machine Learning, pages 6705–6725. PMLR, 2022.
  • Fu et al. (2021) Xiang Fu, Ge Yang, Pulkit Agrawal, and Tommi Jaakkola. Learning task informed abstractions. In International Conference on Machine Learning, pages 3480–3491. PMLR, 2021.
  • Ha and Schmidhuber (2018) David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
  • Hafner et al. (2019a) Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019a.
  • Hafner et al. (2019b) Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International conference on machine learning, pages 2555–2565. PMLR, 2019b.
  • Hafner et al. (2020) Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020.
  • Hao et al. (2019) Botao Hao, Yasin Abbasi Yadkori, Zheng Wen, and Guang Cheng. Bootstrapping upper confidence bound. Advances in Neural Information Processing Systems, 32, 2019.
  • Jadbabaie et al. (2021) Ali Jadbabaie, Horia Mania, Devavrat Shah, and Suvrit Sra. Time varying regression with hidden linear dynamics. arXiv preprint arXiv:2112.14862, 2021.
  • Kailath (1980) Thomas Kailath. Linear Systems, volume 156. Prentice-Hall Englewood Cliffs, NJ, 1980.
  • Komaroff (1996) Nicholas Komaroff. On bounds for the solution of the Riccati equation for discrete-time control systems. In Control and Dynamic Systems, volume 78, pages 275–311. Elsevier, 1996.
  • Lale et al. (2020) Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, and Anima Anandkumar. Logarithmic regret bound in partially observable linear dynamical systems. Advances in Neural Information Processing Systems, 33:20876–20888, 2020.
  • Lale et al. (2021) Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, and Anima Anandkumar. Adaptive control and regret minimization in linear quadratic Gaussian (LQG) setting. In 2021 American Control Conference (ACC), pages 2517–2522. IEEE, 2021.
  • Lamb et al. (2022) Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, and John Langford. Guaranteed discovery of controllable latent states with multi-step inverse models. arXiv preprint arXiv:2207.08229, 2022.
  • Ljung (1998) Lennart Ljung. System identification. In Signal Analysis and Prediction, pages 163–173. Springer, 1998.
  • Mania et al. (2019) Horia Mania, Stephen Tu, and Benjamin Recht. Certainty equivalence is efficient for linear quadratic control. Advances in Neural Information Processing Systems, 32, 2019.
  • Mhammedi et al. (2020) Zakaria Mhammedi, Dylan J Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, and John Langford. Learning the linear quadratic regulator from nonlinear observations. Advances in Neural Information Processing Systems, 33:14532–14543, 2020.
  • Minasyan et al. (2021) Edgar Minasyan, Paula Gradu, Max Simchowitz, and Elad Hazan. Online control of unknown time-varying dynamical systems. Advances in Neural Information Processing Systems, 34, 2021.
  • Okada and Taniguchi (2021) Masashi Okada and Tadahiro Taniguchi. Dreaming: Model-based reinforcement learning by latent imagination without reconstruction. In IEEE International Conference on Robotics and Automation (ICRA), pages 4209–4215. IEEE, 2021.
  • Oymak and Ozay (2019) Samet Oymak and Necmiye Ozay. Non-asymptotic identification of LTI systems from a single trajectory. In 2019 American Control Conference (ACC), pages 5655–5661. IEEE, 2019.
  • Schacke (2004) Kathrin Schacke. On the Kronecker product. Master’s Thesis, University of Waterloo, 2004.
  • Schoenemann (1964) Peter Hans Schoenemann. A solution of the orthogonal Procrustes problem with applications to orthogonal and oblique rotation. University of Illinois at Urbana-Champaign, 1964.
  • Schönemann (1966) Peter H Schönemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10, 1966.
  • Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, Chess and Shogi by planning with a learned model. Nature, 588(7839):604–609, 2020.
  • Simchowitz et al. (2020) Max Simchowitz, Karan Singh, and Elad Hazan. Improper learning for non-stochastic control. In Conference on Learning Theory, pages 3320–3436. PMLR, 2020.
  • Subramanian et al. (2020) Jayakumar Subramanian, Amit Sinha, Raihan Seraj, and Aditya Mahajan. Approximate information state for approximate planning and reinforcement learning in partially observed systems. arXiv preprint arXiv:2010.08843, 2020.
  • Sun et al. (2019) Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, and John Langford. Model-based RL in contextual decision processes: PAC bounds and exponential improvements over model-free approaches. In Conference on Learning Theory, pages 2898–2933. PMLR, 2019.
  • Sutton and Barto (2018) Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
  • Tu and Recht (2019) Stephen Tu and Benjamin Recht. The gap between model-based and model-free methods on the linear quadratic regulator: An asymptotic viewpoint. In Conference on Learning Theory, pages 3036–3083. PMLR, 2019.
  • Tu et al. (2016) Stephen Tu, Ross Boczar, Max Simchowitz, Mahdi Soltanolkotabi, and Ben Recht. Low-rank solutions of linear matrix equations via procrustes flow. In International Conference on Machine Learning, pages 964–973. PMLR, 2016.
  • Umenberger et al. (2022) Jack Umenberger, Max Simchowitz, Juan Carlos Perdomo, Kaiqing Zhang, and Russ Tedrake. Globally convergent policy search for output estimation. In Advances in Neural Information Processing Systems, 2022.
  • Vershynin (2018) Roman Vershynin. High-dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University press, 2018.
  • Wainwright (2019) Martin J Wainwright. High-dimensional Statistics: A Non-asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
  • Wang et al. (2022) Tongzhou Wang, Simon S Du, Antonio Torralba, Phillip Isola, Amy Zhang, and Yuandong Tian. Denoised MDPs: Learning world models better than the world itself. arXiv preprint arXiv:2206.15477, 2022.
  • Yang et al. (2022) Lujie Yang, Kaiqing Zhang, Alexandre Amice, Yunzhu Li, and Russ Tedrake. Discrete approximate information states in partially observable environments. In 2022 American Control Conference (ACC), pages 1406–1413. IEEE, 2022.
  • Zhang et al. (2020) Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. arXiv preprint arXiv:2006.10742, 2020.
  • Zhang and Wei (2022) Huiming Zhang and Haoyu Wei. Sharper sub-weibull concentrations. Mathematics, 10(13):2252, 2022.
  • Zhang et al. (2021) Kaiqing Zhang, Xiangyuan Zhang, Bin Hu, and Tamer Basar. Derivative-free policy optimization for linear risk-sensitive and robust control design: Implicit regularization and sample complexity. Advances in Neural Information Processing Systems, 34:2949–2964, 2021.
  • Zhang et al. (2019) Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew Johnson, and Sergey Levine. Solar: Deep structured representations for model-based reinforcement learning. In International Conference on Machine Learning, pages 7444–7453. PMLR, 2019.
  • Zhang and Zhang (2021) Qinghua Zhang and Liangquan Zhang. Boundedness of the Kalman filter revisited. IFAC-PapersOnLine, 54(7):334–338, 2021.
  • Zheng et al. (2021) Yang Zheng, Luca Furieri, Maryam Kamgarpour, and Na Li. Sample complexity of linear quadratic Gaussian (LQG) control for output feedback systems. In Learning for Dynamics and Control, pages 559–570. PMLR, 2021.
  • Zhou and Zhao (2017) Bin Zhou and Tianrui Zhao. On asymptotic stability of discrete-time linear time-varying systems. IEEE Transactions on Automatic Control, 62(8):4274–4281, 2017.

Appendix A Proposition on multi-step cumulative costs

The following proposition does not appear in the main body, but is important for analyzing CoReL (Algorithm 2) in Section 4.1.

Proposition 3.

Let (zt)t=0T(z^{\ast\prime}_{t})_{t=0}^{T} be the state estimates by the Kalman filter under the normalized parameterization. If we apply ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I) for all 0tT10\leq t\leq T-1, then for 0tT0\leq t\leq T,

c¯t:=ct+ct+1++ct+k1=zt2+τ=tt+k1uτRτ2+bt+et,\displaystyle\overline{c}_{t}:=c_{t}+c_{t+1}+\ldots+c_{t+k-1}=\|z^{\ast\prime}_{t}\|^{2}+\sum\nolimits_{\tau=t}^{t+k-1}\|u_{\tau}\|_{R^{\ast}_{\tau}}^{2}+b^{\prime}_{t}+e^{\prime}_{t},

where k=1k=1 for 0t10\leq t\leq\ell-1 and k=m(Tt+1)k=m\wedge(T-t+1) for tT\ell\leq t\leq T, bt=𝒪(k)b^{\prime}_{t}=\mathcal{O}(k), and ete^{\prime}_{t} is a zero-mean subexponential random variable with etψ1=𝒪(kdx1/2)\|e^{\prime}_{t}\|_{\psi_{1}}=\mathcal{O}(kd_{x}^{1/2}).

Proof.

By Proposition 1, zt+1=Atzt+Btut+Lt+1it+1z^{\ast\prime}_{t+1}=A^{\ast\prime}_{t}z^{\ast\prime}_{t}+B^{\ast\prime}_{t}u_{t}+L^{\ast\prime}_{t+1}i^{\prime}_{t+1}, where Lt+1,it+1L^{\ast\prime}_{t+1},i^{\prime}_{t+1} are the Kalman gain and the innovation under the normalized parameterization, respectively. Under Assumptions 1 and 4, (it)t=0T(i^{\prime}_{t})_{t=0}^{T} are Gaussian random vectors whose covariances have 𝒪(1)\mathcal{O}(1) operator norms, and (Lt)t=0T(L^{\ast\prime}_{t})_{t=0}^{T} have 𝒪(1)\mathcal{O}(1) operator norms (Zhang and Zhang, 2021). Hence, the covariance of Lt+1it+1L^{\ast\prime}_{t+1}i^{\prime}_{t+1} has an 𝒪(1)\mathcal{O}(1) operator norm. Since ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I), jt:=Btut+Lt+1it+1j_{t}:=B^{\ast\prime}_{t}u_{t}+L^{\ast\prime}_{t+1}i^{\prime}_{t+1} can be viewed as a Gaussian noise vector whose covariance has an 𝒪(1)\mathcal{O}(1) operator norm, so that zt+1=Atzt+jtz^{\ast\prime}_{t+1}=A^{\ast\prime}_{t}z^{\ast\prime}_{t}+j_{t}. By Proposition 1,

ct=ztQt2+utRt2+bt+et,\displaystyle c_{t}=\|z^{\ast\prime}_{t}\|_{Q^{\ast\prime}_{t}}^{2}+\|u_{t}\|_{R^{\ast}_{t}}^{2}+b^{\prime}_{t}+e^{\prime}_{t},

where et:=γt+ηte^{\prime}_{t}:=\gamma^{\prime}_{t}+\eta^{\prime}_{t} is subexponential with etψ1=𝒪(dx1/2)\|e^{\prime}_{t}\|_{\psi_{1}}=\mathcal{O}(d_{x}^{1/2}). Let Φt,t0=At1At2At0\Phi^{\prime}_{t,t_{0}}=A^{\ast\prime}_{t-1}A^{\ast\prime}_{t-2}\cdots A^{\ast\prime}_{t_{0}} for t>t0t>t_{0} and Φt,t=I\Phi^{\prime}_{t,t}=I. Then, for τt\tau\geq t,

zτ=Φτ,tzt+s=tτ1Φτ,s+1js:=Φτ,tzt+jτ,t,\displaystyle z^{\ast\prime}_{\tau}=\Phi^{\prime}_{\tau,t}z^{\ast\prime}_{t}+\sum\nolimits_{s=t}^{\tau-1}\Phi^{\prime}_{\tau,s+1}j_{s}:=\Phi^{\prime}_{\tau,t}z^{\ast\prime}_{t}+j^{\prime}_{\tau,t},

where jt,t=0j^{\prime}_{t,t}=0 and for τ>t\tau>t, jτ,tj^{\prime}_{\tau,t} is a Gaussian random vector with bounded covariance due to uniform exponential stability (Assumption 1). Therefore,

c¯t=\displaystyle\overline{c}_{t}=\; τ=tt+k1cτ\displaystyle\sum\nolimits_{\tau=t}^{t+k-1}c_{\tau}
=\displaystyle=\; τ=tt+k1(Φτ,tzt+jτ,tQτ2+uτRτ2+bτ+eτ)\displaystyle\sum\nolimits_{\tau=t}^{t+k-1}\big{(}\|\Phi^{\prime}_{\tau,t}z^{\ast\prime}_{t}+j^{\prime}_{\tau,t}\|_{Q^{\ast\prime}_{\tau}}^{2}+\|u_{\tau}\|_{R^{\ast}_{\tau}}^{2}+b^{\prime}_{\tau}+e^{\prime}_{\tau}\big{)}
=\displaystyle=\; (zt)(τ=tt+k1(Φτ,t)QτΦτ,t)zt+τ=tt+k1uτRτ2(z^{\ast\prime}_{t})^{\top}\Big{(}\sum\nolimits_{\tau=t}^{t+k-1}(\Phi^{\prime}_{\tau,t})^{\top}Q^{\ast\prime}_{\tau}\Phi^{\prime}_{\tau,t}\Big{)}z^{\ast\prime}_{t}+\sum\nolimits_{\tau=t}^{t+k-1}\|u_{\tau}\|_{R^{\ast}_{\tau}}^{2}
+τ=tt+k1(jτ,tQτ2+2(jτ,t)QτΦτ,tzt+bτ+eτ)\displaystyle\quad+\sum\nolimits_{\tau=t}^{t+k-1}(\|j^{\prime}_{\tau,t}\|_{Q^{\ast\prime}_{\tau}}^{2}+2(j^{\prime}_{\tau,t})^{\top}Q^{\ast\prime}_{\tau}\Phi^{\prime}_{\tau,t}z^{\ast\prime}_{t}+b^{\prime}_{\tau}+e^{\prime}_{\tau})
=\displaystyle=\; zt2+τ=tt+k1uτRτ2+b¯t+e¯t,\displaystyle\|z^{\ast\prime}_{t}\|^{2}+\sum\nolimits_{\tau=t}^{t+k-1}\|u_{\tau}\|_{R^{\ast}_{\tau}}^{2}+\overline{b}_{t}+\overline{e}_{t},

where τ=tt+k1(Φτ,t)QτΦτ,t=I\sum_{\tau=t}^{t+k-1}(\Phi^{\prime}_{\tau,t})^{\top}Q^{\ast\prime}_{\tau}\Phi^{\prime}_{\tau,t}=I is due to the normalized parameterization, b¯t:=τ=tt+k1(bτ+𝔼[jτ,tQτ2])=𝒪(k)\overline{b}_{t}:=\sum_{\tau=t}^{t+k-1}(b^{\prime}_{\tau}+\mathbb{E}[\|j^{\prime}_{\tau,t}\|_{Q^{\ast\prime}_{\tau}}^{2}])=\mathcal{O}(k), and

e¯t:=τ=tt+k1(jτ,tQτ2𝔼[jτ,tQτ2]+2(jτ,t)QτΦτ,tzt+eτ)\displaystyle\overline{e}_{t}:=\sum_{\tau=t}^{t+k-1}\big{(}\|j^{\prime}_{\tau,t}\|_{Q^{\ast\prime}_{\tau}}^{2}-\mathbb{E}[\|j^{\prime}_{\tau,t}\|_{Q^{\ast\prime}_{\tau}}^{2}]+2(j^{\prime}_{\tau,t})^{\top}Q^{\ast\prime}_{\tau}\Phi^{\prime}_{\tau,t}z^{\ast\prime}_{t}+e^{\prime}_{\tau}\big{)}

has zero mean and is subexponential with e¯tψ1=𝒪(kdx1/2)\|\overline{e}_{t}\|_{\psi_{1}}=\mathcal{O}(kd_{x}^{1/2}). ∎

Appendix B Auxiliary results

Lemma 7.

Let x𝒩(0,Σx)x\sim\mathcal{N}(0,\Sigma_{x}) and y𝒩(0,Σy)y\sim\mathcal{N}(0,\Sigma_{y}) be dd-dimensional Gaussian random vectors. Let QQ be a d×dd\times d positive semidefinite matrix. Then, there exists an absolute constant c>0c>0 such that

x,yQψ1cdQ2Σx2Σy2.\displaystyle\|\bigl{\langle}x,y\bigr{\rangle}_{Q}\|_{\psi_{1}}\leq c\sqrt{d}\|Q\|_{2}\sqrt{\|\Sigma_{x}\|_{2}\|\Sigma_{y}\|_{2}}.
Proof.

Since |x,yQ|=|xQy|xQ2y|\bigl{\langle}x,y\bigr{\rangle}_{Q}|=|x^{\top}Qy|\leq\|x\|\|Q\|_{2}\|y\|,

x,yQψ1=|x,yQ|ψ1xQ2yψ1=Q2xyψ1.\displaystyle\|\bigl{\langle}x,y\bigr{\rangle}_{Q}\|_{\psi_{1}}=\||\bigl{\langle}x,y\bigr{\rangle}_{Q}|\|_{\psi_{1}}\leq\|\|x\|\|Q\|_{2}\|y\|\|_{\psi_{1}}=\|Q\|_{2}\cdot\|\|x\|\|y\|\|_{\psi_{1}}.

For x𝒩(0,Σx)x\sim\mathcal{N}(0,\Sigma_{x}), we know that x\|x\| is sub-Gaussian. Actually, by writing x=Σx1/2gx=\Sigma_{x}^{1/2}g for g𝒩(0,I)g\sim\mathcal{N}(0,I), we have

xψ2=Σx1/2gψ2Σx1/22gψ2=Σx1/22gψ2.\displaystyle\|\|x\|\|_{\psi_{2}}=\|\|\Sigma_{x}^{1/2}g\|\|_{\psi_{2}}\leq\|\|\Sigma_{x}^{1/2}\|_{2}\|g\|\|_{\psi_{2}}=\|\Sigma_{x}^{1/2}\|_{2}\|\|g\|\|_{\psi_{2}}.

The distribution of g2\|g\|_{2} is known as χ\chi distribution, and we know that gψ2=cd1/4\|\|g\|\|_{\psi_{2}}=c^{\prime}d^{1/4} for an absolute constant c>0c^{\prime}>0 (see, e.g., (Wainwright, 2019)). Hence, xψ2cd1/4Σx21/2\|\|x\|\|_{\psi_{2}}\leq c^{\prime}d^{1/4}\|\Sigma_{x}\|_{2}^{1/2}. Similarly, yψ2cd1/4Σy21/2\|\|y\|\|_{\psi_{2}}\leq c^{\prime}d^{1/4}\|\Sigma_{y}\|_{2}^{1/2}. Since xyψ1xψ2yψ2\|\|x\|\|y\|\|_{\psi_{1}}\leq\|\|x\|\|_{\psi_{2}}\|\|y\|\|_{\psi_{2}} (see, e.g., (Vershynin, 2018, Lemma 2.7.7)), we have

x,yQψ1(c)2dQ2Σx2Σy2.\displaystyle\|\bigl{\langle}x,y\bigr{\rangle}_{Q}\|_{\psi_{1}}\leq(c^{\prime})^{2}\sqrt{d}\|Q\|_{2}\sqrt{\|\Sigma_{x}\|_{2}\|\Sigma_{y}\|_{2}}.

Taking c=(c)2c=(c^{\prime})^{2} concludes the proof. ∎
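The √d scaling in Lemma 7 is visible empirically: for independent standard Gaussians and Q = I, the lemma bounds ‖⟨x,y⟩‖_{ψ₁} by c√d, so 𝔼|⟨x,y⟩| should grow like √d. A quick Monte Carlo sketch of our own (assuming NumPy; `mean_abs_inner` is an illustrative helper):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_abs_inner(d, trials=20000):
    """Monte Carlo estimate of E|<x, y>| for independent x, y ~ N(0, I_d)."""
    x = rng.standard_normal((trials, d))
    y = rng.standard_normal((trials, d))
    return np.mean(np.abs(np.sum(x * y, axis=1)))

# Quadrupling d should roughly double E|<x, y>|, i.e., sqrt(d) scaling.
ratio = mean_abs_inner(400) / mean_abs_inner(100)
assert 1.7 < ratio < 2.3
```

The same √d growth, rather than the naive d from multiplying the two ψ₂ bounds on ‖x‖ and ‖y‖, is what the lemma's constant captures.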

Lemma 8.

Let x,yx,y be random vectors of dimensions dx,dyd_{x},d_{y}, respectively, defined on the same probability space. Then, Cov([x;y])2Cov(x)2+Cov(y)2\|\mathrm{Cov}([x;y])\|_{2}\leq\|\mathrm{Cov}(x)\|_{2}+\|\mathrm{Cov}(y)\|_{2}.

Proof.

Let Cov([x;y])=DD\mathrm{Cov}([x;y])=DD^{\top} be a factorization of the positive semidefinite matrix Cov([x;y])\mathrm{Cov}([x;y]), where D(dx+dy)×(dx+dy)D\in\mathbb{R}^{(d_{x}+d_{y})\times(d_{x}+d_{y})}. Let DxD_{x} and DyD_{y} be the matrices consisting of the first dxd_{x} rows and the last dyd_{y} rows of DD, respectively. Then,

Cov([x;y])=DD=\displaystyle\mathrm{Cov}([x;y])=DD^{\top}=\; [Dx;Dy][Dx,Dy]\displaystyle[D_{x};D_{y}][D_{x}^{\top},D_{y}^{\top}]
=\displaystyle=\; [DxDxDxDyDyDxDyDy].\displaystyle\begin{bmatrix}D_{x}D_{x}^{\top}&D_{x}D_{y}^{\top}\\ D_{y}D_{x}^{\top}&D_{y}D_{y}^{\top}\end{bmatrix}.

Hence, Cov(x)=DxDx\mathrm{Cov}(x)=D_{x}D_{x}^{\top} and Cov(y)=DyDy\mathrm{Cov}(y)=D_{y}D_{y}^{\top}. The proof is completed by noticing that

Cov([x;y])2=DD2=\displaystyle\|\mathrm{Cov}([x;y])\|_{2}=\|D^{\top}D\|_{2}=\; [Dx,Dy][Dx;Dy]2\displaystyle\|[D_{x}^{\top},D_{y}^{\top}][D_{x};D_{y}]\|_{2}
=\displaystyle=\; DxDx+DyDy2\displaystyle\|D_{x}^{\top}D_{x}+D_{y}^{\top}D_{y}\|_{2}
\displaystyle\leq\; DxDx2+DyDy2\displaystyle\|D_{x}^{\top}D_{x}\|_{2}+\|D_{y}^{\top}D_{y}\|_{2}
=\displaystyle=\; Cov(x)2+Cov(y)2.\|\mathrm{Cov}(x)\|_{2}+\|\mathrm{Cov}(y)\|_{2}. ∎
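Lemma 8 is a deterministic fact about positive semidefinite block matrices, so it holds for empirical covariances of arbitrarily correlated x and y as well. A small numerical check of our own (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
dx, dy, n = 5, 3, 10000
# Build correlated x and y from a shared latent source via a random mixing map.
Z = rng.standard_normal((n, dx + dy)) @ rng.standard_normal((dx + dy, dx + dy))
x, y = Z[:, :dx], Z[:, dx:]

op_norm = lambda M: np.linalg.norm(M, 2)
cov_joint = np.cov(Z, rowvar=False)   # Cov([x; y])
cov_x = np.cov(x, rowvar=False)
cov_y = np.cov(y, rowvar=False)

# Lemma 8: ||Cov([x; y])||_2 <= ||Cov(x)||_2 + ||Cov(y)||_2, for any correlation.
assert op_norm(cov_joint) <= op_norm(cov_x) + op_norm(cov_y) + 1e-9
```

The small additive slack only guards against floating-point rounding; the inequality itself is exact.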

Appendix C Certainty equivalent linear quadratic control

As shown in Lemma 5, if the covariates in linear regression do not have a full-rank covariance, then the parameters can be identified only along certain directions. The following lemma studies the performance of the certainty equivalent optimal controller in this case.
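To make the certainty equivalence pipeline concrete, here is a minimal time-invariant sketch of our own (assuming NumPy; it is not the paper's SysId, Algorithm 3, which handles the time-varying, rank-deficient case): solve the backward Riccati recursion for perturbed parameters (Â, B̂) and evaluate the resulting gains on the true system.

```python
import numpy as np

def riccati_gains(A, B, Q, R, T):
    """Backward Riccati recursion for time-invariant (A, B, Q, R) over
    horizon T; returns feedback gains K_0, ..., K_{T-1} with u_t = K_t x_t."""
    P, gains = Q, []
    for _ in range(T):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A + A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]

def rollout_cost(A, B, Q, R, gains, x0, W, rng):
    """Quadratic cost of one noisy rollout under the given feedback gains."""
    x, c = x0, 0.0
    for K in gains:
        u = K @ x
        c += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + W @ rng.standard_normal(len(x))
    return c + x @ Q @ x

rng = np.random.default_rng(5)
dx, du, T = 3, 2, 30
A = 0.9 * np.eye(dx) + 0.05 * rng.standard_normal((dx, dx))
B = rng.standard_normal((dx, du))
Q, R = np.eye(dx), np.eye(du)
A_hat = A + 0.01 * rng.standard_normal((dx, dx))   # small identification error
B_hat = B + 0.01 * rng.standard_normal((dx, du))

K_star = riccati_gains(A, B, Q, R, T)              # oracle gains
K_hat = riccati_gains(A_hat, B_hat, Q, R, T)       # certainty-equivalent gains
x0, W = rng.standard_normal(dx), 0.1 * np.eye(dx)
gaps = [rollout_cost(A, B, Q, R, K_hat, x0, W, rng)
        - rollout_cost(A, B, Q, R, K_star, x0, W, rng) for _ in range(200)]
# The certainty-equivalent controller is near-optimal on the true system.
assert abs(np.mean(gaps)) < 10.0
```

Lemma 9 quantifies exactly this gap, with the extra care needed when the state covariances Σ_t are rank deficient and only certain directions of (A_t, B_t) are identifiable.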

Lemma 9 (Rank deficient linear quadratic control).

Consider the finite-horizon time-varying linear dynamical system xt+1=Atxt+Btut+wtx_{t+1}=A^{\ast}_{t}x_{t}+B^{\ast}_{t}u_{t}+w_{t} with unknown (At,Bt)t=0T1(A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{T-1}.

  • Assume that (At)t=0T1(A^{\ast}_{t})_{t=0}^{T-1} is uniformly exponentially stable (Assumption 1).

  • Assume that x0𝒩(0,Σ0)x_{0}\sim\mathcal{N}(0,\Sigma_{0}), wt𝒩(0,Σwt)w_{t}\sim\mathcal{N}(0,\Sigma_{w_{t}}) for all 0tT10\leq t\leq T-1, are independent, where Σ0\Sigma_{0} and (Σwt)t=0T1(\Sigma_{w_{t}})_{t=0}^{T-1} do not necessarily have full rank.

  • Instead of xtx_{t}, we observe xt=xt+δxtx^{\prime}_{t}=x_{t}+\delta_{x_{t}}, where the Gaussian noise vector δxt\delta_{x_{t}} can be correlated with xtx_{t} and satisfies Cov(δxt)1/22ε\|\mathrm{Cov}(\delta_{x_{t}})^{1/2}\|_{2}\leq\varepsilon for all 0tT0\leq t\leq T.

  • Let Σt:=Cov(xt)\Sigma_{t}:=\mathrm{Cov}(x_{t}) and Σt:=Cov(xt)\Sigma^{\prime}_{t}:=\mathrm{Cov}(x^{\prime}_{t}) under control ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I) for all 0tT10\leq t\leq T-1. Assume σmin+((Σt)1/2)β>0\sigma_{\min}^{+}((\Sigma_{t})^{1/2})\geq\beta>0 and σmin+((Σt)1/2)θ>0\sigma_{\min}^{+}((\Sigma^{\prime}_{t})^{1/2})\geq\theta>0 for all 0tT0\leq t\leq T, and that θ=Ω(ε)\theta=\Omega(\varepsilon) and θ\theta decays no faster than n1/4n^{-1/4} as nn grows.

  • Assume (Qt)t=0T,(Q^t)t=0T(Q^{\ast}_{t})_{t=0}^{T},(\hat{Q}_{t})_{t=0}^{T} are positive definite with 𝒪(1)\mathcal{O}(1) operator norms, and Q^tQt2ε\|\hat{Q}_{t}-Q^{\ast}_{t}\|_{2}\leq\varepsilon for all 0tT0\leq t\leq T.

Collect nn trajectories using ut𝒩(0,σu2I)u_{t}\sim\mathcal{N}(0,\sigma_{u}^{2}I) for some σu>0\sigma_{u}>0 and 0tT10\leq t\leq T-1. Identify (A^t,B^t)t=0T1(\hat{A}_{t},\hat{B}_{t})_{t=0}^{T-1} by SysId (Algorithm 3), which uses ordinary least squares and takes a minimum Frobenius norm solution. Let (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} be the optimal controller in system ((A^t,B^t,Rt)t=0T1,(Q^t)t=0T)((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T}). Then, there exists an absolute constant c0>0c_{0}>0, such that if nc0(dx+du+log(1/p))n\geq c_{0}(d_{x}+d_{u}+\log(1/p)), with probability at least 1p1-p, (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} is ϵ\epsilon-optimal for solving the LQR problem ((At,Bt,Rt)t=0T1,(Qt)t=0T)((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T}) for ϵ=\epsilon=

𝒪(aT((1+β1)dxε+dx(dx+du+log(1/p))1/2n1/2)),\displaystyle\mathcal{O}(a^{T}((1+\beta^{-1})d_{x}\varepsilon+d_{x}(d_{x}+d_{u}+\log(1/p))^{1/2}n^{-1/2})),

where the dimension-free constant a>0a>0 depends only on the system parameters.

Proof.

First of all, we establish that (Kt)t=0T1,(K^t)t=0T1(K^{\ast}_{t})_{t=0}^{T-1},(\hat{K}_{t})_{t=0}^{T-1} are bounded. Since (Qt)t=0T,(Q^t)t=0T(Q^{\ast}_{t})_{t=0}^{T},(\hat{Q}_{t})_{t=0}^{T} are positive definite, the system is fully cost observable. By (Zhang and Zhang, 2021), the optimal feedback gains (Kt)t=0T1(K^{\ast}_{t})_{t=0}^{T-1} have operator norms bounded by dimension-free constants depending on the system parameters. Note that (Zhang and Zhang, 2021) studies the boundedness of the Kalman filter, which is dual to our optimal control problem, but their results carry over. One can also see the boundedness from the compact formulation (Zhang et al., 2021), as described in the proof of Lemma 10, using known bounds for RDE solutions (Komaroff, 1996). The same argument applies to (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} if we can show (A^t,B^t)t=0T1(\hat{A}_{t},\hat{B}_{t})_{t=0}^{T-1} have 𝒪(1)\mathcal{O}(1) operator norms. SysId (Algorithm 3) identifies [A^t,B^t][\hat{A}_{t},\hat{B}_{t}] by ordinary least squares for all 0tT10\leq t\leq T-1. By Claim 1, with a union bound for all 0tT10\leq t\leq T-1, as long as n16(dx+du+log(T/p))n\geq 16(d_{x}+d_{u}+\log(T/p)), with probability at least 14p1-4p, [A^t,B^t]2=𝒪([At,Bt]2)=𝒪(1)\|[\hat{A}_{t},\hat{B}_{t}]\|_{2}=\mathcal{O}(\|[A^{\ast}_{t},B^{\ast}_{t}]\|_{2})=\mathcal{O}(1). Hence, with the same probability, (K^t)t=0T1(\hat{K}_{t})_{t=0}^{T-1} have operator norms bounded by dimension-free constants depending on the system parameters. In the following, we assume Kt2,K^t2κ\|K^{\ast}_{t}\|_{2},\|\hat{K}_{t}\|_{2}\leq\kappa for all 0tT10\leq t\leq T-1, where κ\kappa is a dimension-free constant that depends on system parameters.

Since $\Sigma_{t}$ can be rank deficient, we cannot guarantee that $\|\hat{A}_{t}-A^{\ast}_{t}\|_{2}$ is small. Instead, for all $0\leq t\leq T-1$, by Lemma 5,

\displaystyle\bigl\|([\hat{A}_{t},\hat{B}_{t}]-[A^{\ast}_{t},B^{\ast}_{t}])\textnormal{diag}(\Sigma_{t}^{1/2},\sigma_{u}I)\bigr\|_{2}=\mathcal{O}(\delta),

where $\delta:=(1+\beta^{-1})\varepsilon+(d_{x}+d_{u}+\log(1/p))^{1/2}n^{-1/2}$, and $\epsilon_{x},\epsilon_{y},d_{1},d_{2}$ in Lemma 5 correspond to $\varepsilon,\varepsilon,d_{x}+d_{u},d_{x}$ here, respectively. This implies that $\|(\hat{A}_{t}-A^{\ast}_{t})\Sigma_{t}^{1/2}\|_{2}=\mathcal{O}(\delta)$ and $\|\hat{B}_{t}-B^{\ast}_{t}\|_{2}=\mathcal{O}(\delta)$ for all $0\leq t\leq T-1$.

Let $\Xi_{t}$ denote the covariance of $x_{t}$ under a state feedback controller $(K_{t})_{t=0}^{T-1}$ with $\|K_{t}\|_{2}\leq\kappa$ for all $0\leq t\leq T-1$. We claim that there exists $b_{t}>0$ such that $\Xi_{t}\preccurlyeq b_{t}\Sigma_{t}$, and prove it by induction. At step $0$, clearly $\Xi_{0}=\Sigma_{0}$. Suppose $\Xi_{t}\preccurlyeq b_{t}\Sigma_{t}$. For step $t+1$,

\displaystyle\Sigma_{t+1}=\;A^{\ast}_{t}\Sigma_{t}(A^{\ast}_{t})^{\top}+\sigma_{u}^{2}B^{\ast}_{t}(B^{\ast}_{t})^{\top}+\Sigma_{w_{t}},
\displaystyle\Xi_{t+1}=\;(A^{\ast}_{t}+B^{\ast}_{t}K_{t})\Xi_{t}(A^{\ast}_{t}+B^{\ast}_{t}K_{t})^{\top}+\Sigma_{w_{t}}
\displaystyle\preccurlyeq\;2A^{\ast}_{t}\Xi_{t}(A^{\ast}_{t})^{\top}+2B^{\ast}_{t}K_{t}\Xi_{t}K_{t}^{\top}(B^{\ast}_{t})^{\top}+\Sigma_{w_{t}}.

Hence, for $b\geq 1$,

\displaystyle b\Sigma_{t+1}-\Xi_{t+1}\succcurlyeq(b-2b_{t})A^{\ast}_{t}\Sigma_{t}(A^{\ast}_{t})^{\top}+b\sigma_{u}^{2}B^{\ast}_{t}(B^{\ast}_{t})^{\top}-2B^{\ast}_{t}K_{t}\Xi_{t}K_{t}^{\top}(B^{\ast}_{t})^{\top}.

To ensure $b_{t+1}\Sigma_{t+1}\succcurlyeq\Xi_{t+1}$, it suffices to take $b_{t+1}=\max\{2,2\sigma_{u}^{-2}\kappa^{2}\|\Sigma_{t}\|_{2}\}b_{t}$. Hence, $b_{t}\leq a_{0}^{t}$, where the constant $a_{0}>0$ is dimension-free and depends on the system parameters. Note that if $(K_{t})_{t=0}^{T-1}$ stabilizes $(A^{\ast}_{t},B^{\ast}_{t})_{t=0}^{T-1}$, then $\|\Xi_{t}\|_{2}=\mathcal{O}(1)$, and the bound on $b_{t}$ can be improved to $\sigma_{\min}^{+}(\Sigma_{t})^{-1}\|\Xi_{t}\|_{2}$; the exponential dependence on $t$ thus results from not knowing whether $(K_{t})_{t=0}^{T-1}$ stabilizes the system. By the definition of the operator norm, we have $\|(\hat{A}_{t}-A^{\ast}_{t})\Xi_{t}^{1/2}\|_{2}\leq\|(\hat{A}_{t}-A^{\ast}_{t})(b_{t}\Sigma_{t})^{1/2}\|_{2}=\mathcal{O}(b_{t}^{1/2}\delta)$ for all $0\leq t\leq T-1$.
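The induction above can be checked numerically. The sketch below (with illustrative dimensions and an arbitrary bounded feedback gain, all assumptions) propagates $\Sigma_{t}$ and $\Xi_{t}$ and verifies the invariant $\Xi_{t}\preccurlyeq b_{t}\Sigma_{t}$ with $b_{t+1}=\max\{2,2\sigma_{u}^{-2}\kappa^{2}\|\Sigma_{t}\|_{2}\}b_{t}$:

```python
import numpy as np

rng = np.random.default_rng(3)
dx, du, T, sigma_u = 3, 2, 5, 0.5  # illustrative sizes

# Illustrative system and an arbitrary feedback gain with ||K_t||_2 <= kappa.
As = [0.3 * rng.standard_normal((dx, dx)) for _ in range(T)]
Bs = [rng.standard_normal((dx, du)) for _ in range(T)]
Ks = [0.3 * rng.standard_normal((du, dx)) for _ in range(T)]
kappa = max(np.linalg.norm(K, 2) for K in Ks)
Sigma_w = 0.1 * np.eye(dx)

Sigma = np.eye(dx)  # Sigma_t: covariance under exploration u_t ~ N(0, sigma_u^2 I)
Xi = np.eye(dx)     # Xi_t:    covariance under feedback u_t = K_t x_t
b = 1.0             # b_0 = 1 since Xi_0 = Sigma_0

def psd_dominates(M, N):
    """Check N <= M in the positive semidefinite order, up to rounding."""
    tol = 1e-8 * max(1.0, np.linalg.norm(M, 2))
    return np.linalg.eigvalsh(M - N).min() > -tol

for t in range(T):
    assert psd_dominates(b * Sigma, Xi)  # induction invariant Xi_t <= b_t Sigma_t
    b = max(2.0, 2.0 * kappa**2 * np.linalg.norm(Sigma, 2) / sigma_u**2) * b
    Acl = As[t] + Bs[t] @ Ks[t]
    Xi = Acl @ Xi @ Acl.T + Sigma_w
    Sigma = As[t] @ Sigma @ As[t].T + sigma_u**2 * Bs[t] @ Bs[t].T + Sigma_w

assert psd_dominates(b * Sigma, Xi)  # invariant also holds at step T
```

The geometric growth of `b` mirrors the $b_{t}\leq a_{0}^{t}$ bound in the proof.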

Let $\hat{\Xi}_{t}$ denote the covariance of $x_{t}$ under the state feedback controller $(K_{t})_{t=0}^{T-1}$ in the system $(\hat{A}_{t},\hat{B}_{t})_{t=0}^{T-1}$. Then, by definition,

\displaystyle\bigl\|\Xi_{t+1}-\hat{\Xi}_{t+1}\bigr\|_{2}
\displaystyle=\;\Bigl\|(A^{\ast}_{t}+B^{\ast}_{t}K_{t})\Xi_{t}(A^{\ast}_{t}+B^{\ast}_{t}K_{t})^{\top}+\Sigma_{w_{t}}-\Bigl(\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)\hat{\Xi}_{t}\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)^{\top}+\Sigma_{w_{t}}\Bigr)\Bigr\|_{2}
\displaystyle=\;\Bigl\|(A^{\ast}_{t}+B^{\ast}_{t}K_{t})\Xi_{t}(A^{\ast}_{t}+B^{\ast}_{t}K_{t})^{\top}-\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)\Xi_{t}\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)^{\top}+\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)(\Xi_{t}-\hat{\Xi}_{t})\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)^{\top}\Bigr\|_{2}
\displaystyle=\;\mathcal{O}\Bigl(\kappa b_{t}^{1/2}\Bigl\|(A^{\ast}_{t}+B^{\ast}_{t}K_{t})\Xi_{t}^{1/2}-\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)\Xi_{t}^{1/2}\Bigr\|_{2}\Bigr)+\mathcal{O}(1+\kappa^{2})\|\hat{\Xi}_{t}-\Xi_{t}\|_{2},

where we used that the operator norms of $A^{\ast}_{t},B^{\ast}_{t}$ are $\mathcal{O}(1)$ and that the operator norms of $\hat{A}_{t},\hat{B}_{t}$ are $\mathcal{O}(1)$ with probability at least $1-4p$. Since

\displaystyle\Bigl\|(A^{\ast}_{t}+B^{\ast}_{t}K_{t})\Xi_{t}^{1/2}-\bigl(\hat{A}_{t}+\hat{B}_{t}K_{t}\bigr)\Xi_{t}^{1/2}\Bigr\|_{2}=\;\Bigl\|(A^{\ast}_{t}-\hat{A}_{t})\Xi_{t}^{1/2}+(B^{\ast}_{t}-\hat{B}_{t})K_{t}\Xi_{t}^{1/2}\Bigr\|_{2}=\;\mathcal{O}(\delta\kappa b_{t}^{1/2}),

we have $\bigl\|\hat{\Xi}_{t+1}-\Xi_{t+1}\bigr\|_{2}=\mathcal{O}(\delta\kappa^{2}b_{t}+(1+\kappa^{2})\|\hat{\Xi}_{t}-\Xi_{t}\|_{2})$. Combining this with $\hat{\Xi}_{0}=\Xi_{0}=\Sigma_{0}$ gives

\displaystyle\|\hat{\Xi}_{t}-\Xi_{t}\|_{2}=\mathcal{O}(b_{t}(a_{1}+a_{1}\kappa^{2})^{t}\delta)=\mathcal{O}((a_{2}+a_{2}\kappa^{2})^{t}\delta)

for some dimension-free constants $a_{1},a_{2}>0$ that depend on the system parameters.

Let $J((K_{t})_{t=0}^{T-1})$ denote the expected cumulative cost under state feedback controller $(K_{t})_{t=0}^{T-1}$ in system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$, and $\hat{J}((K_{t})_{t=0}^{T-1})$ the corresponding expected cumulative cost in system $((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T})$. Notice that

\displaystyle J((K_{t})_{t=0}^{T-1})=\mathbb{E}\Bigl[\sum\nolimits_{t=0}^{T}c_{t}\Bigr]=\;\mathbb{E}\Bigl[\sum\nolimits_{t=0}^{T}x_{t}^{\top}(Q^{\ast}_{t})^{\prime}x_{t}\Bigr]
\displaystyle=\;\mathbb{E}\Bigl[\sum\nolimits_{t=0}^{T}\left\langle(Q^{\ast}_{t})^{\prime},x_{t}x_{t}^{\top}\right\rangle_{F}\Bigr]
\displaystyle=\;\sum\nolimits_{t=0}^{T}\left\langle(Q^{\ast}_{t})^{\prime},\Xi_{t}\right\rangle_{F},

where $(Q^{\ast}_{t})^{\prime}:=Q^{\ast}_{t}+K_{t}^{\top}R^{\ast}_{t}K_{t}$ for $0\leq t\leq T-1$ and $(Q^{\ast}_{T})^{\prime}:=Q^{\ast}_{T}$. Similarly, $\hat{J}((K_{t})_{t=0}^{T-1})=\sum_{t=0}^{T}\bigl\langle\hat{Q}^{\prime}_{t},\hat{\Xi}_{t}\bigr\rangle_{F}$, where $\hat{Q}^{\prime}_{t}=\hat{Q}_{t}+K_{t}^{\top}R^{\ast}_{t}K_{t}$ for $0\leq t\leq T-1$ and $\hat{Q}^{\prime}_{T}=\hat{Q}_{T}$. Hence,

\displaystyle\bigl|J((K_{t})_{t=0}^{T-1})-\hat{J}((K_{t})_{t=0}^{T-1})\bigr|\leq\;\Bigl|\sum\nolimits_{t=0}^{T}\left\langle(Q^{\ast}_{t})^{\prime}-\hat{Q}^{\prime}_{t},\Xi_{t}\right\rangle_{F}\Bigr|+\Bigl|\sum\nolimits_{t=0}^{T}\left\langle\hat{Q}^{\prime}_{t},\Xi_{t}-\hat{\Xi}_{t}\right\rangle_{F}\Bigr|
\displaystyle\leq\;\varepsilon d_{x}\sum\nolimits_{t=0}^{T}\mathcal{O}(b_{t})+\kappa^{2}d_{x}\sum\nolimits_{t=0}^{T}\mathcal{O}((a_{2}+a_{2}\kappa^{2})^{t}\delta)
\displaystyle=\;\varepsilon d_{x}\sum\nolimits_{t=0}^{T}\mathcal{O}(a_{0}^{t})+\delta\kappa^{2}d_{x}\sum\nolimits_{t=0}^{T}\mathcal{O}((a_{2}+a_{2}\kappa^{2})^{t})
\displaystyle=\;\mathcal{O}(\delta d_{x}a^{T}),

where we used $\|\hat{Q}^{\prime}_{t}\|_{2}=\mathcal{O}(\kappa^{2})$, $\varepsilon$ is absorbed into $\delta=(1+\beta^{-1})\varepsilon+(d_{x}+d_{u}+\log(1/p))^{1/2}n^{-1/2}$, and the dimension-free constant $a>0$ depends on the system parameters.
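As a numeric sanity check on the cost identity $J((K_{t})_{t=0}^{T-1})=\sum_{t=0}^{T}\langle(Q^{\ast}_{t})^{\prime},\Xi_{t}\rangle_{F}$, the sketch below (illustrative dimensions and gains, all assumptions) evaluates the same cost a second way, via the backward cost-to-go recursion; for fixed linear feedback, $J=\langle P_{0}^{K},\Sigma_{0}\rangle_{F}+\sum_{t}\langle P_{t+1}^{K},\Sigma_{w_{t}}\rangle_{F}$ is the standard dual evaluation, and the two must agree exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
dx, du, T = 3, 2, 5  # illustrative sizes

# Illustrative system, costs, and fixed feedback gains.
As = [0.3 * rng.standard_normal((dx, dx)) for _ in range(T)]
Bs = [rng.standard_normal((dx, du)) for _ in range(T)]
Qs = [np.eye(dx) for _ in range(T + 1)]
Rs = [np.eye(du) for _ in range(T)]
Ks = [0.2 * rng.standard_normal((du, dx)) for _ in range(T)]
Sigma0, Sigma_w = np.eye(dx), 0.1 * np.eye(dx)

# Forward pass: propagate Xi_t and accumulate sum_t <Q'_t, Xi_t>_F.
Xi, J_fwd = Sigma0.copy(), 0.0
for t in range(T):
    Qp = Qs[t] + Ks[t].T @ Rs[t] @ Ks[t]
    J_fwd += np.trace(Qp @ Xi)
    Acl = As[t] + Bs[t] @ Ks[t]
    Xi = Acl @ Xi @ Acl.T + Sigma_w
J_fwd += np.trace(Qs[T] @ Xi)

# Backward pass: cost-to-go matrices P_t^K, then
# J = <P_0^K, Sigma_0>_F + sum_t <P_{t+1}^K, Sigma_w>_F.
P, J_bwd = Qs[T].copy(), 0.0
for t in reversed(range(T)):
    J_bwd += np.trace(P @ Sigma_w)  # P equals P_{t+1}^K here
    Acl = As[t] + Bs[t] @ Ks[t]
    P = Acl.T @ P @ Acl + Qs[t] + Ks[t].T @ Rs[t] @ Ks[t]
J_bwd += np.trace(P @ Sigma0)

assert np.isclose(J_fwd, J_bwd)
```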

Finally, let $(K^{\ast}_{t})_{t=0}^{T-1}$ be the optimal feedback gains in system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$. By the union bound, with probability at least $1-8p$,

\displaystyle|J((\hat{K}_{t})_{t=0}^{T-1})-\hat{J}((\hat{K}_{t})_{t=0}^{T-1})|=\mathcal{O}(\delta d_{x}a^{T}),\quad|J((K^{\ast}_{t})_{t=0}^{T-1})-\hat{J}((K^{\ast}_{t})_{t=0}^{T-1})|=\mathcal{O}(\delta d_{x}a^{T}).

Therefore,

\displaystyle J((\hat{K}_{t})_{t=0}^{T-1})-J((K^{\ast}_{t})_{t=0}^{T-1})
\displaystyle=\;J((\hat{K}_{t})_{t=0}^{T-1})-\hat{J}((\hat{K}_{t})_{t=0}^{T-1})+\hat{J}((\hat{K}_{t})_{t=0}^{T-1})-\hat{J}((K^{\ast}_{t})_{t=0}^{T-1})+\hat{J}((K^{\ast}_{t})_{t=0}^{T-1})-J((K^{\ast}_{t})_{t=0}^{T-1})
\displaystyle\leq\;\mathcal{O}(d_{x}a^{T}\delta)
\displaystyle=\;\mathcal{O}(a^{T}((1+\beta^{-1})d_{x}\varepsilon+d_{x}(d_{x}+d_{u}+\log(1/p))^{1/2}n^{-1/2})),

where the inequality uses $\hat{J}((\hat{K}_{t})_{t=0}^{T-1})\leq\hat{J}((K^{\ast}_{t})_{t=0}^{T-1})$, since $(\hat{K}_{t})_{t=0}^{T-1}$ is optimal for $\hat{J}$. The proof is completed by rescaling $8p$ to $p$. ∎

If the regressor covariance in the linear regression is full-rank, then by Lemma 6 the system parameters can be fully identified. In this case, the certainty-equivalent optimal controller enjoys a much better guarantee than in the rank-deficient case, as shown in the following lemma.

Lemma 10 (Linear quadratic control).

Assume the finite-horizon linear time-varying system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$ is stabilizable. Let $(K^{\ast}_{t})_{t=0}^{T-1}$ be the optimal controller and $(P^{\ast}_{t})_{t=0}^{T}$ the solution to RDE (2.4) for system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$. Let $(\hat{K}_{t})_{t=0}^{T-1}$ be the optimal controller and $(\hat{P}_{t})_{t=0}^{T}$ the solution to RDE (2.4) for system $((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T})$, where $\|\hat{A}_{t}-A^{\ast}_{t}\|_{2}\leq\epsilon$, $\|\hat{B}_{t}-B^{\ast}_{t}\|_{2}\leq\epsilon$, and $\|\hat{Q}_{t}-Q^{\ast}_{t}\|_{2}\leq\epsilon$. Then there exists a dimension-free constant $\epsilon_{0}>0$, with $\epsilon_{0}^{-1}$ depending polynomially on the system parameters, such that as long as $\epsilon\leq\epsilon_{0}$, we have $\|\hat{P}_{t}-P^{\ast}_{t}\|_{2}=\mathcal{O}(\epsilon)$ and $\|\hat{K}_{t}-K^{\ast}_{t}\|_{2}=\mathcal{O}(\epsilon)$ for all $0\leq t\leq T$, and $(\hat{K}_{t})_{t=0}^{T-1}$ is $\mathcal{O}((d_{x}\wedge d_{u})T\epsilon^{2})$-optimal in system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$.

Proof.

Mania et al. (2019) studied this problem in the infinite-horizon LTI setting; here we extend their result to the finite-horizon LTV setting.

We adopt the following compact formulation of a finite-horizon LTV system, introduced in (Zhang et al., 2021, Section 3), to reduce our setting to the infinite-horizon LTI one. Since noisy and noiseless LQR share the same associated Riccati equation and optimal controller, we consider the noiseless case here:

\displaystyle x=[x_{0};\ldots;x_{T}],\quad u=[u_{0};\ldots;u_{T-1}],\quad w=[x_{0};w_{0};\ldots;w_{T-1}],
\displaystyle A^{\ast}=\begin{bmatrix}0_{d_{x}\times d_{x}T}&0_{d_{x}\times d_{x}}\\ \textnormal{diag}(A^{\ast}_{0},\ldots,A^{\ast}_{T-1})&0_{d_{x}T\times d_{x}}\end{bmatrix},\quad B^{\ast}=\begin{bmatrix}0_{d_{x}\times d_{u}T}\\ \textnormal{diag}(B^{\ast}_{0},\ldots,B^{\ast}_{T-1})\end{bmatrix},
\displaystyle Q^{\ast}=\textnormal{diag}(Q^{\ast}_{0},\ldots,Q^{\ast}_{T}),\quad R^{\ast}=\textnormal{diag}(R^{\ast}_{0},\ldots,R^{\ast}_{T-1}),\quad K=[\textnormal{diag}(K_{0},\ldots,K_{T-1}),0_{d_{u}T\times d_{x}}].
The control inputs under state feedback controller $(K_{t})_{t=0}^{T-1}$ can then be written as $u=Kx$. Let $(P_{t}^{K})_{t=0}^{T}$ be the associated cumulative cost matrices starting from step $t$, so that $P_{T}^{K}=Q^{\ast}_{T}$ and

\displaystyle P_{t}^{K}=(A^{\ast}_{t}+B^{\ast}_{t}K_{t})^{\top}P_{t+1}^{K}(A^{\ast}_{t}+B^{\ast}_{t}K_{t})+Q^{\ast}_{t}+K_{t}^{\top}R^{\ast}_{t}K_{t},

and $P^{K}:=\textnormal{diag}(P_{0}^{K},\ldots,P_{T}^{K})$ is the solution to

\displaystyle P^{K}=(A^{\ast}+B^{\ast}K)^{\top}P^{K}(A^{\ast}+B^{\ast}K)+Q^{\ast}+K^{\top}R^{\ast}K.
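The block structure can be verified numerically: stacking the per-step $P_{t}^{K}$ into $P^{K}=\textnormal{diag}(P_{0}^{K},\ldots,P_{T}^{K})$ satisfies the compact equation above. A sketch with illustrative (assumed) dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
dx, du, T = 3, 2, 4  # illustrative sizes

# Illustrative LTV system, feedback gains, and per-step cost-to-go matrices.
As = [0.3 * rng.standard_normal((dx, dx)) for _ in range(T)]
Bs = [rng.standard_normal((dx, du)) for _ in range(T)]
Qs = [np.eye(dx) for _ in range(T + 1)]
Rs = [np.eye(du) for _ in range(T)]
Ks = [0.2 * rng.standard_normal((du, dx)) for _ in range(T)]

# Backward recursion with terminal condition P_T^K = Q_T.
Ps = [None] * (T + 1)
Ps[T] = Qs[T]
for t in reversed(range(T)):
    Acl_t = As[t] + Bs[t] @ Ks[t]
    Ps[t] = Acl_t.T @ Ps[t + 1] @ Acl_t + Qs[t] + Ks[t].T @ Rs[t] @ Ks[t]

def blkdiag(mats):
    """Block-diagonal stacking with NumPy only."""
    r, c = sum(m.shape[0] for m in mats), sum(m.shape[1] for m in mats)
    out = np.zeros((r, c))
    i = j = 0
    for m in mats:
        out[i:i + m.shape[0], j:j + m.shape[1]] = m
        i, j = i + m.shape[0], j + m.shape[1]
    return out

# Compact matrices for the stacked state [x_0; ...; x_T] (Zhang et al., 2021).
A = np.zeros((dx * (T + 1), dx * (T + 1)))
A[dx:, :dx * T] = blkdiag(As)
B = np.zeros((dx * (T + 1), du * T))
B[dx:, :] = blkdiag(Bs)
K = np.zeros((du * T, dx * (T + 1)))
K[:, :dx * T] = blkdiag(Ks)
Q, R, P = blkdiag(Qs), blkdiag(Rs), blkdiag(Ps)

# P^K = diag(P_0^K, ..., P_T^K) solves the compact Lyapunov equation.
Acl = A + B @ K
assert np.allclose(P, Acl.T @ P @ Acl + Q + K.T @ R @ K)
```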

Similarly, the optimal cumulative cost matrix

\displaystyle P^{\ast}:=\textnormal{diag}(P^{\ast}_{0},\ldots,P^{\ast}_{T}),

produced by the RDE (2.4) in system $(A^{\ast},B^{\ast},R^{\ast},Q^{\ast})$ (that is, system $((A^{\ast}_{t},B^{\ast}_{t},R^{\ast}_{t})_{t=0}^{T-1},(Q^{\ast}_{t})_{t=0}^{T})$), satisfies

\displaystyle P^{\ast}=(A^{\ast})^{\top}(P^{\ast}-P^{\ast}B^{\ast}((B^{\ast})^{\top}P^{\ast}B^{\ast}+R^{\ast})^{-1}(B^{\ast})^{\top}P^{\ast})A^{\ast}+Q^{\ast}.

Let $\hat{P}=\textnormal{diag}(\hat{P}_{0},\ldots,\hat{P}_{T})$ be the optimal cumulative cost matrix in system $(\hat{A},\hat{B},\hat{Q},R^{\ast})$ (that is, system $((\hat{A}_{t},\hat{B}_{t},R^{\ast}_{t})_{t=0}^{T-1},(\hat{Q}_{t})_{t=0}^{T})$) produced by the RDE (2.4). Define $K^{\ast}:=[\textnormal{diag}(K^{\ast}_{0},\ldots,K^{\ast}_{T-1}),0_{d_{u}T\times d_{x}}]$, where $(K^{\ast}_{t})_{t=0}^{T-1}$ is the optimal controller in system $(A^{\ast},B^{\ast},Q^{\ast},R^{\ast})$, and define $\hat{K}$ similarly for system $(\hat{A},\hat{B},\hat{Q},R^{\ast})$. By definition, $(A^{\ast},B^{\ast},Q^{\ast})$ is stabilizable and observable in the sense of LTI systems. Therefore, by (Mania et al., 2019, Propositions 1 and 2), there exists a dimension-free constant $\epsilon_{0}>0$, with $\epsilon_{0}^{-1}$ depending polynomially on the system parameters, such that as long as $\epsilon\leq\epsilon_{0}$,

\displaystyle\|\hat{P}-P^{\ast}\|_{2}=\mathcal{O}(\epsilon),\quad\|\hat{K}-K^{\ast}\|_{2}=\mathcal{O}(\epsilon),

and $\hat{K}$ stabilizes system $(A^{\ast},B^{\ast})$. By (Fazel et al., 2018, Lemma 12),

\displaystyle J((\hat{K}_{t})_{t=0}^{T-1})-J((K^{\ast}_{t})_{t=0}^{T-1})=\;\sum\nolimits_{t=0}^{T-1}\mathrm{tr}(\Sigma_{t}(\hat{K}_{t}-K^{\ast}_{t})^{\top}(R^{\ast}_{t}+(B^{\ast}_{t})^{\top}P^{\ast}_{t+1}B^{\ast}_{t})(\hat{K}_{t}-K^{\ast}_{t}))
\displaystyle=\;\mathrm{tr}(\Sigma(\hat{K}-K^{\ast})^{\top}(R^{\ast}+(B^{\ast})^{\top}P^{\ast}B^{\ast})(\hat{K}-K^{\ast})),

where $\Sigma=\textnormal{diag}(\Sigma_{0},\ldots,\Sigma_{T})$ and $\Sigma_{t}$ is $\mathbb{E}[x_{t}x_{t}^{\top}]$ in system $(A^{\ast},B^{\ast})$ under state feedback controller $\hat{K}$. As a result,

\displaystyle J((\hat{K}_{t})_{t=0}^{T-1})-J((K^{\ast}_{t})_{t=0}^{T-1})\leq\;\|\Sigma\|_{2}\|R^{\ast}+(B^{\ast})^{\top}P^{\ast}B^{\ast}\|_{2}\|\hat{K}-K^{\ast}\|_{F}^{2}.

Since $\hat{K}$ stabilizes system $(A^{\ast},B^{\ast})$, we have $\|\Sigma\|_{2}=\mathcal{O}(1)$. Since

\displaystyle\|\hat{K}-K^{\ast}\|_{F}\leq((d_{x}\wedge d_{u})T)^{1/2}\|\hat{K}-K^{\ast}\|_{2},

we conclude that

\displaystyle J((\hat{K}_{t})_{t=0}^{T-1})-J((K^{\ast}_{t})_{t=0}^{T-1})=\mathcal{O}((d_{x}\wedge d_{u})T\epsilon^{2}). ∎