Can Direct Latent Model Learning Solve
Linear Quadratic Gaussian Control?
Abstract
We study the task of learning state representations from potentially high-dimensional observations, with the goal of controlling an unknown partially observable system. We pursue a direct latent model learning approach, where a dynamic model in some latent state space is learned by predicting quantities directly related to planning (e.g., costs) without reconstructing the observations. In particular, we focus on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems. As our main results, we establish finite-sample guarantees of finding a near-optimal state representation function and a near-optimal controller using the directly learned latent model. To the best of our knowledge, despite various empirical successes, prior to this work it was unclear if such a cost-driven latent model learner enjoys finite-sample guarantees. Our work underscores the value of predicting multi-step costs, an idea that is key to our theory, and notably also an idea that is known to be empirically valuable for learning state representations.
1 Introduction
We consider state representation learning for control in partially observable systems, inspired by the recent successes of control from pixels (Hafner et al., 2019b, a). Control from pixels is an everyday task for human beings, but it remains challenging for learning agents. Methods to achieve it generally fall into two main categories: model-free and model-based ones. Model-free methods directly learn a visuomotor policy, also known as direct reinforcement learning (RL) (Sutton and Barto, 2018). On the other hand, model-based methods, also known as indirect RL (Sutton and Barto, 2018), attempt to learn a latent model that is a compact representation of the system, and to synthesize a policy in the latent model. Compared with model-free methods, model-based ones facilitate generalization across tasks and enable efficient planning (Hafner et al., 2020), and are sometimes more sample efficient (Tu and Recht, 2019; Sun et al., 2019; Zhang et al., 2019) than the model-free ones.
In latent-model-based control, the state of the latent model is also referred to as a state representation in the deep RL literature, and the mapping from an observed history to a latent state is referred to as the (state) representation function. Reconstructing the observation often serves as supervision for representation learning for control in the empirical RL literature (Hafner et al., 2019b, a, 2020; Fu et al., 2021; Wang et al., 2022). This is in sharp contrast to model-free methods, where the policy improvement step is completely cost-driven. Reconstructing observations provides a rich supervision signal for learning a task-agnostic world model, but observations are high-dimensional and noisy, so reconstruction requires an expressive reconstruction function; moreover, latent states learned by reconstruction contain information irrelevant for control, which can distract RL algorithms (Zhang et al., 2020; Fu et al., 2021; Wang et al., 2022). This is especially the case for practical visuomotor control tasks, e.g., robotic manipulation and self-driving cars, where the visual images contain predominantly task-irrelevant objects and backgrounds.
Various empirical attempts (Schrittwieser et al., 2020; Zhang et al., 2020; Okada and Taniguchi, 2021; Deng et al., 2021; Yang et al., 2022) have been made to bypass observation reconstruction. Apart from observation, the interaction involves two other variables: actions (control inputs) and costs. Inverse model methods (Lamb et al., 2022) reconstruct actions; while other methods rely on costs. We argue that since neither the reconstruction function nor the inverse model is used for policy learning, cost-driven state representation learning is the most direct one, in that costs are directly relevant for control purposes. In this paper, we aim to examine the soundness of this methodology in linear quadratic Gaussian (LQG) control, one of the most fundamental partially observable control models.
Parallel to the empirical advances of learning for control from pixels, partially observable linear systems have been extensively studied in the context of learning for dynamic control (Oymak and Ozay, 2019; Simchowitz et al., 2020; Lale et al., 2020, 2021; Zheng et al., 2021; Minasyan et al., 2021; Umenberger et al., 2022). In this context, the representation function is more formally referred to as a filter, the optimal one being the Kalman filter. Most existing model-based learning approaches for LQG control focus on the linear time-invariant (LTI) case and are based on the idea of learning Markov parameters (Ljung, 1998), the mapping from control inputs to observations. Hence, they need to predict observations by definition. Motivated by the empirical successes in control from pixels, we take a different, cost-driven route, in the hope of avoiding reconstructing observations or control inputs.
We focus on finite-horizon time-varying LQG control and address the following question:
Can direct, cost-driven state representation learning provably solve LQG control?
This work answers the question in the affirmative, by establishing finite-sample guarantees for a cost-driven state representation learning method.
Challenges & Our techniques. Overall, to establish finite-sample guarantees, a major technical challenge is to deal with the quadratic regression problem in cost prediction, arising from the inherent quadratic form of the cost function in LQG. Directly solving the problem for the representation function involves quartic optimization; instead, we propose to solve a quadratic regression problem, followed by low-rank approximate factorization. The quadratic regression problem also appears in identifying the cost matrices, which involves concentration for random variables that are fourth powers of Gaussians. We believe these techniques might be of independent interest.
Moreover, the first -step latent states may not be adequately excited (having full-rank covariance), which invalidates the use of most system identification techniques. We instead identify only relevant directions of the system parameters, and prove that this is sufficient for learning a near-optimal controller by analyzing state covariance mismatch. This fact is reflected in the separation in the statement of Theorem 1; developing finite-sample analysis in this case is technically challenging.
Implications. For practitioners, one takeaway from our work is the benefit of predicting multi-step cumulative costs in cost-driven state representation learning. Whereas the cost at a single time step may not be revealing enough of the latent state, the cumulative cost across multiple steps can be. This is an intuitive idea for the control community, given the multi-step nature in the classical definitions of controllability and observability. Its effectiveness has also been empirically observed in MuZero (Schrittwieser et al., 2020) in state representation learning for control, and our work can be viewed as a formal understanding of it in the LQG control setting.
Notation. We use $0$ (resp. $\mathbf{1}$) to denote either the scalar or a matrix consisting of all zeros (resp. all ones); we use $I$ to denote an identity matrix. The dimension, when emphasized, is specified in subscripts, e.g., $I_n$. Let $\mathbb{1}_S$ denote the indicator function for set $S$; applied to a matrix, it acts elementwise. For a positive semidefinite matrix $M$, we define $\|x\|_M := \sqrt{x^\top M x}$. Semicolon ";" denotes stacking vectors or matrices vertically. For a collection of $d$-dimensional vectors $\{v_i\}_{i=1}^k$, let $[v_1, \ldots, v_k]$ denote their concatenation along the columns. For a random variable $X$, let $\|X\|_{\psi_\alpha}$ denote its $\alpha$-sub-Weibull norm, a special case of Orlicz norms (Zhang and Wei, 2022), with $\alpha = 1, 2$ corresponding to the subexponential and sub-Gaussian norms. $\sigma_i(\cdot)$, $\sigma_{\min}(\cdot)$, $\sigma_{\min}^{+}(\cdot)$, $\sigma_{\max}(\cdot)$ denote the $i$th largest, minimum, minimum positive, and maximum singular values of a matrix, respectively. $\|\cdot\|$, $\|\cdot\|_F$, $\|\cdot\|_*$ denote the operator norm (induced by vector $2$-norms), Frobenius norm, and nuclear norm of a matrix, respectively. $\langle\cdot,\cdot\rangle$ denotes the Frobenius inner product between matrices. The Kronecker, symmetric Kronecker, and Hadamard products between matrices are denoted by "$\otimes$", "$\otimes_s$", and "$\circ$", respectively. $\mathrm{vec}$ and $\mathrm{svec}$ denote flattening a matrix and a symmetric matrix by stacking their columns; $\mathrm{svec}$ does not repeat the off-diagonal elements, but scales them by $\sqrt{2}$ (Schacke, 2004). Let $\mathrm{diag}(\cdot)$ denote the block diagonal matrix formed by the matrices inside the parentheses.
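To make the $\mathrm{svec}$ convention concrete, the following is a minimal numpy sketch (the function names and the ordering of the upper-triangular entries are illustrative choices, not part of our algorithms); it preserves the property that the Euclidean inner product of two svec'd matrices equals their Frobenius inner product.

```python
import numpy as np

def svec(S):
    """Flatten a symmetric d x d matrix S into a length-d(d+1)/2 vector, scaling
    off-diagonal entries by sqrt(2) so that svec(A) @ svec(B) = <A, B>_F."""
    d = S.shape[0]
    rows, cols = np.triu_indices(d)
    scale = np.where(rows == cols, 1.0, np.sqrt(2.0))
    return scale * S[rows, cols]

def smat(v, d):
    """Inverse of svec: rebuild the symmetric d x d matrix from its svec."""
    rows, cols = np.triu_indices(d)
    scale = np.where(rows == cols, 1.0, 1.0 / np.sqrt(2.0))
    S = np.zeros((d, d))
    S[rows, cols] = scale * v
    return S + np.triu(S, 1).T
```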
2 Problem setup
We study a partially observable linear time-varying (LTV) dynamical system
$$x_{t+1} = A_t x_t + B_t u_t + w_t, \qquad y_t = C_t x_t + v_t, \qquad\qquad (2.1)$$
for $t = 0, \ldots, T-1$ and $t = 0, \ldots, T$, respectively. For all $t$, we have the notation of state $x_t$, observation $y_t$, and control input $u_t$. Process noises $w_t$ and observation noises $v_t$ are i.i.d. sampled from $\mathcal{N}(0, W_t)$ and $\mathcal{N}(0, V_t)$, respectively. Let the initial state $x_0$ be sampled from $\mathcal{N}(0, \Sigma_0)$.
Let $\Phi_{t,s} := A_{t-1} \cdots A_s$ for $t > s$ and $\Phi_{t,t} := I$. Then $x_t = \Phi_{t,0} x_0 + \sum_{s=0}^{t-1} \Phi_{t,s+1} w_s$ under zero control input. To ensure the state and the cumulative noise do not grow with time, we make the following uniform exponential stability assumption.
Assumption 1 (Uniform exponential stability).
The system is uniformly exponentially stable, i.e., there exist $c > 0$ and $\rho \in (0, 1)$ such that for any $t > s$, $\|\Phi_{t,s}\| \le c\rho^{t-s}$.
Assumption 1 is standard in controlling LTV systems (Zhou and Zhao, 2017; Minasyan et al., 2021), satisfied by a stable LTI system. It essentially says that zero control is a stabilizing policy, and can be potentially relaxed to the assumption of being given a stabilizing policy (Lale et al., 2020; Simchowitz et al., 2020). Specifically, one can excite the system using the stabilizing policy plus Gaussian random noises.
Define the -step controllability matrix
for , which reduces to the standard controllability matrix in the LTI setting. We make the following controllability assumption.
Assumption 2 (Controllability).
For all , , .
Under zero noise, we have
so Assumption 2 ensures that from any state , there exist control inputs that drive the state to in steps, and ensures that the equation leading to them is well conditioned. We do not assume controllability for , since we do not want to impose the constraint that . This turns out to present a significant challenge for state representation learning, as seen from the separation of the results before and after the -steps in Theorem 1.
The quadratic cost functions are given by
$$c_t(x_t, u_t) = x_t^\top Q_t x_t + u_t^\top R_t u_t \qquad\qquad (2.2)$$
for $t = 0, \ldots, T-1$, and $c_T(x_T) = x_T^\top Q_T x_T$, where the $Q_t$ are positive semidefinite matrices and the $R_t$ are positive definite matrices. Sometimes the cost is defined as a function of the observation $y_t$. Since the quadratic form $y_t^\top Q_t y_t$ equals $x_t^\top C_t^\top Q_t C_t x_t$ plus terms involving the observation noise, our analysis still applies if the assumptions on $Q_t$ hold for $C_t^\top Q_t C_t$ instead.
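For concreteness, data collection in this setting can be sketched as rolling out the system (2.1) under some policy and recording observations, controls, and stage costs (2.2). The interface below (argument names, the per-step matrix lists, the policy callback) is an illustrative assumption rather than the exact protocol of our algorithms.

```python
import numpy as np

def rollout(A, B, C, Q, R, W, V, Sigma0, policy, T, rng):
    """Simulate one trajectory of the LTV system (2.1) with costs (2.2).
    A, B, W, R have T entries; C, V, Q have T + 1 (Q[T] is the terminal cost);
    `policy(ys, us, t)` maps the observed history to a control input."""
    x = rng.multivariate_normal(np.zeros(A[0].shape[0]), Sigma0)
    ys, us, cs = [], [], []
    for t in range(T):
        y = C[t] @ x + rng.multivariate_normal(np.zeros(C[t].shape[0]), V[t])
        ys.append(y)
        u = policy(ys, us, t)
        us.append(u)
        cs.append(x @ Q[t] @ x + u @ R[t] @ u)   # stage cost (2.2)
        x = A[t] @ x + B[t] @ u + rng.multivariate_normal(np.zeros(x.shape[0]), W[t])
    cs.append(x @ Q[T] @ x)                       # terminal cost
    return ys, us, np.array(cs)
```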
Observability through the observations (via $C_t$) and observability through the costs (via $Q_t^{1/2}$) are standard assumptions in controlling LTI systems. To differentiate from the former, we call the latter cost observability, since it implies that the states are observable through costs. Whereas Markov-parameter-based approaches need to assume observability to identify the system, our cost-driven approach does not. Here we deal with the more difficult problem of having only the scalar cost as the supervision signal (instead of the concatenation of all observations, as in Markov-parameter-based approaches). Nevertheless, the notion of cost observability is still important for our approach, formally defined as follows.
Assumption 3 (Cost observability).
This assumption ensures that, without noise, if we start from a nonzero state, the cumulative cost becomes positive in a fixed number of steps. The special requirement for the early time steps results from the lack of controllability during those steps. The following is a regularity assumption on system parameters.
Assumption 4.
are uniformly lower bounded by some positive constant. The operator norms of all matrices in the problem definition are uniformly upper bounded; in other words, they are all $O(1)$.
Let denote the available history before deciding control . A policy determines at time a control input based on history . With a slight abuse of notation, let for and denote the cost at each time step. Then, is the expected cumulative cost under policy , where the expectation is taken over the randomness in the process noises, observation noises, and controls (if the policy is stochastic). The objective of LQG control is to find a policy such that is minimized.
If the system parameters are known, the optimal control is obtained by combining the Kalman filter
$$\hat{x}_t = A_{t-1}\hat{x}_{t-1} + B_{t-1}u_{t-1} + L_t\big(y_t - C_t(A_{t-1}\hat{x}_{t-1} + B_{t-1}u_{t-1})\big)$$
for $t = 1, \ldots, T$, with the optimal feedback control gains $K_t$ of the linear quadratic regulator (LQR) such that $u_t = K_t \hat{x}_t$, where the $L_t$ are the Kalman gains; this is known as the separation principle (Åström, 2012). The Kalman gains and optimal feedback control gains are given by
$$L_t = \Sigma_t C_t^\top (C_t \Sigma_t C_t^\top + V_t)^{-1}, \qquad K_t = -(R_t + B_t^\top P_{t+1} B_t)^{-1} B_t^\top P_{t+1} A_t,$$
where $\Sigma_t$ and $P_t$ are determined by their corresponding Riccati difference equations (RDEs):
$$\Sigma_{t+1} = W_t + A_t\big(\Sigma_t - \Sigma_t C_t^\top (C_t \Sigma_t C_t^\top + V_t)^{-1} C_t \Sigma_t\big) A_t^\top, \qquad\qquad (2.3)$$
$$P_t = Q_t + A_t^\top P_{t+1} A_t - A_t^\top P_{t+1} B_t (R_t + B_t^\top P_{t+1} B_t)^{-1} B_t^\top P_{t+1} A_t, \qquad\qquad (2.4)$$
with $\Sigma_0$ the covariance of the initial state and $P_T = Q_T$.
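As a concrete reference for the two recursions in the standard forms written above (our conventions; other formulations may index slightly differently), the following sketch computes the Kalman gains from (2.3) and the LQR feedback gains from (2.4); the backward recursion (2.4) is also what the planning step of our algorithm runs on the identified latent model.

```python
import numpy as np

def control_gains(A, B, Q, R, QT):
    """Backward Riccati recursion (2.4): returns P_0..P_T and gains K_0..K_{T-1}
    such that u_t = K[t] @ xhat_t."""
    T = len(A)
    P = [None] * (T + 1)
    K = [None] * T
    P[T] = QT
    for t in range(T - 1, -1, -1):
        G = R[t] + B[t].T @ P[t + 1] @ B[t]
        K[t] = -np.linalg.solve(G, B[t].T @ P[t + 1] @ A[t])
        P[t] = Q[t] + A[t].T @ P[t + 1] @ A[t] + A[t].T @ P[t + 1] @ B[t] @ K[t]
    return P, K

def kalman_gains(A, C, W, V, Sigma0):
    """Forward Riccati recursion (2.3): A, W have T entries; C, V have T + 1.
    Returns the state-prediction covariances Sigma_0..Sigma_T and gains L_0..L_T."""
    T = len(A)
    Sigma = [None] * (T + 1)
    L = [None] * (T + 1)
    Sigma[0] = Sigma0
    for t in range(T + 1):
        S = C[t] @ Sigma[t] @ C[t].T + V[t]
        L[t] = Sigma[t] @ C[t].T @ np.linalg.inv(S)
        if t < T:
            Sigma[t + 1] = W[t] + A[t] @ (Sigma[t] - L[t] @ C[t] @ Sigma[t]) @ A[t].T
    return Sigma, L
```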
We consider data-driven control in a partially observable LTV system (2.1) with unknown cost matrices . For simplicity, we assume is known, though our approaches can be readily generalized to the case without knowing them; one can identify them in the quadratic regression (3.3).
2.1 Latent model of LQG
Under the Kalman filter, the observation prediction error is called an innovation. It is known that is independent of history and are independent (Bertsekas, 2012).
Now we are ready to present the following proposition that represents the system in terms of the state estimates by the Kalman filter, which we shall refer to as the latent model.
Proposition 1.
Let be state estimates given by the Kalman filter. Then,
where is independent of and , i.e., the state estimates follow the same linear dynamics as the underlying state, with noises . The cost at step can then be reformulated as functions of the state estimates by
where is a problem-dependent constant, and , are both zero-mean subexponential random variables. Under Assumptions 1 and 4, and ; moreover, if control for , then .
Proof.
By the property of the Kalman filter, is a function of the past history . is a function of the past history and some independent random variables. Since is independent of the past history , it is independent of and . For the cost function,
Let be a constant that depends on system parameters , and . Then, random variable has zero mean. Since is Gaussian, its squared norm is subexponential. Since and are independent zero-mean Gaussian random vectors (Bertsekas, 2012), their inner product is a zero-mean subexponential random variable.
If the system is uniformly exponentially stable (Assumption 1) and the system parameters are regular (c.f. Assumption 4), then given by RDE (2.3) has a bounded operator norm determined by system parameters , and (Zhang and Zhang, 2021). Since , by Lemma 7. By Assumption 1, if we apply zero control to the system, then . By Lemma 7, satisfies . ∎
Proposition 1 states that: 1) the dynamics of the state estimates produced by the Kalman filter remains the same as the original system up to noises, determined by ; 2) the costs (of the latent model) are still determined by and , up to constants and noises. Hence, a latent model can be parameterized by (recall that we assume is known for convenience). Note that observation matrices are not involved.
Now let us take a closer look at the state representation function. The Kalman filter can be written as , where and . For , unrolling the recursion gives
where . This means the optimal state representation function is linear in the history of observations and controls. A state representation function can then be parameterized by matrices , and the latent state at step is given by .
Overall, a policy is a combination of state representation function ( is not needed) and feedback gain in the latent model; in this case, we write as the composition of the two.
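Executing such a policy is straightforward: stack the observed history, apply the representation matrix for the current step, and feed the resulting latent state to the feedback gain. The sketch below assumes one particular stacking order of the history (an illustrative choice) and is written to plug into the `rollout` sketch from Section 2.

```python
import numpy as np

def make_policy(Ms, Ks):
    """Learned policy: at step t, stack the history as [y_0; u_0; ...; u_{t-1}; y_t]
    (an assumed ordering), form the latent state z_t = M_t @ h_t, and return the
    feedback control u_t = K_t @ z_t.  Compatible with `rollout` above."""
    def policy(ys, us, t):
        parts = []
        for y, u in zip(ys[:-1], us):
            parts += [y, u]
        parts.append(ys[-1])
        h = np.concatenate(parts)
        return Ks[t] @ (Ms[t] @ h)
    return policy
```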
3 Methodology: cost-driven state representation learning
State representation learning involves history data that contains samples of three variables: observations, control inputs, and costs. Each of them can potentially serve as a supervision signal and thereby define a type of state representation learning algorithm. We summarize our categorization of the methods in the literature as follows.
- Predicting observations defines the class of observation-reconstruction-based methods, including methods based on Markov parameters (mapping from control actions to observations) in linear systems (Lale et al., 2021; Zheng et al., 2021) and methods that learn a mapping from states to observations in more complex systems (Ha and Schmidhuber, 2018; Hafner et al., 2019b, a). This type of method tends to recover all state components.
- Predicting control inputs defines the class of inverse-model-based methods (Lamb et al., 2022), which reconstruct the actions connecting consecutive states.
- Predicting costs defines the class of cost-driven methods, which use the scalar cost signal as the supervision for representation learning.
Our method falls into the cost-driven category, which is more direct than the other two types, in the sense that the cost is directly relevant to planning with a dynamic model, whereas the observation reconstruction functions and inverse models are not. Another reason why we call our method direct latent model learning is that, compared with Markov-parameter-based approaches for linear systems, our approach directly parameterizes the state representation function without exploiting the structure of the Kalman filter, making it closer to the empirical practice designed for general RL settings.
Subramanian et al. (2020) propose to optimize a simple combination of cost and transition prediction errors to learn an approximate information state. That is, we parameterize a state representation function by matrices and a latent model by matrices, and then solve
(3.1)
where are additional scalar parameters to account for noises, and the loss at step for trajectory is defined by
(3.2) |
for and . The optimization problem (3.1) is nonconvex; even if we can find a global minimizer, it is unclear how to establish finite-sample guarantees for it. A main finding of this work is that for LQG, we can solve the cost and transition loss optimization problems sequentially, with the caveat of using cumulative costs.
Our method is summarized in Algorithm 1. It has three steps: cost-driven state representation function learning (CoReL, Algorithm 2), latent system identification (SysId, Algorithm 3), and planning by RDE (2.4).
This three-step approach is very similar to the World Model approach (Ha and Schmidhuber, 2018) used in empirical RL, except that in the first step, instead of using an autoencoder to learn the state representation function, we use cost values to supervise the representation learning. Most empirical state representation learning methods (Hafner et al., 2019b, a; Schrittwieser et al., 2020) use cost supervision as one loss term; the special structure of LQG allows us to use it alone and have theoretical guarantees.
CoReL (Algorithm 2) is the core of our algorithm. Once the state representation function is obtained, SysId (Algorithm 3) identifies the latent system using linear and quadratic regression, followed by planning using RDE (2.4) to obtain the controller from the identified latent model. SysId consists of standard regression procedures; the full algorithmic details are shown in Algorithm 3. Below we explain the cost-driven state representation learning algorithm (CoReL, Algorithm 2) in detail.
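For orientation, the regression inside SysId can be sketched as follows (a minimal version, assuming CoReL has already produced latent-state estimates and ignoring the cost-matrix identification step (3.5)): each time step gets its own least-squares fit of the latent transition, with the minimum Frobenius-norm solution taken via the pseudoinverse.

```python
import numpy as np

def latent_sys_id(Z, U):
    """Per-step least squares for the latent dynamics z_{t+1} ~ A_t z_t + B_t u_t.
    Z[i][t] and U[i][t] are the latent state and control of trajectory i at step t;
    the pseudoinverse gives the minimum Frobenius-norm solution."""
    N, T = len(Z), len(U[0])
    n = Z[0][0].shape[0]
    A_hat, B_hat = [], []
    for t in range(T):
        X = np.stack([np.concatenate([Z[i][t], U[i][t]]) for i in range(N)])
        Y = np.stack([Z[i][t + 1] for i in range(N)])
        Theta = np.linalg.pinv(X) @ Y          # shape (n + m, n)
        A_hat.append(Theta[:n].T)
        B_hat.append(Theta[n:].T)
    return A_hat, B_hat
```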
3.1 Learning the state representation function
(3.3) |
(3.4) |
(3.5) |
The state representation function is learned via CoReL (Algorithm 2). Given the raw data consisting of trajectories, CoReL first solves the regression problem (3.3) to recover the symmetric matrix . The target of regression (3.3) is defined by
where for and for . The superscript denotes the trajectory from which the quantity is observed. The quadratic regression has a closed-form solution, obtained by converting it to linear regression using $\mathrm{svec}$.
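Concretely, the quadratic regression reduces to linear regression in the lifted features $\mathrm{svec}(hh^\top)$ plus an intercept, as in the following sketch (reusing the `svec`/`smat` helpers from the notation paragraph; the variable names are illustrative):

```python
import numpy as np

def quad_regression(H, c):
    """Fit c_i ~ h_i' Theta h_i + b by linear regression on svec(h_i h_i') and an
    intercept; H is N x d (covariates), c is length N (cumulative-cost targets).
    Returns the symmetric estimate Theta_hat and the intercept b_hat."""
    N, d = H.shape
    feats = np.stack([svec(np.outer(h, h)) for h in H])
    X = np.hstack([feats, np.ones((N, 1))])
    sol, *_ = np.linalg.lstsq(X, c, rcond=None)
    return smat(sol[:-1], d), sol[-1]
```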
Why cumulative cost? The state representation function is parameterized by and the latent state at step is given by . The single-step cost prediction (neglecting control cost and constant ) is given by . The regression recovers as a whole, from which we can recover up to an orthogonal transformation. If is positive definite and known, then we can further recover from it. However, if does not have full rank, information about is partially lost, and there is no way to fully recover even if is known. To see why multi-step cumulative cost helps, define for the same above. Under zero control and zero noise, starting from at step , the -step cumulative cost is precisely . Under the cost observability assumption (Assumption 3), are positive definite.
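The regression targets themselves are cumulative: a sketch of their construction is given below, under the assumption (made only for this illustration) that the control cost matrices are known, so that the control-dependent part of each stage cost can be subtracted off before summing over the multi-step window.

```python
import numpy as np

def cumulative_cost_targets(costs, controls, R, k):
    """k-step cumulative-cost targets: at step t, sum the next k stage costs with
    the known control costs u' R u removed, leaving the state-dependent part."""
    T = len(controls)
    targets = []
    for t in range(T - k + 1):
        window = (costs[t + j] - controls[t + j] @ R[t + j] @ controls[t + j]
                  for j in range(k))
        targets.append(sum(window))
    return np.array(targets)
```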
The normalized parameterization. Still, since is unknown, even if we recover as a whole, it is not viable to extract and . Such ambiguity is unavoidable; in fact, for every we choose, there is an equivalent parameterization of the system such that the system response is exactly the same. In partially observable LTI systems, it is well-known that the system parameters can only be recovered up to a similarity transform (Oymak and Ozay, 2019). Since every parameterization is correct, we simply choose , which we refer to as the normalized parameterization. Concretely, let us define . Then, the new parameterization is given by
and , where for all ,
It is easy to verify that under the normalized parameterization the system satisfies Assumptions 1, 2, 3, and 4, up to a change of some constants in the bounds. Without loss of generality, we assume system (2.1) is in the normalized parameterization; if not, the recovered state representation function and latent system are with respect to the normalized parameterization.
Low-rank approximate factorization. Regression (3.3) has a closed-form solution; solving it gives . Constants account for the variance of the state estimation error, and are not part of the state representation function; symmetric matrices are estimates of under the normalized parameterization, where . can only be recovered up to an orthogonal transformation, since for any orthogonal , .
We want to recover from such that . Let be its eigenvalue decomposition. Let be the positive semidefinite diagonal matrix containing nonnegative eigenvalues, where “” applies elementwise. If , we can construct by padding zeros. If , however, may exceed . Without loss of generality, assume that the diagonal elements of are in descending order. Let be the left-top block of and be the left columns of . By the Eckart-Young-Mirsky theorem, provides the best approximation of with among matrices in terms of the Frobenius norm.
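A minimal sketch of this low-rank approximate factorization step (our own variable names; the latent dimension $n$ is assumed given) reads:

```python
import numpy as np

def factorize(Theta_hat, n):
    """Given a symmetric estimate Theta_hat of M' M, return M_hat with n rows such
    that M_hat' M_hat is the best rank-n PSD approximation of Theta_hat: clip
    negative eigenvalues at zero, keep the top n, pad with zero rows if needed."""
    d = Theta_hat.shape[0]
    evals, evecs = np.linalg.eigh((Theta_hat + Theta_hat.T) / 2)
    evals = np.maximum(evals, 0.0)                 # project onto the PSD cone
    order = np.argsort(evals)[::-1]                # descending eigenvalues
    evals, evecs = evals[order], evecs[:, order]
    k = min(n, d)
    M_hat = np.sqrt(evals[:k])[:, None] * evecs[:, :k].T   # k x d
    if k < n:
        M_hat = np.vstack([M_hat, np.zeros((n - k, d))])
    return M_hat
```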
Why singular value truncation in the first steps? The latent states are used to identify the latent system dynamics, so whether they are sufficiently excited, namely having full-rank covariance, makes a big difference: if not, the system matrices can only be identified partially. Proposition 2 below confirms that the optimal latent state indeed has full-rank covariance for .
Proposition 2.
Proof.
For , unrolling the Kalman filter gives
where , and are independent. The matrix multiplied by is precisely the controllability matrix . Then
By the controllability assumption (Assumption 2), has full rank and
On the other hand, since ,
Since and have operator norms, by Lemma 8, . Hence,
This implies that and . ∎
Proposition 2 implies that for all , has rank , so if is not provided, this gives a way to discover it. For , Proposition 2 guarantees that as long as is close enough to , it also has full rank, and so does . Hence, we simply take the final estimate . Without further assumptions, however, there is no such full-rank guarantee for and . We make the following minimal assumption to ensure that the minimum positive singular values are uniformly lower bounded. Note that are not required to have full rank.
Assumption 5.
For , .
Still, for , Assumption 5 does not guarantee the full-rankness of , nor even a lower bound on its minimum positive singular value; that is why we introduce TruncSV, which truncates the singular values of by a threshold . Concretely, we take . Then, has the same singular values as except that those below are zeroed. We take to ensure a sufficient lower bound on the minimum positive singular value of , without increasing the statistical errors.
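TruncSV itself is a one-line singular-value thresholding; a sketch:

```python
import numpy as np

def trunc_sv(M, tau):
    """Zero out all singular values of M that fall below the threshold tau,
    keeping the singular subspaces unchanged."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.where(s >= tau, s, 0.0)) @ Vt
```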
4 Theoretical guarantees
Theorem 1 below offers a finite-sample guarantee for our approach. Overall, it confirms cost-driven latent model learning (Algorithm 1) as a viable path to solving LQG control.
Theorem 1.
Given an unknown LQG system (2.1), under Assumptions 1, 2, 3, 4 and 5, if we run Algorithm 1 with , then with probability at least , state representation function is optimal in the first steps, and optimal in the next steps. Also, the learned controller is optimal for some dimension-free constant depending on system parameters in the first steps, and optimal in the last steps.
From Theorem 1, we observe a separation of the sample complexities before and after time step , resulting from the loss of the full-rankness of and . In more detail, a proof sketch goes as follows. Quadratic regression guarantees that converges to at a rate of for all . Before step , suffers a square-root decay of the rate because may not have rank . Since may not have full-rank covariances, are only recovered partially. As a result, may not stabilize , causing the exponential dependence on . This means that if is not big enough, this controller may be inferior to zero control, since the system is uniformly exponentially stable (Assumption 1) and zero control has a suboptimality gap linear in . After step , retains the sample complexity, and so do ; the certainty equivalent controller then has an order of suboptimality gap for linear quadratic control (Mania et al., 2019).
Theorem 1 states the guarantees for the state representation function and the controller separately. One may wonder about the suboptimality gap of the two in combination; after all, this is the output policy. The new challenge is that a suboptimal controller is applied to suboptimal state estimates. An exact analysis requires more effort, but a reasonable conjecture is that the combined policy has the same order of suboptimality gap as the controller alone: before step , the extra suboptimality gap resulting from the representation error can be analyzed by considering perturbations on the controls; after step , similar to the analysis of the LQG suboptimality gap in (Mania et al., 2019), the overall suboptimality gap can be analyzed by a Taylor expansion of the value function, with the estimation errors as perturbations.
4.1 Proof of Theorem 1
Proof.
Recall that Algorithm 1 has three main steps: state representation learning (CoReL, Algorithm 2), latent system identification (SysId, Algorithm 3), and planning by RDE (2.4). Correspondingly, the analysis below is organized around these three steps.
Recovery of the state representation function. By Proposition 3, with for all to system (2.1), the -step cumulative cost starting from step , where for and for , is given under the normalized parameterization by
where , and is a zero-mean subexponential random variable with . Then, it is clear that Algorithm 2 recovers latent states , where , by a combination of quadratic regression and low-rank approximate factorization. Below we drop the superscript prime for notational simplicity, but keep in mind that the optimal state representation function , the corresponding latent states , and the true latent system parameters are all with respect to the normalized parameterization.
Let for all . For quadratic regression, Lemma 2 (detailed later) and the union bound over all time steps guarantees that as long as for an absolute constant , with probability at least , for all ,
Now let us bound the distance between and . Recall that we use as a shorthand. The estimate may not be positive semidefinite. Let be its eigenvalue decomposition, with the matrix having descending diagonal elements. Let . Then is the projection of onto the positive semidefinite cone (Boyd and Vandenberghe, 2004, Section 8.1.1) with respect to the Frobenius norm. Since ,
The low-rank factorization is essentially a combination of low-rank approximation and matrix factorization. For , constructed by padding zeros satisfies . For , construct , where is the left-top block in and consists of columns of from the left. By the Eckart-Young-Mirsky theorem, satisfies
Hence,
From now on, we consider and separately, since, as we will show, in the latter case we have the additional condition that .
For , . By Lemma 4 to be proved later, there exists an orthogonal matrix , such that . Recall that ; that is, . Then
Hence, the distance between and satisfies
should be chosen in such a way that it keeps the error on the same order; that is, . As a result, since and ,
Since whose -norm is sub-Gaussian with its mean and sub-Gaussian norm bounded by , is sub-Gaussian with its mean and sub-Gaussian norm bounded by
On the other hand, threshold ensures a lower bound on . As shown in the proof of Lemma 5, this property is important for ensuring the system identification outputs and have bounded norms. Specifically,
where holds with probability at least , as long as for some absolute constant . This concentration is standard in analyzing linear regression (Wainwright, 2019).
For , . By Proposition 2, has full rank, and . Recall that for , we simply set . Then, by Lemma 3, there exists an orthogonal matrix , such that
which is also an upper bound on . As a result,
from which we have that is sub-Gaussian with its mean and sub-Gaussian norm bounded by
Consider
where holds with probability . Hence, there exists an absolute constant , such that if ,
This is needed for the analysis of latent system identification in the next step.
Recovery of the latent dynamics. The latent system is identified in Algorithm 3, using produced by Algorithm 2, by ordinary least squares. Recall from Proposition 1 that . With the transforms on and , we have
and . Under control for , we know that is a zero-mean Gaussian random vector; so is . Let be its covariance.
For , we need a bound for the estimation error of rank-deficient and noisy linear regression. By Lemma 5, there exists an absolute constant , such that as long as , with probability at least ,
which implies
which is also a bound on .
For , by Lemma 6, with probability at least ,
which implies
which is also a bound on . We note that since the observation of is exact, a tighter bound on is possible; we keep the current bound since it does not affect the final bounds in Theorem 1.
By Assumption 3, and are positive definite; they are identity matrices under the normalized parameterization. For , which may not be positive definite, we identify them in SysId (Algorithm 3) by (3.5). By Lemma 2,
where we have used . By construction, is the projection of onto the positive semidefinite cone with respect to the Frobenius norm (Boyd and Vandenberghe, 2004, Section 8.1.1). Since ,
Certainty equivalent linear quadratic control. The last step of Algorithm 1 is to compute the optimal controller in the estimated system by RDE (2.4).
RDE (2.4) proceeds backward. For , , and are all bounded by
By Lemma 10 on certainty equivalent linear quadratic control, for , where hidden constants are dimension-free and depend polynomially on system parameters, the controller is -optimal in system , for
(4.1) |
that is, if
for a dimension-free constant that depends on system parameters.
For , and are optimal controllers in the -step systems with terminal costs given by and , respectively, where and are the solutions to RDE (2.4) in systems and at step , respectively. By Lemma 10,
which is dominated by the rate of and for . Then, by Lemma 9, is -optimal in system with terminal cost matrix , for
(4.2) |
where dimension-free constant depends on the system parameters; that is, if
for some dimension-free constants that depend on system parameters. The bound (4.2) is worse than (4.1), because for , does not have full-rank covariance, and is only recovered partially. Even with large enough data, linear regression has no guarantee for to be small; we do not know the controllability of , not even its stabilizability. ∎
Next, we provide several key technical lemmas regarding the regression and factorization procedures used in our algorithm.
4.2 Quadratic regression bound
As noted in Section 3.1, the quadratic regression can be converted to linear regression using $\mathrm{svec}$. To analyze this linear regression with an intercept, we need the following lemma. We note that a similar lemma without considering the intercept has been proved in (Jadbabaie et al., 2021, Proposition).
Lemma 1.
Let be independent observations of the -dimensional random vector . Let and . There exists an absolute constant , such that as long as , with probability at least ,
Proof.
Let and . We first show that is lower bounded. Consider
To lower bound its smallest eigenvalue, let us compute its inverse. By the Sherman-Morrison formula,
Then, by the inverse of a block matrix,
Therefore,
Hence,
Then, with the similar concentration arguments to those in (Jadbabaie et al., 2021, Appendix A.1), we can show that with probability at least ,
∎
Lemma 1 lower bounds the minimum singular value of a matrix that contains the fourth powers of elements in standard Gaussian random vectors. The following lemma is the main result in Section 4.2.
Lemma 2 (Quadratic regression).
Define random variable , where is a -dimensional Gaussian random vector, is a positive semidefinite matrix, is a constant and is a zero-mean subexponential random variable with . Assume that and are . Let . Assume that . Define where the noise vector can be correlated with and its norm is sub-Gaussian with , . Assume that for some absolute constant . Suppose we get observations and of and , where are independent and can be correlated. Consider the regression problem
(4.3) |
There exists an absolute constant , such that as long as , with probability at least , and are bounded by
Proof.
Regression (4.3) can be written as
Let denote the covariates and denote the extended covariates. Define and similarly by replacing with . Then, regression (4.3) can be written as
(4.4) |
Let be an matrix whose th row is . Define similarly by replacing with . Solving linear regression (4.4) gives
Substituting into the above equation yields
(4.5) |
where denotes the vector whose th element is . Rearranging the terms, we have
(4.6) |
Next, we show that is invertible with high probability. We can represent by , where is an -dimensional standard Gaussian random vector. Correspondingly, an independent observation can be expressed as , where is an independent observation of . It follows that and . Define , , and be an matrix whose th row is . Define , and as the extended counterparts. Then,
where is a matrix. Then, . By the properties of the symmetric Kronecker product (Schacke, 2004),
By Lemma 1, there exist absolute constants , such that if , with probability at least , . Since and ,
By Weyl’s inequality,
Hence, we want to bound , which satisfies
Since has at most rank two, we have
Since , is sub-Gaussian with its mean and sub-Gaussian norm bounded by . Since is sub-Gaussian with its mean and sub-Gaussian norm bounded by , we conclude that is subexponential with its mean and subexponential norm bounded by . Hence, with probability at least ,
Therefore,
It follows that
Hence, there exists some absolute constant , such that as long as
we have
It follows that
Now, we return to (4.6). By inverting , we obtain
(4.7)
Term is upper bounded by
Using arguments similar to those in (Mhammedi et al., 2020, Section B.2.13), we have
where denotes the Frobenius product between matrices in , denotes the nuclear norm in , and follows from the fact that the matrix has at most rank two. Combining with , we obtain that the term in (4.7) is bounded by
Now we consider term in (4.7):
Since is a vector of zero-mean subexponential variables with subexponential norms bounded by , . Hence,
To bound , note that . Consider the th component in the summation . Recall that , so is either the square of a standard Gaussian random variable, times the product of two independent standard Gaussian random variables, or one. Hence, is subexponential with mean and both bounded by . As a result, the product is -sub-Weibull, with the sub-Weibull norm being . By (Hao et al., 2019, Theorem 3.1),
Hence, the norm of the -dimensional vector is . By the properties of the symmetric Kronecker product (Schacke, 2004), . Then,
Eventually, term is bounded by
Combining the bounds on and , we have
which concludes the proof. ∎
4.3 Matrix factorization bound
Given two matrices , we are interested in bounding using . The minimization problem is known as the orthogonal Procrustes problem, solved in (Schoenemann, 1964; Schönemann, 1966). Specifically, the minimum is attained at , where is its singular value decomposition.
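For reference, a sketch of this Procrustes alignment in the left-multiplication form relevant here (recall that $M$ is identified only up to $M \mapsto OM$ for orthogonal $O$):

```python
import numpy as np

def procrustes_align(M1, M2):
    """Return the orthogonal O minimizing ||M1 - O @ M2||_F: with the SVD
    M1 @ M2.T = U S Vt, the minimizer is O = U @ Vt."""
    U, _, Vt = np.linalg.svd(M1 @ M2.T)
    return U @ Vt
```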
If and , then the following lemma from (Tu et al., 2016) establishes that the distance between and is of the same order of .
Lemma 3 ((Tu et al., 2016, Lemma 5.4)).
For matrices , let denote its th largest singular value. Then
If equals zero, the above bound becomes vacuous. In general, the following lemma shows that the distance is of the order of the square root of the , with a multiplicative factor, where .
Lemma 4.
For matrices , , where .
Proof.
Let be its singular value decomposition. By substituting the solution of the orthogonal Procrustes problem, the square of the attained minimum equals
where is due to the property of .
To establish the relationship between and , we need to operate in the space of singular values. For matrix , let be its singular values in descending order, where .
In terms of singular values,
where holds since and are positive semidefinite matrices, and in since and . If , then . For all , (Bhatia and Kittaneh, 1990). Take as and as ; it follows that
Let for . Combining the above yields
where is due to the Cauchy-Schwarz inequality, and uses
This completes the proof. ∎
4.4 Linear regression bound
A standard assumption in analyzing linear regression is that has full rank. However, as discussed in Section 3.1, we need to handle rank deficient in the first steps of system identification.
The following lemma is the main result in Section 4.4.
Lemma 5 (Noisy rank deficient linear regression).
Define random vector , where and are and dimensional Gaussian random vectors, respectively. Define and where the noise vectors and can be correlated with and , and their -norms are sub-Gaussian with , , and , . Assume that , and are . Let . Suppose we get observations and of and , where are independent and can be correlated. Assume that , for some that satisfies , for absolute constants and has at least dependence on . Consider the minimum Frobenius norm solution
Then, there exists an absolute constant , such that if , with probability at least ,
Proof.
Let and where . We can view as generated from an -dimensional standard Gaussian , by ; can then be viewed as , where are independent observations of . Let denote the matrix whose th row is ; are defined similarly.
To solve the regression problem, we set its gradient to be zero and substitute to obtain
Substituting by gives
By rearranging the terms, we have
Hence,
We claim that as long as , with probability at least , (Claim 1). Then, since ,
By (Wainwright, 2019, Theorem 6.1), the Gaussian ensemble satisfies that with probability at least ,
Since , we have and . It follows that and . Similarly, . Note that . Since is sub-Gaussian with and , with probability at least , . Hence,
Similarly, . Hence,
where we consider and as quantities much smaller than one such that terms like are absorbed into . It remains to control . (Mhammedi et al., 2020, Section B.2.11) proves via a covering number argument that with probability at least ,
Overall, we obtain that
which completes the proof. ∎
Claim 1.
Under the conditions in Lemma 5, as long as , with probability at least , .
Proof.
A minimum norm solution is given by the following closed-form solution of using pseudoinverse (Moore-Penrose inverse):
Then
where we note that when . Since ,
We have shown in the proof of Lemma 5 that with probability at least ,
Similar to the proof of Lemma 5, by (Mhammedi et al., 2020, Section B.2.11), with probability at least ,
Combining the bounds above, we obtain
Hence, as long as , for absolute constants and has at least dependence on , is bounded by for an absolute constant . ∎
In Lemma 5, if has full rank and and , then
The following lemma shows that we can strengthen the result by removing the factor before .
Lemma 6 (Noisy linear regression).
Define random variable , where and are and dimensional random vectors. Assume that , and are , and . Define and where the noise vectors and can be correlated with and , and their norms are sub-Gaussian with , and , . Suppose we get independent observations of and . Assume that . Consider the minimum Frobenius norm solution
Then, there exists an absolute constant , such that if , with probability at least ,
Proof.
5 Concluding remarks
We examined cost-driven state representation learning methods in time-varying LQG control. With a finite-sample analysis, we showed that a direct, cost-driven state representation learning algorithm effectively solves LQG. In the analysis, we revealed the importance of using multi-step cumulative costs as the supervision signal, and a separation of the convergence rates before and after step , the controllability index, due to insufficient excitation of the system in the early stage. A major limitation of our method is the use of history-based state representation functions; recovering the recursive Kalman filter would be ideal.
This work has opened up many opportunities for future research. An immediate question is how our cost-driven state representation learning approach performs in the infinite-horizon LTI setting. Moreover, one may wonder about the extent to which cost-driven state representation learning generalizes to nonlinear observations or systems. Investigating the connection with reduced-order control is also an interesting question, which may reveal the unique advantage of cost-driven state representation learning. Finally, one argument for favoring latent-model-based over model-free methods is their ability to generalize across different tasks; cost-driven state representation learning may offer a perspective to formalize this intuition.
Acknowledgement
YT, SS acknowledge partial support from the NSF BIGDATA grant (number 1741341). KZ’s work was mainly done while at MIT, and acknowledges partial support from Simons-Berkeley Research Fellowship. The authors also thank Xiang Fu, Horia Mania, and Alexandre Megretski for helpful discussions.
References
- Åström (2012) Karl J Åström. Introduction to Stochastic Control Theory. Courier Corporation, 2012.
- Bertsekas (2012) Dimitri Bertsekas. Dynamic Programming and Optimal Control: Volume I, volume 1. Athena Scientific, 2012.
- Bhatia and Kittaneh (1990) Rajendra Bhatia and Fuad Kittaneh. On the singular values of a product of operators. SIAM Journal on Matrix Analysis and Applications, 11(2):272–277, 1990.
- Boyd and Vandenberghe (2004) Stephen P Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
- Deng et al. (2021) Fei Deng, Ingook Jang, and Sungjin Ahn. DreamerPro: Reconstruction-free model-based reinforcement learning with prototypical representations. arXiv preprint arXiv:2110.14565, 2021.
- Fazel et al. (2018) Maryam Fazel, Rong Ge, Sham Kakade, and Mehran Mesbahi. Global convergence of policy gradient methods for the linear quadratic regulator. In International Conference on Machine Learning, pages 1467–1476. PMLR, 2018.
- Frandsen et al. (2022) Abraham Frandsen, Rong Ge, and Holden Lee. Extracting latent state representations with linear dynamics from rich observations. In International Conference on Machine Learning, pages 6705–6725. PMLR, 2022.
- Fu et al. (2021) Xiang Fu, Ge Yang, Pulkit Agrawal, and Tommi Jaakkola. Learning task informed abstractions. In International Conference on Machine Learning, pages 3480–3491. PMLR, 2021.
- Ha and Schmidhuber (2018) David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
- Hafner et al. (2019a) Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019a.
- Hafner et al. (2019b) Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pages 2555–2565. PMLR, 2019b.
- Hafner et al. (2020) Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020.
- Hao et al. (2019) Botao Hao, Yasin Abbasi Yadkori, Zheng Wen, and Guang Cheng. Bootstrapping upper confidence bound. Advances in Neural Information Processing Systems, 32, 2019.
- Jadbabaie et al. (2021) Ali Jadbabaie, Horia Mania, Devavrat Shah, and Suvrit Sra. Time varying regression with hidden linear dynamics. arXiv preprint arXiv:2112.14862, 2021.
- Kailath (1980) Thomas Kailath. Linear Systems, volume 156. Prentice-Hall Englewood Cliffs, NJ, 1980.
- Komaroff (1996) Nicholas Komaroff. On bounds for the solution of the Riccati equation for discrete-time control systems. In Control and Dynamic Systems, volume 78, pages 275–311. Elsevier, 1996.
- Lale et al. (2020) Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, and Anima Anandkumar. Logarithmic regret bound in partially observable linear dynamical systems. Advances in Neural Information Processing Systems, 33:20876–20888, 2020.
- Lale et al. (2021) Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, and Anima Anandkumar. Adaptive control and regret minimization in linear quadratic Gaussian (LQG) setting. In 2021 American Control Conference (ACC), pages 2517–2522. IEEE, 2021.
- Lamb et al. (2022) Alex Lamb, Riashat Islam, Yonathan Efroni, Aniket Didolkar, Dipendra Misra, Dylan Foster, Lekan Molu, Rajan Chari, Akshay Krishnamurthy, and John Langford. Guaranteed discovery of controllable latent states with multi-step inverse models. arXiv preprint arXiv:2207.08229, 2022.
- Ljung (1998) Lennart Ljung. System identification. In Signal Analysis and Prediction, pages 163–173. Springer, 1998.
- Mania et al. (2019) Horia Mania, Stephen Tu, and Benjamin Recht. Certainty equivalence is efficient for linear quadratic control. Advances in Neural Information Processing Systems, 32, 2019.
- Mhammedi et al. (2020) Zakaria Mhammedi, Dylan J Foster, Max Simchowitz, Dipendra Misra, Wen Sun, Akshay Krishnamurthy, Alexander Rakhlin, and John Langford. Learning the linear quadratic regulator from nonlinear observations. Advances in Neural Information Processing Systems, 33:14532–14543, 2020.
- Minasyan et al. (2021) Edgar Minasyan, Paula Gradu, Max Simchowitz, and Elad Hazan. Online control of unknown time-varying dynamical systems. Advances in Neural Information Processing Systems, 34, 2021.
- Okada and Taniguchi (2021) Masashi Okada and Tadahiro Taniguchi. Dreaming: Model-based reinforcement learning by latent imagination without reconstruction. In IEEE International Conference on Robotics and Automation (ICRA), pages 4209–4215. IEEE, 2021.
- Oymak and Ozay (2019) Samet Oymak and Necmiye Ozay. Non-asymptotic identification of LTI systems from a single trajectory. In 2019 American Control Conference (ACC), pages 5655–5661. IEEE, 2019.
- Schacke (2004) Kathrin Schacke. On the Kronecker product. Master’s Thesis, University of Waterloo, 2004.
- Schoenemann (1964) Peter Hans Schoenemann. A solution of the orthogonal Procrustes problem with applications to orthogonal and oblique rotation. University of Illinois at Urbana-Champaign, 1964.
- Schönemann (1966) Peter H Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10, 1966.
- Schrittwieser et al. (2020) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, Chess and Shogi by planning with a learned model. Nature, 588(7839):604–609, 2020.
- Simchowitz et al. (2020) Max Simchowitz, Karan Singh, and Elad Hazan. Improper learning for non-stochastic control. In Conference on Learning Theory, pages 3320–3436. PMLR, 2020.
- Subramanian et al. (2020) Jayakumar Subramanian, Amit Sinha, Raihan Seraj, and Aditya Mahajan. Approximate information state for approximate planning and reinforcement learning in partially observed systems. arXiv preprint arXiv:2010.08843, 2020.
- Sun et al. (2019) Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, and John Langford. Model-based RL in contextual decision processes: PAC bounds and exponential improvements over model-free approaches. In Conference on Learning Theory, pages 2898–2933. PMLR, 2019.
- Sutton and Barto (2018) Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
- Tu and Recht (2019) Stephen Tu and Benjamin Recht. The gap between model-based and model-free methods on the linear quadratic regulator: An asymptotic viewpoint. In Conference on Learning Theory, pages 3036–3083. PMLR, 2019.
- Tu et al. (2016) Stephen Tu, Ross Boczar, Max Simchowitz, Mahdi Soltanolkotabi, and Ben Recht. Low-rank solutions of linear matrix equations via Procrustes flow. In International Conference on Machine Learning, pages 964–973. PMLR, 2016.
- Umenberger et al. (2022) Jack Umenberger, Max Simchowitz, Juan Carlos Perdomo, Kaiqing Zhang, and Russ Tedrake. Globally convergent policy search for output estimation. In Advances in Neural Information Processing Systems, 2022.
- Vershynin (2018) Roman Vershynin. High-dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
- Wainwright (2019) Martin J Wainwright. High-dimensional Statistics: A Non-asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.
- Wang et al. (2022) Tongzhou Wang, Simon S Du, Antonio Torralba, Phillip Isola, Amy Zhang, and Yuandong Tian. Denoised MDPs: Learning world models better than the world itself. arXiv preprint arXiv:2206.15477, 2022.
- Yang et al. (2022) Lujie Yang, Kaiqing Zhang, Alexandre Amice, Yunzhu Li, and Russ Tedrake. Discrete approximate information states in partially observable environments. In 2022 American Control Conference (ACC), pages 1406–1413. IEEE, 2022.
- Zhang et al. (2020) Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. arXiv preprint arXiv:2006.10742, 2020.
- Zhang and Wei (2022) Huiming Zhang and Haoyu Wei. Sharper sub-weibull concentrations. Mathematics, 10(13):2252, 2022.
- Zhang et al. (2021) Kaiqing Zhang, Xiangyuan Zhang, Bin Hu, and Tamer Basar. Derivative-free policy optimization for linear risk-sensitive and robust control design: Implicit regularization and sample complexity. Advances in Neural Information Processing Systems, 34:2949–2964, 2021.
- Zhang et al. (2019) Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew Johnson, and Sergey Levine. SOLAR: Deep structured representations for model-based reinforcement learning. In International Conference on Machine Learning, pages 7444–7453. PMLR, 2019.
- Zhang and Zhang (2021) Qinghua Zhang and Liangquan Zhang. Boundedness of the Kalman filter revisited. IFAC-PapersOnLine, 54(7):334–338, 2021.
- Zheng et al. (2021) Yang Zheng, Luca Furieri, Maryam Kamgarpour, and Na Li. Sample complexity of linear quadratic Gaussian (LQG) control for output feedback systems. In Learning for Dynamics and Control, pages 559–570. PMLR, 2021.
- Zhou and Zhao (2017) Bin Zhou and Tianrui Zhao. On asymptotic stability of discrete-time linear time-varying systems. IEEE Transactions on Automatic Control, 62(8):4274–4281, 2017.
Appendix A Proposition on multi-step cumulative costs
The following proposition does not appear in the main body, but is important for analyzing CoReL (Algorithm 2) in Section 4.1.
Proposition 3.
Let be the state estimates by the Kalman filter under the normalized parameterization. If we apply for all , then for ,
where for and for , , and is a zero-mean subexponential random variable with .
Proof.
By Proposition 1, , where are the Kalman gain and the innovation under the normalized parameterization, respectively. Under Assumptions 1 and 4, are Gaussian random vectors whose covariances have operator norms, and have operator norms (Zhang and Zhang, 2021). Hence, the covariance of has operator norm. Since , can be viewed as a Gaussian noise vector whose covariance has operator norm. By Proposition 1,
where is subexponential with . Let for and . Then, for ,
where and for , is a Gaussian random vector with bounded covariance due to uniform exponential stability (Assumption 1). Therefore,
where is due to the normalized parameterization, , and
has zero mean and is subexponential with . ∎
Appendix B Auxiliary results
Lemma 7.
Let and be -dimensional Gaussian random vectors. Let be a positive semidefinite matrix. Then, there exists an absolute constant such that
Proof.
Since ,
For , we know that is sub-Gaussian. Actually, by writing for , we have
The distribution of is known as the $\chi^2$ distribution, and we know that for an absolute constant (see, e.g., (Wainwright, 2019)). Hence, . Similarly, . Since (see, e.g., (Vershynin, 2018, Lemma 2.7.7)), we have
Taking concludes the proof. ∎
Lemma 8.
Let be random vectors of dimensions , respectively, defined on the same probability space. Then, .
Proof.
Let be a factorization of the positive semidefinite matrix , where . Let and be the matrices consisting of the first rows and the last rows of , respectively. Then,
Hence, and . The proof is completed by noticing that
∎
Appendix C Certainty equivalent linear quadratic control
As shown in Lemma 5, if the input of linear regression does not have full-rank covariance, then the parameters can only be identified in certain directions. The following lemma studies the performance of the certainty equivalent optimal controller in this case.
Lemma 9 (Rank deficient linear quadratic control).
Consider the finite-horizon time-varying linear dynamical system with unknown .
- Assume that is uniformly exponentially stable (Assumption 1).
- Assume that , for all , are independent, where and do not necessarily have full rank.
- Instead of , we observe , where the Gaussian noise vector can be correlated with and satisfy for all .
- Let and under control for all . Assume and for all , and that with at least dependence on .
- Assume are positive definite with operator norms, and for all .
Collect trajectories using for some and . Identify by SysId (Algorithm 3), which uses ordinary least squares and takes a minimum Frobenius norm solution. Let be the optimal controller in system . Then, there exists an absolute constant , such that if , with probability at least , is -optimal for solving the LQR problem for
where dimension-free constant depends on the system parameters.
Proof.
First of all, we establish that are bounded. Since are positive definite, the system is fully cost observable. By (Zhang and Zhang, 2021), the optimal feedback gains have operator norms bounded by dimension-free constants depending on the system parameters. Note that (Zhang and Zhang, 2021) studies the boundedness of the Kalman filter, which is dual to our optimal control problem, but their results carry over. One can also see the boundedness from the compact formulation (Zhang et al., 2021), as described in the proof of Lemma 10, using known bounds for RDE solutions (Komaroff, 1996). The same argument applies to if we can show have operator norms. SysId (Algorithm 3) identifies by ordinary least squares for all . By Claim 1, with a union bound for all , as long as , with probability at least , . Hence, with the same probability, have operator norms bounded by dimension-free constants depending on the system parameters. In the following, we assume for all , where is a dimension-free constant that depends on system parameters.
Since can be rank deficient, we cannot guarantee is small. Instead, for all , by Lemma 5,
where , and in Lemma 5 correspond to here, respectively. This implies that and for all .
Let denote the covariance of under state feedback controller , where for all . We claim that there exists , such that . We prove it by induction. At step , clearly, . Suppose . For step ,
Hence, for ,
To ensure , it suffices to take . Hence, , where constant is dimension-free and depends on system parameters. Note that if stabilizes , then , and the bound on can be improved to ; so the exponential dependence on results from not knowing whether stabilizes the system. By the definition of the operator norm, we have for all .
Let denote the covariance of under state feedback controller in the system . Then, by definition,
where the operator norms of are and the operator norms of are with probability at least . Since
we have . Combining with gives
for some dimension-free constants that depend on the system parameters.
Let denote the expected cumulative cost under state feedback controller in system and the corresponding expected cumulative cost in system . Notice that
where for and . Similarly, , where for and . Hence,
where , is absorbed into , and dimension-free constant depends on the system parameters.
Finally, let be the optimal feedback gains in system . By the union bound, with probability at least ,
Therefore,
The proof is completed by rescaling to . ∎
If the input of linear regression has full-rank covariance, then by Lemma 6, the system parameters can be fully identified. The certainty equivalent optimal controller in this case has a much better guarantee compared to the rank-deficient case, as shown in the following lemma.
Lemma 10 (Linear quadratic control).
Assume the finite-horizon linear time-varying system
is stabilizable. Let be the optimal controller and be the solution to RDE (2.4) of system .
Let be the optimal controller and be the solution to RDE (2.4) of system , where , and . Then there exists dimension-free constant with depending polynomially on system parameters, such that as long as , , for all , and is -optimal in system .
Proof.
(Mania et al., 2019) has studied this problem in the infinite-horizon LTI setting; here we extend their result to the finite-horizon LTV setting.
We adopt the following compact formulation of a finite-horizon LTV system, introduced in (Zhang et al., 2021, Section 3), to reduce our setting to the infinite-horizon LTI setting; since noisy and noiseless LQR have the same associated Riccati equation and optimal controller, we consider the noiseless case here:
The control inputs using state feedback controller can be characterized by . Let be the associated cumulative cost matrix starting from step . Then
and is the solution to
Similarly, the optimal cumulative cost matrix
produced by the RDE (2.4) in system (that is, system ), satisfies
Let be the optimal cumulative cost matrices in system (that is, system ) by the RDE (2.4). Define , where is the optimal controller in system , and define similarly for system . By definition, is stabilizable and observable in the sense of LTI systems. Therefore, by (Mania et al., 2019, Propositions 1 and 2), there exists dimension-free constant with depending polynomially on system parameters such that as long as ,
and that stabilizes system . By (Fazel et al., 2018, Lemma 12),
where and is in system under state feedback controller . As a result,
Since stabilizes system , . Since
we conclude that
∎