Data-Driven Pole Placement in LMI Regions with Robustness Constraints
Abstract
This paper proposes a robust learning methodology to place the closed-loop poles of a system in desired convex regions of the complex plane. We consider the system state and input matrices to be unknown, so that the design can only use measurements of the system trajectories. The closed-loop pole placement problem in linear matrix inequality (LMI) regions is a classic robust control problem; however, it requires knowledge of the state and input matrices of the linear system. We bring in ideas from behavioral system theory and the persistency of excitation-based fundamental lemma to develop a data-driven counterpart that satisfies multiple closed-loop robustness specifications, such as $\mathcal{D}$-stability and mixed $H_2/H_\infty$ performance specifications. Our formulations lead to data-driven semi-definite programs (SDPs) coupled with sufficient theoretical guarantees. We validate the theoretical results with numerical simulations on a third-order dynamic system.
Keywords: Robust pole placement, data-driven robust control, stability guarantee, mixed $H_2/H_\infty$, LMI regions.
1 Introduction
Recent research in automatic control has increasingly focused on converting classic model-based formulations into their data-driven counterparts. The motivation for such designs comes from the increasing complexity and scale of practical dynamic systems, which make the dynamic model and parameters less accurately known and introduce several unmodeled non-idealities. Data-driven approaches come in many forms with distinct characteristics. In the machine learning community, sequential decision-making problems using Markov decision processes (MDPs) have garnered a lot of interest under the umbrella of reinforcement learning (RL) [1, 2, 3, 4]. Many underlying concepts of RL have been translated to a dynamic systems viewpoint using adaptive dynamic programming in approaches such as [5, 6, 7, 8, 9, 10], considering both partially and fully model-free designs. The latter class of methods intends to provide theoretical guarantees using dynamic systems theory. Continuing the path of supplementing data-driven algorithms with strong mathematical backing, behavioral system theory has recently been touted as an effective alternative approach [11]. The underlying idea is that the space of input-output trajectories of an LTI system is spanned by time-shifted measurements of a single trajectory. Research works such as [12, 13, 14] follow this framework for data-driven optimal control designs.
Beyond data-driven optimal control designs, practical systems require sufficient robustness margins. The problem of performing robust control design with an unknown state model still poses many open questions. Approaches such as [15, 16, 17, 18] try to augment robustness by infusing a few robust considerations into the data-driven RL or optimal control setting. However, a dedicated robust learning methodology is envisioned to provide much better stabilization and performance guarantees with a strong underlying framework. Behavioral system theory using Willems' fundamental lemma [11] can provide such foundations for robust learning control designs, as it supplies a one-to-one conversion strategy from model-based to data-driven approaches. In this paper, we build upon that framework and consider the pole placement problem in a desired convex region of the complex plane along with sufficient robustness specifications.
Classically, decades of research in the robust control domain have unearthed a plethora of methods that can provide sufficient system performance in the presence of noise, unmodeled dynamics, uncertainty, etc. [19, 20, 21]. The closed-loop pole placement problem in desired convex regions of the complex plane has given rise to linear matrix inequalities in works such as [22]. However, these classical methods require knowledge of the system state and input matrices. With this motivation, this paper deals with the robust pole placement problem in LMI regions under the assumption that the system state and input matrices are unknown and the designer only has access to trajectory measurements. The system is explored with persistently exciting inputs to make sure we do not violate the fundamental requirement of behavioral system theory. The model-based formulations can thereafter be converted to data-driven formulations using closed-loop data-driven parametrized representations. The robust control problem considers the LMI-based pole placement conditions along with robust performance constraints such as mixed $H_2/H_\infty$ performance requirements.
Contribution. The main contribution of the paper is a data-driven robust control methodology that achieves the desired closed-loop pole placement in convex regions of the complex plane along with sufficient robust and optimal system performance enforced by mixed $H_2/H_\infty$ performance metrics. We use the fundamental lemma-based data-driven parametrized representation to formulate convex versions of the robust pole placement problem in LMI regions that closely recover the model-based design characteristics. We provide theoretical guarantees on the performance of the proposed algorithm along with validation on a third-order dynamic system example.
The rest of the paper is organized as follows. Section 2 describes the model and problem statement considered in the paper. We recall some fundamentals of data-driven design in Section 3. The main results on data-driven LMI-region pole placement with robustness specifications are given in Section 4. A numerical example is presented in Section 5, and we provide concluding remarks in Section 6.
Notations. $P \succ 0$ ($P \succeq 0$) denotes a positive definite (semidefinite) matrix. $H_2$ norm: the $H_2$ norm of a system $G$ in the time domain is given by $\|G\|_2^2 = \int_0^\infty \mathrm{trace}\,(g(t)^T g(t))\, dt$, where $g(t)$ is the impulse response. $H_\infty$ norm: the system $H_\infty$ norm is given by $\|G\|_\infty = \sup_\omega \bar{\sigma}(G(j\omega))$, where $\bar{\sigma}(\cdot)$ denotes the maximum singular value and $G(s)$ is the system transfer matrix.
2 Problem Statement
We consider a linear time-invariant (LTI) continuous-time dynamic system of the form:
$$\dot{x}(t) = A x(t) + B u(t) + D w(t), \qquad (1)$$
where $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^m$ is the control input, and $w$ is the extraneous input. We, hereby, make the following assumption.
Assumption 1: The dynamic state matrix $A$ and the control input matrix $B$ are unknown. However, the dimensions $n$ and $m$ are known.
We also consider the following assumption about the availability of measurements.
Assumption 2: The measurements of states and control inputs are available to the designer, and the designer injects known extraneous disturbances in a controlled environment to perform the control design tasks.
We are interested in placing the closed-loop poles in some desired regions of the complex plane. These regions are classically known as LMI regions defined as follows.
Definition 1: LMI pole placement regions - A subset $\mathcal{D}$ of the complex plane can be characterized as a desired pole placement region if there exist a symmetric matrix $\alpha = [\alpha_{kl}] \in \mathbb{R}^{p \times p}$ and a matrix $\beta = [\beta_{kl}] \in \mathbb{R}^{p \times p}$ such that,
$$\mathcal{D} = \{ z \in \mathbb{C} : f_{\mathcal{D}}(z) < 0 \}, \qquad (2)$$
where,
$$f_{\mathcal{D}}(z) = \alpha + z\beta + \bar{z}\beta^T = [\alpha_{kl} + \beta_{kl} z + \beta_{lk} \bar{z}]_{1 \le k,l \le p}. \qquad (3)$$
The notation $M = [\mu_{kl}]$ denotes $M$ to be a matrix (resp. block matrix) with generic entry (resp. block) $\mu_{kl}$.
An LMI region is a subset of the complex plane that is representable by an LMI in $z$ and $\bar{z}$, or equivalently, an LMI in $x = \mathrm{Re}(z)$ and $y = \mathrm{Im}(z)$. As a result, LMI regions are convex. Moreover, LMI regions are symmetric with respect to the real axis. Various different types of LMI regions can be constructed by the designer. Throughout this paper, we consider the following as our working example.
Example 1: Consider the conic sector in the left half of the complex plane between the lines with slopes $\tan\theta$ and $-\tan\theta$, i.e., the set $\{x + jy \in \mathbb{C} : |y| < -x\tan\theta\}$. This region is given as,
$$f_{\mathcal{D}}(z) = \begin{bmatrix} \sin\theta\,(z + \bar{z}) & \cos\theta\,(z - \bar{z}) \\ \cos\theta\,(\bar{z} - z) & \sin\theta\,(z + \bar{z}) \end{bmatrix} \prec 0. \qquad (4)$$
This can be straightforwardly shown using $z = x + jy$, giving,
$$\begin{bmatrix} 2x\sin\theta & 2jy\cos\theta \\ -2jy\cos\theta & 2x\sin\theta \end{bmatrix} \prec 0. \qquad (5)$$
Therefore using Schur complement,
$$2x\sin\theta < 0, \qquad (6)$$
$$2x\sin\theta - \frac{4y^2\cos^2\theta}{2x\sin\theta} < 0, \qquad (7)$$
$$\Rightarrow \; x^2\sin^2\theta > y^2\cos^2\theta \;\Leftrightarrow\; |y| < -x\tan\theta. \qquad (8)$$
If the sector in the left half of the complex plane is described using the inner angle $\phi = 2\theta$, then the LMI region expression becomes,
$$f_{\mathcal{D}}(z) = \begin{bmatrix} \sin\tfrac{\phi}{2}\,(z + \bar{z}) & \cos\tfrac{\phi}{2}\,(z - \bar{z}) \\ \cos\tfrac{\phi}{2}\,(\bar{z} - z) & \sin\tfrac{\phi}{2}\,(z + \bar{z}) \end{bmatrix} \prec 0. \qquad (9)$$
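As a quick numerical sanity check (our own illustrative sketch, not part of the paper), membership of a point $z$ in the conic sector can be tested by checking that the $2\times 2$ characteristic matrix is negative definite, which the Schur complement steps above reduce to $|\mathrm{Im}\,z| < -\mathrm{Re}\,z \cdot \tan\theta$:

```python
import numpy as np

def in_conic_sector(z, theta):
    """Check membership of a complex point z in the conic sector with
    half-angle theta, via the 2x2 characteristic matrix f_D(z)."""
    fD = np.array([[np.sin(theta) * (z + z.conjugate()),
                    np.cos(theta) * (z - z.conjugate())],
                   [np.cos(theta) * (z.conjugate() - z),
                    np.sin(theta) * (z + z.conjugate())]])
    # f_D(z) is Hermitian; negative definite <=> all eigenvalues < 0
    return bool(np.all(np.linalg.eigvalsh(fD) < 0))

theta = np.pi / 4  # 45-degree half-angle
# Agrees with the scalar condition |Im z| < -Re z * tan(theta)
for z in [complex(-1.0, 0.5), complex(-1.0, 1.5), complex(0.5, 0.0)]:
    direct = abs(z.imag) < -z.real * np.tan(theta)
    assert in_conic_sector(z, theta) == direct
```

The same matrix test extends to matrix arguments, which is exactly how the region constraint becomes an LMI on the closed-loop matrix later in the paper.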
As LMI regions are convex, we can construct more complicated LMI regions by realizing convex polygons through intersections of simpler LMI regions. The focus of this paper is to design controllers without knowing the state dynamics, using only state and input trajectory measurements. We also incorporate robust optimization objectives along with the LMI-based pole placement constraints. We consider a mixed $H_2/H_\infty$ optimization objective with two controlled output variables along with the dynamics,
$$\dot{x} = Ax + Bu + Dw, \quad z_\infty = C_\infty x + D_\infty u, \quad z_2 = C_2 x + D_2 u. \qquad (10)$$
$T_{z_\infty w}$ (respectively $T_{z_2 w}$) denotes the transfer function from $w$ to the controlled output $z_\infty$ (respectively to $z_2$). We intend to learn the state-feedback control $u = Kx$ such that the poles of the underlying closed-loop dynamics lie in the desired LMI region characterized by the prescribed $\theta$, thereby maintaining $\mathcal{D}$-stability, and to also satisfy mixed $H_2/H_\infty$ objectives on the regulated variables. The problem statement is given as follows:
P. Under Assumptions 1 and 2, learn the state-feedback control $u = Kx$ such that:
• The prescribed $\mathcal{D}$-stability is maintained for a desired LMI region;
• A prescribed robustness criterion is met, i.e., $\|T_{z_\infty w}\|_\infty < \gamma$, or the $H_\infty$ norm is minimized by treating the robustness margin $\gamma$ as a variable;
• With the desired $\mathcal{D}$-stability and robustness margin, the $H_2$ performance $\|T_{z_2 w}\|_2$ is minimized.
3 Data-Driven Representation Fundamentals
3.1 Recalling fundamental lemma
Consider a signal $z = [z(0),\, z(1),\, \ldots,\, z(T-1)]$, $z(i) \in \mathbb{R}^\sigma$; the Hankel matrix associated with it is given as,
$$Z_{i,L,N} = \begin{bmatrix} z(i) & z(i+1) & \cdots & z(i+N-1) \\ z(i+1) & z(i+2) & \cdots & z(i+N) \\ \vdots & \vdots & \ddots & \vdots \\ z(i+L-1) & z(i+L) & \cdots & z(i+L+N-2) \end{bmatrix}. \qquad (11)$$
The Hankel matrix starts with the element $z(i)$, and consists of $L$ block rows and $N$ block columns. With $Z_{i,N}$ we denote the depth-one matrix,
$$Z_{i,N} = Z_{i,1,N} = \begin{bmatrix} z(i) & z(i+1) & \cdots & z(i+N-1) \end{bmatrix}. \qquad (12)$$
Definition 2 [11, 12]: The signal $z$ is persistently exciting of order $L$ if the corresponding Hankel matrix $Z_{0,L,T-L+1}$ has full row rank $\sigma L$. Therefore, the signal must be sufficiently extended, i.e., $T \ge (\sigma + 1)L - 1$. We recall Willems et al.’s fundamental lemma [11] for the discrete-time dynamic system:
$$x(k+1) = A x(k) + B u(k), \qquad (13)$$
where $x \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$.
Lemma 1 [11]: Considering the discrete-time system given in (13), when the input is persistently exciting of order $n+1$, one will have,
$$\mathrm{rank}\begin{bmatrix} U_{0,1,T} \\ X_{0,1,T} \end{bmatrix} = n + m. \qquad (14)$$
Lemma 2 [11]: For the system (13), if the input is persistently exciting of order $t+n$, then one can express any length-$t$ input-state trajectory of the system in the following form,
$$\begin{bmatrix} \bar{u} \\ \bar{x} \end{bmatrix} = \begin{bmatrix} U_{0,t,T-t+1} \\ X_{0,t,T-t+1} \end{bmatrix} g, \qquad (15)$$
where $g \in \mathbb{R}^{T-t+1}$. This shows that when $T$ is taken sufficiently large, the rank condition of Lemma 1 can be satisfied, and therefore, any input-state trajectory of the system can be represented as a linear combination of the collected input/state data. This property enables us to replace a parametric description of the system with a data-based counterpart. For a persistently exciting input sequence of order $n+1$ with $u(k) \in \mathbb{R}^m$, $T \ge (m+1)n + m$ is necessary for the persistency of excitation condition to hold. This results in,
$$\mathrm{rank}\begin{bmatrix} U_0 \\ X_0 \end{bmatrix} = n + m, \qquad (16)$$
where $U_0 := U_{0,1,T}$ and $X_0 := X_{0,1,T}$.
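The Hankel construction and rank test above can be sketched in a few lines of numpy (the function names are ours, not the paper's):

```python
import numpy as np

def hankel(z, L):
    """Depth-L block Hankel matrix of a signal z given as a (sigma x T) array:
    column j stacks z(j), z(j+1), ..., z(j+L-1)."""
    sigma, T = z.shape
    N = T - L + 1
    return np.vstack([z[:, i:i + N] for i in range(L)])

def is_persistently_exciting(u, L):
    """u (m x T) is persistently exciting of order L iff its depth-L
    Hankel matrix has full row rank m*L (needs T >= (m+1)*L - 1)."""
    m, T = u.shape
    return np.linalg.matrix_rank(hankel(u, L)) == m * L

rng = np.random.default_rng(0)
m, n, T = 2, 3, 20                  # T >= (m+1)*(n+1) - 1 = 11
u = rng.standard_normal((m, T))
assert is_persistently_exciting(u, n + 1)   # random input: PE of order n+1
# a constant input is not persistently exciting of order 2
assert not is_persistently_exciting(np.ones((1, T)), 2)
```

A generic random input satisfies the rank condition with probability one, which is why random exploration signals are used in the numerical section.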
This idea can be extended to continuous-time systems as shown in [12]. For a sampling time $T_s$, the input and state sampled trajectories $u(kT_s)$ and $x(kT_s)$, $k = 0, \ldots, T-1$, are stored, and the rank condition (16) needs to be checked. [12] constructed the continuous-time counterpart of the time-shifted states in discrete time using the derivative information, with slight abuse of notation:
$$X_1 := \begin{bmatrix} \dot{x}(0) & \dot{x}(T_s) & \cdots & \dot{x}((T-1)T_s) \end{bmatrix}. \qquad (17)$$
The state-dynamic data gathered over the $T$-length window is represented as,
$$X_0 := \begin{bmatrix} x(0) & \cdots & x((T-1)T_s) \end{bmatrix}, \quad U_0 := \begin{bmatrix} u(0) & \cdots & u((T-1)T_s) \end{bmatrix}, \qquad (18)$$
$$X_1 = A X_0 + B U_0. \qquad (19)$$
3.2 Data-driven Closed-loop Representation
Following [12], Lemma 2 can be exploited to derive a parametrization of the closed-loop system with a state-feedback law $u = Kx$. For the closed-loop system,
$$\dot{x} = (A + BK)x, \qquad (20)$$
by the Rouché–Capelli theorem, there exists a matrix $G_K \in \mathbb{R}^{T \times n}$ such that
$$\begin{bmatrix} K \\ I_n \end{bmatrix} = \begin{bmatrix} U_0 \\ X_0 \end{bmatrix} G_K. \qquad (21)$$
Therefore the data-driven representation becomes,
$$A + BK = \begin{bmatrix} B & A \end{bmatrix} \begin{bmatrix} K \\ I_n \end{bmatrix} = \begin{bmatrix} B & A \end{bmatrix} \begin{bmatrix} U_0 \\ X_0 \end{bmatrix} G_K \qquad (22)$$
$$= (B U_0 + A X_0) G_K = X_1 G_K, \qquad (23)$$
and the model-based closed loop can now be made data-based as follows,
$$\dot{x} = (A + BK)x = X_1 G_K x. \qquad (24)$$
The control now becomes $u = Kx = U_0 G_K x$. Therefore, the designer needs to learn the matrix $G_K$ to implement the feedback control. We now provide the main results of the paper.
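The identity behind (24) is easy to verify numerically. In the sketch below (a self-contained illustration with arbitrary matrices; the true $A$, $B$, $K$ are used only to synthesize the data, never by the design itself), $G_K$ is computed from data by solving (21), and the closed-loop matrix is then reproduced purely from $X_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 3, 2, 12

# Ground-truth matrices (illustrative; used only to synthesize data)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))

# Data matrices satisfying X1 = A X0 + B U0, as in (19)
X0 = rng.standard_normal((n, T))
U0 = rng.standard_normal((m, T))
X1 = A @ X0 + B @ U0

# Solve (21): [K; I] = [U0; X0] G_K  (pinv gives a right inverse,
# since the stacked data matrix has full row rank for generic data)
GK = np.linalg.pinv(np.vstack([U0, X0])) @ np.vstack([K, np.eye(n)])

# Data-driven closed loop (24): A + B K = X1 G_K, and K = U0 G_K
assert np.allclose(A + B @ K, X1 @ GK, atol=1e-8)
assert np.allclose(K, U0 @ GK, atol=1e-8)
```

This is exactly the replacement the paper relies on: every occurrence of $A + BK$ in a model-based LMI can be swapped for $X_1 G_K$.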
4 Data-Driven Robust Pole Placement Methodology
We first discuss the challenges of the pole placement problem using the data-driven method without invoking any robust performance constraints. Then, we introduce the formulation of the data-driven pole placement problem combined with the mixed $H_2/H_\infty$ condition, followed by a comprehensive discussion.
Although the pole placement requirement can be on any convex region as given by (2), we consider (4) as a working example throughout the methodology development in this section and the numerical example in the following section. Considering the system (1), in Section 2 we defined an example LMI region (4) for $\mathcal{D}$-stability. Next, we state the LMI condition that needs to be satisfied to place the closed-loop poles in the prescribed region using a state-feedback control $u = Kx$.
Lemma 3 [22]: For the system (1), given an LMI region defined by (4), the closed-loop system is said to be $\mathcal{D}$-stable if there exists a real symmetric matrix $X \succ 0$ satisfying the following condition,
$$\begin{bmatrix} \sin\theta\,(A_K X + X A_K^T) & \cos\theta\,(A_K X - X A_K^T) \\ \cos\theta\,(X A_K^T - A_K X) & \sin\theta\,(A_K X + X A_K^T) \end{bmatrix} \prec 0, \qquad (25)$$
where $A_K = A + BK$.
Proof [22]: The above condition can easily be obtained by replacing $z$ with $A_K X$ and $\bar{z}$ with $X A_K^T$ in (4). A detailed discussion of this condition can be found in [22]. This guarantees that the eigenvalues of $A + BK$ belong to the conic region in the left half of the complex plane.
Note that the condition given in (25) is not an LMI, because if we replace $A_K$ with $A + BK$, we get a product term of the two unknown quantities $K$ and $X$. But with a simple change of variable $M = KX$, (25) can be converted into an LMI condition.
Next, we utilize the relation given in (24), and derive the data based condition for placing the poles of closed loop system in the region defined by the LMI condition (4).
Theorem 1: For the system (1) with unknown state dynamics, let the input sequence be persistently exciting, i.e., the rank condition (16) holds; then any matrix $Q$ satisfying (26), resulting in the feedback gain $K = U_0 Q (X_0 Q)^{-1}$, will make the system $\mathcal{D}$-stable for the conic region defined by (4):
$$\begin{bmatrix} \sin\theta\,(X_1 Q + Q^T X_1^T) & \cos\theta\,(X_1 Q - Q^T X_1^T) \\ \cos\theta\,(Q^T X_1^T - X_1 Q) & \sin\theta\,(X_1 Q + Q^T X_1^T) \end{bmatrix} \prec 0, \quad X_0 Q \succ 0. \qquad (26)$$
Proof: We recall Lemma 3, giving us the condition (25) to place the closed-loop poles of (1) in the desired conic region. However, as we assume that the system dynamics is unknown, we rely upon the data-driven persistency of excitation condition defined in Section 3. It can be seen from (24) that under a persistently exciting input, the closed-loop dynamics can be represented in a parametrized form derived from the time evolution of the system dynamics, i.e., $A + BK = X_1 G_K$. As such, we get the following inequality,
$$\begin{bmatrix} \sin\theta\,(X_1 G_K X + X G_K^T X_1^T) & \cos\theta\,(X_1 G_K X - X G_K^T X_1^T) \\ \cos\theta\,(X G_K^T X_1^T - X_1 G_K X) & \sin\theta\,(X_1 G_K X + X G_K^T X_1^T) \end{bmatrix} \prec 0. \qquad (27)$$
To make this inequality an LMI, we consider $Q = G_K X$, resulting in (26). We also have $X = X_0 G_K X = X_0 Q \succ 0$, since $X_0 G_K = I_n$ from (21). Using (21) we also have $K = U_0 G_K$, which implies $KX = U_0 G_K X = U_0 Q$. Therefore, the feedback gain turns out to be $K = U_0 Q (X_0 Q)^{-1}$. Please note that this condition completely relies upon collected data from system trajectories and the design parameter $\theta$, which determines the LMI region (4).
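The certificate in Theorem 1 can be checked numerically without an SDP solver when a feasible pair is constructed by hand. The validation setup below is our own (not the paper's algorithm): we pick $B = I$ and a gain $K$ that makes $A + BK$ symmetric and Hurwitz, so that $X = I$ is a valid Lyapunov matrix; then $Q = G_K X$ satisfies (26), and the gain is recovered as $K = U_0 Q (X_0 Q)^{-1}$:

```python
import numpy as np

def sector_lmi(AX, theta):
    """Block matrix of the data-driven D-stability condition (26)/(27),
    with AX = X1 @ Q standing in for (A + B K) X."""
    s, c = np.sin(theta), np.cos(theta)
    return np.block([[s * (AX + AX.T), c * (AX - AX.T)],
                     [c * (AX.T - AX), s * (AX + AX.T)]])

rng = np.random.default_rng(2)
n, T, theta = 3, 12, np.pi / 4

A = rng.standard_normal((n, n))
B = np.eye(n)                        # illustrative: lets us place A + BK exactly
Acl = np.diag([-1.0, -2.0, -3.0])    # symmetric Hurwitz target, so X = I works
K = Acl - A                          # since B = I

X0 = rng.standard_normal((n, T))
U0 = rng.standard_normal((n, T))
X1 = A @ X0 + B @ U0                 # derivative data, eq. (19)

# G_K from (21); Q = G_K X with X = I_n
GK = np.linalg.pinv(np.vstack([U0, X0])) @ np.vstack([K, np.eye(n)])
Q = GK

assert np.allclose(X0 @ Q, np.eye(n), atol=1e-8)                  # X0 Q = X > 0
assert np.all(np.linalg.eigvalsh(sector_lmi(X1 @ Q, theta)) < 0)  # (26) holds
assert np.allclose(K, U0 @ Q @ np.linalg.inv(X0 @ Q), atol=1e-8)  # gain recovery
```

In the actual design, $Q$ would instead be found by an SDP solver subject to (26); the sketch only illustrates that a $\mathcal{D}$-stable closed loop indeed produces a feasible $Q$ and that the gain-recovery formula is consistent.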
Remark 1: It is essential to note that the above pole placement problem is a feasibility problem. Therefore, it is apparent that there can be many feasible $X$ that lie in the convex set defined by (25). This gives rise to different controller gains which all satisfy the $\mathcal{D}$-stability condition. Similar characteristics will be observed during the data-driven design using Theorem 1, as different exploration trajectories can result in different possible control gains; however, they all satisfy the desired closed-loop pole placement constraint (4).
As described in Remark 1, we will now consider system (10) along with the desired constraints. Please note, we are now considering an extraneous input $w$, and the transfer functions associated with the $H_\infty$ and $H_2$ problems represent the gains from $w$ to the regulated outputs $z_\infty$ and $z_2$, respectively. As we consider the state matrix $A$ and input matrix $B$ to be unknown, the extraneous disturbance needs to be pre-specified in a controlled environment during the design of the feedback gains. Recalling Section 3, the trajectory-based system dynamics turns out to be
$$X_1 = A X_0 + B U_0 + D W_0, \qquad (28)$$
$$W_0 := \begin{bmatrix} w(0) & \cdots & w((T-1)T_s) \end{bmatrix}. \qquad (29)$$
Therefore, the closed-loop parametrized representation modifies to (30). Note that, like $U_0$, $W_0$ can also be defined using (12). The closed-loop parametrization with the extraneous input becomes
$$\dot{x} = (X_1 - D W_0) G_K x + D w. \qquad (30)$$
We now state the following theorem to solve problem P.
Theorem 2: For the system (10), to place the closed-loop poles in the desired LMI region (4) along with sufficient mixed $H_2/H_\infty$ performance, the following set of data-driven LMIs needs to be solved, with designable state and input penalty factors entering the $H_2$ objective.
(31) |
subject to
(32) |
(33) |
(34) |
Proof: We start by considering the $H_2$ performance objective. Please note from (10) that the $H_2$ performance objective is to minimize $\|T_{z_2 w}\|_2$. Following [23, 24], the model-based optimization problem for the $H_2$ performance is given as follows:
(35) |
subject to
We now consider the $H_\infty$ performance objective, which intends to minimize $\|T_{z_\infty w}\|_\infty$; the use of the KYP lemma (bounded real lemma) results in the following optimization problem [25, 26, 27]:
(36) |
subject to
(37) | |||
(38) |
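For intuition on the $H_\infty$ constraint, and to validate a finished design a posteriori, the $H_\infty$ norm of a stable realization with zero feedthrough can be computed by the standard Hamiltonian bisection. This is a generic numerical recipe, not part of the paper's SDP machinery:

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B, with A Hurwitz.
    Uses the classical test: gamma > ||G||_inf iff the Hamiltonian
    H(gamma) = [[A, B B^T / gamma], [-C^T C / gamma, -A^T]]
    has no eigenvalues on the imaginary axis."""
    def above_norm(g):
        H = np.block([[A, (B @ B.T) / g],
                      [-(C.T @ C) / g, -A.T]])
        return bool(np.all(np.abs(np.linalg.eigvals(H).real) > 1e-8))
    lo, hi = tol, 1.0
    while not above_norm(hi):        # grow upper bound until it exceeds the norm
        hi *= 2.0
    while hi - lo > tol:             # bisect down to the norm
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if above_norm(mid) else (mid, hi)
    return hi

# Sanity check: G(s) = 2/(s+1) has ||G||_inf = 2 (peak at omega = 0)
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[2.0]])
assert abs(hinf_norm(A, B, C) - 2.0) < 1e-3
```

After a gain is learned, this routine can be run on the (then known, or separately identified) closed-loop realization to confirm that the achieved $\gamma$ matches the SDP's reported robustness margin.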
Recalling Lemma 3, the pole placement condition needs to satisfy the inequality (25).
Next, we convert the above conditions into a single optimization problem by seeking a common solution $X \succ 0$ of all the constraints. Using a single Lyapunov matrix $X$ that enforces multiple constraints has been studied in [22, 25]. As we are considering the mixed $H_2/H_\infty$ objective, the stabilization constraint provided by the $H_\infty$ problem will serve as a conservative unifying condition for both the $H_2$ and $H_\infty$ problems. Therefore, the model-based solution of problem P can be written as,
(39) |
subject to
(40) | |||
(41) |
(42) | |||
(43) |
We represented the second term in the objective of (35) by the second term of (39) and the corresponding inequality (43). To this end, we now move on to converting these model-based expressions to their data-driven counterparts. Considering (40) and using (30), we have,
(44) |
We now substitute $Q = G_K X$ and $X = X_0 Q$; this results in the closed-loop term $(X_1 - D W_0) Q$, giving us (32). The same substitution also converts (43) into (45).
(45) |
Next, replace $A_K X$ with $(X_1 - D W_0) Q$ and $KX$ with $U_0 Q$. Note, from (21) we can write $X_0 G_K = I_n$; therefore, post-multiplying with $X$ results in $X_0 Q = X$. Now we have $KX = U_0 G_K X = U_0 Q$, and after applying the Schur complement, we get (34). The first term of the objective (39) uses the relation $X = X_0 Q$ and converts into the first term of (31). Finally, we supplement these LMI conditions with the data-driven $\mathcal{D}$-stability condition provided in Theorem 1, and this completes the proof of Theorem 2.
Remark 2: Please note that Theorem 2 considers the $H_\infty$ performance margin $\gamma$ as an optimization variable. However, in many scenarios the designer may be interested in a pre-specified robustness performance gain $\gamma_d \ge \gamma^*$, where $\gamma^*$ is the solution of the $H_\infty$ problem in Theorem 2. To use a pre-specified margin, the objective function in Theorem 2 reduces to the $H_2$ term alone, and the $\gamma$ in (32) is replaced by $\gamma_d$.
5 Numerical Example
In this section, we present an illustrative example of a pole placement problem with robust performance criteria using the data-driven method discussed in Section 4. We compare the obtained results using the data-driven approach with its model-based counterpart. We define the system given in (10) with the following state and input matrices, taken from [14]:
Two of the eigenvalues of $A$ are located to the right of the imaginary axis, resulting in an unstable open-loop system. Note that while designing the data-driven control, it is assumed that $A$ and $B$ are unknown. We choose the system matrix $D$ for the extraneous inputs and the other performance matrices as follows:
We choose a random initial condition and a random input sequence with the magnitudes of both channels bounded. Here, the interesting part is deciding the length of the input sequence $T$. For this example $n = 3$ and $m = 2$; therefore, to satisfy the rank condition given in (16) as a requirement of persistency of excitation, the length of the input sequence must satisfy $T \ge (m+1)n + m = 11$. Next, to generate the trajectory rollout as defined in (28), we choose $w$ randomly from a norm ball. Trapezoidal approximations are used to generate the rollouts using the continuous-time dynamics. Now, for a given $\theta$, the solution of problem P using Theorem 2 is as follows:
The poles of the closed-loop system are -4.2545, -1.9539, and -0.6244. These poles (marked as red * in Fig. 1) are located on the negative real axis, which is contained in the conic region defined by the chosen $\theta$ (shaded area in Fig. 1). To verify our designed data-driven controller gain and the location of the closed-loop poles, we run the same experiments with the model-based equations given in (39) to (43). The computed model-based gain is shown below:
The computed data-driven and model-based gains match closely, with a negligible difference, which validates the accuracy of our proposed data-driven design.
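The data-collection step used above can be sketched as follows; the system matrices and bounds here are placeholders (the paper's exact values from [14] are not reproduced), and the trapezoidal (Heun-type) integrator mirrors the rollout generation described in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, T, Ts = 3, 2, 20, 0.05

# Placeholder system (the paper uses specific matrices from [14])
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

x = rng.uniform(-1, 1, size=n)          # random initial condition
U0 = rng.uniform(-1, 1, size=(m, T))    # random bounded exciting input
X0 = np.zeros((n, T))
X1 = np.zeros((n, T))                   # sampled state derivatives

for k in range(T):
    X0[:, k] = x
    X1[:, k] = A @ x + B @ U0[:, k]     # derivative at the sample, eq. (17)
    # trapezoidal (Heun) step with zero-order-hold input over [k Ts, (k+1) Ts]
    f1 = A @ x + B @ U0[:, k]
    f2 = A @ (x + Ts * f1) + B @ U0[:, k]
    x = x + 0.5 * Ts * (f1 + f2)

# Persistency-of-excitation rank condition (16): rank [U0; X0] = n + m
assert np.linalg.matrix_rank(np.vstack([U0, X0])) == n + m
# Derivative data is consistent with X1 = A X0 + B U0, eq. (19)
assert np.allclose(X1, A @ X0 + B @ U0)
```

In a physical experiment the derivatives in `X1` would come from numerical differentiation or a filtered estimate rather than from the model, which the data-driven LMIs would then consume directly.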

We perform an ablation study to further examine the data-driven method. We remove the pole placement constraint from the conditions given in Theorem 2. This simply converts the problem into a mixed $H_2/H_\infty$ problem. Solving the LMIs (31) to (34), except (33), we obtained,
Fig. 1 clearly indicates that these poles (marked as blue *) are located outside the LMI region (shaded area). As in the previous experiments, the same results can be found in the case of model-based optimization using (39) to (43) while eliminating the pole placement constraint.
6 Conclusion
In this paper, we have presented a comprehensive data-driven methodology that satisfies multiple constraints comprising $\mathcal{D}$-stability and mixed $H_2/H_\infty$ performance guarantees. We have shown that the data-based parametrized representation of the closed-loop dynamics originating from behavioral system theory can provide a fundamental framework to solve such classic problems with unknown state and input matrices. The solutions from the proposed semi-definite programs match closely with the classical model-based solutions, which has been proven rigorously and validated numerically. Future research will consider unknown dynamic systems coupled with structured and parametric uncertainty, and develop multiple data-driven robust control designs.
References
- [1] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
- [2] C. Watkins, “Learning from delayed rewards,” PhD thesis, King’s College, Cambridge, 1989.
- [3] D. P. Bertsekas, Dynamic Programming and Optimal Control: Approximate Dynamic Programming, 4th ed. Athena Scientific, Belmont, MA, USA., 2012.
- [4] W. Powell, Approximate dynamic programming. Wiley, 2007.
- [5] D. Vrabie, O. Pastravanu, M. Abu-Khalaf, and F. Lewis, “Adaptive optimal control for continuous-time linear systems based on policy iteration,” Automatica, vol. 45, pp. 477–484, 2009.
- [6] Y. Jiang and Z.-P. Jiang, “Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics,” Automatica, vol. 48, pp. 2699–2704, 2012.
- [7] K. Vamvoudakis, “Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach,” Systems and Control Letters, vol. 100, pp. 14–20, 2017.
- [8] B. Kiumarsi, K. Vamvoudakis, H. Modares, and F. Lewis, “Optimal and autonomous control using reinforcement learning: A survey,” IEEE Trans. on Neural Networks and Learning Systems, 2018.
- [9] S. Mukherjee, H. Bai, and A. Chakrabortty, “Reduced-dimensional reinforcement learning control using singular perturbation approximations,” Automatica, vol. 126, p. 109451, 2021.
- [10] S. Mukherjee and T. L. Vu, “On distributed model-free reinforcement learning control with stability guarantee,” IEEE Control Systems Letters, vol. 5, no. 5, pp. 1615–1620, 2020.
- [11] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. De Moor, “A note on persistency of excitation,” Systems & Control Letters, vol. 54, no. 4, pp. 325–329, 2005.
- [12] C. De Persis and P. Tesi, “Formulas for data-driven control: Stabilization, optimality, and robustness,” IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 909–924, 2019.
- [13] H. J. Van Waarde, J. Eising, H. L. Trentelman, and M. K. Camlibel, “Data informativity: a new perspective on data-driven analysis and control,” IEEE Transactions on Automatic Control, vol. 65, no. 11, pp. 4753–4768, 2020.
- [14] J. Berberich, A. Koch, C. W. Scherer, and F. Allgöwer, “Robust data-driven state-feedback design,” in 2020 American Control Conference (ACC). IEEE, 2020, pp. 1532–1538.
- [15] J. Morimoto and K. Doya, “Robust reinforcement learning,” in NIPS. Citeseer, 2000, pp. 1061–1067.
- [16] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta, “Robust adversarial reinforcement learning,” in International Conference on Machine Learning. PMLR, 2017, pp. 2817–2826.
- [17] Y. Jiang and Z.-P. Jiang, Robust Adaptive Dynamic Programming. Wiley-IEEE press, 2017.
- [18] S. Mukherjee, H. Bai, and A. Chakrabortty, “Block-decentralized model-free reinforcement learning control of two time-scale networks,” in American Control Conference 2019, Philadelphia, USA.
- [19] P. P. Khargonekar and M. A. Rotea, “Mixed $H_2/H_\infty$ control: a convex optimization approach,” IEEE Transactions on Automatic Control, vol. 36, no. 7, pp. 824–837, 1991.
- [20] J. Doyle, K. Glover, P. Khargonekar, and B. Francis, “State-space solutions to standard $H_2$ and $H_\infty$ control problems,” in 1988 American Control Conference. IEEE, 1988, pp. 1691–1696.
- [21] Y. Fujisaki and T. Yoshida, “A linear matrix inequality approach to mixed $H_2/H_\infty$ control,” IFAC Proceedings Volumes, vol. 29, no. 1, pp. 1339–1344, 1996.
- [22] M. Chilali and P. Gahinet, “$H_\infty$ design with pole placement constraints: an LMI approach,” IEEE Transactions on Automatic Control, vol. 41, no. 3, pp. 358–367, 1996.
- [23] E. Feron, V. Balakrishnan, S. Boyd, and L. El Ghaoui, “Numerical methods for $H_2$ related problems,” in 1992 American Control Conference. IEEE, 1992, pp. 2921–2922.
- [24] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear matrix inequalities in system and control theory. SIAM, 1994.
- [25] C. Scherer and S. Weiland, “Linear matrix inequalities in control,” Lecture Notes, Dutch Institute for Systems and Control, Delft, The Netherlands, vol. 3, no. 2, 2000.
- [26] P. Gahinet and P. Apkarian, “A linear matrix inequality approach to $H_\infty$ control,” International Journal of Robust and Nonlinear Control, vol. 4, no. 4, pp. 421–448, 1994.
- [27] T. Iwasaki and R. E. Skelton, “All controllers for the general $H_\infty$ control problem: LMI existence conditions and state space formulas,” Automatica, vol. 30, no. 8, pp. 1307–1317, 1994.