Data-Driven Predictive Control With Adaptive Disturbance Attenuation For Constrained Systems
Abstract
In this paper, we propose a novel data-driven predictive control approach for systems subject to time-domain constraints. The approach combines the strengths of H∞ control for rejecting disturbances and MPC for handling constraints. In particular, the approach can dynamically adapt its disturbance attenuation performance depending on the measured system state and forecasted disturbance level to satisfy constraints. We establish theoretical properties of the approach, including robust guarantees of closed-loop stability, disturbance attenuation, and constraint satisfaction under noisy data, as well as sufficient conditions for recursive feasibility, and illustrate the approach with a numerical example.
keywords:
Data-Driven Control, H∞ Control, Model Predictive Control, Constraints, Linear Matrix Inequality
1 Introduction
In addition to stability, disturbance rejection and constraint satisfaction are important topics in control systems engineering, especially as modern systems often operate in complex and dynamic environments while increasingly stringent safety and performance requirements are imposed on them. H∞ control is an effective approach to addressing the former – it aims to minimize the effect of disturbances on system outputs [1]. Since its introduction in the early 1980s, H∞ control has been successfully implemented in aerospace, automotive, and many other sectors. However, time-domain constraints on state and control variables are not handled in conventional H∞ control. Meanwhile, Model Predictive Control (MPC) stands out from many other control approaches for its ability to explicitly and non-conservatively handle time-domain constraints [2]. Therefore, combining H∞ control and MPC to achieve desired disturbance rejection properties while satisfying constraints is appealing and has been studied in, e.g., [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. A major strength of such a combination is the ability to dynamically adapt disturbance attenuation performance (depending on measured/estimated system state and forecasted disturbance level) to satisfy constraints through solving an MPC optimization problem repeatedly in an online manner [3, 4].
Conventional H∞ control and MPC methods rely on parametric models of the system to be controlled. As engineered systems are becoming increasingly complex and cyber-physical, first-principle models are more difficult to obtain. Meanwhile, with the rapid advances in sensing, computation, and communication technologies, data is more readily available. This has spurred the development of data-driven methods for system modeling, analysis, and control. In particular, practitioners may favor an end-to-end solution that bypasses the intermediate steps of modeling and analysis and produces a controller with desired properties directly from measured data of system behavior. Therefore, it is beneficial to extend model-based methods for H∞ control, MPC, and the aforementioned combination to their data-driven counterparts.
Here we provide a brief review of existing data-driven control methods in the literature: Reinforcement Learning (RL) can be used to train optimal controllers from data [13, 14]. However, the majority of RL methods optimize the average behavior of the closed-loop system when it is subject to disturbances with certain statistics and do not provide a worst-case robustness guarantee. Furthermore, they typically handle constraints through penalties, hence making constraints soft. Integrating data-driven uncertainty estimation/system identification with(in) the MPC framework can lead to reliable usage of data for improving MPC performance while maintaining certain robustness guarantees especially for satisfying constraints [15, 16]. Along these lines, the Data-enabled Predictive Control (DeePC) is an emerging technique and has attracted increasing attention from researchers recently. The DeePC uses measured input-output data to create a non-parametric system model based on behavioral systems theory and uses the model to predict future trajectories [17]. It has demonstrated superior performance in various applications [18, 19, 20, 21]. However, DeePC entails higher computational cost than conventional MPC due to its high-dimensional, data-based non-parametric system model [22, 23]. Also, how to handle noisy data and provide certain robustness guarantees under noisy data remains an open question in DeePC [24, 25, 26]. Furthermore, to the best of our knowledge, there has been no previous work integrating H∞ control and MPC (or designing an MPC with an H∞-type disturbance attenuation property) in the general data-driven MPC literature. An approach to synthesizing an H∞ controller using noisy data based on a matrix S-lemma was introduced in [27]. The approach reduces a data-driven H∞ control synthesis problem to a low-dimensional Linear Matrix Inequality (LMI) optimization problem, which is computationally tractable with state-of-the-art interior-point LMI solvers.
However, time-domain constraints are not handled by the approach of [27], and moving-horizon, MPC-type implementation of the approach was not considered in [27]. More classical data-driven control methods also include self-tuning regulators [28] and iterative learning control [29]. A comprehensive survey of data-driven control methods can be found in [30].
In this paper, we fill the gap in the literature by proposing a novel data-driven control approach that combines the strengths of H∞ control for rejecting disturbances and MPC for handling constraints. Our approach can be viewed as a data-driven counterpart of the model-based moving-horizon H∞ control approach of [3, 4] and enjoys similar properties including dynamic adaptation of H∞ performance depending on measured system state and forecasted disturbance level for satisfying constraints. Specifically, the contributions include:
-
1.
Our approach is the first data-driven MPC method in the literature that focuses on H∞-type disturbance attenuation for systems with time-domain constraints.
-
2.
We conduct a comprehensive analysis of the theoretical properties of our approach. The results include robust guarantees of closed-loop stability, disturbance attenuation, and constraint satisfaction under noisy data, as well as conditions for recursive feasibility of the online optimization problem.
The paper is organized as follows: We describe the problem treated in this paper including key assumptions and preliminaries in Section 2. We develop an approach to synthesizing an H∞ controller that also enforces time-domain constraints for an unknown system using noisy trajectory data in Section 3. The development in this section has merit in its own right because it extends the data-driven H∞ control synthesis approach of [27] (which does not handle constraints) to the constrained case. It is also an essential building block of the data-driven MPC algorithm developed in Section 4. Then, Section 4 presents our proposed data-driven MPC algorithm and analyzes its theoretical properties. We illustrate the algorithm with a numerical example in Section 5. Finally, we conclude the paper in Section 6.
The notations used in this paper are mostly standard. We use ℝⁿ to denote the space of n-dimensional real vectors, ℝ^{n×m} the space of n-by-m real matrices, and ℕ the set of natural numbers including zero. Given a vector v, we use ‖v‖ to denote its Euclidean norm, i.e., ‖v‖ = √(vᵀv). Given a matrix M, its kernel is the subspace of all v such that Mv = 0, i.e., ker(M) = {v : Mv = 0}. Given two symmetric matrices P and Q, P ≻ Q means that P − Q is positive definite, i.e., vᵀ(P − Q)v > 0 for all non-zero v, and P ⪰ Q means that P − Q is positive semidefinite, i.e., vᵀ(P − Q)v ≥ 0 for all v. Similarly, P ≺ Q and P ⪯ Q mean that P − Q is negative definite and negative semidefinite, respectively. For an optimization problem, by “solution” we mean a feasible solution, i.e., a set of values for the decision variables that satisfies all constraints, and by “optimal solution” we mean a feasible solution where the objective function (almost) reaches its maximum (or minimum) value. Because the optimization problems appearing in this paper are all convex problems, an “optimal solution” is globally optimal.
2 Problem Statement and Preliminaries
We consider the control of dynamic systems that can be represented by the following linear time-invariant model:
x_{k+1} = A x_k + B u_k + w_k (1a)
z_k = C_z x_k + D_z u_k (1b)
y_k = C_y x_k + D_y u_k (1c)
where x_k ∈ ℝⁿ denotes the system state at discrete time k ∈ ℕ, u_k ∈ ℝᵐ denotes the control input, w_k ∈ ℝⁿ denotes an unmeasured disturbance input, and z_k and y_k are two outputs whose roles are introduced below. The goal is to design a control algorithm that achieves the following three objectives:
-
1.
Guaranteeing closed-loop stability;
-
2.
Optimizing disturbance attenuation in terms of the H∞ performance from disturbance input w to output z;
-
3.
Enforcing the following constraints on output y at all times k:
(2) |y_k^{(j)}| ≤ b_j, j = 1, …, p, where y_k^{(j)} denotes the jth entry of y_k and b_j > 0 for all j. Constraints in this form can represent state/output variable bounds, control input limits, etc.
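As a small illustration of elementwise output constraints of this kind, the following Python sketch (assuming symmetric bounds |y^(j)| ≤ b_j with hypothetical values b_j = 1, and assuming numpy is available) checks whether an output vector satisfies them:

```python
import numpy as np

# Hypothetical symmetric bounds b_j for each output entry, as in (2).
b = np.array([1.0, 1.0])

def constraints_satisfied(y, b):
    """Check |y^(j)| <= b_j for every entry j of the output vector y."""
    return bool(np.all(np.abs(y) <= b))

assert constraints_satisfied(np.array([0.4, -0.9]), b)       # within bounds
assert not constraints_satisfied(np.array([1.2, 0.0]), b)    # first entry violates
```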
In addition to the standard stabilizability and detectability assumptions, we also assume that the system model is unknown and only trajectory data are available. This calls for a data-driven control approach.
Assume we have data {(x_i, u_i, x_i⁺)}, i = 1, …, T, where (x_i, u_i) denotes a pair of previous state and control input values, x_i⁺ denotes the corresponding next state value, and the subscript i indicates the ith data point. The data can be collected from a single or multiple trajectories. According to (1a), we have
(3) x_i⁺ = A x_i + B u_i + w_i
where w_i is the disturbance input value at the time of collecting the ith data point. Organizing the data with the following matrices:
X = [x_1 x_2 ⋯ x_T] (4a)
U = [u_1 u_2 ⋯ u_T] (4b)
X⁺ = [x_1⁺ x_2⁺ ⋯ x_T⁺] (4c)
W = [w_1 w_2 ⋯ w_T] (4d)
the relation (3) implies
(5) X⁺ = A X + B U + W
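To make the data organization in (4) and the relation (5) concrete, here is a minimal Python sketch (with a hypothetical two-state system and randomly generated inputs and disturbances; numpy assumed available) that builds the data matrices from one simulated trajectory and verifies the relation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true system (unknown to the controller designer).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])

T = 20                       # number of data points
n, m = A.shape[0], B.shape[1]

# Collect one trajectory under random inputs and disturbances.
xs = np.zeros((n, T + 1))
us = 0.5 * rng.standard_normal((m, T))
ws = 0.1 * rng.standard_normal((n, T))
for i in range(T):
    xs[:, i + 1] = A @ xs[:, i] + B @ us[:, i] + ws[:, i]

# Data matrices as in (4): columns are the T data points.
X = xs[:, :T]      # states
U = us             # inputs
Xp = xs[:, 1:]     # next states
W = ws             # disturbances (unknown in practice)

# The relation (5) holds by construction.
assert np.allclose(Xp, A @ X + B @ U + W)
```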
Because the disturbance input is not measured, W in (4d) and (5) is unknown. We make the following assumption about the data:
Assumption 1. The disturbance input values w_1, …, w_T, collected in W, satisfy the following quadratic matrix inequality:
(6) [I; Wᵀ]ᵀ [Φ₁₁, Φ₁₂; Φ₁₂ᵀ, Φ₂₂] [I; Wᵀ] ⪰ 0
where Φ₁₁, Φ₁₂, and Φ₂₂ are known matrices, with Φ₁₁ and Φ₂₂ symmetric and Φ₂₂ ≺ 0.
When Φ₁₂ = 0 and Φ₂₂ = −I, (6) reduces to W Wᵀ ⪯ Φ₁₁, which has the interpretation that the total energy of the disturbance inputs in the data is bounded in terms of Φ₁₁. A known norm bound ε on each individual disturbance, i.e., ‖w_i‖ ≤ ε for i = 1, …, T, implies a bound in the form of (6) with Φ₁₁ = Tε²I, Φ₁₂ = 0, and Φ₂₂ = −I.
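The individual-norm-bound case can be checked numerically. The following Python sketch (numpy assumed; the matrices of Assumption 1 are written here as Phi11 = Tε²I, Phi12 = 0, Phi22 = −I, following the standard energy-bound parameterization) draws disturbance samples with ‖w_i‖ ≤ ε and verifies the resulting quadratic matrix inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, eps = 2, 30, 0.1

# Disturbance samples with individual norm bound ||w_i|| <= eps.
W = rng.standard_normal((n, T))
W = W / np.linalg.norm(W, axis=0) * (eps * rng.uniform(0, 1, T))

# Assumed parameterization for this bound: Phi11 = T eps^2 I, Phi12 = 0, Phi22 = -I.
Phi11 = T * eps**2 * np.eye(n)
Phi22 = -np.eye(T)

# With Phi12 = 0, the inequality reads Phi11 + W Phi22 W^T = T eps^2 I - W W^T >= 0.
lhs = Phi11 + W @ Phi22 @ W.T
assert np.min(np.linalg.eigvalsh(lhs)) >= -1e-9
```

The check passes because W Wᵀ ⪯ (Σ‖w_i‖²) I ⪯ Tε² I whenever each column satisfies ‖w_i‖ ≤ ε.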
Plugging W = X⁺ − A X − B U into (6) and after some algebra, we obtain
(7) [I; Aᵀ; Bᵀ]ᵀ N [I; Aᵀ; Bᵀ] ⪰ 0
where N = [I, 0, 0; (X⁺)ᵀ, −Xᵀ, −Uᵀ]ᵀ [Φ₁₁, Φ₁₂; ★, Φ₂₂] [I, 0, 0; (X⁺)ᵀ, −Xᵀ, −Uᵀ], and ★ indicates the transpose of the related element below the diagonal. We let Σ be the collection of (A, B) satisfying (7):
(8) Σ = {(A, B) : (7) holds}
According to (5), the true system matrices satisfy (A, B) ∈ Σ.
Under Assumption 1, we propose a control approach to achieving the three objectives listed below (1) for the unknown system (1). Our approach is based on the following matrix S-lemma developed in [27]:
Lemma 1 [27]. Let M, N ∈ ℝ^{(k+l)×(k+l)} be symmetric and partitioned as follows:
(9) M = [M₁₁, M₁₂; M₂₁, M₂₂], N = [N₁₁, N₁₂; N₂₁, N₂₂]
where M₁₁, N₁₁ ∈ ℝ^{k×k}. Suppose M₂₂ ⪯ 0, N₂₂ ⪯ 0, ker N₂₂ ⊆ ker M₂₂, and there exists Z̄ satisfying [I; Z̄]ᵀ N [I; Z̄] ≻ 0. Then, we have that
(10) [I; Z]ᵀ M [I; Z] ≻ 0 for all Z such that [I; Z]ᵀ N [I; Z] ⪰ 0
if and only if there exist α ≥ 0 and β > 0 such that
(11) M − α N ⪰ [βI, 0; 0, 0]
Proof. See Theorem 13 of [27].
3 Constrained H∞ Control
In this section, we consider a linear feedback u_k = K x_k with a constant gain matrix K. This controller yields the following closed-loop system:
x_{k+1} = (A + B K) x_k + w_k (12a)
z_k = (C_z + D_z K) x_k (12b)
y_k = (C_y + D_y K) x_k (12c)
The closed-loop transfer matrix from disturbance input w to performance output z is given by
(13) T_zw(ζ) = (C_z + D_z K)(ζI − (A + B K))⁻¹
For a performance level γ > 0, the closed-loop matrix A + B K is Schur stable and the H∞ norm of T_zw satisfies ‖T_zw‖∞ < γ if and only if there exists a matrix P satisfying
(14a) P ≻ 0
(14b) [(A+BK)ᵀP(A+BK) − P + (C_z+D_zK)ᵀ(C_z+D_zK), (A+BK)ᵀP; P(A+BK), P − γ²I] ≺ 0
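The connection between the frequency-domain H∞ norm and the dissipation-type matrix inequality can be illustrated on a scalar example. The following Python sketch (numpy assumed; the closed-loop values ā = 0.5, c̄ = 1 and the candidates P = 2, γ = 2 are hypothetical, and the quadratic-form inequality below is the dissipation form consistent with the Lyapunov argument of this section, not the paper's exact LMI) estimates ‖T_zw‖∞ by a frequency sweep and checks the corresponding matrix inequality non-strictly:

```python
import numpy as np

abar, cbar = 0.5, 1.0   # scalar closed-loop A + BK and C + DK (hypothetical)

# Frequency sweep estimate of the H-infinity norm of T_zw(e^{jw}) = cbar/(e^{jw} - abar).
freqs = np.linspace(0, np.pi, 10001)
hinf = np.max(np.abs(cbar / (np.exp(1j * freqs) - abar)))
assert abs(hinf - 1 / (1 - abar)) < 1e-3     # peak at w = 0 for 0 < abar < 1

# Non-strict bounded-real check: with P = 2 and gamma = 2 (= hinf), the quadratic
# form in (x, w) of V(x+) - V(x) - gamma^2 w^2 + z^2 must be <= 0, i.e., the
# matrix [[P a^2 - P + c^2, P a], [P a, P - gamma^2]] must be negative semidefinite.
P, gamma = 2.0, 2.0
Q = np.array([[P * abar**2 - P + cbar**2, P * abar],
              [P * abar, P - gamma**2]])
assert np.max(np.linalg.eigvalsh(Q)) <= 1e-9
```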
See Theorem 2.2 of [31]. Now let Y = P⁻¹ (hence, P = Y⁻¹) and L = K Y. With some algebra and Schur complement arguments, it can be shown that (14) is equivalent to
(15a) | |||
(15b) |
Note that (15b) can be written as
(16) |
Now consider the quadratic Lyapunov function V(x) = xᵀ P x. When (14) holds, we can derive the following dissipation inequality:
(17) V(x_{k+1}) − V(x_k) ≤ γ² ‖w_k‖² − ‖z_k‖²
where x_k evolves according to (12). We define, for r > 0, the ellipsoidal set
(18) E(r) = {x : xᵀ P x ≤ r}
and we arrive at the following result:
Lemma 2. Suppose (14) holds and the energy of the disturbance input is bounded as Σ_{k=0}^∞ ‖w_k‖² ≤ w̄ for some w̄ > 0. Then, for any r ≥ V(x_0) + γ² w̄, E(r) is an invariant set of (12), i.e., x_k ∈ E(r) for all k.
Proof. Suppose (14) holds, Σ_{k=0}^∞ ‖w_k‖² ≤ w̄, and r ≥ V(x_0) + γ² w̄. Then, the dissipation inequality (17) implies
(19) V(x_k) ≤ V(x_0) + γ² Σ_{j=0}^{k−1} ‖w_j‖² ≤ V(x_0) + γ² w̄ ≤ r
for all k. This shows x_k ∈ E(r) for all k.
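Lemma 2 can be illustrated numerically on a scalar closed loop. In the sketch below (numpy assumed; the values ā = 0.5, P = 2, γ = 2 are hypothetical scalars satisfying the dissipation inequality V(x⁺) − V(x) ≤ γ²w² − z² with z = x, and the ellipsoid is written as {x : P x² ≤ r}), a disturbance sequence with total energy at most w̄ is applied and the state is verified to stay in E(r) with r = V(x₀) + γ²w̄:

```python
import numpy as np

rng = np.random.default_rng(2)
abar, P, gamma = 0.5, 2.0, 2.0   # scalar closed loop satisfying the dissipation inequality
x0, wbar = 1.0, 0.5              # initial state and disturbance energy bound

# Disturbance sequence with total energy exactly wbar, distributed arbitrarily over time.
w = rng.standard_normal(200)
w *= np.sqrt(wbar) / np.linalg.norm(w)

r = P * x0**2 + gamma**2 * wbar  # radius as in Lemma 2
x, V_max = x0, P * x0**2
for wk in w:
    x = abar * x + wk
    V_max = max(V_max, P * x**2)

# Every visited state stays in the ellipsoid E(r) = {x : P x^2 <= r}.
assert V_max <= r + 1e-9
```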
We now consider the optimization problem (20) for designing the feedback gain K. In (20), e_j denotes the jth standard basis vector, w̄ is a forecasted energy bound of the disturbance input, and a scalar design parameter is to be tuned so that (20) is feasible. We note that (20) is an LMI (hence, convex) problem in its decision variables. A design of K based on (20) has several desirable properties, stated in the following theorem:
(20a) | ||||
s.t. | (20b) | |||
(20e) |
where ★ indicates the transpose of the related element below the diagonal.
Theorem 1. Suppose
-
(a)
The offline data satisfy Assumption 1 and there exists a pair of matrices such that the inequality in (7) is strictly satisfied;
-
(b)
The online disturbance inputs satisfy the energy bound Σ_{k=0}^∞ ‖w_k‖² ≤ w̄;
-
(c)
The tuple is a solution to (20).
Then, if we control the unknown system (1) using the linear feedback u_k = K x_k, with K recovered from the solution, the closed-loop system has the following properties:
-
(i)
The closed-loop matrix A + B K is Schur stable and the closed-loop H∞ norm from w to z is less than γ;
-
(ii)
The constraints in (2) are satisfied at all times.
Proof. First, let
(26) | ||||
(29) | ||||
(33) |
where . Using a Schur complement argument, it can be shown that (20b) is equivalent to
(34) |
Meanwhile, for the M and N defined and partitioned as above, the following conditions can be verified: 1) M₂₂ ⪯ 0, using the fact that Y ≻ 0 due to the constraint (20c), 2) N₂₂ ⪯ 0, using Φ₂₂ ≺ 0 according to Assumption 1, and 3) ker N₂₂ ⊆ ker M₂₂. Together with the strict feasibility in assumption (a), it can be seen that the assumptions of Lemma 1 are all satisfied. According to Lemma 1, (34) holds for some α ≥ 0 and β > 0 if and only if
(35) |
where Σ is defined in (8). Recall that (A, B) ∈ Σ. Therefore, we have shown that if a tuple satisfies (20b) and (20c), then (16) (hence, (15b)) necessarily holds for the true system. Note that (20c) also implies (15a). Now, if we let P = Y⁻¹ and K = L Y⁻¹, we arrive at (14). This proves part (i).
Now suppose the tuple satisfies (20b)-(20e). Above we have shown that, with P = Y⁻¹ and K = L Y⁻¹, (14) holds. Meanwhile, using a Schur complement argument, (20d) bounds V(x_0). In this case, according to Lemma 2, the energy bound in assumption (b) implies that E(r) is an invariant set of the closed-loop system, i.e., x_k ∈ E(r) for all k. Recall that E(r) is the ellipsoidal set defined in (18). The support function of E(r) in a direction c is sup_{x∈E(r)} cᵀx = √(r cᵀ P⁻¹ c). Hence, x_k ∈ E(r) implies
(36) |
Meanwhile, using a Schur complement argument, (20e) is equivalent to
(37) |
Combining (36) and (37), we obtain satisfaction of the constraints (2) at all times. This completes the proof of part (ii).
Theorem 1 states the stability, disturbance attenuation, and constraint enforcement properties of the feedback gain K for the unknown system (1) synthesized using its trajectory data according to (20). The existence of a pair of matrices such that the inequality in (7) is strictly satisfied, required in assumption (a) of Theorem 1, can be checked offline. We make the following two remarks about Theorem 1:
Remark 1. The H∞ norm from w to z represents a level of disturbance attenuation of the closed-loop system. In particular, using (17) recursively we can obtain the following dissipation inequality that holds for all k:
(38) Σ_{j=0}^{k−1} ‖z_j‖² ≤ γ² Σ_{j=0}^{k−1} ‖w_j‖² + V(x_0)
and this indicates that the ℓ₂-gain from disturbance w to output z is bounded by γ. Because (20) maximizes γ⁻², which is equivalent to minimizing γ, the obtained controller seeks to maximize disturbance attenuation while satisfying the time-domain constraints in (2).
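The ℓ₂-gain bound can be checked in simulation. The following Python sketch (numpy assumed; scalar closed loop x⁺ = 0.5x + w with z = x, and hypothetical values P = 2, γ = 2 satisfying the per-step dissipation inequality) accumulates the output energy along a random disturbance realization and verifies the bound obtained by summing the dissipation inequality:

```python
import numpy as np

rng = np.random.default_rng(3)
# Scalar closed loop x+ = 0.5 x + w, z = x; P = 2, gamma = 2 satisfy the per-step
# dissipation inequality V(x+) - V(x) <= gamma^2 w^2 - z^2 (values hypothetical).
abar, P, gamma, x0 = 0.5, 2.0, 2.0, 1.0

w = 0.3 * rng.standard_normal(100)
x, sum_z2 = x0, 0.0
for wk in w:
    sum_z2 += x**2              # z_k = x_k in this example
    x = abar * x + wk

# Summed dissipation bound: sum ||z||^2 <= gamma^2 sum ||w||^2 + V(x_0).
assert sum_z2 <= gamma**2 * float(np.sum(w**2)) + P * x0**2 + 1e-9
```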
Remark 2. Differently from many other robust control formulations that assume set-bounded disturbances (i.e., w_k ∈ W for some known bounded set W and for all k), our formulation (20) enforces constraints for disturbance inputs that have bounded total energy, where the energy can be distributed arbitrarily over time (i.e., Σ_{k=0}^∞ ‖w_k‖² ≤ w̄). This total energy model is particularly suitable for modeling transient disturbances such as wind gusts in aircraft flight control or wind turbine control, power outages in power systems, temporary actuator or sensor failures, etc. Such disturbances occur infrequently and typically last a short period of time (as compared to persistent disturbances), but they can have a significant magnitude. Meanwhile, predicting exactly when they will occur can be difficult or impossible. In such a case, on the one hand, our formulation based on a total energy disturbance model may lead to a less conservative solution (hence, better performance) than those assuming a set bound at all times; on the other hand, estimating/predicting an energy bound for such transient disturbance events using historic data and/or real-time information (such as weather conditions) is possible in many applications. In what follows we assume a mechanism that is able to forecast an energy bound for future disturbances is available. Treating set-bounded persistent disturbances is left for future research.
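The distinction drawn in the remark above can be made concrete with a small numeric example (numpy assumed; the gust values and bounds are hypothetical): a short transient disturbance can satisfy a total-energy bound while violating a pointwise set bound that a conventional robust design might assume.

```python
import numpy as np

# A transient "gust": large for 3 steps, zero afterwards (values hypothetical).
w = np.array([1.5, 1.2, 0.8] + [0.0] * 47)

energy = float(np.sum(w**2))   # total energy over the horizon
peak = float(np.max(np.abs(w)))

# Fits a total-energy model with wbar = 6, but violates a pointwise
# set bound |w_k| <= 1 assumed at all times.
assert energy <= 6.0
assert peak > 1.0
```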
4 Moving-Horizon H∞ Control
We now consider a strategy for implementing the H∞ control developed in Section 3 in a moving-horizon manner: At each time t, one solves (20) with the current system state x_t as the initial condition in (20d) for a feedback gain K_t and uses the control u_t = K_t x_t over one time step. This way, the feedback gain becomes adaptive to the system state and the control becomes nonlinear, possibly leading to improved performance while satisfying constraints. However, this simple implementation may fail to guarantee stability and disturbance attenuation, as discussed in [3, 4]. To recover a disturbance attenuation guarantee, following the strategy of [3, 4], we consider the following inequality:
(39) |
where P_t is associated with the feedback gain K_t that is used at time t and with an H∞ performance level of γ_t (i.e., the triple (K_t, P_t, γ_t) satisfies (14)), P_{t−1} is associated with K_{t−1} and γ_{t−1}, and β_t keeps track of a previous dissipation level and is defined recursively according to
(40) |
for t ≥ 1. Note that the definition of β_t in (40) only uses state and P-matrix information up to time t.
Lemma 3. Suppose (39) holds for all t ≥ 1, where β_t is defined recursively according to (40). Then, the following dissipation inequality will be satisfied for all t:
(41) |
where .
Proof. For each k, because the triple (K_k, P_k, γ_k) satisfies (14), similar to (17), we can derive the following inequality:
(42) |
Summing over k, we obtain
(43) | ||||
The definition (40) yields the following closed-form expression for β_t:
(44) |
Supposing (39) holds at t, we have
(45) |
Combining (43)–(45), we obtain
(46) |
We now consider the following moving-horizon approach to constrained H∞ control for the unknown system (1): At each time t, we solve the optimization problem (47) for designing the feedback gain K_t and use the control u_t = K_t x_t over one time step. In particular, the constraint (47f) is excluded at the initial time t = 0 and included for t ≥ 1. In (47), x_t denotes the measured current system state, P_{t−1} is from the previous time, β_t is defined according to (40), w̄_t represents a forecasted bound on the total energy of present and future disturbances, i.e., Σ_{k=t}^∞ ‖w_k‖² ≤ w̄_t, and the remaining scalar is a design parameter, of which a design method is informed by Lemma 4 and Theorem 2. For a moving-horizon control algorithm, recursive feasibility (i.e., the online optimization problem being feasible at a given time implies the problem being feasible again at the next time) is a highly desirable property. Before we discuss the closed-loop properties of the proposed algorithm, the following lemma provides a recursive feasibility result:
(47a) | ||||
s.t. | (47b) | |||
(47d) | ||||
(47f) |
where ★ indicates the transpose of the related element below the diagonal, and the constraint (47f) is excluded at t = 0 and included for t ≥ 1.
Lemma 4. Suppose (47) is feasible at time t and a tuple of decision variables denotes a solution. At time t + 1, if the forecasted disturbance energy bound satisfies w̄_{t+1} ≤ w̄_t − ‖w_t‖² and the design parameter is kept at its previous value, (47) will be feasible again. In particular, in this case, the time-t solution remains a solution to (47) at time t + 1.
Proof. First, we note that the constraints (47b), (47c), and (47e) with the design parameter unchanged do not change from t to t + 1. The solution satisfying (47b) and (47c) implies (14) with the associated P, K, and γ. In this case, similar to (17), the following inequality holds
(48) |
Meanwhile, satisfying the constraint (47d) at time t is equivalent to
(49) |
If w̄_{t+1} ≤ w̄_t − ‖w_t‖², then together with (48) and (49) we can obtain
(50) |
which implies that the time-t solution also satisfies the constraint (47d) at time t + 1. Last, for t = 0, the constraint (47f) is absent; for t ≥ 1, satisfying the constraint (47f) implies (39). In the latter case, similar to (44)–(45), we have
(51) |
At time t + 1, the constraint (47f) is equivalent to
(52) |
where the involved quantities are as defined above. Because of (51), it is clear that the time-t solution satisfies (52) (hence, (47f)).
Therefore, we have shown that if w̄_{t+1} ≤ w̄_t − ‖w_t‖² and the design parameter is unchanged, a solution at time t still satisfies all constraints of (47) at time t + 1, i.e., it remains a solution to (47) at t + 1. This proves the result.
Remark 3. Lemma 4 represents a sufficient condition for the online optimization problem (47) to be recursively feasible. The assumption w̄_{t+1} ≤ w̄_t − ‖w_t‖² is reasonable because w̄_t bounds the total energy of disturbance inputs from time t and w̄_{t+1} bounds that from t + 1 – they differ by the energy of the disturbance input at time t. Then, Lemma 4 yields a simple strategy for setting the design parameter – at each time t, it is set to its previous value. At time instants at which (47) is infeasible with this choice (due to errors in the forecasted disturbance energy bounds and violations of the above assumption), an alternative strategy that promotes feasibility is to include the parameter or its inverse as a decision variable optimized together with the other variables. In this case, (47) becomes a nonconvex problem with a single nonconvex scalar variable, which can be solved using a branch-and-bound type algorithm with branching on that variable [32].
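To make the receding-horizon mechanics concrete, here is a minimal Python skeleton of the loop around (47) (numpy assumed). The per-step LMI solve is replaced by a hypothetical stub `solve_gain_lmi` that returns a fixed hand-picked stabilizing gain, and the plant matrices are hypothetical; the sketch illustrates only the loop structure – applying u_t = K_t x_t for one step, then re-solving with the updated state and remaining disturbance energy budget – not the actual synthesis.

```python
import numpy as np

def solve_gain_lmi(x_t, energy_bound):
    """Placeholder for solving the LMI problem (47) at state x_t.

    A real implementation would call an LMI/SDP solver; here we return a
    fixed stabilizing gain purely to illustrate the receding-horizon loop.
    """
    return np.array([[-0.4, -0.5]])   # hypothetical feedback gain K_t

# Hypothetical plant (unknown to the controller; used only to simulate).
A = np.array([[1.0, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

x = np.array([1.0, -0.5])
wbar = 0.2                            # forecasted remaining disturbance energy
traj = [x.copy()]
for t in range(50):
    K = solve_gain_lmi(x, wbar)       # re-solve the design problem at each step
    u = K @ x                         # apply u_t = K_t x_t for one time step
    w = np.zeros(2)                   # no disturbance in this illustration
    x = A @ x + B @ u + w
    wbar = max(wbar - float(w @ w), 0.0)  # shrink the remaining energy budget
    traj.append(x.copy())

assert np.linalg.norm(traj[-1]) < 1e-2    # state converges under the stub gain
```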
We now analyze the closed-loop properties of the system under the proposed moving-horizon control approach. The properties are given in Theorem 2.
Theorem 2. Suppose
-
(a)
The offline data satisfy Assumption 1 and there exists a pair of matrices such that the inequality in (7) is strictly satisfied;
-
(b)
The online disturbance inputs satisfy Σ_{k=t}^∞ ‖w_k‖² ≤ w̄_t for all t, where w̄_t are the forecasted energy bounds used in (47);
- (c)
-
(d)
The solutions have a common lower bound on their objective values over all t.
Then, if we control the unknown system (1) using , with , at all , the closed-loop system has the following properties:
-
(i)
The system state converges to 0 as t → ∞;
-
(ii)
The following dissipation inequality is satisfied for all :
(53) where the gain is determined by the common lower bound in assumption (d), indicating that the ℓ₂-gain from disturbance w to output z is bounded accordingly;
-
(iii)
The constraints in (2) are satisfied at all times.
Furthermore, suppose (a) and (b) hold and
-
(e)
Problem (47) is feasible at the initial time t = 0, the energy bounds satisfy w̄_{t+1} ≤ w̄_t − ‖w_t‖² for all t, and the design parameters are kept at their previous values for all t ≥ 1;
-
(f)
The tuple of decision variables is a (not only feasible but also) optimal solution to (47) at each t.
Then, the following results hold:
-
(iv)
The conditions (c) and (d) are necessarily satisfied;
-
(v)
The solutions have non-decreasing objective values in t;
-
(vi)
The dissipation inequality (53) holds true with the gain achieved at the initial time t = 0.
Proof. We start with proving the inequality (53) in (ii). At each time t, the solution satisfies the constraints (47b) and (47c). Following similar steps as in the proof of Theorem 1, part (i), we can show that (14), with the associated P_t, K_t, and γ_t, holds. The constraint (47f) is equivalent to
(54) |
Because (47f) (hence, (54)) is satisfied at each t ≥ 1, we have (39) for all t ≥ 1. In this case, according to Lemma 3, (41) holds for all t. Meanwhile, assumption (d) implies a uniform bound on the performance levels for all t. The combination of (41) and this uniform bound leads to (53). This proves part (ii). Now, because (53) holds for all t, letting t → ∞, we obtain
(55) |
Note that under assumption (b), the right-hand side of (55) is bounded, and hence the series on the left-hand side converges according to the monotone convergence theorem. Then, (55) implies z_k → 0 as k → ∞, which, by detectability, implies x_k → 0. This proves part (i). For part (iii), because the solution satisfies (47d) and (47e), following similar steps as in the proof of Theorem 1, part (ii), it can be shown that the state remains in the invariant ellipsoid (due to (47d)) and this implies satisfaction of the constraints (2) for all t (due to (47e)).
Now suppose (a), (b), (e), and (f) hold. According to Lemma 4, when (47) is feasible at t with w̄_{t+1} ≤ w̄_t − ‖w_t‖² and the design parameter unchanged, an optimal solution to (47) at t remains a feasible solution to (47) at t + 1. In this case, (47) is feasible at t + 1 and an optimal solution has an objective value at least as large as that of the feasible solution carried over from time t. Using the same argument recursively, we can conclude that (47) is feasible at all t and the objective values are non-decreasing in t. The latter also implies that the initial objective value serves as a common lower bound, i.e., (d) is satisfied. Accordingly, (53) holds with the gain achieved at t = 0. This completes the proofs of parts (iv), (v), and (vi).
We make the following remark about Theorem 2:
Remark 4. Theorem 2 shows that our objectives of closed-loop stability, disturbance attenuation, and constraint enforcement stated in Section 2 are fulfilled by the proposed data-driven moving-horizon control based on (47). In particular, part (i) shows that the system state converges to zero asymptotically for any disturbance input signal that has bounded total energy. Parts (v) and (vi) show that, under assumption (e) (which is reasonable as discussed in Remark 3), the moving-horizon approach based on (47) achieves a disturbance attenuation performance at least as good as that achieved by a constant linear-feedback designed based on (20). In particular, the moving-horizon approach based on (47) is able to dynamically adapt the feedback gain to measured/estimated system state and forecasted disturbance energy bound and hence has the potential to achieve improved disturbance attenuation performance while satisfying constraints. We will demonstrate this improvement with a numerical example in the following section.
5 Numerical Example
In this section, we use a numerical example to illustrate our proposed control approach. Consider a system in the form of (1) with the following parameters:
(56a) | ||||
(56b) | ||||
(56c) |
The parameters in (56b) mean that we want to minimize the effect of the disturbance input on the first state, and the parameters in (56c) mean that the second state and the control input should satisfy prescribed bound constraints. Assume (A, B) is unknown. We simulate the system and collect T data points (x_i, u_i, x_i⁺) with a random disturbance input that satisfies a known norm bound ‖w_i‖ ≤ ε. Then, we construct the matrices in Assumption 1 as Φ₁₁ = Tε²I, Φ₁₂ = 0, and Φ₂₂ = −I.
We implement three data-driven approaches using the same set of collected data to control the system: 1) the data-driven H∞ control approach of [27] (which does not handle constraints), 2) the constrained H∞ control based on (20) (without moving-horizon implementation), and 3) the moving-horizon H∞ control based on (47).
We consider the initial condition . For (20) and (47), to illustrate the results of Theorems 1 and 2, we use the parameters , , and for all .
The system is stabilized and the state converges to the origin under each of the three approaches. However, as shown in Fig. 1, the control using the approach of [27] violates the constraint on the second state. This is expected because the approach of [27] does not consider any constraints. In contrast, (20) and (47) both satisfy the constraints, illustrating the effectiveness of our proposed approach for handling constraints. We then compare the time history of the performance level γ_t using the constrained H∞ control based on (20) (without moving-horizon implementation) versus that using the moving-horizon control based on (47) in Fig. 2. Note that a smaller γ_t indicates a stronger attenuation of the effect of disturbance input w on performance output z. It can be seen that both approaches start with a large γ_t. This is because the control has to sacrifice some disturbance attenuation performance to satisfy the constraints on the state and the input. Because the approach of (20) uses a fixed gain, the disturbance attenuation performance level remains constant over time. In contrast, as the state and the control move farther away from the constraint boundaries, the moving-horizon approach based on (47) adjusts the gain and achieves a lower γ_t, illustrating the effectiveness of our proposed moving-horizon approach for improving performance while satisfying constraints.
In this example, the optimization problem (47) is feasible at every time step, which is consistent with the recursive feasibility result of Lemma 4 and Theorem 2. The average computation time per step for solving (47) using the MATLAB-based LMI solver mincx on a MacBook Air (M1 CPU, 8 GB RAM) is 12.5 ms, indicating the computational feasibility of the approach.
6 Conclusions
In this paper, we proposed a novel data-driven moving-horizon H∞ control approach for constrained systems. The approach optimizes H∞-type disturbance rejection while satisfying constraints. We established theoretical guarantees of the approach regarding closed-loop stability, disturbance attenuation, constraint satisfaction under noisy offline data, and recursive feasibility of the online problem. The effectiveness of the approach has been illustrated with a numerical example. Future work includes applying the proposed approach to practical control engineering problems.
References
- [1] K. Zhou, J. C. Doyle, and K. Glover, Robust and optimal control. USA: Prentice Hall, 1996.
- [2] J. M. Maciejowski, Predictive control: with constraints. USA: Prentice Hall, 2002.
- [3] H. Chen and C. Scherer, “Disturbance attenuation with actuator constraints by moving horizon control,” IFAC Proceedings, vol. 37, no. 1, pp. 415–420, 2004.
- [4] H. Chen and C. W. Scherer, “Moving horizon control with performance adaptation for constrained linear systems,” Automatica, vol. 42, no. 6, pp. 1033–1040, 2006.
- [5] S.-M. Lee and J. H. Park, “Robust model predictive control for uncertain systems using relaxation matrices,” International Journal of Control, vol. 81, no. 4, pp. 641–650, 2008.
- [6] P. E. Orukpe, “Towards a less conservative model predictive control based on a mixed H2/H∞ control approach,” International Journal of Control, vol. 84, no. 5, pp. 998–1007, 2011.
- [7] H. Huang, D. Li, and Y. Xi, “Mixed H2/H∞ robust model predictive control with saturated inputs,” International Journal of Systems Science, vol. 45, no. 12, pp. 2565–2575, 2014.
- [8] M. Benallouch, G. Schutz, D. Fiorelli, and M. Boutayeb, “H∞ model predictive control for discrete-time switched linear systems with application to drinking water supply network,” Journal of Process Control, vol. 24, no. 6, pp. 924–938, 2014.
- [9] Y. Song, X. Fang, and Q. Diao, “Mixed H2/H∞ distributed robust model predictive control for polytopic uncertain systems subject to actuator saturation and missing measurements,” International Journal of Systems Science, vol. 47, no. 4, pp. 777–790, 2016.
- [10] Y. Song, Z. Wang, D. Ding, and G. Wei, “Robust H∞ model predictive control for linear systems with polytopic uncertainties under weighted MEF-TOD protocol,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, pp. 1470–1481, 2017.
- [11] Y. Zhang, C.-C. Lim, and F. Liu, “Robust mixed H2/H∞ model predictive control for Markov jump systems with partially uncertain transition probabilities,” Journal of the Franklin Institute, vol. 355, no. 8, pp. 3423–3437, 2018.
- [12] A. Shokrollahi and S. Shamaghdari, “Robust H∞ model predictive control for constrained Lipschitz non-linear systems,” Journal of Process Control, vol. 104, pp. 101–111, 2021.
- [13] F. L. Lewis, D. Vrabie, and K. G. Vamvoudakis, “Reinforcement learning and feedback control: Using natural decision methods to design optimal adaptive controllers,” IEEE Control Systems Magazine, vol. 32, no. 6, pp. 76–105, 2012.
- [14] B. Recht, “A tour of reinforcement learning: The view from continuous control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, pp. 253–279, 2019.
- [15] U. Rosolia and F. Borrelli, “Learning model predictive control for iterative tasks. A data-driven control framework,” IEEE Transactions on Automatic Control, vol. 63, no. 7, pp. 1883–1896, 2017.
- [16] L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger, “Learning-based model predictive control: Toward safe learning in control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 269–296, 2020.
- [17] J. Coulson, J. Lygeros, and F. Dörfler, “Data-enabled predictive control: In the shallows of the DeePC,” in 18th European Control Conference, pp. 307–312, IEEE, 2019.
- [18] E. Elokda, J. Coulson, P. N. Beuchat, J. Lygeros, and F. Dörfler, “Data-enabled predictive control for quadcopters,” International Journal of Robust and Nonlinear Control, vol. 31, no. 18, pp. 8916–8936, 2021.
- [19] L. Huang, J. Coulson, J. Lygeros, and F. Dörfler, “Decentralized data-enabled predictive control for power system oscillation damping,” IEEE Transactions on Control Systems Technology, vol. 30, no. 3, pp. 1065–1077, 2021.
- [20] V. Chinde, Y. Lin, and M. J. Ellis, “Data-enabled predictive control for building HVAC systems,” Journal of Dynamic Systems, Measurement, and Control, vol. 144, no. 8, p. 081001, 2022.
- [21] N. Li, E. Taheri, I. Kolmanovsky, and D. Filev, “Minimum-time trajectory optimization with data-based models: A linear programming approach,” arXiv preprint arXiv:2312.05724, 2023.
- [22] S. Baros, C.-Y. Chang, G. E. Colon-Reyes, and A. Bernstein, “Online data-enabled predictive control,” Automatica, vol. 138, p. 109926, 2022.
- [23] L. Dai, T. Huang, R. Gao, Y. Zhang, and Y. Xia, “Cloud-based computational data-enabled predictive control,” IEEE Internet of Things Journal, vol. 9, no. 24, pp. 24949–24962, 2022.
- [24] J. Berberich, J. Köhler, M. A. Müller, and F. Allgöwer, “Data-driven model predictive control with stability and robustness guarantees,” IEEE Transactions on Automatic Control, vol. 66, no. 4, pp. 1702–1717, 2020.
- [25] J. Coulson, J. Lygeros, and F. Dörfler, “Distributionally robust chance constrained data-enabled predictive control,” IEEE Transactions on Automatic Control, vol. 67, no. 7, pp. 3289–3304, 2021.
- [26] L. Huang, J. Zhen, J. Lygeros, and F. Dörfler, “Robust data-enabled predictive control: Tractable formulations and performance guarantees,” IEEE Transactions on Automatic Control, vol. 68, no. 5, pp. 3163–3170, 2023.
- [27] H. J. van Waarde, M. K. Camlibel, and M. Mesbahi, “From noisy data to feedback controllers: Nonconservative design via a matrix S-lemma,” IEEE Transactions on Automatic Control, vol. 67, no. 1, pp. 162–175, 2020.
- [28] K. J. Åström, U. Borisson, L. Ljung, and B. Wittenmark, “Theory and applications of self-tuning regulators,” Automatica, vol. 13, no. 5, pp. 457–476, 1977.
- [29] D. A. Bristow, M. Tharayil, and A. G. Alleyne, “A survey of iterative learning control,” IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, 2006.
- [30] Z.-S. Hou and Z. Wang, “From model-based control to data-driven control: Survey, classification and perspective,” Information Sciences, vol. 235, pp. 3–35, 2013.
- [31] C. E. de Souza and L. Xie, “On the discrete-time bounded real lemma with application in the characterization of static state feedback controllers,” Systems & Control Letters, vol. 18, no. 1, pp. 61–71, 1992.
- [32] H. Tuy, “On nonconvex optimization problems with separated nonconvex variables,” Journal of Global Optimization, vol. 2, pp. 133–144, 1992.