Stochastic Model Predictive Control for Linear Systems with Unbounded Additive Uncertainties
Abstract
This paper presents two stochastic model predictive control methods for linear time-invariant systems subject to unbounded additive uncertainties. The new methods are developed by reformulating the chance constraints into a deterministic form, which is treated in analogy with robust constraints, by using the probabilistic reachable set. Firstly, the probabilistically resolvable time-varying tube-based stochastic model predictive control algorithm is designed by employing the time-varying probabilistic reachable sets as the tubes. Secondly, by utilizing the probabilistic positively invariant set, the probabilistically resolvable constant tube-based stochastic model predictive control algorithm is developed by employing constantly tightened constraints over the entire prediction horizon. In addition, to enhance the feasibility of the algorithms, soft constraints are imposed on the state initialization. Algorithm feasibility and closed-loop stability results are provided. The efficacy of the approaches is demonstrated by means of numerical simulations.
Index Terms:
Tube-based stochastic MPC, uncertain system, model predictive control, chance constraints, constrained control.
I Introduction
Model predictive control (MPC), also known as receding horizon control, is arguably the most widely implemented modern control strategy due to its conceptual simplicity, its eminent capability of handling complex dynamics and fulfilling system constraints, and its ability to trade off optimal control performance against computational load [1, 2, 3, 4, 5]. The control performance depends, to a great extent, on the accuracy of the model. Therefore, traditional MPC cannot attain truly optimal control performance, because uncertainties such as external disturbances and model mismatch are ubiquitous. By assuming that the uncertainty is bounded and considering the worst case, robust MPC has been proposed to deal with model uncertainty [6, 7, 8]. Among robust MPC methods, tube-based MPC has attracted considerable attention due to its simple handling of constraints in the optimal control problem. In 2001, two typical tube-based MPC approaches were developed almost simultaneously. One of them uses the current measured state as the initial nominal state [9], while the other one adopts the nominal state predicted at the first time instant [10]. Based on [10], David Mayne's tube-based MPC (DTMPC) [11] with a flexible initialization method was presented to reduce conservatism, where the nominal state can be searched for near the actual state within an invariant set. However, none of these robust MPC frameworks utilizes the possibly available statistical properties of the uncertainty, although such statistical information exists in many cases and worst-case scenarios are unlikely to occur in practice. Therefore, robust approaches may be overly conservative [12]. By employing chance constraints, stochastic MPC (SMPC), which allows for an admissible level of constraint violation in a probabilistic sense, exploits the stochastic properties of the uncertainty to achieve less conservative constraint satisfaction and better control performance [13]. Generally, SMPC methods can be classified into analytic approximation methods [14, 15, 16] and scenario-based methods [17, 18, 19]. The former reformulate the chance constraints into an equivalent deterministic form by means of constraint tightening, while the latter generate a sufficient number of random samples to satisfy the chance constraints [20].
In the case of bounded uncertainty, recursive feasibility and stability have been established in several SMPC methods. For instance, in [21] the authors propose an SMPC scheme with time-varying constraint tightening along the prediction horizon, based on tubes of fixed-shape cross section and variable scaling computed offline according to the admissible violation probability. The tubes in [22] are constructed directly by making explicit use of the distribution of the additive disturbance. A recursively feasible SMPC scheme is proposed in [23] by imposing additional constraints on the first predicted step. The work [24] guarantees recursive feasibility and asymptotic stability by computing the constraint tightening based on confidence regions of the uncertainty propagation and adopting the flexible initialization method. When considering unbounded stochastic uncertainty, it is typically impossible to guarantee recursive feasibility, since the uncertainty may (even with small probability) be large enough to render the future optimization problem infeasible [13]. One straightforward approach for guaranteeing recursive feasibility of SMPC problems with possibly unbounded uncertainties is to set the initial nominal state to the value predicted at the previous time instant. However, this disregards the most recent state measurement and does not allow for feedback, which may degrade the closed-loop performance [20]. To cope with this issue, an improved method that chooses online between a closed-loop and an open-loop initialization strategy is proposed [25, 26]. The key idea in this approach is to choose the closed-loop strategy using the measurement when the problem is feasible, and the open-loop strategy when it is infeasible. Although this approach guarantees recursive feasibility, it requires solving two optimization problems at each time instant to decide between the two initialization strategies. Another recent work [27] sets the initial nominal state equal to the value predicted at the first time instant and introduces indirect feedback via the cost function, which facilitates the recursive feasibility analysis. A special approach guarantees recursive feasibility for systems with unbounded disturbances in the case where the chance constraint is defined as a discounted sum of violation probabilities on an infinite horizon [28].
However, regardless of the different assumptions on the uncertainty, the nominal state initializations of the aforementioned works are mainly chosen to be the current measured state as in typical tube-based MPC [9], which may cause the open-loop nominal state to be an inaccurate estimate of the actual closed-loop one and require more complex proofs of recursive feasibility and stability [29]. Other works set the nominal state initialization to the value predicted at the previous time instant, which disregards the feedback of the measured state and may degrade the control performance.
In this paper, the model predictive control problem of linear time-invariant (LTI) systems under unbounded additive uncertainties is studied. To solve the problem, we propose two tube-based SMPC schemes based on the framework of the DTMPC [11], which fall into the category of analytic approximation methods. The proposed tube-based SMPC schemes can be combined with the probabilistic reachable set (PRS) generated by any method to deal with the chance constraints. The novelty of the proposed method lies in the flexible state initialization technique, which is used to reduce the conservatism and to force the actual state to stay within the relevant PRS around the nominal state with a specified probability. To the best of our knowledge, this framework has not been adopted in previous tube-based SMPC approaches for unbounded uncertainties. The main contributions of this paper are as follows:
• A new SMPC algorithm for the LTI system subject to unbounded uncertainties, termed the probabilistically resolvable time-varying tube-based SMPC (pTTSMPC), is developed. In particular, the PRS built from the uncertainty propagation is treated as the tube, by which the actual states are restricted to a vicinity of the nominal states. The PRS is time-varying along the prediction horizon and makes the tightened constraints change accordingly, resulting in a less conservative tightening strategy. With these techniques, the pTTSMPC algorithm is designed based on the framework of the DTMPC [11], which provides accurate prediction errors between the actual and nominal states and offers an easier way to guarantee the feasibility and stability properties [29].
• The theoretical properties of the developed algorithms are analyzed. In particular, the probabilistic resolvability, closed-loop chance constraint satisfaction and system stability are proved. Furthermore, a soft constraint is applied to the initial state, which strengthens the probabilistic resolvability to strong feasibility of the algorithms.
The remainder of this paper is organized as follows. Section II introduces the problem to be solved and states the required definitions. Section III presents the pTTSMPC scheme, and Section IV develops the probabilistically resolvable constant tube-based SMPC (pCTSMPC) approach. In Section V, the numerical examples and comparison studies are presented, and the paper is concluded in Section VI.
Notations: $\mathbb{R}$ represents the set of all real numbers. We denote the set of all integers that are equal to or greater than $a$ by $\mathbb{Z}_{\geq a}$, and the set of all consecutive integers from $a$ to $b$ by $\mathbb{Z}_{[a,b]}$. $x_k$ denotes the value of a variable $x$ at time $k$, and $x_{i|k}$ denotes the $i$-step-ahead predicted value of $x$ at time $k$. The probability of the occurrence of event $A$ is indicated with $\Pr\{A\}$. $P \succ 0$ refers to a positive definite matrix $P$, and $\|x\|_P^2 = x^\top P x$ represents the weighted norm. For a random variable $x$, its expected value is indicated with $\mathbb{E}[x]$. A random variable $x$ with distribution $\mathcal{Q}$ is denoted by $x \sim \mathcal{Q}$, and a Gaussian distribution with mean $\mu$ and variance $\Sigma$ is represented as $\mathcal{N}(\mu, \Sigma)$. The notation $\oplus$ denotes the Minkowski sum, and $\ominus$ represents the Pontryagin set difference.
II Problem Statement and Preliminaries
II-A Problem Statement
Consider a discrete-time LTI system subject to unbounded additive uncertainties
$x_{k+1} = A x_k + B u_k + w_k,$   (1)
where $x_k$, $u_k$ and $w_k$ are the system state, control input and uncertainty at time $k$, respectively. It is assumed that all the system states are measured perfectly, and the unbounded uncertainty $w_k$ is independently and identically distributed (i.i.d.). Moreover, the system in (1) is subject to the following constraints on states and inputs
$x_k \in \mathbb{X}, \quad u_k \in \mathbb{U},$   (2)
where the sets $\mathbb{X}$ and $\mathbb{U}$ are compact and each contains the origin in its interior.
Taking advantage of the stochastic nature of the system dynamics, chance constraints on all the predicted states and inputs over the future horizon, which allow constraint violation with a probability no greater than $1-p$, are adopted to reduce the conservatism. The objective of this paper is to design SMPC algorithms to stabilize the system in (1) under the following chance constraints:
$\Pr\{x_k \in \mathbb{X}\} \geq p,$   (3a)
$\Pr\{u_k \in \mathbb{U}\} \geq p,$   (3b)
where $p \in (0,1)$ is the prescribed probability level.
II-B Preliminaries
In this subsection, we recall some definitions which will facilitate algorithms design.
Definition 1 (Robust Positively Invariant Set [30]).
A set is said to be a robust positively invariant (RPI) set for an uncertain discrete-time system if, for every admissible uncertainty realization, the state remains in the set at the next time step whenever the current state lies in the set.
Definition 2 (Probabilistic Positively Invariant Set [31]).
A set is said to be a probabilistic positively invariant (PPI) set of probability level $p$ for a stochastic discrete-time system if, whenever the current state lies in the set, the state at the next time step remains in the set with a probability no smaller than $p$.
Definition 3 (Confidence Region [32]).
A set $\mathbb{A}$ is said to be a confidence region of probability level $p$ for a random variable $a$ if $\Pr\{a \in \mathbb{A}\} \geq p$.
Definition 4 (-step Probabilistic Reachable Set [26, 27]).
A set $\mathcal{R}_i$ is an $i$-step probabilistic reachable set ($i$-step PRS) of probability level $p$ for a stochastic discrete-time system if $\Pr\{x_i \in \mathcal{R}_i\} \geq p$ when the system is initialized from the origin.
Definition 5 (Probabilistic Resolvability [33]).
An optimization problem is probabilistically resolvable if it has feasible solutions at future time steps with a certain probability, given a feasible solution at the current time.
Definition 6 (Almost Sure Asymptotic Stability in the Mean [34]).
A discrete-time system is almost surely asymptotically stable in the mean if $\lim_{k\to\infty}\mathbb{E}\big[\|x_k\|\big] = 0$ holds almost surely.
III Probabilistic Time-varying Tube-based SMPC
III-A Problem Decoupling
In order to derive a computationally tractable MPC algorithm, we make use of the techniques from DTMPC [11] to decouple the nominal trajectory planning and the uncertainty handling. Then, the system dynamics in (1) can be separated into a nominal dynamics and an error dynamics as
$z_{k+1} = A z_k + B v_k,$   (4a)
$e_{k+1} = A_K e_k + w_k,$   (4b)
where the nominal state is denoted by $z_k$, the nominal input by $v_k$, and $A_K = A + BK$. Without loss of generality, the feedback gain $K$ is chosen as the solution of the linear quadratic regulator (LQR) problem for the dynamics $(A, B)$ to ensure that $A_K$ is Schur stable. The state feedback control law is used as in DTMPC
$u_k = v_k + K(x_k - z_k).$   (5)
The state error $e_k$ between the actual state $x_k$ and the nominal state $z_k$, and the input error $e^u_k$ between the control input $u_k$ and the nominal input $v_k$ are defined as
$e_k = x_k - z_k,$   (6a)
$e^u_k = u_k - v_k = K e_k.$   (6b)
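To make the decomposition concrete, the following sketch computes a feedback gain $K$ of the form used in (5) via a discrete-time LQR design and checks that $A_K = A + BK$ is Schur stable. The matrices are illustrative placeholders, not the example system used in Section V.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR gain for the decomposition u_k = v_k + K (x_k - z_k),
    so that A_K = A + B K is Schur stable."""
    P = solve_discrete_are(A, B, Q, R)                      # Riccati solution
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)      # sign chosen for u = v + K e
    return K, P

# Illustrative 2-state / 1-input system (placeholder values, not the paper's example).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K, P = lqr_gain(A, B, Q, R)
A_K = A + B @ K
assert np.max(np.abs(np.linalg.eigvals(A_K))) < 1.0         # Schur stability check
```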
III-B Probabilistic Reachable Set
Suppose that $\mathbb{W}$ is a confidence region of probability level $p$ for the uncertainty $w_k$, where $p \in (0,1)$ is the prescribed probability level in (3). Then
$\Pr\{w_k \in \mathbb{W}\} \geq p$   (7)
follows from Definition 3. The methods for formulating the confidence region of the uncertainty, such as the Chebyshev inequality method [15], the scenario generation method [20], the box-shaped method and the ellipsoidal method [32], can be incorporated into the proposed tube-based SMPC scheme.
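As one hedged illustration of such a confidence region, the sketch below builds an ellipsoidal region from the multivariate Chebyshev inequality, which only requires the mean and variance of the uncertainty; the moments used here are placeholders.

```python
import numpy as np

def chebyshev_confidence_ellipsoid(mu, Sigma, p):
    """Ellipsoidal confidence region of probability level p via the
    multivariate Chebyshev inequality:
        Pr{(w - mu)^T Sigma^{-1} (w - mu) >= t} <= n / t.
    Returns (center, shape matrix, squared radius) describing
        {w : (w - mu)^T Sigma^{-1} (w - mu) <= r2}."""
    n = len(mu)
    r2 = n / (1.0 - p)          # choosing t = n / (1 - p) gives coverage >= p
    return mu, Sigma, r2

# Example with placeholder moments for the disturbance w_k.
mu = np.zeros(2)
Sigma = 0.01 * np.eye(2)
center, shape, r2 = chebyshev_confidence_ellipsoid(mu, Sigma, p=0.8)
```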
Suppose that the confidence region $\mathcal{R}_i$ is an $i$-step PRS of probability level $p$ for the state error dynamics in (4b), where $i \in \mathbb{Z}_{\geq 0}$. Then,
$\Pr\{e_i \in \mathcal{R}_i\} \geq p$   (8)
follows from Definition 4 directly. The $i$-step PRS of probability level $p$ for the input error can be calculated as $K\mathcal{R}_i$ due to $e^u_k = K e_k$. Then
$\Pr\{e^u_i \in K\mathcal{R}_i\} \geq p$   (9)
is derived.
Any method to compute nested PRS, namely $\mathcal{R}_i \subseteq \mathcal{R}_{i+1}$, $i \in \mathbb{Z}_{\geq 0}$, can be combined with the proposed tube-based SMPC scheme. For instance, the ellipsoidal method and the half-space method [27] can be adopted to produce the PRS for uncertainties with known mean and variance. For an uncertainty of any type, provided that samples can be obtained, the scenario approach [35] can be used to formulate the PRS. The $i$-step PRS formulated in this way are nested along the prediction horizon [32].
In particular, consider the case where the uncertainty follows the normal distribution $w_k \sim \mathcal{N}(0, \Sigma_w)$. Let $p$ be the confidence coefficient of the uncertainty, where $p \in (0,1)$. The quantile value $\gamma_p$ corresponding to $p$ is obtained from the quantile function of the standard normal distribution [16]. The uncertainty confidence region of probability level $p$ can then be obtained as
$\mathbb{W} = \{w : w^\top \Sigma_w^{-1} w \leq \gamma_p^2\}.$   (10)
Since the error dynamics in (4b) is linear, the distribution of $e_k$ is still normal. Then, the variance propagation of the uncertainty evolves as
$\Sigma_{e,i+1} = A_K \Sigma_{e,i} A_K^\top + \Sigma_w,$   (11)
where $\Sigma_{e,0} = 0$. In addition, $\Sigma_{e,i} = \sum_{j=0}^{i-1} A_K^j \Sigma_w (A_K^j)^\top$ follows from the expanded representation of the above equation. The $i$-step PRS of probability level $p$ can be defined as
$\mathcal{R}_i = \{e : e^\top \Sigma_{e,i}^{-1} e \leq \gamma_p^2\},$   (12)
which is the error confidence region of probability level $p$, and $e_i \sim \mathcal{N}(0, \Sigma_{e,i})$. Allowing for $\Sigma_{e,i} \preceq \Sigma_{e,i+1}$, the corresponding probabilistic reachable sets are nested, namely, $\mathcal{R}_i \subseteq \mathcal{R}_{i+1}$. Another property of the $i$-step PRS is given in the following lemma.
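The following sketch implements the covariance propagation in (11) and an ellipsoidal $i$-step PRS in the spirit of (12) for a zero-mean Gaussian uncertainty. Note that it uses the chi-square quantile for the ellipsoid radius, which is a common multivariate choice and an assumption here; the scalar standard-normal quantile mentioned above is an alternative.

```python
import numpy as np
from scipy.stats import chi2

def gaussian_prs(A_K, Sigma_w, horizon, p):
    """Covariance propagation (11) and ellipsoidal i-step PRS in the spirit of (12)
    for e_{k+1} = A_K e_k + w_k with w_k ~ N(0, Sigma_w).

    The squared ellipsoid radius is taken from the chi-square quantile
    (an assumption; the paper mentions the standard-normal quantile)."""
    n = Sigma_w.shape[0]
    r2 = chi2.ppf(p, df=n)                      # squared ellipsoid radius
    Sigma = np.zeros_like(Sigma_w)              # Sigma_{e,0} = 0
    covariances = []
    for _ in range(horizon + 1):
        covariances.append(Sigma.copy())
        Sigma = A_K @ Sigma @ A_K.T + Sigma_w   # variance propagation (11)
    # i-step PRS: R_i = {e : e^T Sigma_{e,i}^{-1} e <= r2}
    return covariances, r2
```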
Lemma 1.
III-C Constraint Handling
Because $\mathcal{R}_i \subseteq \mathcal{R}_{i+1}$, $i \in \mathbb{Z}_{\geq 0}$, the nested inclusion along the prediction horizon follows by recursive expansion. We define the relaxed PRS $\bar{\mathcal{R}}_i$ as
(13)
Then, $\mathcal{R}_i \subseteq \bar{\mathcal{R}}_i$ follows directly. And
$\Pr\{e_i \in \bar{\mathcal{R}}_i\} \geq p$   (14)
can be gained from (8), and
$\Pr\{e^u_i \in K\bar{\mathcal{R}}_i\} \geq p$   (15)
can be inferred from (9).
To guarantee the chance constraints, we propose a strategy that formulates the chance constraint sets as time-varying deterministic constraints by using the relaxed PRS in (13). Construct the time-varying tightened state constraint set at step $i$ as
$\mathbb{X}_i = \mathbb{X} \ominus \bar{\mathcal{R}}_i.$   (16)
Then, $\mathbb{X}_{i+1} \subseteq \mathbb{X}_i$ holds owing to $\bar{\mathcal{R}}_i \subseteq \bar{\mathcal{R}}_{i+1}$. It can be seen that, if the nominal state $z_{i|k} \in \mathbb{X}_i$, then the state chance constraint in (3a) is satisfied, i.e., $\Pr\{x_{i|k} \in \mathbb{X}\} \geq p$ can be guaranteed by (14).
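As a minimal illustration of the tightening in (16), the sketch below performs the Pontryagin difference for the special case of axis-aligned boxes; general polytopic sets would require a set-computation tool such as the MPT3 toolbox used in Section V. All numerical values are placeholders.

```python
import numpy as np

def tighten_box(x_halfwidth, tube_halfwidth):
    """Pontryagin difference of two origin-centered axis-aligned boxes:
    {|x| <= x_halfwidth} ominus {|e| <= tube_halfwidth}
      = {|x| <= x_halfwidth - tube_halfwidth}."""
    tightened = x_halfwidth - tube_halfwidth
    if np.any(tightened < 0):
        raise ValueError("Tube larger than the constraint set: tightening infeasible.")
    return tightened

# Time-varying tightening X_i = X ominus R_i for growing box tubes (illustrative numbers).
x_halfwidth = np.array([2.0, 3.0])                 # |x_1| <= 2, |x_2| <= 3
tube_halfwidths = [np.array([0.0, 0.0]),           # R_0 = {0}
                   np.array([0.1, 0.2]),           # R_1
                   np.array([0.15, 0.3])]          # R_2
X_tightened = [tighten_box(x_halfwidth, r) for r in tube_halfwidths]
```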
III-D Formulation of pTTSMPC
For the cost function, we adopt the stage cost $\ell(z_{i|k}, v_{i|k}) = \|z_{i|k}\|_Q^2 + \|v_{i|k}\|_R^2$ and the terminal cost $V_f(z_{N|k}) = \|z_{N|k}\|_P^2$, where $Q \succ 0$, $R \succ 0$, and $P \succ 0$ is the solution of the algebraic Lyapunov equation $A_K^\top P A_K - P = -(Q + K^\top R K)$.
To ensure feasibility, we construct the terminal constraint set $\mathbb{X}_f$ as the maximal output admissible set [37]
(19)
In this case, $\mathbb{X}_f$ satisfies the axioms [38]:
• A1: $\mathbb{X}_f$ is positively invariant for the nominal closed-loop dynamics $z_{k+1} = A_K z_k$, and the tightened state and input constraints are satisfied within it.
• A2: the terminal cost decreases along the nominal closed-loop dynamics, i.e., $V_f(A_K z) - V_f(z) \leq -\ell(z, Kz)$ for all $z \in \mathbb{X}_f$.
On the basis of the time-varying constraint tightening method and the DTMPC framework in [11], the probabilistic time-varying tube-based stochastic finite horizon optimal control problem $\mathcal{P}(x_k)$ to be solved at each time instant $k$ is formulated as follows:
$\min_{z_{0|k},\, v_{0|k},\ldots,v_{N-1|k}} \ \sum_{i=0}^{N-1}\big(\|z_{i|k}\|_Q^2 + \|v_{i|k}\|_R^2\big) + \|z_{N|k}\|_P^2$   (20a)
$\text{s.t.}\quad z_{i+1|k} = A z_{i|k} + B v_{i|k},$   (20b)
$z_{i|k} \in \mathbb{X}_i,$   (20c)
$v_{i|k} \in \mathbb{U} \ominus K\bar{\mathcal{R}}_i,$   (20d)
$x_k \in z_{0|k} \oplus \bar{\mathcal{R}}_0,$   (20e)
$z_{N|k} \in \mathbb{X}_f, \qquad i \in \mathbb{Z}_{[0,N-1]}.$   (20f)
The optimal solution of $\mathcal{P}(x_k)$ consists of the nominal initial state $z^*_{0|k}$ and input sequence $\mathbf{v}^*_k = \{v^*_{0|k}, \ldots, v^*_{N-1|k}\}$, and the associated optimal state sequence for the nominal system is $\mathbf{z}^*_k = \{z^*_{0|k}, \ldots, z^*_{N|k}\}$.
At each time instant, $\mathcal{P}(x_k)$ is solved to generate $z^*_{0|k}$ and $v^*_{0|k}$, and the optimal control law is designed as:
$u_k = v^*_{0|k} + K(x_k - z^*_{0|k}).$   (21)
The entire process of the probabilistic time-varying tube-based SMPC is repeated for all $k \in \mathbb{Z}_{\geq 0}$ to yield a receding horizon control strategy.
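A possible numerical realization of the problem in (20) is sketched below with CVXPY for the special case of box constraints and box-shaped tubes. The terminal constraint (20f) is replaced here by the most tightened state box as a crude surrogate for $\mathbb{X}_f$, and all names and shapes are assumptions rather than the paper's implementation (which uses the MPT3 toolbox).

```python
import numpy as np
import cvxpy as cp

def solve_pttsmpc_ocp(x_meas, A, B, K, Q, R, P, N,
                      x_halfwidth, u_halfwidth, tube_x, tube_u, tube_init):
    """One instance of the finite-horizon problem (20), specialized to box
    constraints and box tubes.

    tube_x (N+1 entries), tube_u (N entries): half-widths of the state/input
    tubes at each prediction step; tube_init: half-width of the flexible
    initialization set in (20e). All are placeholder shapes."""
    n, m = B.shape
    z = cp.Variable((n, N + 1))                  # nominal states z_{i|k}
    v = cp.Variable((m, N))                      # nominal inputs v_{i|k}
    cost = cp.quad_form(z[:, N], P)              # terminal cost
    cons = [cp.abs(x_meas - z[:, 0]) <= tube_init]            # (20e) flexible initialization
    for i in range(N):
        cost += cp.quad_form(z[:, i], Q) + cp.quad_form(v[:, i], R)
        cons += [z[:, i + 1] == A @ z[:, i] + B @ v[:, i],    # (20b) nominal dynamics
                 cp.abs(z[:, i]) <= x_halfwidth - tube_x[i],  # (20c) tightened states
                 cp.abs(v[:, i]) <= u_halfwidth - tube_u[i]]  # (20d) tightened inputs
    cons += [cp.abs(z[:, N]) <= x_halfwidth - tube_x[N]]      # surrogate for (20f)
    cp.Problem(cp.Minimize(cost), cons).solve()
    # Tube-MPC control law (21)
    return v.value[:, 0] + K @ (x_meas - z.value[:, 0])
```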
III-E Properties of pTTSMPC
In this section, the feasibility of the designed pTTSMPC scheme is first presented, then the closed-loop chance constraint satisfaction is verified, and the stability results of the closed-loop system are also provided. Before that, a lemma for developing the feasibility result is presented.
Lemma 2.
Let $\mathcal{R}_i$ be the $i$-step PRS of probability level $p$ for the state error dynamics in (4b). If the state error lies in the PRS associated with the initialization at time $k$, then, with a probability no less than $p$, the state error at the subsequent time steps lies in the corresponding PRS.
Proof:
Based on Lemma 2, the probabilistic resolvability result is reported as follows.
Theorem 1 (Probabilistic Resolvability).
Suppose that the problem $\mathcal{P}(x_k)$ is feasible at time $k$. Then, under the control law in (21), the problem $\mathcal{P}(x_{k+1})$ is feasible at time $k+1$ with a probability no less than $p$, i.e., $\mathcal{P}(x_k)$ is probabilistically resolvable.
Proof:
The proof follows a similar line of reasoning as Proposition 3 in [11]. Since $(z^*_{0|k}, \mathbf{v}^*_k)$ is feasible for the problem $\mathcal{P}(x_k)$, the constraints (20c)-(20f) are satisfied by $\mathbf{z}^*_k$ and $\mathbf{v}^*_k$. The shifted input sequence of $\mathbf{v}^*_k$ is denoted as $\tilde{\mathbf{v}}_{k+1} = \{v^*_{1|k}, \ldots, v^*_{N-1|k}, K z^*_{N|k}\}$ and the shifted state sequence of $\mathbf{z}^*_k$ is denoted as $\tilde{\mathbf{z}}_{k+1} = \{z^*_{1|k}, \ldots, z^*_{N|k}, A_K z^*_{N|k}\}$. We choose the candidate solution $(z^*_{1|k}, \tilde{\mathbf{v}}_{k+1})$. Hence, the first $N-1$ elements of $\tilde{\mathbf{z}}_{k+1}$ satisfy the time-varying tightened state constraints (20c) owing to the fact that if $z^*_{i+1|k} \in \mathbb{X}_{i+1}$, then $z^*_{i+1|k} \in \mathbb{X}_i$ as $\bar{\mathcal{R}}_i \subseteq \bar{\mathcal{R}}_{i+1}$. And the first $N-1$ elements of $\tilde{\mathbf{v}}_{k+1}$ satisfy the time-varying tightened input constraint (20d) analogously. Because $z^*_{N|k} \in \mathbb{X}_f$, it follows from A1 that $A_K z^*_{N|k} \in \mathbb{X}_f$. So the last element of $\tilde{\mathbf{z}}_{k+1}$ satisfies the terminal constraint (20f). Allowing for the fact that A1 also guarantees satisfaction of the tightened input constraint within $\mathbb{X}_f$, the last element of $\tilde{\mathbf{v}}_{k+1}$ satisfies constraint (20d). Moreover, from Lemma 2 we have that if $x_k \in z^*_{0|k} \oplus \bar{\mathcal{R}}_0$, then $x_{k+1} \in z^*_{1|k} \oplus \bar{\mathcal{R}}_0$ with a probability no less than $p$, i.e., the initial constraint (20e) is satisfied with a probability no less than $p$. Summarizing the satisfaction status of the state constraint (20c), the input constraint (20d), the initial constraint (20e), and the terminal constraint (20f), the candidate solution is feasible for $\mathcal{P}(x_{k+1})$ with a probability no less than $p$, i.e., $\mathcal{P}(x_k)$ is probabilistically resolvable. ∎
Suppose that the problem $\mathcal{P}(x_k)$ is feasible for all $k$; then the satisfaction of the closed-loop chance constraints follows directly, as shown in Theorem 2.
Theorem 2 (Chance Constraint Satisfaction).
Suppose that the problem $\mathcal{P}(x_k)$ is feasible for all $k \in \mathbb{Z}_{\geq 0}$. Then, under the control law in (21), the closed-loop system satisfies the chance constraints in (3a) and (3b).
Proof:
Finally, the stability result is presented in the following theorem.
Theorem 3 (Stability).
Consider the optimal control problem in (20) for the system dynamics in (1) under the control law in (21). Suppose that $\mathcal{E}$ is the mRPI set for the error dynamics in (4b) with $w_k \in \mathbb{W}$ and that $\mathcal{P}(x_k)$ is recursively feasible; then the following hold.
(a) $\mathcal{E}$ is probabilistically stable for the closed-loop system.
(b) If the uncertainty has zero mean, the closed-loop system is almost surely asymptotically stable in the mean.
Proof:
Denote the cost function as $J(z_{0|k}, \mathbf{v}_k)$, and the optimal cost value as $J^*(x_k) = J(z^*_{0|k}, \mathbf{v}^*_k)$. From Proposition 3 in [11] we have that $J^*(x_{k+1}) - J^*(x_k) \leq -\ell(z^*_{0|k}, v^*_{0|k})$. Summing both sides of the above inequality over all $k \in \mathbb{Z}_{\geq 0}$ gives the closed-loop performance bound $\sum_{k=0}^{\infty} \ell(z^*_{0|k}, v^*_{0|k}) \leq J^*(x_0)$. The left-hand side of the inequality is the cost value along the nominal trajectory. Since $\ell(z^*_{0|k}, v^*_{0|k}) \geq \|z^*_{0|k}\|_Q^2$ for all $k$, it follows that $\sum_{k=0}^{\infty} \|z^*_{0|k}\|_Q^2 \leq J^*(x_0)$. Given that the optimal cost is necessarily finite if the problem is feasible, and since each term on the left-hand side of the above inequality is non-negative, $\lim_{k\to\infty} \|z^*_{0|k}\|_Q^2 = 0$ follows. Furthermore, since $Q \succ 0$ and $R \succ 0$, it implies that $z^*_{0|k} \to 0$ and $v^*_{0|k} \to 0$ as $k \to \infty$, following from Theorem 2.7 in [1]. Consequently, the origin is exponentially stable for the nominal dynamics, following from Theorem 2.8 in [1]. Moreover, $x_k \in z^*_{0|k} \oplus \mathcal{E}$ holds with a probability no less than $p$, due to $x_k = z^*_{0|k} + e_k$ and $e_k \in \mathcal{E}$ with a probability no less than $p$; hence, since $z^*_{0|k} \to 0$, the actual state converges to $\mathcal{E}$ probabilistically. This completes the proof for part (a).
III-F Enhanced Feasibility
In order to avoid possible infeasibility when a violation occurs, slack variables can be imposed on the state constraints [40, 16, 39]. However, here we only impose the slack variable $\alpha_k \geq 1$ on the flexible initial state constraint in (20e), such that
$x_k \in z_{0|k} \oplus \alpha_k \bar{\mathcal{R}}_0, \quad \alpha_k \geq 1.$   (22)
And a finite penalty term
$\rho\,(\alpha_k - 1)$   (23)
is added to the cost function in (20a) to force $\alpha_k$ to tend towards 1, so that the distance between the actual state and the nominal state is kept as small as possible while maintaining feasibility. The weight $\rho$ is chosen large enough so that the penalty function is fully exploited [41]. As a result, the probabilistic resolvability of the optimal control problem is enhanced to strong feasibility by using the soft initial state constraint in (22) and the penalty term in (23). Moreover, this approach allows for solving a single optimization problem instead of having to verify feasibility and decide which MPC problem should be solved accordingly, as is the case in [25, 15, 26].
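A minimal sketch of this soft initialization, assuming the scaled-set form of (22) with a linear penalty as in (23) and the box sets of the earlier sketches, is given below; the weight value is a placeholder. The returned constraints and penalty would be added to the problem constructed in the earlier OCP sketch.

```python
import cvxpy as cp

def soft_initialization_terms(x_meas, z0, tube_init, rho=1e4):
    """Soft version of (20e): the initialization box is scaled by a slack
    alpha >= 1, and a large linear penalty (23) drives alpha back to 1."""
    alpha = cp.Variable(nonneg=True)
    constraints = [cp.abs(x_meas - z0) <= alpha * tube_init,   # soft flexible initialization
                   alpha >= 1.0]
    penalty = rho * (alpha - 1.0)                               # penalty term (23)
    return alpha, constraints, penalty
```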
IV Probabilistic Constant Tube-based SMPC
IV-A Probabilistic Positively Invariant Set
The relationship between the RPI set and the PPI set is presented in the following proposition.
Proposition 1.
Let $\mathcal{E}$ be the mRPI set for the error dynamics in (4b) with $w_k \in \mathbb{W}$, computed as in (18). If $\mathbb{W}$ is a box-shaped or an ellipsoidal confidence region of probability level $p$, then $\mathcal{E}$ is also the minimal probabilistic positively invariant (mPPI) set of probability level $p$ for the error dynamics in (4b) [32].
In particular, when the uncertainty follows the normal distribution $\mathcal{N}(0, \Sigma_w)$, the following property holds.
Corollary 1.
IV-B Formulation of pCTSMPC
Since $\mathcal{E}$ is the mPPI set of probability level $p$ for the error dynamics in (4b), then
$\Pr\{e_{k+1} \in \mathcal{E}\} \geq p$ whenever $e_k \in \mathcal{E}$   (24)
follows from Definition 2 directly. And
$\Pr\{e^u_{k+1} \in K\mathcal{E}\} \geq p$ whenever $e_k \in \mathcal{E}$   (25)
can be derived from $e^u_k = K e_k$ subsequently.
Construct the constantly tightened state constraint set as
$\bar{\mathbb{X}} = \mathbb{X} \ominus \mathcal{E},$   (26)
then, the chance constraint in (3a) is satisfied. The reason is that, if the nominal state $z_{i|k} \in \bar{\mathbb{X}}$, then $\Pr\{x_{i|k} \in \mathbb{X}\} \geq p$ can be guaranteed by (24).
The satisfaction of (3b) can be guaranteed by constructing the constantly tightened input constraint set as
$\bar{\mathbb{U}} = \mathbb{U} \ominus K\mathcal{E}.$   (27)
Thus, if the nominal input $v_{i|k} \in \bar{\mathbb{U}}$, then the input chance constraint in (3b) is satisfied according to (25).
Furthermore, to ensure feasibility, we construct the terminal constraint set $\bar{\mathbb{X}}_f$ as
(28)
which satisfies the axioms A1 and A2, similarly to (19).
On the basis of the constant constraint tightening method and the DTMPC framework in [11], the probabilistic constant tube-based stochastic finite horizon optimal control problem $\bar{\mathcal{P}}(x_k)$ to be solved at each time instant is formulated as follows:
$\min_{z_{0|k},\, v_{0|k},\ldots,v_{N-1|k}} \ \sum_{i=0}^{N-1}\big(\|z_{i|k}\|_Q^2 + \|v_{i|k}\|_R^2\big) + \|z_{N|k}\|_P^2$   (29a)
$\text{s.t.}\quad z_{i+1|k} = A z_{i|k} + B v_{i|k},$   (29b)
$z_{i|k} \in \bar{\mathbb{X}},$   (29c)
$v_{i|k} \in \bar{\mathbb{U}},$   (29d)
$x_k \in z_{0|k} \oplus \mathcal{E},$   (29e)
$z_{N|k} \in \bar{\mathbb{X}}_f, \qquad i \in \mathbb{Z}_{[0,N-1]}.$   (29f)
The optimal control law is designed as in (21). At each time instant, $\bar{\mathcal{P}}(x_k)$ is solved to generate the input sequence $\mathbf{v}^*_k$ and the nominal initial state $z^*_{0|k}$. The entire process of the probabilistic constant tube-based SMPC is repeated for all $k \in \mathbb{Z}_{\geq 0}$ to yield a receding horizon control strategy.
Using a similar argument as in the pTTSMPC case, we can prove that the optimal control problem in (29) is probabilistically resolvable with a probability no less than $p$. Suppose that the problem is recursively feasible; then the satisfaction of the closed-loop chance constraints can be guaranteed, and $\mathcal{E}$ is probabilistically stable for the closed-loop system. Moreover, if the uncertainty has zero mean, the closed-loop system is almost surely asymptotically stable in the mean.
Besides, in order to avoid possible infeasibility when a violation occurs, we impose the slack variable $\alpha_k \geq 1$ on the flexible initial state constraint in (29e), such that
$x_k \in z_{0|k} \oplus \alpha_k \mathcal{E}, \quad \alpha_k \geq 1.$   (30)
And a finite penalty term as in (23) is added to the cost function in (29a) to force $\alpha_k$ to tend towards 1. As a result, the probabilistic resolvability of the optimal control problem is enhanced to strong feasibility by using the soft initial state constraint in (30).
Remark 1.
In both the pTTSMPC and the pCTSMPC schemes, all the constraint sets can be calculated offline; therefore, the resulting SMPC algorithms require an online computational load comparable with that of the DTMPC algorithm.
V Numerical Simulation
In this section, the control effects of the proposed probabilistic tube-based SMPC schemes are first illustrated. Then, the chance constraint violation of the proposed methods is compared. In the simulations, the feasible regions in the figures are calculated by using the algorithm in [42], and the implementations of the algorithms and the computations of the tightened constraint sets, terminal sets and invariant sets are performed using the MPT3 toolbox [43].
Consider the discrete-time LTI system from [30] subject to additive uncertainties with unbounded support
(31)
where , , and . The state and input constraints are and , . The weights of the cost functions are and , respectively. is the LQR feedback gain for the unconstrained optimal problem . The prediction horizon is , the simulation step is and the initial state . Set the weight in the cost function of the strongly feasible SMPC approaches.
Simulations of the pTTSMPC, the pTTSMPC with enhanced feasibility (denoted as pTTSMPC-en), the pCTSMPC and the pCTSMPC with enhanced feasibility (denoted as pCTSMPC-en) for the system in (31) are conducted. The results are as follows.
V-A Control Effects
Figs. 1-4 present the closed-loop state trajectories and the relevant initial tubes at selected time instants for the four probabilistic tube-based SMPC schemes, respectively. In the figures, the solid-circle line (red) represents the actual trajectory, while the dash-dot line (blue) is the nominal trajectory. The initial tubes of the pTTSMPC and pTTSMPC-en schemes are shown as the small hexagons along the nominal trajectories in Fig. 1 and Fig. 2. It can be seen that the tubes vary over time to guarantee feasibility. And the tube of the pTTSMPC-en approach is almost the same as that of the pTTSMPC version at the same time instant, which means the slack variable is enforced to tend towards 1 at all simulation time instants.
Analogously, the initial tubes of the pCTSMPC and pCTSMPC-en schemes are shown as the small hexagons along the nominal trajectories in Fig. 3 and Fig. 4, respectively. Here, the tubes of the pCTSMPC approach are constant over time. And the tubes of the pCTSMPC-en approach at the corresponding time instants are almost the same as those of the pCTSMPC version, which means the slack variable is enforced to tend towards 1 at all simulation time instants.
Furthermore, in order to confirm the feasibility of the proposed probabilistic tube-based SMPC schemes, we conduct repeated simulations based on the same parameter setup. The feasibility ratio is defined as the number of feasible simulation runs divided by the total number of simulations.
Table I illustrates the feasibility ratio of the four approaches. Unexpectedly, the experimental feasibility ratios of all the probabilistic tube-based SMPC algorithms for the system in (31) are 100%; that is to say, all the schemes possess good practical feasibility.
algorithm | pTTSMPC | pTTSMPC-en | pCTSMPC | pCTSMPC-en
---|---|---|---|---
feasibility ratio | 100% | 100% | 100% | 100%
V-B Constraint Violation
To verify the chance constraint satisfaction of the proposed probabilistic tube-based SMPC schemes, we introduce three definitions, namely, the empirical maximum constraint violation ratio, the empirical minimum constraint violation ratio and the empirical average constraint violation ratio. The constraint violation ratio at a time instant is defined as the number of simulation runs violating the constraint at that instant divided by the total number of simulations. The empirical maximum constraint violation ratio is defined as the maximum of the constraint violation ratios over the considered time instants.
The empirical minimum constraint violation ratio is defined analogously as the minimum over the considered time instants. And the empirical average constraint violation ratio of the proposed algorithms for the system in (31) is calculated as the mean of the constraint violation ratios at the first 6 time instants.
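A small sketch of how these empirical ratios could be computed from Monte Carlo runs is given below; the violation flags here are synthetic placeholders, not the paper's simulation data.

```python
import numpy as np

def empirical_violation_ratios(violations):
    """Empirical constraint violation statistics from Monte Carlo runs.

    violations: boolean array of shape (n_runs, n_steps); entry [r, k] is True
    if run r violates the state constraint at time instant k.
    Returns (average, maximum, minimum) of the per-instant violation ratios."""
    per_instant = violations.mean(axis=0)          # violation ratio at each time instant
    return per_instant.mean(), per_instant.max(), per_instant.min()

# Illustrative use with synthetic flags over the first 6 time instants.
rng = np.random.default_rng(0)
flags = rng.random((5000, 6)) < 0.19               # ~19% violation per instant (placeholder)
avg, mx, mn = empirical_violation_ratios(flags)
```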
Table II illustrates the empirical average constraint violation ratio, the empirical maximum constraint violation ratio, the empirical minimum constraint violation ratio and the total time consumption of the probabilistic tube-based SMPC algorithms for simulations with the same parameter setup. From the table we can see that: 1) All the proposed probabilistic tube-based SMPC algorithms can guarantee chance constraint satisfaction. 2) The ratios of the strongly feasible tube-based SMPC algorithms are smaller than those of the related probabilistically resolvable versions due to the slackness of the state initializations. 3) Because of the conservatism of the constant tightening, the constant tube-based SMPC approaches give smaller violation ratios than the corresponding time-varying tube-based SMPC versions. 4) In spite of resulting in more conservatism, the constant tube-based SMPC approaches require less computation time than the corresponding time-varying tube-based SMPC approaches.
algorithm | pTTSMPC | pTTSMPC-en | pCTSMPC | pCTSMPC-en
---|---|---|---|---
average violation ratio | 19.71% | 19.58% | 18.09% | 17.97%
maximum violation ratio | 20.88% | 20.45% | 18.42% | 18.08%
minimum violation ratio | 18.66% | 18.62% | 17.92% | 17.77%
time (s) | 17042 | 18453 | 14853 | 13449
To clearly illustrate the constraint violation of the four algorithms, Figs. 5-8 report their simulation results with 100 realizations, respectively. For each figure, the left part demonstrates the closed-loop trajectories over a 15-step simulation, while the right part exhibits the detailed constraint violation. It can be seen that, in addition to the satisfaction of the chance constraint, the closed-loop trajectories of the actual state ultimately stay in the mPPI set with a probability greater than the prescribed 80%.
V-C Comparison of pTTSMPC Schemes with Different Initialization Methods
To verify the advantage of the proposed flexible initialization method over the existing ones, we further compare the control performance of pTTSMPC schemes with different initialization methods. That is to say, the optimal control problem in (20) with different initial state constraints will be demonstrated for the system in (31) under the same parameter setup. Before that, we define four initialization methods as follows.
Table III illustrates the empirical average constraint violation ratio, the empirical maximum constraint violation ratio, the empirical minimum constraint violation ratio and the total time consumption of the four pTTSMPC algorithms with different initialization methods, based on Monte Carlo simulations. From the table we can see that: 1) Case1 gives the largest empirical constraint violation ratios, which means it is less conservative than the other cases. Meanwhile, the time consumption of Case1 is heavier than those of the other cases. 2) For the other three cases, the computational loads are comparable. The violations of Case2 and Case3 are obviously lower than those of Case1 due to the lack of feedback information from the recently measured state. Case4 is the most conservative one in comparison with the others and suffers possible infeasibility owing to the full feedback.
algorithm | Case1 | Case2 | Case3 | Case4
---|---|---|---|---
average violation ratio | 19.71% | 12.70% | 6.90% | 0.007%
maximum violation ratio | 20.88% | 16.32% | 12.09% | 0.02%
minimum violation ratio | 18.66% | 0% | 0% | 0%
time (s) | 17042 | 13670 | 13677 | 14041
VI Conclusion
Utilizing the concept of the PRS, two probabilistically resolvable tube-based stochastic MPC approaches have been developed for linear constrained systems with additive unbounded stochastic uncertainties. Results on feasibility and closed-loop stability are presented. In comparison with existing results, the proposed methods can reduce the conservatism in chance constraint satisfaction. The simulation studies verified the feasibility and advantages of the proposed methods. Future work will address stochastic MPC for nonlinear systems with unbounded disturbances.
References
- [1] B. Kouvaritakis and M. Cannon, Model Predictive Control: Classical, Robust and Stochastic. Cham, Switzerland: Springer, 2016.
- [2] D. Q. Mayne, “Model predictive control: Recent developments and future promise,” Automatica, vol. 50, no. 12, pp. 2967–2986, Dec. 2014.
- [3] C. Huang, F. Naghdy, and H. Du, “Fault tolerant sliding mode predictive control for uncertain steer-by-wire system,” IEEE Trans. Cybern., vol. 49, no. 1, pp. 261–272, Jan. 2019.
- [4] B. Karg and S. Lucia, “Efficient representation and approximation of model predictive control laws via deep learning,” IEEE Trans. Cybern., vol. 50, no. 9, pp. 3866–3878, Sep. 2020.
- [5] L. Dong, J. Yan, X. Yuan, H. He, and C. Sun, “Functional nonlinear model predictive control based on adaptive dynamic programming,” IEEE Trans. Cybern., vol. 49, no. 12, pp. 4206–4218, Dec. 2019.
- [6] A. Bemporad and M. Morari, “Robust model predictive control: A survey,” Robustness in Identif. Control, vol. 245, London: Springer, pp. 207–226, 1999.
- [7] H. Li and Y. Shi, Robust Receding Horizon Control for Networked and Distributed Nonlinear Systems. Springer, 2017.
- [8] C. Liu, J. Gao, H. Li, and D. Xu, “A periodic robust model predictive control for constrained continuous-time nonlinear systems: An event-triggered approach,” IEEE Trans. Cybern., vol. 48, no. 5, pp. 1397–1405, May 2018.
- [9] L. Chisci, J. A. Rossiter, and G. Zappa, “Systems with persistent disturbances: Predictive control with restricted constraints,” Automatica, vol. 37, no. 7, pp. 1019–1028, Jul. 2001.
- [10] D. Q. Mayne and W. Langson, “Robustifying model predictive control of constrained linear systems,” Electronics Letters, vol. 37, no. 23, pp. 1422–1423, 2001.
- [11] D. Q. Mayne, M. M. Seron, and S. V. Raković, “Robust model predictive control of constrained linear systems with bounded disturbances,” Automatica, vol. 41, no. 2, pp. 219–224, Feb. 2005.
- [12] J. B. Rawlings, D. Q. Mayne, and M. M. Diehl, Model Predictive Control: Theory, Computation, and Design. Santa Barbara, USA: Nob Hill Pub., 2017.
- [13] A. Mesbah, “Stochastic model predictive control: An overview and perspectives for future research,” IEEE Control Syst. Magazine, vol. 36, no. 6, pp. 30–44, Dec. 2016.
- [14] M. Korda, R. Gondhalekar, F. Oldewurtel, and C. N. Jones, “Stochastic MPC framework for controlling the average constraint violation,” IEEE Trans. Autom. Control, vol. 59, no. 7, pp. 1706–1721, Jul. 2014.
- [15] M. Farina, L. Giulioni, L. Magni, and R. Scattolini, “An approach to output-feedback MPC of stochastic linear discrete-time systems,” Automatica, vol. 55, pp. 140–149, May 2015.
- [16] D. Moser, R. Schmied, H. Waschl, and L. del Re, “Flexible spacing adaptive cruise control using stochastic model predictive control,” IEEE Trans. Control Syst. Tech., vol. 26, no. 1, pp. 114–127, Jan. 2018.
- [17] G. Schildbach, L. Fagiano, C. Frei, and M. Morari, “The scenario approach for stochastic model predictive control with bounds on closed-loop constraint violations,” Automatica, vol. 50, no. 12, pp. 3009–3018, Dec. 2014.
- [18] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Stochastic MPC with offline uncertainty sampling,” Automatica, vol. 81, pp. 176–183, Jul. 2017.
- [19] J. Fleming and M. Cannon, “Stochastic tube MPC for additive and multiplicative uncertainty using sample approximations,” IEEE Trans. Autom. Control, vol. 64, no. 9, pp. 3883–3888, Sep. 2019.
- [20] M. Farina, L. Giulioni, and R. Scattolini, “Stochastic linear model predictive control with chance constraints - A review,” J. Process Control, vol. 44, pp. 53–67, Aug. 2016.
- [21] M. Cannon, B. Kouvaritakis, S. V. Raković, and Q. Cheng, “Stochastic tubes in model predictive control with probabilistic constraints,” IEEE Trans. Autom. Control, vol. 56, no. 1, pp. 194–200, Jan. 2011.
- [22] B. Kouvaritakis, M. Cannon, S. V. Raković, and Q. Cheng, “Explicit use of probabilistic distributions in linear predictive control,” Automatica, vol. 46, no. 10, pp. 1719–1724, Oct. 2010.
- [23] M. Lorenzen, F. Dabbene, R. Tempo, and F. Allgöwer, “Constraint tightening and stability in stochastic model predictive control,” IEEE Trans. Autom. Control, vol. 62, no. 7, pp. 3165–3177, Jul. 2017.
- [24] F. Li, H. Li, and Y. He, “Stochastic model predictive control for linear systems with bounded additive uncertainties,” in Proc. 2020 Chinese Autom. Congress (CAC), Shanghai, China, 2020, in press.
- [25] M. Farina, L. Giulioni, L. Magni, and R. Scattolini, “A probabilistic approach to model predictive control,” in Proc. IEEE Conf. Decision and Control (CDC), Florence, Italy, 2013, pp. 7734–7739.
- [26] L. Hewing and M. N. Zeilinger, “Stochastic model predictive control for linear systems using probabilistic reachable sets,” in Proc. IEEE Conf. Decision and Control (CDC), Miami Beach, USA, 2018, pp. 5182–5188.
- [27] L. Hewing, K. P. Wabersich, and M. N. Zeilinger, “Recursively feasible stochastic model predictive control using indirect feedback,” Automatica, vol. 119, Sep. 2020.
- [28] S. Yan, P. Goulart, and M. Cannon, “Stochastic model predictive control with discounted probabilistic constraints,” in Proc. European Control Conf. (ECC), Limassol, Cyprus, 2018, pp. 1003–1008.
- [29] D. Q. Mayne, “Competing methods for robust and stochastic MPC,” IFAC PapersOnLine, vol. 51, no. 20, pp. 169–174, 2018.
- [30] U. Rosolia, X. Zhang, and F. Borrelli, “A stochastic MPC approach with application to iterative learning,” in Proc. IEEE Conf. Decision and Control (CDC), Miami Beach, USA, 2018, pp. 5152–5157.
- [31] E. Kofman, J. A. De Doná, and M. M. Seron, “Probabilistic set invariance and ultimate boundedness,” Automatica, vol. 48, no. 10, pp. 2670–2676, Oct. 2012.
- [32] L. Hewing, A. Carron, K. P. Wabersich, and M. N. Zeilinger, “On a correspondence between probabilistic and robust invariant sets for linear systems,” in Proc. European Control Conf. (ECC), Limassol, Cyprus, 2018, pp. 1642–1647.
- [33] M. Ono, “Joint chance-constrained model predictive control with probabilistic resolvability,” in Proc. American Control Conf. (ACC), Montréal, Canada, 2012, pp. 435–441.
- [34] T. Hashimoto, “Probabilistic constrained model predictive control for linear discrete-time systems with additive stochastic disturbances,” in Proc. IEEE Conf. Decision and Control (CDC), Florence, Italy, 2013, pp. 6434–6439.
- [35] L. Hewing and M. N. Zeilinger, “Scenario-based probabilistic reachable sets for recursively feasible stochastic model predictive control,” IEEE Control Syst. Lett., vol. 4, no. 2, pp. 450–455, Apr. 2020.
- [36] S. V. Raković, E. C. Kerrigan, K. I. Kouramas, and D. Q. Mayne, “Invariant approximations of the minimal robust positively invariant set,” IEEE Trans. Autom. Control, vol. 50, no. 3, pp. 406–410, Mar. 2005.
- [37] I. Kolmanovsky and E. G. Gilbert, “Maximal output admissible sets for discrete-time systems with disturbance inputs,” in Proc. American Control Conf. (ACC), Seattle, USA, 1995, vol. 3, pp. 1995–1999.
- [38] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: Stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, Jun. 2000.
- [39] J. A. Paulson, E. A. Buehler, R. D. Braatz, and A. Mesbah, “Stochastic model predictive control with joint chance constraints,” Int. J. of Control, vol. 93, no. 1, pp. 126–139, May 2020.
- [40] M. Korda, R. Gondhalekar, J. Cigler, and F. Oldewurtel, “Strongly feasible stochastic model predictive control,” in Proc. IEEE Conf. Decision and Control (CDC) and European Control Conf. (ECC), Orlando, FL, USA, 2011, pp. 1245–1251.
- [41] D. Muñoz-Carpintero, G. Hu, and C. J. Spanos, “Stochastic Model Predictive Control with adaptive constraint tightening for non-conservative chance constraints satisfaction,” Automatica, vol. 96, pp. 32–39, Oct. 2018.
- [42] F. Blanchini and S. Miani, Set-Theoretic Methods in Control. Basel, Switzerland: Birkhäuser, 2015.
- [43] M. Herceg, M. Kvasnica, C. N. Jones, and M. Morari, “Multi-Parametric Toolbox 3.0,” in Proc. European Control Conf. (ECC), Zurich, Switzerland, 2013, pp. 502–510.