
Disturbance Rejection-Guarded Learning for Vibration Suppression of Two-Inertia Systems

Fan Zhang1, Jinfeng Chen1, Yu Hu1, Zhiqiang Gao1, Ge Lv2, Qin Lin1

1The authors are with the Center for Advanced Control Technologies (CACT), Cleveland State University, 2121 Euclid Avenue, Cleveland, OH 44115, USA. Corresponding author: Qin Lin, [email protected]

2Ge Lv is with the Department of Mechanical Engineering, Clemson University, 105 Sikes Hall, Clemson, SC 29634, USA.
Abstract

Model uncertainty presents significant challenges in vibration suppression of multi-inertia systems, as these systems often rely on inaccurate nominal mathematical models due to system identification errors or unmodeled dynamics. An observer, such as an extended state observer (ESO), can estimate the discrepancy between the inaccurate nominal model and the true model, thus improving control performance via disturbance rejection. The conventional observer design is memoryless in the sense that once its estimated disturbance is obtained and sent to the controller, the datum is discarded. In this research, we propose a seamless integration of ESO and machine learning. On one hand, the machine learning model attempts to model the disturbance. With the assistance of prior information about the disturbance, the observer is expected to achieve faster convergence in disturbance estimation. On the other hand, machine learning benefits from an additional assurance layer provided by the ESO, as any imperfections in the machine learning model can be compensated for by the ESO. We validated the effectiveness of this novel learning-for-control paradigm through simulation and physical tests on two-inertia motion control systems used for vibration studies.

Index Terms:
Machine Learning, Disturbance Rejection, Extended State Observer, Model Uncertainty

I Introduction

Vibration suppression of multi-inertia systems is critical in many engineering applications, including automotive suspensions, series elastic actuators (SEA), and various other motion control systems [1]. These systems often involve multiple inertia components connected by flexible couplings, with a two-inertia subsystem serving as a fundamental building block, which leads to inherent resonance issues. This resonance can cause dynamic stresses, energy waste, and performance degradation, thereby posing significant challenges to the systems' efficiency and stability [2, 3]. Given the fundamental challenge of system identification and the necessity for real-time performance, it is common practice to employ a simplified or inaccurate nominal dynamic model. Consequently, disturbances become inevitable, necessitating their rejection to achieve robust control. The disturbance includes internal components (i.e., unknown or unmodeled parts of the plant dynamics) and external components (i.e., perturbations from outside the system that affect its dynamics) [4, 5].

The observer-based method has emerged as a promising approach to estimating the disturbance for the subsequent design of a disturbance rejection controller. Among the array of existing disturbance observers, the extended state observer (ESO) [6] is gaining popularity due to its simplicity in implementation. For the formulation of an ESO, the system is modeled as a simple chained integrator with a total disturbance term (also called the lumped disturbance, $f$) that includes both internal and external disturbances. The total disturbance is treated as an extended state to be estimated together with the other states. The estimated disturbance can be mitigated through various means, including a simple state feedback controller or more advanced control strategies such as sliding mode control [7] and model predictive control [8].

It is worth noting that the traditional ESO operates in a memoryless fashion, i.e., once it estimates a disturbance and transmits it to the controller, the datum used for estimation is then discarded. However, as a control system operates, we can improve our understanding of the disturbance through the collected operational data. Prior works [9, 10] show that a model-based ESO (MB-ESO), which utilizes prior model information about the disturbance (such as a detailed dynamic model obtained through system identification), tends to exhibit reduced sensitivity to noise when compared to a model-free ESO (MF-ESO) that assumes a simple chained integrator as a nominal model. To circumvent the need for extensive system identification and maximize the utilization of disturbance information, we propose to leverage machine learning (ML), with its powerful capacity for nonlinear function approximation, to memorize and generalize the past estimations from the ESO as a feedforward estimation of the disturbance. The learning component is expected to capture the internal dynamics as well as patterns of external disturbances.

[11, 12, 13] combine ESO with iterative learning control (ILC) for repetitive control tasks. Our approach focuses on general control tasks rather than just repetitive ones. In addition, we assume that the system dynamics, as well as the disturbances, are unknown and not necessarily repetitive. In [14], a neural network is utilized to tune the parameters of the ESO rather than explicitly learning the disturbance. Other learning-for-control approaches such as [15] employ neural networks to capture discrepancies between a nominal model $\hat{F}(x_{k},u_{k})$ and the true model $F(x_{k},u_{k})$. Since the state of the true model is unknown, the measured next state $x_{k+1}$ is used to update the error model represented by the neural network. However, these methods always assume that full-state information is available. In addition, when the learning performance falls short of expectations, it may result in suboptimal performance for the subsequent model-based controllers. In contrast, our approach represents a novel paradigm that aims at learning the total disturbance with the help of output measurements instead of true values for the states. Furthermore, our paradigm includes a correction mechanism for cases where the learning component fails to accurately capture the disturbances. The residual total disturbance, i.e., the remainder excluding the disturbance already estimated by the learning component, will be estimated by a conventional ESO in a feedback correction manner. Through this seamless integration, even when the learning-based estimation struggles to converge effectively, we can leverage the ESO for feedback correction, thereby adding an extra layer of robustness and assurance to the system.

In our new framework, as visualized in Fig. 1, we refer to the learning-enabled extended state observer as L-ESO. The estimation $\hat{f}$ of the true total disturbance $f$ consists of $\hat{f}_{L}$ and $\Delta\hat{f}$, which come from the learning component and the ESO, respectively. First, the ESO uses the information of the control $u$ and the observation $y$ to estimate the system's states $\hat{x}$ and the residual disturbance $\Delta\hat{f}$. Second, the ESO's state estimate $\hat{x}$, together with the control input $u$, is fed as input to the learning component for learning a regression model. The learning component carries out the feedforward estimation $\hat{f}_{L}$, after which an online optimization iteratively minimizes the difference between $\hat{f}_{L}$ and $\hat{f}$, allowing the learning component to approximate the total disturbance accurately. In situations where imperfect learning introduces errors, the ESO serves as an additional layer of rectification.

Figure 1: The proposed framework in this paper, where the red and the blue blocks represent the L-ESO and the disturbance rejection tracking controller, respectively. Once the total disturbance is estimated, the tracking controller is able to reject it.

The contributions of our work are summarized as follows:

  • We propose a novel framework that combines ML and ESO for feedforward estimation and feedback correction for a general disturbance rejection tracking control task. Compared with existing learning-for-control frameworks, we estimate states and disturbances in a unique way. We also have an extra error correction mechanism for the learning component.

  • The learning component serves as an add-on to existing ESO-based control architecture. As shown in Fig. 1, only a learning component and a few connections (in green) are introduced. The advantage of our modular design is two-fold: 1) no need to change the existing framework; 2) users can customize the learning components by choosing any appropriate machine learning model.

  • Our learning and estimation are real-time and online. We showcase the efficacy of our framework through simulations and a real-world two-inertia testbed, which serves as a fundamental building block for multi-inertia systems.

The remainder of this paper is structured as follows. We first go through the preliminaries in Sec. II. Then, we construct our framework in Sec. III. Simulation results of the two-mass-spring benchmark system are presented in Sec. IV, followed by the hardware experiments of a torsional plant in Sec. V. Finally, we conclude our work and discuss possible future research directions in Sec. VI.

II Preliminary

The multi-inertia system can be represented as the sum of a nominal part and a nonlinear time-varying part:

\begin{cases}\dot{\bar{x}}(t)=A_{0}\bar{x}(t)+B_{0}u(t)+E_{0}f(x(t),d(t),t)\\ y=C_{0}\bar{x}\end{cases} (1)

where $\bar{x}\in\mathbb{R}^{n}$ is the state vector, $u\in\mathbb{R}$ is a control input, $y\in\mathbb{R}$ is a measured output, and $f:\mathbb{R}^{n+1}\times[0,\infty)\rightarrow\mathbb{R}$ is an unknown function representing the time-varying uncertainty, which contains the external disturbance $d(t)\in\mathbb{R}$, unmodeled dynamics, and parameter uncertainty. The terms $A_{0}$, $B_{0}$, $E_{0}$, and $C_{0}$ are real and known matrices with appropriate dimensions. For the particular case of a two-inertia system with $n=4$ (a position/angle state and a velocity/angular-velocity state for each inertia), please refer to the details in the example in Sec. IV. The justification for classifying (1) as a nonlinear time-varying system can be found in [16, 17].

Traditionally, an ESO is established for a system in a chained integrator form [6]. However, in our most recent work [18], we have significantly expanded the applicability scope of the ESO and rigorously proved that for a general system (1), given that Assumptions 1 and 2 are satisfied, an ESO can be established to estimate $f$ by relaxing the chained-integrator-form requirement.

Assumption 1.

$(A_{0},C_{0})$ is observable.

Assumption 2.

$(A_{0},E_{0},C_{0})$ has no invariant zeros.

For system (1), under Assumptions 1 and 2, there exists a matrix

S=\begin{bmatrix}C_{0}\\ C_{0}A_{0}\\ \vdots\\ C_{0}A_{0}^{n-1}\end{bmatrix} (2)

such that

\begin{split}\bar{A}_{0}=SA_{0}S^{-1}&=\begin{bmatrix}0&1&\dots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\dots&0&1\\ -a_{0}&-a_{1}&\dots&-a_{n-1}\end{bmatrix}\\ \bar{B}_{0}=SB_{0}&=\begin{bmatrix}0&0&\dots&b\end{bmatrix}^{T}\\ \bar{C}_{0}=C_{0}S^{-1}&=\begin{bmatrix}1&0&\dots&0\end{bmatrix}\\ \bar{E}_{0}=SE_{0}&=\begin{bmatrix}0&0&\dots&1\end{bmatrix}^{T}\end{split} (3)

form the following new system

\begin{cases}\dot{x}=\bar{A}_{0}x+\bar{B}_{0}u+\bar{E}_{0}f\\ y=\bar{C}_{0}x\end{cases} (4)

The readers are referred to [18] for more details on the matrix transformation. The new system (4) has an observable canonical form such that an ESO can be established for estimating $f$.
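To make the transformation concrete, the following minimal Python sketch (with arbitrarily chosen illustrative matrices, not taken from the paper) builds $S$ from (2) and checks that (3) indeed yields the observable canonical form:

import numpy as np

# Illustrative second-order example (n = 2); these matrices are assumptions for demonstration only.
# E0 is chosen so that C0*E0 = 0 and C0*A0*E0 = 1, i.e., Assumption 2 holds.
A0 = np.array([[-1.0, 2.0],
               [0.5, -3.0]])
B0 = np.array([[0.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
E0 = np.array([[0.0], [0.5]])

n = A0.shape[0]
# S = [C0; C0*A0; ...; C0*A0^(n-1)], see (2)
S = np.vstack([C0 @ np.linalg.matrix_power(A0, k) for k in range(n)])
S_inv = np.linalg.inv(S)

A_bar = S @ A0 @ S_inv   # observable canonical form, see (3)
B_bar = S @ B0           # [0, ..., 0, b]^T
C_bar = C0 @ S_inv       # [1, 0, ..., 0]
E_bar = S @ E0           # [0, ..., 0, 1]^T

print("A_bar =\n", np.round(A_bar, 6))
print("B_bar =", B_bar.ravel(), " C_bar =", np.round(C_bar, 6).ravel(), " E_bar =", E_bar.ravel())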

Remark 1.

Assumption 2 is equivalent to the following conditions. The proof can be found in [18].

C_{0}E_{0}=0,\ C_{0}A_{0}E_{0}=0,\ \dots,\ C_{0}A_{0}^{n-2}E_{0}=0,\ C_{0}A_{0}^{n-1}E_{0}\neq 0

According to whether or not the system dynamics are available, we have the following two variants of ESO:

II-A MB-ESO

If the model information, i.e., $-a_{0},-a_{1},\cdots,-a_{n-1},b$ in the matrices $\bar{A}_{0}$ and $\bar{B}_{0}$, is available, we have

\dot{x}=\underbrace{\begin{bmatrix}0&1&\dots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\dots&0&1\\ -a_{0}&\dots&\dots&-a_{n-1}\end{bmatrix}}_{\bar{A}_{0,MB}}x+\underbrace{\begin{bmatrix}0\\ 0\\ \vdots\\ b\end{bmatrix}}_{\bar{B}_{0,MB}}u+\underbrace{\begin{bmatrix}0\\ 0\\ \vdots\\ 1\end{bmatrix}}_{\bar{E}_{0}}\underbrace{d}_{f} (5)

The total disturbance can be represented as:

f=d (6)

where $d$ is the external disturbance and $b$ is the true control gain.

II-B MF-ESO

If the model information, i.e., $-a_{0},-a_{1},\cdots,-a_{n-1},b$ in the matrices $\bar{A}_{0}$ and $\bar{B}_{0}$, is not available, we have

\begin{array}{r@{}l}\dot{x}=&\underbrace{\begin{bmatrix}0&1&\dots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\dots&0&1\\ 0&0&\dots&0\end{bmatrix}}_{\bar{A}_{0,MF}}x+\underbrace{\begin{bmatrix}0\\ 0\\ \vdots\\ b_{0}\end{bmatrix}}_{\bar{B}_{0,MF}}u+\\ &\underbrace{\begin{bmatrix}0\\ \vdots\\ 1\end{bmatrix}}_{\bar{E}_{0}}\underbrace{(-a_{0}x_{1}-\dots-a_{n-1}x_{n}+(b-b_{0})u+d)}_{f}\end{array} (7)

where $-a_{0}x_{1}-\dots-a_{n-1}x_{n}+(b-b_{0})u$ is the internal disturbance (unknown/unmodeled dynamics), $b_{0}$ is the nominal control gain, and $d$ is the external disturbance. In such a case, the total disturbance becomes:

f=-a_{0}x_{1}-\dots-a_{n-1}x_{n}+(b-b_{0})u+d (8)

The ESO treats the total disturbance $f$ as an extended state, such that a Luenberger observer can be designed to estimate both the original system state $x$ and the total disturbance $f$. The augmented dynamic system is as follows:

\begin{cases}\begin{bmatrix}\dot{x}\\ \dot{f}\end{bmatrix}=A\begin{bmatrix}x\\ f\end{bmatrix}+Bu+E\dot{f}\\ y=Cx\end{cases} (9)

where $A=\begin{bmatrix}\bar{A_{0}}&\bar{E_{0}}\\ 0_{1\times n}&0\end{bmatrix}_{(n+1)\times(n+1)}$, $B=\begin{bmatrix}\bar{B_{0}}\\ 0\end{bmatrix}_{(n+1)\times 1}$, $C=[\bar{C_{0}},\ 0]_{1\times(n+1)}$, and $E=[0,\cdots,0,1]_{(n+1)\times 1}^{T}$.

The Luenberger observer has the following form:

\begin{bmatrix}\dot{\hat{x}}\\ \dot{\hat{f}}\end{bmatrix}=A\begin{bmatrix}\hat{x}\\ \hat{f}\end{bmatrix}+Bu+L\left(y-C\begin{bmatrix}\hat{x}\\ \hat{f}\end{bmatrix}\right) (10)

where $\hat{x}$ and $\hat{f}$ are the estimates of $x$ and $f$, and $L$ is the observer gain. We have the following estimation error dynamics:

\dot{e}=(A-LC)e+E\dot{f} (11)

where $e=\begin{bmatrix}x-\hat{x}&f-\hat{f}\end{bmatrix}^{T}$.

Theorem 1.

Under Assumptions 1 and 2, the eigenvalues of $A-LC$ can be placed in the left half of the complex plane to make the estimation error converge [18, 17].

All eigenvalues can be placed at $-\omega_{o}$, where $\omega_{o}$ is called the observer bandwidth of the ESO [19].
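As a concrete illustration of this bandwidth parameterization, the following Python sketch (an illustration under the model-free chained-integrator assumption, not the paper's implementation) computes the gain $L$ that places all eigenvalues of $A-LC$ at $-\omega_{o}$ from the binomial coefficients of $(s+\omega_{o})^{n+1}$, and checks the eigenvalues numerically:

import numpy as np
from math import comb

def eso_gain(n, omega_o):
    """Observer gain placing all (n+1) eigenvalues of A - L*C at -omega_o
    for the model-free augmented system (9) (chained integrator + extended state)."""
    m = n + 1
    # Coefficients of (s + omega_o)^m give L_i = C(m, i) * omega_o**i, i = 1..m
    return np.array([comb(m, i) * omega_o**i for i in range(1, m + 1)])

n, omega_o = 4, 10.0            # fourth-order plant, observer bandwidth 10 rad/s
m = n + 1
A = np.diag(np.ones(m - 1), 1)  # chained integrator with the extended state f
C = np.zeros((1, m)); C[0, 0] = 1.0
L = eso_gain(n, omega_o).reshape(-1, 1)

print("L =", L.ravel())         # [5wo, 10wo^2, 10wo^3, 5wo^4, wo^5] for n = 4
# Eigenvalues cluster at -omega_o (a repeated root is numerically sensitive, so expect a small spread).
print("eigenvalues of A - LC:", np.round(np.linalg.eigvals(A - L @ C), 3))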

III Learning-Enabled ESO

The model-based ESO in (5) and the model-free ESO in (7) can be further expanded as follows:

\begin{array}{r@{}l}&\underbrace{\begin{bmatrix}\underbrace{\begin{matrix}0&1&\dots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\dots&0&1\\ -a_{0}&\dots&\dots&-a_{n-1}\end{matrix}}_{\bar{A}_{0,MB}}&\bar{E_{0}}\\ 0_{1\times n}&0\end{bmatrix}}_{A_{MB}}\begin{bmatrix}\hat{x}\\ \hat{f}\end{bmatrix}+\underbrace{\begin{bmatrix}0\\ \vdots\\ \underbrace{b_{0}+b-b_{0}}_{\bar{B}_{0,MB}}\\ 0\end{bmatrix}}_{B_{MB}}u=\\ &\underbrace{\begin{bmatrix}\underbrace{\begin{matrix}0&1&\dots&0\\ \vdots&\ddots&\ddots&\vdots\\ 0&\dots&0&1\\ 0&\dots&\dots&0\end{matrix}}_{\bar{A}_{0,MF}}&\bar{E_{0}}\\ 0_{1\times n}&0\end{bmatrix}}_{A_{MF}}\begin{bmatrix}\hat{x}\\ \hat{f}\end{bmatrix}+\underbrace{\begin{bmatrix}0\\ \vdots\\ \underbrace{b_{0}}_{\bar{B}_{0,MF}}\\ 0\end{bmatrix}}_{B_{MF}}u+\\ &\begin{bmatrix}\bar{E}_{0}\\ 0\end{bmatrix}(-a_{0}x_{1}-\dots-a_{n-1}x_{n}+(b-b_{0})u)\end{array} (12)
Remark 2.

By incorporating model information, MF-ESO becomes equivalent to MB-ESO.

Remark 3.

The motivation for the learning component is that the model information is learnable from data, which facilitates its incorporation into the observer.

Remark 4.

The learning component may even learn the external disturbance together with the internal disturbance, so that both can be incorporated.

Since the learning component provides a feedforward estimation $\hat{f}_{L}$ of the total disturbance, the ESO can serve as a feedback correction to estimate the residual total disturbance $\Delta\hat{f}$. The combination of the feedforward estimation and the feedback correction is realized as follows:

\begin{bmatrix}\dot{\hat{x}}\\ \dot{\Delta\hat{f}}\end{bmatrix}=A\begin{bmatrix}\hat{x}\\ \Delta\hat{f}\end{bmatrix}+Bu+L\left(y-C\begin{bmatrix}\hat{x}\\ \Delta\hat{f}\end{bmatrix}\right)+\begin{bmatrix}\bar{E}_{0}\\ 0\end{bmatrix}\hat{f}_{L} (13)

Since the learning component is expected to capture the unknown dynamics, we employ a model-free ESO, see Fig. 1. The learning block in Fig. 1 is a function $h_{\theta}(x,u)$ parameterized by $\theta$. To learn the total disturbance (see (8)), we establish a mapping from the input ($\hat{x}$ estimated by the ESO and the control input $u$) to the output $\hat{f}$, where $\hat{f}=\hat{f}_{L}+\Delta\hat{f}$. The total disturbance estimation consists of two parts: 1) the feedforward estimation from the learning component, $\hat{f}_{L}=h_{\theta}(\hat{x},u)$; 2) the feedback correction for the residual disturbance, $\Delta\hat{f}$, by an MF-ESO. To optimize the parameters of the machine learning model, a general regression problem is formulated using the following cost function:

J(\theta)=\frac{1}{2}\sum_{i=1}^{n}(h_{\theta}(\hat{x}^{i},u^{i})-\hat{f}^{i})^{2} (14)

where $n$ is the size of the training data. The details are given in Alg. 1. When the batch is not yet filled, we only run the MF-ESO (see Lines 7-14; the learning component does not yet return optimized parameters).

Input: control input $u$, system output $y$, learning rate $\alpha$, batch size $n$, maximum running time $N_{max}$
Output: total disturbance $\hat{f}$
1:  Initialize:
2:    machine learning input batch $\mathcal{I}^{0}=\emptyset$
3:    disturbance estimation (ESO) batch $\Delta\mathcal{F}^{0}=\emptyset$
4:    machine learning output batch $\mathcal{F}_{L}^{0}=\emptyset$
5:    machine learning model parameters $\theta$
6:    machine learning output $\hat{f}_{L}^{0}=0$
7:  for $i=1$ to $n$ do
8:    Get $\hat{x}^{i}$ and $\Delta\hat{f}^{i}$ by running L-ESO ▷ see (13)
9:    Compute $u^{i}$ ▷ see (22)
10:   $\mathcal{I}^{i}:=[\mathcal{I}^{i-1},[\hat{x}_{1}^{i},\hat{x}_{2}^{i},\dots,\hat{x}_{n}^{i},u^{i},1]^{T}]$
11:   $\Delta\mathcal{F}^{i}:=[\Delta\mathcal{F}^{i-1},\Delta\hat{f}^{i}]$
12:   $\mathcal{F}_{L}^{i}:=[\mathcal{F}_{L}^{i-1},0]$ ▷ append data to the three batches
13:   $\hat{f}_{L}^{i}=0$
14:  end for
15:  for $i=n$ to $N_{max}$ do
16:    Get $\hat{x}^{i}$ and $\Delta\hat{f}^{i}$ by running L-ESO ▷ see (13)
17:    Update $\mathcal{I}^{i}$ ▷ pop oldest datum, push new datum
18:    Update $\Delta\mathcal{F}^{i}$ ▷ pop oldest datum, push new datum
19:    Update $\theta^{i}$ ▷ according to (14)
20:    $\mathcal{F}_{L}^{i}=h_{\theta^{i}}(\mathcal{I}^{i})$
21:    $\hat{f}_{L}^{i}=h_{\theta^{i}}(\hat{x}^{i},u^{i})$
22:    $\hat{f}^{i}=\hat{f}_{L}^{i}+\Delta\hat{f}^{i}$ ▷ compute total disturbance
23:    Compute $u^{i}$ ▷ see (22)
24:  end for
Algorithm 1: L-ESO
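For readers who prefer code, a compact Python skeleton of Alg. 1 is sketched below. It is only an illustration under simplifying assumptions: run_leso_step, compute_control, and read_output are hypothetical placeholders standing in for one discretized update of (13), the control law (22)/(24), and the sensor interface, and the learning component is the linear model later used in Sec. IV.

import numpy as np
from collections import deque

def run_leso_step(u, y):
    """Placeholder for one discretized L-ESO update (13): returns x_hat and Delta f_hat."""
    return np.zeros(4), 0.0   # stub values

def compute_control(x_hat, f_hat, r):
    """Placeholder for the disturbance rejection control law (22) with state feedback (24)."""
    return 0.0                # stub value

def read_output():
    """Placeholder for the measured output y."""
    return 0.0

n_batch, alpha, N_max, r = 32, 1e-3, 1000, 1.0
theta = np.zeros(6)                                 # weights for features [x1..x4, u, 1]
I_batch = deque(maxlen=n_batch)                     # machine learning input batch
F_batch = deque(maxlen=n_batch)                     # total-disturbance label batch
u, f_L = 0.0, 0.0

for i in range(N_max):
    y = read_output()
    x_hat, df_hat = run_leso_step(u, y)             # feedback correction, see (13)
    z = np.concatenate([x_hat, [u, 1.0]])
    I_batch.append(z)
    F_batch.append(f_L + df_hat)                    # label f_hat = f_L + Delta f_hat
    if len(I_batch) == n_batch:                     # batch filled: one gradient step on (14)
        Z, t = np.array(I_batch), np.array(F_batch)
        theta -= alpha * Z.T @ (Z @ theta - t)
        f_L = float(theta @ z)                      # feedforward estimate f_L = h_theta(x_hat, u)
    f_hat = f_L + df_hat                            # total disturbance estimate
    u = compute_control(x_hat, f_hat, r)            # see (22)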

Our framework has superior modularity. The ESO design is just the conventional model-free design; we only need to use the estimation from the ESO to drive the training of our learning component. First, the learning component can serve as an add-on to an existing ESO-based control architecture by adding only a few connections. Second, the learning component is flexible: users can customize it by choosing an appropriate machine learning model, e.g., linear, nonlinear, parametric, or non-parametric.

IV Simulation Results

IV-A Two-Mass-Spring Problem Formulation

Fig. 2 depicts a schematic of a two-mass-spring system from a well-known benchmark control problem [20]. The system includes two masses, $m_{1}$ and $m_{2}$, which can slide freely over a horizontal surface without friction. Note that it has been shown that the frictionless setting is more challenging for controller design [9]. The masses are connected by a light horizontal spring with spring constant $k$. The system is subject to two external disturbance forces $w_{1}$ and $w_{2}$, which act on masses $m_{1}$ and $m_{2}$, respectively. The control signal $u$ is the force applied to mass $m_{1}$. The positions of both $m_{1}$ and $m_{2}$ are measured, and either one can be used as the output to be controlled.

The states of the two-mass-spring system are defined as the displacements and velocities of the two masses. Specifically, the displacement and velocity of mass $m_{1}$ are $x_{1}$ and $x_{3}$, respectively, while the displacement and velocity of mass $m_{2}$ are $x_{2}$ and $x_{4}$, respectively. The dynamics of the system can be represented in the following state-space form:

\begin{split}\begin{bmatrix}\dot{x}_{1}\\ \dot{x}_{2}\\ \dot{x}_{3}\\ \dot{x}_{4}\end{bmatrix}&=\begin{bmatrix}0&0&1&0\\ 0&0&0&1\\ -\frac{k}{m_{1}}&\frac{k}{m_{1}}&0&0\\ \frac{k}{m_{2}}&-\frac{k}{m_{2}}&0&0\end{bmatrix}\begin{bmatrix}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{bmatrix}+\begin{bmatrix}0\\ 0\\ \frac{1}{m_{1}}\\ 0\end{bmatrix}(u+w_{1})+\begin{bmatrix}0\\ 0\\ 0\\ \frac{1}{m_{2}}\end{bmatrix}w_{2}\\ y&=\begin{bmatrix}c_{1}&c_{2}&0&0\end{bmatrix}\begin{bmatrix}x_{1}&x_{2}&x_{3}&x_{4}\end{bmatrix}^{T}\end{split} (15)
Figure 2: Two-mass-spring system with uncertain parameters

A time-varying unknown external disturbance $w_{2}$ acts on mass $m_{2}$; control must be applied to $m_{1}$ so that $x_{2}$ tracks a desired trajectory. For the output $y$, i.e., $x_{2}$, a chained-integrator form is derived by differentiating the output four times. The input and disturbance appear in the last channel of this fourth-order system with $b=\frac{k}{m_{1}m_{2}}$:

y^{(4)}=-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}\ddot{y}+\frac{k}{m_{1}m_{2}}w_{2}+\frac{1}{m_{2}}\ddot{w}_{2}+bu (16)
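The derivation of (16) can be checked symbolically. The following sketch (using sympy; an illustrative check, not part of the original paper) differentiates $y=x_{2}$ four times along the trajectories of (15) with $w_{1}=0$ and confirms the chained-integrator form:

import sympy as sp

t = sp.symbols('t')
m1, m2, k = sp.symbols('m1 m2 k', positive=True)
u, w2 = sp.Function('u')(t), sp.Function('w2')(t)
x1, x2, x3, x4 = [sp.Function(f'x{i}')(t) for i in range(1, 5)]

# Dynamics (15) with w1 = 0
dyn = {x1.diff(t): x3,
       x2.diff(t): x4,
       x3.diff(t): -k/m1*x1 + k/m1*x2 + u/m1,
       x4.diff(t):  k/m2*x1 - k/m2*x2 + w2/m2}

def d_dt(expr):
    """Differentiate expr along the system trajectories."""
    return expr.diff(t).subs(dyn)

y = x2
y4 = y
for _ in range(4):
    y4 = d_dt(y4)                       # y', y'', y''', y''''

ydd = d_dt(d_dt(y))                     # y''
rhs = -k*(m1 + m2)/(m1*m2)*ydd + k/(m1*m2)*w2 + w2.diff(t, 2)/m2 + k/(m1*m2)*u
print(sp.simplify(y4 - rhs))            # 0 -> matches (16) with b = k/(m1*m2)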

IV-B ESO design

The states in the system are:

x=\begin{bmatrix}y&\dot{y}&\ddot{y}&\dddot{y}\end{bmatrix}^{T} (17)

The state-space description of the system is

\begin{cases}\begin{bmatrix}\dot{x}\\ \dot{f}\end{bmatrix}=A\begin{bmatrix}x\\ f\end{bmatrix}+Bu+E\dot{f}\\ y=Cx\end{cases} (18)

IV-B1 Model-free ESO

The state-space model is:

$A_{MF}=\begin{bmatrix}0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\\ 0&0&0&0&0\end{bmatrix}$, $B=\begin{bmatrix}0\\ 0\\ 0\\ b_{0}\\ 0\end{bmatrix}$, $C=\begin{bmatrix}1&0&0&0&0\end{bmatrix}$, $E=\begin{bmatrix}0&0&0&0&1\end{bmatrix}^{T}$. As we can see, the model-free design assumes unknown dynamics, such that the total disturbance $f$ can be represented as:

f=-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}\ddot{y}+\frac{k}{m_{1}m_{2}}w_{2}+\frac{1}{m_{2}}\ddot{w}_{2}+(b-b_{0})u (19)

where $-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}$ is the model parameter information and $b_{0}$ is the nominal control gain. We have

y^{(4)}=f+b_{0}u (20)

where everything besides $b_{0}u$ is considered as the total disturbance (see (16)). It can be verified that such a system satisfies Assumptions 1 and 2. Therefore, an ESO can be designed for the estimation of $f$, see (10).

The observer gain is chosen such that all the eigenvalues of $A_{MF}-LC$ are placed at $-\omega_{o}$ [19], i.e., $L_{MF}=[5\omega_{o}\quad 10\omega_{o}^{2}\quad 10\omega_{o}^{3}\quad 5\omega_{o}^{4}\quad\omega_{o}^{5}]$.

IV-B2 Model-based ESO

The model-based design has the following state-space representation:

$A_{MB}=\begin{bmatrix}0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}&0&1\\ 0&0&0&0&0\end{bmatrix}$, $B=\begin{bmatrix}0\\ 0\\ 0\\ b_{0}\\ 0\end{bmatrix}$, $C=\begin{bmatrix}1&0&0&0&0\end{bmatrix}$, $E=\begin{bmatrix}0&0&0&0&1\end{bmatrix}^{T}$. In contrast to the model-free design above, this design leverages prior knowledge of the dynamic model by assuming $-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}$ is known (see (16)). In this case, the total disturbance becomes:

f=\frac{k}{m_{1}m_{2}}w_{2}+\frac{1}{m_{2}}\ddot{w}_{2}+(b-b_{0})u (21)

such that $y^{(4)}=-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}\ddot{y}+f+b_{0}u$.

The observer gain is chosen such that all eigenvalues of $A_{MB}-LC$ are placed at $-\omega_{o}$ [19]. Letting $a=-k\frac{m_{1}+m_{2}}{m_{1}m_{2}}$, the coefficients of $L_{MB}$ are listed in Table I.

Parameters  Values
$L_{MB,1}$  $5\omega_{o}$
$L_{MB,2}$  $a+10\omega_{o}^{2}$
$L_{MB,3}$  $5a\omega_{o}+10\omega_{o}^{3}$
$L_{MB,4}$  $a^{2}+10a\omega_{o}^{2}+5\omega_{o}^{4}$
$L_{MB,5}$  $5a^{2}\omega_{o}+10a\omega_{o}^{3}+\omega_{o}^{5}$
TABLE I: Coefficients of $L_{MB}$

IV-B3 L-ESO

As shown in (19), the internal disturbance has a linearly structured mapping between the input (state and control) and the output (disturbance). Therefore, a linear regression model is a reasonable choice for the learning component, with $h_{\theta}(\cdot)=\theta^{T}\begin{bmatrix}\hat{x}_{1}&\hat{x}_{2}&\hat{x}_{3}&\hat{x}_{4}&u&1\end{bmatrix}^{T}$. Note that, as mentioned before, the learning model can flexibly be linear, nonlinear, parametric, non-parametric, etc. Our contribution is not the complexity of the learning model but the novel design that seamlessly combines machine learning models with an ESO. A batch gradient descent method is used to optimize the cost function. In our experiments, we initialize $\theta$ with all zeros.
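A minimal sketch of this learning component (illustrative Python; the synthetic data, feature values, and step size are assumptions rather than the paper's settings) runs batch gradient descent on (14) for the linear $h_{\theta}$ and checks that it recovers a known linear disturbance mapping:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth": f depends linearly on the features [x1..x4, u, 1].
# theta_true is a made-up vector used only to generate illustrative data.
theta_true = np.array([-1.0, 0.5, -2.0, 0.0, 0.3, 0.1])
Z = rng.standard_normal((50, 6))
Z[:, -1] = 1.0                                # bias feature
f_hat = Z @ theta_true                        # labels standing in for the ESO's estimates

theta = np.zeros(6)                           # zero initialization, as in the paper
alpha = 0.1                                   # learning rate (assumed value)
for _ in range(2000):                         # batch gradient descent on (14)
    grad = Z.T @ (Z @ theta - f_hat) / len(Z) # batch-averaged gradient of the squared error
    theta -= alpha * grad

print(np.round(theta - theta_true, 4))        # ~0: the linear model recovers the mapping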

IV-C Controller Design

The control law for the system (20) can be designed as:

u=\frac{-\hat{f}+u_{0}}{b_{0}} (22)

such that

y^{(4)}=u_{0} (23)

It can be controlled by a state feedback controller

u_{0}=-K\hat{x}=k_{1}(r-\hat{x}_{1})-k_{2}\hat{x}_{2}-k_{3}\hat{x}_{3}-k_{4}\hat{x}_{4} (24)

with control gain $K=\begin{bmatrix}\omega_{c}^{4}&4\omega_{c}^{3}&6\omega_{c}^{2}&4\omega_{c}\end{bmatrix}$, where $\omega_{c}$ is the closed-loop natural frequency [19].
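A minimal sketch of the control law (22)-(24) (a hypothetical helper consistent with the gain parameterization above; the values in the example call are illustrative) is:

import numpy as np

def control_law(x_hat, f_hat, r, omega_c, b0):
    """Disturbance rejection tracking control, (22)-(24): cancel the estimated total
    disturbance, then apply state feedback with bandwidth-parameterized gains."""
    k1, k2, k3, k4 = omega_c**4, 4*omega_c**3, 6*omega_c**2, 4*omega_c   # K in (24)
    u0 = k1 * (r - x_hat[0]) - k2 * x_hat[1] - k3 * x_hat[2] - k4 * x_hat[3]
    return (-f_hat + u0) / b0                                            # (22)

# Example call with omega_c = 1 rad/s and an assumed b0 = 1
u = control_law(np.zeros(4), f_hat=0.0, r=1.0, omega_c=1.0, b0=1.0)
print(u)   # 1.0 (= omega_c**4 * r here)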

IV-D Simulation Results

The system parameters are taken from the benchmark problem [20], i.e., $m_{1}=m_{2}=1$ kg, $k=1$ N/m, $c_{1}=0$, $c_{2}=1$. Tracking a desired trajectory for the position of mass $m_{2}$ is the control objective. A sinusoidal wave with a frequency of 1 rad/s and amplitude 1 is applied in the training phase of L-ESO. After 110 seconds, a step reference is given to all three approaches. A band-limited white noise with noise power $10^{-12}$ is added at the system output. A sinusoidal external disturbance with frequency $\pi/10$ rad/s is applied on $m_{2}$ as $w_{2}$ starting at 150 s. The learning algorithm runs online. The learning phase is designed to emulate the typical operational scenarios of the machine under general conditions, whereas the step response is employed to assess and compare the tracking performance. All the control parameters are set identically for a fair comparison.

The controller bandwidth $\omega_{c}$ and the observer bandwidth $\omega_{o}$ are set to 1 rad/s and 10 rad/s, respectively. The control gain is set to 1. All three approaches share the same settings for a fair comparison.

Figure 3: Tracking performance of MB-ESO, MF-ESO, and L-ESO, plotted from 120 s.
Figure 4: Control signals of MB-ESO, MF-ESO, and L-ESO, plotted from 120 s.

The tracking performance and the control input are shown in Fig. 3 and Fig. 4, respectively.

  1. MB-ESO and L-ESO have similar performance for the step reference tracking after the training phase (see the zoomed-in plot from 126 s to 134 s and the position plot of $m_{2}$ in Fig. 3); both are better than MF-ESO in terms of overshoot percentage (0 vs. 55‰) and settling time (12 s vs. 16 s).

  2. For external disturbance rejection (see the zoomed-in plot from 170 s to 195 s, Fig. 3), L-ESO performs the best. Revisiting (8), if the external disturbance has a linear component, the linear regression model can still capture it, e.g., the rising and falling trends of a sinusoidal external disturbance.

  3. Adding external disturbance information to the observer helps reduce the required bandwidth. In our experiments, we found that MF-ESO and MB-ESO need roughly three times more bandwidth to achieve the same performance as L-ESO.

  4. The control input of L-ESO fluctuates more than those of MF-ESO and MB-ESO, as shown in Fig. 4. This is caused by the measurement noise and the batch gradient descent method chosen to minimize the cost function. It can be smoothed by increasing the batch size in this example.

V Hardware Experiments Results

We conduct physical experiments on our ECP Model 205 torsional testbed [21], see Fig. 5. It is a mechanical system that consists of a flexible vertical shaft connecting two disks, a lower disk and an upper disk. Each disk is equipped with an encoder for position measurement. A DC servo motor drives the lower disk through a belt and pulley system, which provides a 3:1 speed reduction ratio. The system can be used to study the vibration of a torsional two-mass-spring system.

Figure 5: ECP Model 205 torsional testbed

A personal computer with MATLAB® Simulink Desktop Real-Time™ installed is used for computation. The computer is also equipped with a four-channel quadrature encoder input card (NI-PCI6601) and a multi-function analog and digital I/O card (NI-PCI6221). These cards interface with the torsional plant Model 205 for real-time data acquisition and control. The quadrature encoder input card enables the computer to receive position and velocity data from the encoders on the disks of the plant. The multi-function analog and digital I/O card allows the computer to send control signals to the DC servo motor that drives the lower disk.

V-A System Model

Since the MB-ESO, as a baseline approach, needs the dynamics information, we first use the MATLAB® System Identification Toolbox to obtain the transfer function $G(s)=\frac{4.6\times 10^{4}}{s^{4}+1.901s^{3}+1683s^{2}+1812s+0.1032}$.
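As a quick sanity check of the identified model (a numpy sketch, not part of the original workflow), the poles of $G(s)$ can be inspected to locate the resonant mode that the controller must suppress:

import numpy as np

# Denominator of the identified transfer function G(s)
den = [1.0, 1.901, 1683.0, 1812.0, 0.1032]
poles = np.roots(den)
print("poles:", np.round(poles, 4))

# Imaginary part of the dominant oscillatory pole pair (about 41 rad/s for this model)
wd = np.max(np.abs(poles.imag))
print("resonant mode near", round(wd, 1), "rad/s")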

V-B ESO and Controller Design

As this testbed is again a fourth-order dynamic system, the same ESO design pipeline shown before can be applied.

V-C Experiment Results

Tracking a desired trajectory for the upper disk is the control objective. A sinusoidal wave with a frequency of $\pi/2$ rad/s and an amplitude of $0.5\pi$ is applied in the training phase of L-ESO. $\omega_{c}$ and $\omega_{o}$ are set to 90 rad/s and 40 rad/s, respectively. The control gain is $5.5\times 10^{4}$. A trapezoidal profile reference with a final value of $\pi$ is used.

Figure 6: Upper disk position tracking: MB-ESO, MF-ESO, and L-ESO
Figure 7: Control signal for MB-ESO, MF-ESO, L-ESO

From the results illustrated in Fig. 6 and Fig. 7, we have the following observations: 1) L-ESO has the best performance among all the methods after the training phase in terms of overshoot percentage and settling time. The reasons L-ESO outperforms MB-ESO could be imperfections in the system identification or the fact that our approach can learn the internal as well as the external disturbance. 2) The fluctuation of L-ESO's control input lies between those of MF-ESO and MB-ESO, as shown in Fig. 7, which differs from the simulation result. This is because the learning rate is chosen conservatively due to the large noise on the hardware. Also, the trapezoidal reference profile is smoother than the step reference, which is beneficial for learning.

VI CONCLUSIONS

A novel learning-enabled extended state observer, L-ESO, with the capacity to memorize and generalize from past estimated disturbances is proposed in this paper. The machine learning model is seamlessly integrated into the existing disturbance rejection control architecture as a flexible add-on for boosting robustness against unknown and time-varying disturbances. Compared with existing learning-for-control frameworks, our new paradigm does not rely on access to the full states. In addition, the learning is guarded by disturbance rejection, which provides an extra assurance layer to compensate for the imperfections of the machine learning model. The efficacy of the proposed approach has been supported by simulation and hardware experiments. In the future, we will further validate the approach on real robotic testbeds.

References

  • [1] Y. Hori, H. Iseki, and K. Sugiura, “Basic consideration of vibration suppression and disturbance rejection control of multi-inertia system using SFLAC (state feedback and load acceleration control),” IEEE Transactions on Industry Applications, vol. 30, no. 4, pp. 889–896, 1994.
  • [2] S. Zhao and Z. Gao, “An active disturbance rejection based approach to vibration suppression in two-inertia systems,” Asian Journal of control, vol. 15, no. 2, pp. 350–362, 2013.
  • [3] Y. Wang, L. Dong, Z. Chen, M. Sun, and X. Long, “Integrated skyhook vibration reduction control with active disturbance rejection decoupling for automotive semi-active suspension systems,” Nonlinear Dynamics, pp. 1–16, 2024.
  • [4] J. Chen, Y. Hu, and Z. Gao, “On practical solutions of series elastic actuator control in the context of active disturbance rejection,” Advanced Control for Applications: Engineering and Industrial Systems, vol. 3, no. 2, p. e69, 2021.
  • [5] Q. Zheng, Z. Ping, S. Soares, Y. Hu, and Z. Gao, “An active disturbance rejection control approach to fan control in servers,” in 2018 IEEE Conference on Control Technology and Applications (CCTA).   IEEE, 2018, pp. 294–299.
  • [6] J. Han, “From PID to active disturbance rejection control,” IEEE Transactions on Industrial Electronics, vol. 56, no. 3, pp. 900–906, 2009.
  • [7] R. Cui, L. Chen, C. Yang, and M. Chen, “Extended state observer-based integral sliding mode control for an underwater robot with unknown disturbances and uncertain nonlinearities,” IEEE Transactions on Industrial Electronics, vol. 64, no. 8, pp. 6785–6795, 2017.
  • [8] H. Zhang, Y. Li, Z. Li, C. Zhao, F. Gao, F. Xu, and P. Wang, “Extended-state-observer based model predictive control of a hybrid modular DC transformer,” IEEE Transactions on Industrial Electronics, vol. 69, no. 2, pp. 1561–1572, 2021.
  • [9] H. Zhang, S. Zhao, and Z. Gao, “An active disturbance rejection control solution for the two-mass-spring benchmark problem,” in 2016 American Control Conference (ACC).   IEEE, 2016, pp. 1566–1571.
  • [10] C. Fu and W. Tan, “Tuning of linear ADRC with known plant information,” ISA transactions, vol. 65, pp. 384–393, 2016.
  • [11] Y. Hui, R. Chi, B. Huang, and Z. Hou, “Extended state observer-based data-driven iterative learning control for permanent magnet linear motor with initial shifts and disturbances,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 3, pp. 1881–1891, 2021.
  • [12] J. Wang, D. Huang, S. Fang, Y. Wang, and W. Xu, “Model predictive control for ARC motors using extended state observer and iterative learning methods,” IEEE Transactions on Energy Conversion, vol. 37, no. 3, pp. 2217–2226, 2022.
  • [13] J. Zhang and D. Meng, “Improving tracking accuracy for repetitive learning systems by high-order extended state observers,” IEEE Transactions on Neural Networks and Learning Systems, 2022.
  • [14] P. Kicki, K. Łakomy, and K. M. B. Lee, “Tuning of extended state observer with neural network-based control performance assessment,” European Journal of Control, vol. 64, p. 100609, 2022.
  • [15] G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y. Yue, and S.-J. Chung, “Neural lander: Stable drone landing control using learned dynamics,” in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 9784–9790.
  • [16] B. Guo and Z. Zhao, “On the convergence of an extended state observer for nonlinear systems with uncertainty,” Systems & Control Letters, vol. 60, no. 6, pp. 420–430, 2011.
  • [17] W. Bai, S. Chen, Y. Huang, B. Guo, and Z. Wu, “Observers and observability for uncertain nonlinear systems: A necessary and sufficient condition,” International Journal of Robust and Nonlinear Control, vol. 29, no. 10, pp. 2960–2977, 2019.
  • [18] J. Chen, Z. Gao, Y. Hu, and S. Shao, “A general model-based extended state observer with built-in zero dynamics,” arXiv preprint arXiv:2208.12314, 2023.
  • [19] Z. Gao, “Scaling and bandwidth-parameterization based controller tuning,” in Proceedings of the 2003 American Control Conference, 2003.   IEEE, 2003, pp. 4989–4996.
  • [20] B. Wie and D. S. Bernstein, “Benchmark problems for robust control design,” Journal of Guidance, Control, and Dynamics, vol. 15, no. 5, pp. 1057–1059, 1992.
  • [21] ECP Systems, “Torsional plant (Model 205),” http://www.ecpsystems.com/controls_torplant.htm [Accessed: 3-23-2024].