Event-Triggered Optimal Attitude Consensus
of Multiple Rigid Body Networks with
Unknown Dynamics
Abstract
In this paper, an event-triggered Reinforcement Learning (RL) method is proposed for the optimal attitude consensus of multiple rigid body networks with unknown dynamics. Firstly, the consensus error is constructed through the attitude dynamics. According to the Bellman optimality principle, the implicit form of the optimal controller and the corresponding Hamilton-Jacobi-Bellman (HJB) equations are obtained. Owing to the augmented system, the optimal controller can be obtained directly without relying on the system dynamics. Secondly, the self-triggered mechanism is applied to reduce the computing and communication burden when updating the controller. In order to address the problem that the HJB equations are difficult to solve analytically, an RL method which only requires measurement data at the event-triggered instants is proposed. For each agent, only one neural network is designed to approximate the optimal value function, and each neural network is updated only at the event-triggered instants. Meanwhile, the closed-loop system is shown to be Uniformly Ultimately Bounded (UUB), and the Zeno behavior is avoided. Finally, the simulation results on a multiple rigid body network demonstrate the validity of the proposed method.
Index Terms:
Optimal attitude consensus, multiple rigid body networks, event-triggered control, reinforcement learning
I Introduction
Consensus control, as a fundamental form of coordination problem in multi-agent systems, aims to design a control protocol for each agent to drive the states of all agents to be synchronized [1]–[3, 5]. Over recent decades, the attitude consensus problem of multiple rigid body networks has received increasing attention [4] because it plays a significant role in the development of many fields, such as formation in three-dimensional space [6], [7], cooperation of multi-manipulators [8] and satellite networks [9]. At present, some results have been proposed, which can be classified into two categories: the leaderless attitude consensus [10]–[12] and the leader-follower attitude consensus [13]–[15]. Note that none of them have considered the performance cost when achieving the attitude consensus.
In practical application scenarios, the performance cost is a factor that must be considered, since it affects the efficiency of mission completion and the endurance of limited resources. Optimal attitude consensus control not only drives the attitudes of all rigid body systems to be synchronized, but also minimizes the performance cost. In general, the optimal control problem can be transformed into solving the Hamilton-Jacobi-Bellman (HJB) equations. Nevertheless, it is very difficult to find the analytic solutions to the HJB equations. With the popularity of reinforcement learning technology [16, 17, 18] and the rapid development of the computing capacity of processors, some reinforcement learning based studies on solving the optimal consensus problem have emerged. As far as we know, the vast majority of existing results deal with linear systems [19]–[21] or first-order nonlinear systems [22]. Among the above results, the knowledge of system dynamics is required in [19], [22], while the methods in [20] and [21] circumvent the dependence on system dynamics. However, the implementation of the algorithms in [20] and [21] requires the acquisition of measurement data in advance and involves many tedious integration operations, which obviously increases the computational burden of the system [46]. At present, there are relatively few results concentrating on reinforcement learning based methods for the optimal attitude consensus of multiple rigid body networks. In [23], a model-free algorithm is proposed to deal with the optimal consensus for multiple rigid body networks, in which the model of each rigid body is expressed in the form of the Euler-Lagrange equation. However, an extra neural network-based observer is designed to estimate the system dynamics, which imposes an additional computational burden. Motivated by these factors, we aim to design a method that only needs real-time measurement data to achieve the optimal attitude consensus of multiple rigid body networks with unknown dynamics.
Updating the controller and the neural networks at each sampling instant based on the reinforcement learning method takes up considerable computing and communication resources, especially when the system scale is large. Therefore, it is particularly necessary to integrate the event-triggered mechanism into the reinforcement learning method to reduce the consumption of resources. In recent years, the event-triggered control scheme has been widely studied to save control cost and energy resources [41, 42]. In [24]–[27], the event-triggered mechanism is introduced to solve the optimal control of an individual system. The optimal consensus of multi-agent systems is considered in [28]–[30] by using the event-triggered reinforcement learning method. However, the event-triggered conditions in all of the above event-triggered reinforcement learning methods [24]–[30] rely on continuous state information. Therefore, all agents need to obtain the state information of themselves and their neighbors in real time to determine whether the event-triggered condition is satisfied, which inevitably increases the consumption of communication resources. Inspired by [31]–[33], we aim to design an event-triggered reinforcement learning method under the self-triggered mechanism, thereby greatly reducing the consumption of computing and communication resources. Compared with the common linear systems and first-order nonlinear systems [19]–[22], it is challenging to combine the self-triggered mechanism with the reinforcement learning method to solve the optimal attitude consensus problem of multiple rigid body networks, since the dynamic model of a rigid body is a second-order system with state-coupled characteristics and the underlying attitude configuration space is non-Euclidean.
In this paper, we deal with the optimal attitude consensus problem for multiple rigid body networks with unknown system dynamics. The dynamic event-triggered mechanism is first introduced, which can significantly reduce the consumption of computing resources caused by updating the controller. Based on the discussion of the dynamic event-triggered condition, a sufficient self-triggered condition is proposed. Under the self-triggered mechanism, continuous communication between rigid bodies can be avoided. Moreover, a reinforcement learning method is used to obtain the optimal policy. In detail, each rigid body needs only one neural network to approximate the optimal value function because of the existence of the augmented system [43]. Each neural network is updated only when the self-triggered condition is violated. The main contributions are as follows:
1) By using only the measurement data at the event-triggered instants, we achieve the optimal attitude consensus of multiple rigid body networks with unknown system dynamics. No additional actor neural network [20], [21] or additional neural network-based observer [23] is used in this paper, which obviously reduces the complexity of the algorithm implementation.
2) Compared with the results in [23] and [34], both a dynamic event-triggered condition and a self-triggered condition are integrated into the proposed reinforcement learning based method to solve the optimal attitude consensus of multiple rigid body networks. Under the self-triggered mechanism, the neural networks are updated only at the event-triggered instants. Meanwhile, the continuous communication is also avoided. Therefore, the consumption of computing and communication resources would be greatly reduced.
The remainder of this paper is organized as follows. In Section II, we introduce the notations used in this paper and the basics of graph theory. The model-free optimal attitude consensus problem is described in Section III. Meanwhile, the event-triggered mechanism is also introduced. We design an event-triggered reinforcement learning method in Section IV. The feasibility of this method is verified through a simulation in Section V. Section VI gives the conclusion.
II Preliminaries
II-A Notations
Throughout this paper, $\mathbb{R}$ represents the set of all real numbers, $\mathbb{R}^{+}$ represents the set of all positive real numbers, $\mathbb{N}$ represents the set of all non-negative integers, and $\mathbb{N}^{+}$ is the set of all positive integers, i.e., $\mathbb{R}^{+} = (0, +\infty)$, $\mathbb{N} = \{0, 1, 2, \ldots\}$, and $\mathbb{N}^{+} = \{1, 2, \ldots\}$. $\mathbb{R}^{n}$ indicates the set of $n$-dimensional vectors, $I_{n}$ indicates the $n$-dimensional identity matrix, and $\mathbb{R}^{n \times m}$ indicates the set of $n \times m$ dimensional matrices. For a vector $x \in \mathbb{R}^{n}$, its Euclidean norm is defined as $\|x\| = \sqrt{x^{T}x}$. For a square matrix $A \in \mathbb{R}^{n \times n}$, its trace is defined as $\mathrm{tr}(A)$, and its Frobenius norm is defined as $\|A\|_{F} = \sqrt{\mathrm{tr}(A^{T}A)}$. Define $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ as the minimum eigenvalue and maximum eigenvalue of $A$, respectively. $A > 0$ ($A \geq 0$) indicates that $A$ is positive (semi-positive) definite.
For any two vectors $a = [a_{1}, a_{2}, a_{3}]^{T}$ and $b = [b_{1}, b_{2}, b_{3}]^{T}$, their cross product is expressed as follows:
$a \times b = a^{\times} b$, where $a^{\times} = \begin{bmatrix} 0 & -a_{3} & a_{2} \\ a_{3} & 0 & -a_{1} \\ -a_{2} & a_{1} & 0 \end{bmatrix}$ denotes the skew-symmetric matrix of $a$.
II-B Graph Theory
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ represent the directed communication graph among $N$ rigid bodies, where $\mathcal{V} = \{1, 2, \ldots, N\}$ indicates the set of all rigid bodies and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ indicates the communication relationship between any two rigid bodies. For the rigid body $i$, we use $\mathcal{N}_{i}$ to represent the set of its neighbors. For any two rigid bodies, if there is always a directed path between them, we call this communication graph strongly connected. In this paper, we suppose that all directed communication graphs are strongly connected.
In order to express the communication relationship between all rigid bodies more clearly, the weighted adjacency matrix $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is introduced. If the rigid body $i$ can receive the data transmitted by the rigid body $j$, then $a_{ij} > 0$, and $a_{ij} = 0$ otherwise. The in-degree matrix of the directed communication graph can be expressed as $\mathcal{D} = \mathrm{diag}\{d_{1}, \ldots, d_{N}\}$, where $d_{i} = \sum_{j \in \mathcal{N}_{i}} a_{ij}$. Let $\mathcal{L} = \mathcal{D} - \mathcal{A}$ represent the Laplacian matrix, where $l_{ii} = d_{i}$, and $l_{ij} = -a_{ij}$ when $i \neq j$.
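For illustration, the following sketch (Python with NumPy; the ring topology is our own example, not the one used later in the simulation) builds the in-degree matrix and the Laplacian from a weighted adjacency matrix as defined above:

```python
import numpy as np

def graph_matrices(adjacency):
    """Build the in-degree matrix D and the Laplacian L = D - A from a
    weighted adjacency matrix A, where a_ij > 0 iff rigid body i receives
    data transmitted by rigid body j."""
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))   # d_i = sum_{j in N_i} a_ij
    L = D - A                    # l_ii = d_i, l_ij = -a_ij for i != j
    return D, L

# Example: a strongly connected ring of three rigid bodies (1<-3, 2<-1, 3<-2).
A = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
D, L = graph_matrices(A)
print(L.sum(axis=1))  # row sums are zero, as required for a Laplacian
```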
III Model-Free Event-Triggered Optimal Attitude Consensus
III-A Model-Free Optimal Attitude Consensus
We consider a multiple rigid body network with $N$ nodes, where the attitude of each node can be expressed by Modified Rodriguez Parameters (MRPs) [35]. For the rigid body $i$, the attitude is represented by $\sigma_{i} = \hat{n}_{i}\tan(\phi_{i}/4) \in \mathbb{R}^{3}$, where $\hat{n}_{i} \in \mathbb{R}^{3}$ indicates the Euler axis, and $\phi_{i}$ denotes the rotation angle about the Euler axis.
Then, the attitude dynamics of each rigid body is given in the following form:
$\dot{\sigma}_{i} = G(\sigma_{i})\,\omega_{i}$ (1a)
$J_{i}\,\dot{\omega}_{i} = -\,\omega_{i} \times J_{i}\,\omega_{i} + u_{i}$ (1b)
where $\omega_{i} \in \mathbb{R}^{3}$, $J_{i} \in \mathbb{R}^{3 \times 3}$ and $u_{i} \in \mathbb{R}^{3}$ indicate the angular velocity vector, the inertia matrix and the control input torque, respectively. The matrix $G(\sigma_{i}) = \frac{1}{4}\big[(1 - \sigma_{i}^{T}\sigma_{i})\, I_{3} + 2\,\sigma_{i}^{\times} + 2\,\sigma_{i}\sigma_{i}^{T}\big]$, where $\sigma_{i}^{\times}$ is the skew-symmetric matrix of $\sigma_{i}$.
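As a concrete illustration of the kinematics (1a) and dynamics (1b), the following sketch integrates one explicit Euler step (assuming the standard MRP kinematics matrix from [35]; the inertia values and states are placeholders of our own):

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def G(sigma):
    """MRP kinematics matrix of Eq. (1a), cf. [35]."""
    s = np.asarray(sigma, dtype=float)
    return 0.25 * ((1.0 - s @ s) * np.eye(3) + 2.0 * skew(s) + 2.0 * np.outer(s, s))

def rigid_body_step(sigma, omega, u, J, dt):
    """One explicit Euler step of the attitude dynamics (1a)-(1b)."""
    sigma_dot = G(sigma) @ omega
    omega_dot = np.linalg.solve(J, -np.cross(omega, J @ omega) + u)
    return sigma + dt * sigma_dot, omega + dt * omega_dot

# Placeholder inertia and initial states, for illustration only.
J = np.diag([1.0, 0.8, 0.5])
sigma, omega = np.array([0.1, -0.2, 0.3]), np.zeros(3)
sigma, omega = rigid_body_step(sigma, omega, np.array([0.01, 0.0, 0.0]), J, 0.01)
```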
Definition 1: Given that the communication topology of a multiple rigid body network (1) is strongly connected, the attitude consensus is said to be achieved when the following conditions hold:
$\lim_{t \to \infty} \|\sigma_{i}(t) - \sigma_{j}(t)\| = 0$ (2a)
$\lim_{t \to \infty} \|\omega_{i}(t) - \omega_{j}(t)\| = 0, \quad \forall i, j \in \mathcal{V}$ (2b)
Considering the communication topology among these rigid bodies, we can define the following form of consensus error for the rigid body $i$:
(3)
where $a_{ij}$ is the $(i, j)$ entry of the weighted adjacency matrix $\mathcal{A}$. When $e_{i} = 0$ holds for all $i \in \mathcal{V}$, we can easily obtain that $\sigma_{i} = \sigma_{j}$ and $\omega_{i} = \omega_{j}$ for all $i, j \in \mathcal{V}$ with the strongly connected communication topology. That is to say, the attitude consensus is achieved.
The dynamics of $e_{i}$ can be obtained by taking the derivative of Eq. (3), which is described as follows:
(4)
where the right-hand side depends on the unknown attitude dynamics (1) of the rigid body $i$ and of its neighbors.
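To make the neighborhood error concrete, the following minimal sketch assumes that Eq. (3) stacks the weighted relative attitudes and angular velocities (this stacked form is our assumption; only the vanishing property discussed above is used):

```python
import numpy as np

def consensus_error(i, sigma, omega, A):
    """Neighborhood consensus error of rigid body i under the assumed
    stacked form e_i = sum_j a_ij [(sigma_i - sigma_j); (omega_i - omega_j)].
    On a strongly connected graph, e_i = 0 for all i implies (2a)-(2b).
    sigma, omega : arrays of shape (N, 3); A : adjacency matrix (N, N)."""
    e = np.zeros(6)
    for j in range(A.shape[0]):
        if A[i, j] > 0.0:
            e[:3] += A[i, j] * (sigma[i] - sigma[j])
            e[3:] += A[i, j] * (omega[i] - omega[j])
    return e
```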
In order to overcome the dependence on model information, a compensator is introduced, which can be expressed by the following affine differential equation:
$\dot{u}_{i} = f(u_{i}) + g(u_{i})\,\mu_{i}$ (5)
where $f(\cdot)$ and $g(\cdot)$ are two functions to be designed later, and $\mu_{i}$ is the control input of the compensator. We need to choose appropriate functions $f$ and $g$ to ensure that the compensator is controllable. In this paper, a feasible pair of $f$ and $g$ is given as follows:
(6a)
(6b)
By combining the consensus error $e_{i}$ and the control input torque $u_{i}$, we define an augmented consensus error, which is expressed as $z_{i} = [e_{i}^{T}, u_{i}^{T}]^{T}$. According to Eq. (4) and Eq. (5), we can use the following augmented system to describe the dynamics of the augmented consensus error:
$\dot{z}_{i} = F_{i}(z_{i}) + G_{i}\,\mu_{i}$ (7)
where $F_{i}$ denotes the drift term collecting the right-hand sides of Eq. (4) and Eq. (5), and $G_{i}$ denotes the input matrix determined by the designed function $g(u_{i})$ in Eq. (5).
Assumption 1: The matrix $G_{i}$ is bounded, i.e., $\|G_{i}\| \leq \bar{g}_{i}$ is satisfied, where $\bar{g}_{i} \in \mathbb{R}^{+}$.
In order to measure the performance cost of implementing the attitude consensus, a performance function is defined in the following form:
$P_{i}\big(z_{i}(0), \mu_{i}, \mu_{-i}\big) = \int_{0}^{\infty} \big( z_{i}^{T} Q_{i}\, z_{i} + \mu_{i}^{T} R_{ii}\,\mu_{i} + \sum_{j \in \mathcal{N}_{i}} \mu_{j}^{T} R_{ij}\,\mu_{j} \big)\, dt$ (8)
where $\mu_{-i}$ indicates the set of control inputs for the neighbors of the rigid body $i$, $Q_{i} > 0$, $R_{ii} > 0$, and $R_{ij} \geq 0$. According to Eq. (7), we can conclude that $z_{i}$ is driven by $\mu_{i}$ and $\mu_{-i}$, since the neighbors' torques enter the consensus error dynamics (4). Therefore, the left side of Eq. (8) also contains $\mu_{-i}$.
According to (8), the value function can be defined as
$V_{i}(z_{i}(t)) = \int_{t}^{\infty} \big( z_{i}^{T} Q_{i}\, z_{i} + \mu_{i}^{T} R_{ii}\,\mu_{i} + \sum_{j \in \mathcal{N}_{i}} \mu_{j}^{T} R_{ij}\,\mu_{j} \big)\, d\tau$ (9)
By taking the derivative of Eq. (9), we can obtain the Hamiltonian function in the following form:
$H_{i}\big(z_{i}, \mu_{i}, \mu_{-i}, \nabla V_{i}\big) = z_{i}^{T} Q_{i}\, z_{i} + \mu_{i}^{T} R_{ii}\,\mu_{i} + \sum_{j \in \mathcal{N}_{i}} \mu_{j}^{T} R_{ij}\,\mu_{j} + \nabla V_{i}^{T}\, \dot{z}_{i}$ (10)
where $\nabla V_{i} = \partial V_{i} / \partial z_{i}$ denotes the gradient of the value function with respect to $z_{i}$.
The implicit solution of the model-free optimal controller can be derived from $\partial H_{i} / \partial \mu_{i} = 0$, which is represented as
$\mu_{i}^{*} = -\frac{1}{2}\, R_{ii}^{-1}\, G_{i}^{T}\, \nabla V_{i}^{*}$ (11)
where $V_{i}^{*}$ indicates the optimal value function and $\nabla V_{i}^{*} = \partial V_{i}^{*} / \partial z_{i}$.
From Eq. (11), we can observe that the optimal controller contains $G_{i}$ and $\nabla V_{i}^{*}$. According to the definition of the augmented system (7), $G_{i}$ is determined by the designed compensator, which overcomes the dependence on system dynamics. Therefore, we only need to obtain the optimal value function $V_{i}^{*}$ from the corresponding HJB equation to achieve the optimal controller.
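For reference, evaluating the stationarity condition behind Eq. (11) amounts to a single linear solve; the sketch below (the helper name is ours, and the quadratic cost (8) is assumed) shows this:

```python
import numpy as np

def optimal_control(grad_V, G_i, R_ii):
    """Evaluate mu* = -1/2 R_ii^{-1} G_i^T dV*/dz_i from Eq. (11).
    grad_V : gradient of the optimal value function at z_i, shape (n,)
    G_i    : known input matrix of the augmented system (7), shape (n, m)
    R_ii   : positive definite control weight, shape (m, m)."""
    return -0.5 * np.linalg.solve(R_ii, G_i.T @ grad_V)
```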
III-B Dynamic Event-Triggered Mechanism
For the purpose of reducing the computational burden when updating the controller, the event-triggered mechanism is introduced. Under the event-triggered mechanism, we only update the controller at a series of discrete instants $\{t_{k}^{i}\}_{k \in \mathbb{N}}$, where $t_{k+1}^{i} > t_{k}^{i}$ holds with $k \in \mathbb{N}$ and the initial instant is set as $t_{0}^{i} = 0$.
We define the difference between the augmented consensus error at the last event-triggered instant and its real-time value as the measurement error $\epsilon_{i}(t)$, which is expressed as
$\epsilon_{i}(t) = z_{i}(t_{k}^{i}) - z_{i}(t), \quad t \in [t_{k}^{i}, t_{k+1}^{i})$ (13)
For the rigid body $i$, the controller is only updated at the event-triggered instants $t_{k}^{i}$, and remains unchanged until a new event is triggered. During the event-triggered intervals $[t_{k}^{i}, t_{k+1}^{i})$, a neighbor $j$ of the rigid body $i$ might update its controller at some instants $t_{k'}^{j}$, so the neighbor's controller also performs as a piecewise constant signal. Letting $\{t_{k'}^{j}\}_{k' \in \mathbb{N}}$ indicate the event-triggered instants of the neighbor $j$, we have $t_{k'}^{j} \geq t_{k}^{i}$ and $t_{k'}^{j} < t_{k+1}^{i}$ for the events of the neighbor $j$ occurring within this interval.
Therefore, the $z_{i}$-dynamics (7) during $[t_{k}^{i}, t_{k+1}^{i})$ is represented as follows:
$\dot{z}_{i} = F_{i}(z_{i}) + G_{i}\,\mu_{i}\big(z_{i}(t_{k}^{i})\big)$ (14)
where $\mu_{i}(z_{i}(t_{k}^{i}))$ and $\mu_{j}(z_{j}(t_{k'}^{j}))$ are the control input vectors at the event-triggered instants.
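The zero-order hold implied by (14) can be illustrated with a self-contained scalar toy example (the plant, the gain, and the static threshold below are our own choices, not the paper's condition (19)):

```python
# Toy zero-order-hold loop: the control computed at the latest triggering
# instant is applied unchanged until the next event.
dt = 0.01
x = 1.0
x_trig = x                  # state sampled at the last event, cf. (13)
u_held = -2.0 * x_trig      # controller computed at the initial instant
for k in range(500):
    if abs(x_trig - x) > 0.1 * abs(x):   # illustrative static trigger rule
        x_trig = x                       # sample the state at t_k
        u_held = -2.0 * x_trig           # update the controller at t_k only
    x += dt * (x + u_held)               # plant x_dot = x + u with held input
print(x)  # decays toward zero despite the piecewise-constant input
```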
Definition 2 (Event-Triggered Admissible Control): If the following conditions are met: $\mu_{i}(z_{i}(t_{k}^{i}))$ is piecewise continuous, $\mu_{i}(0) = 0$, the $z_{i}$-dynamics (14) is stable, and the performance function (8) is finite, then $\mu_{i}$ is called an event-triggered admissible control.
Depending on the form of the optimal controller (11), we can obtain the event-triggered optimal controller as follows:
$\mu_{i}^{*}\big(z_{i}(t_{k}^{i})\big) = -\frac{1}{2}\, R_{ii}^{-1}\, G_{i}^{T}\, \nabla V_{i}^{*}(z_{i}) \big|_{z_{i} = z_{i}(t_{k}^{i})}$ (15)
where $t \in [t_{k}^{i}, t_{k+1}^{i})$.
By combining (10) and (15), the event-triggered HJB equation for the rigid body $i$ is given as follows:
$z_{i}^{T} Q_{i}\, z_{i} + \mu_{i}^{*T}\big(z_{i}(t_{k}^{i})\big)\, R_{ii}\, \mu_{i}^{*}\big(z_{i}(t_{k}^{i})\big) + \sum_{j \in \mathcal{N}_{i}} \mu_{j}^{*T} R_{ij}\, \mu_{j}^{*} + \nabla V_{i}^{*T}\, \dot{z}_{i} = 0$ (16)
The following assumption is proposed for proving the stability of system (14).
Assumption 2 [36]: The controller $\mu_{i}$ is Lipschitz continuous during the time interval $[t_{k}^{i}, t_{k+1}^{i})$, and there exists a constant $L_{i}$ that satisfies the following inequality:
$\big\| \mu_{i}\big(z_{i}(t)\big) - \mu_{i}\big(z_{i}(t_{k}^{i})\big) \big\| \leq L_{i}\, \|\epsilon_{i}(t)\|$ (17)
where $L_{i} \in \mathbb{R}^{+}$ indicates the Lipschitz constant. In actual engineering applications, the selected $L_{i}$ should not be smaller than the maximum value of the ratio of the left-hand side of (17) to $\|\epsilon_{i}(t)\|$.
For the event-triggered control, it is necessary to ensure that the Zeno behavior does not occur in the proposed event-triggered mechanism. Therefore, we introduce a dynamic event-triggered mechanism inspired by [31] to exclude the Zeno behavior implicitly. Firstly, a dynamic variable $\eta_{i}(t)$ is defined as follows:
$\dot{\eta}_{i}(t) = -\lambda_{i}\,\eta_{i}(t) + \kappa_{i}\,\|z_{i}(t)\|^{2} - \|\epsilon_{i}(t)\|^{2}$ (18)
where $\lambda_{i} > 0$, $\kappa_{i} > 0$, and $\eta_{i}(0) > 0$. Then, we can obtain the event-triggered instants through the following dynamic event-triggered condition:
$t_{k+1}^{i} = \inf\big\{ t > t_{k}^{i} : \eta_{i}(t) + \theta_{i}\big(\kappa_{i}\,\|z_{i}(t)\|^{2} - \|\epsilon_{i}(t)\|^{2}\big) \leq 0 \big\}$ (19)
where $\theta_{i}$ will be determined later in the proof analysis.
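For intuition, one sampled-time step of this Girard-style mechanism [31] can be sketched as follows (the gain names are ours, and the exact gains of (18)-(19) are design parameters):

```python
def dynamic_trigger_step(eta, z_norm2, eps_norm2, lam, kappa, theta, dt):
    """Evaluate the dynamic condition (19) and advance the internal
    variable (18) by one explicit Euler step.
    Returns (fire, eta_next): fire is True when an event must be triggered."""
    fire = eta + theta * (kappa * z_norm2 - eps_norm2) <= 0.0
    eta_next = eta + dt * (-lam * eta + kappa * z_norm2 - eps_norm2)
    return fire, eta_next
```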
Lemma 1: Assuming that the event-triggered instants are determined by (19), $\eta_{i}(t) > 0$ always holds for $t \geq 0$ if the given initial value $\eta_{i}(0) > 0$.
Proof: For $t \in [t_{k}^{i}, t_{k+1}^{i})$, the event-triggered condition (19) guarantees the following inequality:
$\kappa_{i}\,\|z_{i}(t)\|^{2} - \|\epsilon_{i}(t)\|^{2} \geq -\frac{\eta_{i}(t)}{\theta_{i}}$ (20)
Since the selection of $\theta_{i}$ must satisfy $\theta_{i} > 0$, the inequality (20) becomes
$\dot{\eta}_{i}(t) \geq -\lambda_{i}\,\eta_{i}(t) - \frac{\eta_{i}(t)}{\theta_{i}}$ (21)
According to the comparison lemma in [37], we can deduce that
$\eta_{i}(t) \geq \eta_{i}(0)\, e^{-(\lambda_{i} + 1/\theta_{i})\, t}$ (23)
Therefore, $\eta_{i}(t) > 0$ always holds for $t \geq 0$. The complete proof is given.
Theorem 1: Consider a multiple rigid body network with $N$ nodes under a strongly connected communication topology. Suppose that Assumption 2 holds, and that the performance function and the event-triggered optimal controller are given by (8) and (15), respectively. If the event-triggered instants are determined by the dynamic event-triggered condition (19), then the following two conclusions can be obtained:
1) The $z_{i}$-dynamics (14) is asymptotically stable, i.e., the optimal attitude consensus is achieved.
2) The Zeno behavior is excluded, i.e., the interval between $t_{k}^{i}$ and $t_{k+1}^{i}$ has a positive lower bound.
Proof: 1) Firstly, we prove that the $z_{i}$-dynamics (14) is asymptotically stable. We choose $W_{i} = V_{i}^{*}(z_{i}) + \eta_{i}(t)$ as the Lyapunov function, which contains the optimal value function in (9) and the dynamic variable governed by (18).
By taking the first-order derivative of $W_{i}$ with respect to $t$ along the trajectory of the consensus error $z_{i}$, we derive
(24) |
During the event-triggered intervals $[t_{k}^{i}, t_{k+1}^{i})$, assume that the neighbors of the rigid body $i$ execute their event-triggered optimal policies $\mu_{j}^{*}$. According to (11) and (16), it can be easily obtained that
(25) |
and
(26) |
Thus, the derivative in (24) can be rewritten as
(27) |
Substituting the dynamic event-triggered condition (19) into (27), $\dot{W}_{i}$ becomes
(29) |
Since $\eta_{i}(t) > 0$ always holds by Lemma 1, we have $\dot{W}_{i} < 0$ if the triggering parameters are selected appropriately. Therefore, we can select appropriate $\lambda_{i}$, $\kappa_{i}$, and $\theta_{i}$ to ensure that the $z_{i}$-dynamics (14) is asymptotically stable under the dynamic event-triggered condition (19).
2) Then, we prove that the Zeno behavior is excluded.
According to (23), we can deduce a sufficient condition of the dynamic event-triggered condition (19) when $\eta_{i}$ is conservatively taken as zero, which is expressed as follows:
$\|\epsilon_{i}(t)\|^{2} \leq \kappa_{i}\,\|z_{i}(t)\|^{2}$ (30)
According to the definition of $\mu_{i}$, we can conclude that $\dot{z}_{i}$ is bounded. That is to say, $\|\dot{z}_{i}\| \leq \bar{z}_{i}$ is satisfied, where $\bar{z}_{i} \in \mathbb{R}^{+}$. With Assumption 1 and the definition of the measurement error $\epsilon_{i}$, we can obtain that for $t \in [t_{k}^{i}, t_{k+1}^{i})$,
$\|\dot{\epsilon}_{i}(t)\| = \|\dot{z}_{i}(t)\| \leq \bar{z}_{i}$ (31)
Since $\epsilon_{i}(t_{k}^{i}) = 0$ is satisfied at the event-triggered instants, we can derive the following inequality by using the comparison lemma [37]:
$\|\epsilon_{i}(t)\| \leq \bar{z}_{i}\,(t - t_{k}^{i})$ (32)
Let $t_{k+1}^{i}$ indicate the next event-triggered instant determined by the sufficient condition (30). According to (30), the analysis during the time interval $[t_{k}^{i}, t_{k+1}^{i})$ can be divided into two situations: 1) there are no events occurring for all rigid bodies in $\mathcal{N}_{i}$, and 2) there exists at least one event for a neighbor of the rigid body $i$.
Situation 1: For all rigid bodies in $\mathcal{N}_{i}$, there are no event-triggered instants during $[t_{k}^{i}, t_{k+1}^{i})$. Therefore, we can deduce the following inequality:
(33)
Situation 2: For a neighbor $j \in \mathcal{N}_{i}$, there exist event-triggered instants during $[t_{k}^{i}, t_{k+1}^{i})$. By using $t_{k'}^{j}$ to indicate these event-triggered instants, we can deduce
(34)
Combining (33) and (34), we can obtain the unified form of the two situations, which is expressed as follows:
(35)
where the involved constant depends on $\bar{z}_{i}$ and the adjacency weights.
Since (30) is a sufficient condition for the dynamic event-triggered condition (19) to remain untriggered, the next event-triggered instant determined by (19) is no earlier than the instant at which (30) is first violated. Hence, we can obtain a positive lower bound on the interval between two adjacent event-triggered instants:
(36)
Therefore, the Zeno behavior can be excluded. The complete proof is given.
III-C Self-Triggered Mechanism
Under the dynamic event-triggered mechanism, we have to obtain the continuous consensus error $z_{i}(t)$ and the continuous measurement error $\epsilon_{i}(t)$ to judge whether the dynamic event-triggered condition (19) is violated. Therefore, it is necessary to continuously communicate with neighbors to obtain their absolute attitude information, or to measure continuous relative attitude information with the help of sensors such as cameras. In order to overcome this problem, a self-triggered condition is proposed in this subsection.
According to (32), the self-triggered measurement error is defined in the following form:
$\bar{\epsilon}_{i}(t) = \bar{z}_{i}\,(t - t_{k}^{i})$ (39)
which is the upper bound of $\|\epsilon_{i}(t)\|$.
Thus, we can obtain a new sufficient condition of the dynamic event-triggered condition (19) as follows:
$\bar{\epsilon}_{i}^{2}(t) \leq \kappa_{i}\,\big(\|z_{i}(t_{k}^{i})\| - \bar{\epsilon}_{i}(t)\big)^{2}$ (40)
According to (39), we can calculate the value of $\bar{\epsilon}_{i}(t)$ without using the continuous information. Therefore, the continuous communication is avoided.
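Because the bound (39) grows linearly in time, the first violation of a condition of the form (40) can be computed in closed form at the triggering instant itself, so no measurements are needed in between; the sketch below uses our reconstructed form of (40) as an assumption:

```python
import numpy as np

def next_trigger_time(t_k, z_k_norm, z_bar, kappa):
    """Self-triggered scheduling: with eps_bar(t) = z_bar*(t - t_k) from (39),
    a condition eps_bar^2 <= kappa*(||z(t_k)|| - eps_bar)^2 (cf. (40)) is
    first violated when eps_bar = sqrt(kappa)*||z(t_k)||/(1 + sqrt(kappa))."""
    sk = np.sqrt(kappa)
    eps_star = sk * z_k_norm / (1.0 + sk)   # critical value of the bound
    return t_k + eps_star / z_bar           # next event, computed in advance
```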
Remark 1: Since the self-triggered condition (40) is a sufficient condition for the dynamic event-triggered condition (19), the number of triggered times under (40) will be higher than under (19). Our original intention of introducing the self-triggered mechanism is to reduce the consumption of communication resources, at the cost of an inevitable increase in the number of triggered times. That is to say, we trade a small amount of computing resources for a large saving in communication resources.
Remark 2: Compared with the time-triggered methods in [19]–[23], the event-triggered mechanism significantly saves computing resources and communication resources. Note that the event-triggered attitude stabilization problem is studied based on the sliding mode control in [40]; however, the performance cost has not been considered in that controller design.
IV Main Results
Up to now, we have already derived the form of the optimal controller (15), which contains the optimal value function $V_{i}^{*}$. However, it is very difficult to obtain the analytic solutions to the event-triggered HJB equations (16). In this section, we first introduce an event-triggered RL algorithm to obtain the optimal policy. In order to implement the event-triggered RL algorithm online, a critic neural network is used to approximate the optimal value function $V_{i}^{*}$. Only measurement data at the event-triggered instants are needed in the event-triggered RL algorithm, which obviously reduces the computation burden.
IV-A Model-Free Event-Triggered RL Algorithm
This section presents a model-free event-triggered algorithm based on reinforcement learning, which is used to seek the optimal policy. The RL algorithm involves two parts: policy evaluation and policy improvement. By repeating these two steps at the event-triggered instants, the optimal policy is obtained when the policy improvement no longer changes the control policy.
$V_{i}^{k+1}\big(z_{i}(t_{k}^{i})\big) = \int_{t_{k}^{i}}^{t_{k+1}^{i}} \big( z_{i}^{T} Q_{i}\, z_{i} + \mu_{i}^{kT} R_{ii}\,\mu_{i}^{k} + \sum_{j \in \mathcal{N}_{i}} \mu_{j}^{T} R_{ij}\,\mu_{j} \big)\, d\tau + V_{i}^{k+1}\big(z_{i}(t_{k+1}^{i})\big)$ (41)
$\mu_{i}^{k+1} = -\frac{1}{2}\, R_{ii}^{-1}\, G_{i}^{T}\, \nabla V_{i}^{k+1}$ (42)
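The evaluate/improve cycle above can be made tangible on a toy problem; the runnable sketch below applies the same two steps to a scalar linear-quadratic system (our own illustration, not the rigid body setting or Algorithm 1 itself):

```python
# Policy iteration on x_dot = a*x + b*u with cost q*x^2 + r*u^2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
k = 3.0                                    # initial admissible gain (a - b*k < 0)
for _ in range(20):
    # Policy evaluation: V(x) = p*x^2 under u = -k*x solves the Lyapunov
    # equation 2*p*(a - b*k) + q + r*k^2 = 0, playing the role of (41).
    p = (q + r * k * k) / (2.0 * (b * k - a))
    # Policy improvement: u = -(1/2) r^{-1} b dV/dx => k = b*p/r, cf. (42).
    k_new = b * p / r
    if abs(k_new - k) < 1e-10:             # improvement no longer changes policy
        break
    k = k_new
print(p, k)  # converges to the Riccati solution p = 1 + sqrt(2) and k = p
```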
Next, we give a theorem to show the convergence of the model-free event-triggered RL algorithm.
Theorem 2: Suppose that each agent updates the control policy according to Algorithm 1. Then the value function converges to the optimal value function, i.e., $\lim_{k \to \infty} V_{i}^{k} = V_{i}^{*}$, and the control policy converges to the optimal control policy, i.e., $\lim_{k \to \infty} \mu_{i}^{k} = \mu_{i}^{*}$.
Proof: By transforming Eq. (41), the following equation holds:
(45) |
In the model-free event-triggered RL algorithm, we update the control policy of the rigid body $i$ when the self-triggered condition (40) is violated. Under the distributed asynchronous update pattern, the control policy of the rigid body $j$ remains invariant, where $j \in \mathcal{N}_{i}$. Considering the trajectory of the consensus error driven by $\mu_{i}^{k+1}$, we have
(46) |
According to Eq. (42), it can be easily obtained that
(47) |
Then, Eq. (46) becomes
(48) |
Therefore, $V_{i}^{k+1}(z_{i}) \leq V_{i}^{k}(z_{i})$ is always satisfied. According to the Weierstrass theorem [38], the positive definite value function $V_{i}^{k}$ converges to the optimal value function with $\lim_{k \to \infty} V_{i}^{k} = V_{i}^{*}$. Meanwhile, the control policy converges to the optimal control policy $\mu_{i}^{*}$. The complete proof is given.
IV-B Implementation of Event-Triggered PI Algorithm
In this section, we implement Algorithm 1 by using a critic neural network to approximate the optimal value function $V_{i}^{*}$.
We first define the following critic neural network for each agent:
$\hat{V}_{i}(z_{i}) = \hat{W}_{i}^{T}\, \phi_{i}(z_{i})$ (49)
where $\hat{W}_{i}$ indicates the critic estimated weight at the event-triggered instant $t_{k}^{i}$, and $\phi_{i}(\cdot)$ indicates the critic activation function.

For a given event-triggered admissible controller $\mu_{i}$, the update rule of $\hat{W}_{i}$ is to minimize the following objective function:
$E_{i} = \frac{1}{2}\, e_{i}^{cT}\, e_{i}^{c}$ (51)
According to the gradient descent method, we can derive the update law of the following form:
$\dot{\hat{W}}_{i}(t) = 0, \quad t \in (t_{k}^{i}, t_{k+1}^{i})$ (52a)
$\hat{W}_{i}\big(t_{k+1}^{i,+}\big) = \hat{W}_{i}\big(t_{k+1}^{i}\big) - \alpha_{i}\, \frac{\partial E_{i}}{\partial \hat{W}_{i}} \Big|_{t = t_{k+1}^{i}}$ (52b)
where $\alpha_{i} \in \mathbb{R}^{+}$ indicates the learning rate of the critic NN, and $e_{i}^{c}$ denotes the residual of the event-triggered Bellman equation (41) under the approximation (49).
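One concrete instance of such a gradient step, written against the Bellman residual of (41) under the approximation (49), is sketched below (the normalization term is a common stabilizing choice, and the shapes are our assumptions):

```python
import numpy as np

def critic_update(W, phi_k, phi_k1, cost_integral, alpha):
    """One normalized gradient-descent step on E = 0.5*residual^2, where
    residual = W^T (phi(z(t_k)) - phi(z(t_{k+1}))) - cost_integral is the
    event-sampled Bellman residual of (41) under V ~= W^T phi, cf. (52).
    cost_integral is the measured cost accumulated over [t_k, t_{k+1})."""
    dphi = phi_k - phi_k1
    residual = W @ dphi - cost_integral
    return W - alpha * residual * dphi / (1.0 + dphi @ dphi) ** 2
```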
Letting $\tilde{W}_{i} = W_{i} - \hat{W}_{i}$, we can deduce
(53a)
(53b)
where $W_{i}$ denotes the critic target weight, $\tilde{W}_{i}$ is the critic weight error and $\varepsilon_{i}$ is the critic residual error.
Therefore, the optimal controller can be obtained by (42) and (49), which is expressed in the following form:
$\hat{\mu}_{i}\big(z_{i}(t_{k}^{i})\big) = -\frac{1}{2}\, R_{ii}^{-1}\, G_{i}^{T}\, \nabla \phi_{i}^{T}\big(z_{i}(t_{k}^{i})\big)\, \hat{W}_{i}$ (54)
Through the above critic NN framework, we can obtain the optimal controller with only measurement data at the event-triggered instants. Therefore, the need for system dynamics is obviously avoided. In addition, the neural network is only updated at the event-triggered instants $t_{k}^{i}$, which are determined by the self-triggered condition (40).
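Evaluating the approximate controller (54) from the critic estimate then reduces to one matrix-vector chain (the shapes below are our assumptions):

```python
import numpy as np

def approx_optimal_control(W_hat, grad_phi, G_i, R_ii):
    """Controller (54): mu_i ~= -1/2 R_ii^{-1} G_i^T (dphi/dz)^T W_hat,
    evaluated only at the event-triggered instants.
    W_hat    : critic weight estimate, shape (p,)
    grad_phi : Jacobian dphi/dz of the activations, shape (p, n)
    G_i      : known input matrix of the augmented system (7), shape (n, m)."""
    grad_V = grad_phi.T @ W_hat                  # gradient of the critic (49)
    return -0.5 * np.linalg.solve(R_ii, G_i.T @ grad_V)
```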
Assumption 3: In the critic NN framework, the target weight matrix, the activation function, and the critic residual error are bounded by positive constants $\bar{W}_{i}$, $\bar{\phi}_{i}$, and $\bar{\varepsilon}_{i}$, i.e., $\|W_{i}\| \leq \bar{W}_{i}$, $\|\phi_{i}\| \leq \bar{\phi}_{i}$, and $\|\varepsilon_{i}\| \leq \bar{\varepsilon}_{i}$.
Theorem 3: Consider the consensus error dynamics (14), and let the critic neural network be given as (49). If the estimated weight matrix $\hat{W}_{i}$ is updated with (52), then the consensus error $z_{i}$ and the critic estimation error $\tilde{W}_{i}$ are UUB.





Proof: Two different situations are considered, including during the event-triggered intervals and at the event-triggered instants.
Situation 1: During the event-triggered intervals, i.e., $t \in (t_{k}^{i}, t_{k+1}^{i})$.
Consider the Lyapunov function of the following form:
(55) |
where the two terms weight the consensus error and the critic weight error, respectively.
According to (53), we can obtain
(56) |
Therefore, the first-order derivative of can be expressed as follows:
(57) |
In order to ensure $\dot{L}_{i} < 0$, the following inequality should be satisfied:
(58)
where the threshold on the right-hand side is determined by the bounds in Assumption 3.
Hence, the consensus error $z_{i}$ is UUB. During the event-triggered intervals, the critic estimation error remains unchanged, which means $\tilde{W}_{i}$ is also UUB.
Situation 2: At the event-triggered instants, i.e., $t = t_{k}^{i}$.
Choosing the same Lyapunov function as (55), we can obtain:
(59) |
Since the trajectory of $z_{i}$ is continuous across the event-triggered instants, one has
(60) |
Next, according to (53), we have
(61) |
From the definition of $\tilde{W}_{i}$ and the update law (52), we can obtain the following inequalities:
(62)
(63)
where the involved constants follow from the bounds in Assumption 3.
In order to simplify the expression, some auxiliary variables are defined as follows:
Therefore, it can be deduced that the Lyapunov difference is negative when the errors exceed certain bounds, which signifies that $z_{i}$ and $\tilde{W}_{i}$ are UUB at the event-triggered instants.
Combining Situation 1 with Situation 2, it can be proved that the consensus error and the critic estimation error are UUB. The complete proof is given.
Remark 3: The attitude consensus problem of multiple rigid body networks has been widely studied in the literature [12, 13, 14, 15]. However, the attitude consensus protocols proposed in [12, 13, 14, 15] are all based on known rigid body dynamics, which is a major limitation in practical applications. In this work, a model-free RL algorithm is proposed to solve the HJB equation of the optimal attitude consensus of multiple rigid body networks. Moreover, compared with the existing results on the model-free consensus problem of multi-agent networks [20, 21, 23], an event-triggered RL algorithm is proposed, which is further extended to the self-triggered RL algorithm. Based on Algorithm 1, we know that the control update actions and the information interaction among agents are only executed at the triggering instants. Hence, the computation and communication resources can be obviously reduced compared with the continuous-time approaches [20, 21, 23].
V Simulation
This section presents a numerical simulation to verify the effectiveness of the proposed event-triggered reinforcement learning method. We consider a multiple rigid body network with six nodes under the strongly connected communication topology. The communication relationship between any two nodes can be seen in Fig. 1. The Laplacian matrix is selected as follows:




Each rigid body is modeled by the attitude dynamics (1), in which $\sigma_{i}$ indicates the attitude vector, $\omega_{i}$ indicates the angular velocity vector, and $J_{i}$ indicates the inertia matrix. Here, the inertia matrix of each rigid body is selected as follows:
In this simulation, the total duration is set to 40 seconds and the sampling period is 0.01 seconds. The weight matrices $Q_{i}$, $R_{ii}$, $R_{ij}$ and the learning rate $\alpha_{i}$ of the critic NN are selected accordingly. The parameters in the dynamic event-triggered condition (19) are chosen to satisfy the requirements of the foregoing analysis, and the parameters in the self-triggered condition (40) remain the same except for two adjusted gains. The initial states of each rigid body are given by:
The critic activation function is designed as:
By using the model-free event-triggered RL method proposed above, the optimal attitude consensus problem for multiple rigid body networks is solved. Fig. 2(a) shows the norms of the attitude errors and angular velocity errors between the rigid bodies. From Fig. 2(a), we can conclude that the optimal attitude consensus is achieved. The same conclusion can be obtained from Fig. 2(b), which shows the trajectories of the consensus errors. Fig. 3(a) and Fig. 3(b) show the original control inputs of each rigid body and the control inputs of the augmented systems, respectively. It is worth noting that the control inputs of the augmented systems are only updated at the event-triggered instants, and remain unchanged during the event-triggered intervals.
Fig. 4 demonstrates the critic estimated weight matrix of each rigid body. It can be clearly seen that the neural networks are only updated at the event-triggered instants, which obviously reduces the consumption of computing resources. The triggering instants of the dynamic event-triggered control and the self-triggered control are illustrated in Figs. 5(a) and 5(b), respectively. It is clearly shown that the control update actions and the communication frequency are both significantly reduced compared with the continuous-time control approaches [20, 21, 23]. Fig. 6(a) shows the self-triggered measurement errors and their upper bound, which determines the event-triggered instants. Fig. 6(b) represents the triggered times and the minimum triggered interval under the dynamic event-triggered condition and the self-triggered condition, respectively. Since the self-triggered measurement error $\bar{\epsilon}_{i}$ is the upper bound of the measurement error $\|\epsilon_{i}\|$, the triggered times under the self-triggered mechanism are more than those under the dynamic event-triggered mechanism. Therefore, we can conclude that the self-triggered mechanism leads to an inevitable increase in the number of triggered times while avoiding continuous communication with neighbors.
VI Conclusion
In this paper, a model-free event-triggered RL method is proposed to deal with the optimal attitude consensus for multiple rigid body networks, which only requires the measurement data at the event-triggered instants. In order to solve the HJB equations, an event-triggered PI algorithm is proposed to obtain the optimal policy. Meanwhile, the critic NN framework is used to approximate the optimal value function online. The critic neural network is updated only when the event-triggered condition is violated, which greatly reduces the consumption of computing and communication resources. The UUB property of the consensus error and the weight estimation error is proved, and the Zeno behavior is excluded. A numerical simulation for a multiple rigid body network with six nodes shows the feasibility of the proposed method.
In the future, we will further improve this work from the following perspectives. One consideration is to relax the condition on communication topologies, for example from strongly connected graphs to graphs containing directed spanning trees or even switching topologies [44]. Moreover, since actuator failures can happen in real applications of rigid bodies such as intelligent cars and quadrotor aircraft [45], it is well motivated to consider the optimal cooperative control of rigid body systems with actuator failures.
References
- [1] S. Ren, R. Mao and J. Wu, “Passivity-based leader-following consensus control for nonlinear multi-agent systems with fixed and switching topologies,” IEEE Transactions on Network Science and Engineering, vol. 6, no. 4, pp. 844-856, Oct. 2019.
- [2] H. Hong, W. Yu, J. Fu and X. Yu, “A novel class of distributed fixed-time consensus protocols for second-order nonlinear and disturbed multi-agent systems,” IEEE Transactions on Network Science and Engineering, vol. 6, no. 4, pp. 760-772, Oct. 2019.
- [3] D. Chen, X. Liu and W. Yu, “Finite-time fuzzy adaptive consensus for heterogeneous nonlinear multi-agent systems,” IEEE Transactions on Network Science and Engineering, vol. 7, no. 4, pp. 3057-3066, Oct. 2020.
- [4] H. Li, J. Liu, R.W. Liu, N. Xiong, K. Wu, T. Kim, “A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.” Sensors vol. 17, no. 8, pp. 1792, Aug. 2017.
- [5] X. Shi, J. Cao, G. Wen and M. Perc, “Finite-time consensus of opinion dynamics and its applications to distributed optimization over digraph,” IEEE Transactions on Cybernetics, vol. 49, no. 10, pp. 3767-3779, Oct. 2019.
- [6] H. Du, W. Zhu, G. Wen, Z. Duan and J. Lü, “Distributed formation control of multiple quadrotor aircraft based on nonsmooth consensus algorithms,” IEEE Transactions on Cybernetics, vol. 49, no. 1, pp. 342-353, Jan. 2019.
- [7] Z. Li, Y. Tang, T. Huang and J. Kurths, “Formation control with mismatched orientation in multi-agent systems,” IEEE Transactions on Network Science and Engineering, vol. 6, no. 3, pp. 314-325, 1 Jul. 2019.
- [8] X. Jin, W. Du, W. He, L. Kocarev, Y. Tang and J. Kurths, “Twisting-based finite-time consensus for Euler-Lagrange systems with an event-triggered strategy,” IEEE Transactions on Network Science and Engineering, vol. 7, no. 3, pp. 1007-1018, Jul. 2020.
- [9] H. Zhang and P. Gurfil, “Cooperative orbital control of multiple satellites via consensus,” IEEE Transactions on Aerospace and Electronic Systems, vol. 54, no. 5, pp. 2171-2188, Oct. 2018.
- [10] K. Zhang and M. A. Demetriou, “Adaptation of consensus penalty terms for attitude synchronization of spacecraft formation with unknown parameters,” 52nd IEEE Conference on Decision and Control, Florence, pp. 5491-5496, 2013.
- [11] H. Qu, F. Yang, Q. Han and Y. Zhang, “Distributed H∞-consensus filtering for attitude tracking using ground-based radars,” IEEE Transactions on Cybernetics, to be published.
- [12] A. Abdessameud and A. Tayebi, “Attitude synchronization of a group of spacecraft without velocity measurements,” IEEE Transactions on Automatic Control, vol. 54, no. 11, pp. 2642-2648, Nov. 2009.
- [13] H. Cai, and J. Huang, “Leader-following attitude consensus of multiple rigid body networks by attitude feedback control,” Automatica, vol. 69, pp. 87-92, Jul. 2016.
- [14] H. Gui, and A.H.J. de Ruiter, “Global finite-time attitude consensus of leader-following spacecraft systems based on distributed observers,” Automatica, vol. 91, pp. 225-232, May 2018.
- [15] M. Lu and L. Liu, “Leader-following attitude consensus of multiple rigid spacecraft systems under switching networks,” IEEE Transactions on Automatic Control, vol. 65, no. 2, pp. 839-845, Feb. 2020.
- [16] B. Yi, X. Shen, H. Liu, Z. Zhang, W. Zhang, S. Liu and N. Xiong, “Deep matrix factorization with implicit feedback embedding for recommendation system,” IEEE Transactions on Industrial Informatics, vol. 15, no. 8, pp. 4591-4601, Aug. 2019.
- [17] B. Lin, F. Zhu, J. Zhang, J. Chen, X. Chen, N. Xiong and J. L. Mauri, “A time-driven data placement strategy for a scientific workflow combining edge computing and cloud computing,” IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4254-4265, Jul. 2019.
- [18] J. Sun, X. Wang, N. Xiong and J. Shao, “Learning sparse representation with variational auto-encoder for anomaly detection,” IEEE Access, vol. 6, pp. 33353-33361, 2018.
- [19] K. G. Vamvoudakis, F. L. Lewis, and G. R. Hudas, “Multi-agent differential graphical games: Online adaptive learning solution for synchronization with optimality,” Automatica, vol. 48, no. 8, pp. 1598-1611, Aug. 2012.
- [20] J. Li, H. Modares, T. Chai, F. L. Lewis and L. Xie, “Off-policy reinforcement learning for synchronization in multiagent graphical games,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2434-2445, Oct. 2017.
- [21] J. Qin, M. Li, Y. Shi, Q. Ma and W. X. Zheng, “Optimal synchronization control of multiagent systems with input saturation via off-policy reinforcement learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 1, pp. 85-96, Jan. 2019.
- [22] M. Abu-Khalaf and F. L. Lewis, “Nearly optimal control laws for nonlinear systems with saturating actuators using a neural network HJB approach,” Automatica, vol. 41, no. 5, pp. 779-791, May 2005.
- [23] H. Zhang, J. H. Park and W. Zhao, “Model-free optimal consensus control of networked Euler-Lagrange systems,” IEEE Access, vol. 7, pp. 100771-100779, 2019.
- [24] L. Dong, X. Zhong, C. Sun and H. He, “Event-triggered adaptive dynamic programming for continuous-time systems with control constraints,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 8, pp. 1941-1952, Aug. 2017.
- [25] X. Zhong and H. He, “An event-triggered ADP control approach for continuous-time system with unknown internal states,” IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 683-694, Mar. 2017.
- [26] Y. Zhu, D. Zhao, H. He and J. Ji, “Event-triggered optimal control for partially unknown constrained-input systems via adaptive dynamic programming,” IEEE Transactions on Industrial Electronics, vol. 64, no. 5, pp. 4101-4109, May 2017.
- [27] Q. Zhang, D. Zhao and D. Wang, “Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 1, pp. 37-50, Jan. 2018.
- [28] W. Zhao and H. Zhang, “Distributed optimal coordination control for nonlinear multi-agent systems using event-triggered adaptive dynamic programming method,” ISA Transactions, vol. 91, pp. 184-195, Aug. 2019.
- [29] W. Zhao, W. Yu and H. Zhang, “Event-triggered optimal consensus tracking control for multi-agent systems with unknown internal states and disturbances,” Nonlinear Analysis Hybrid Systems, vol. 33, pp. 227-248, Aug. 2019.
- [30] Z. Shi and C. Zhou, “Distributed optimal consensus control for nonlinear multi-agent systems with input saturation based on event-triggered adaptive dynamic programming method,” International Journal of Control, to be published.
- [31] A. Girard, “Dynamic triggering mechanisms for event-triggered control,” IEEE Transactions on Automatic Control, vol. 60, no. 7, pp. 1992-1997, Jul. 2015.
- [32] X. Yi, K. Liu, D. V. Dimarogonas and K. H. Johansson, “Dynamic event-triggered and self-triggered control for multi-agent systems,” IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3300-3307, Aug. 2019.
- [33] X. Jin, Y. Shi, Y. Tang and X. Wu, “Event-triggered attitude consensus with absolute and relative attitude measurements,” Automatica, vol. 122, Art. No. 109245, Dec. 2020.
- [34] S. Wang, X. Jin, S. Mao, A. V. Vasilakos and Y. Tang, “Model-free event-triggered optimal consensus control of multiple Euler-Lagrange systems via reinforcement learning,” IEEE Transactions on Network Science and Engineering, to be published.
- [35] H. Schaub and J. L. Junkins, Analytical Mechanics of Space Systems, American Institute of Aeronautics and Astronautics, 2009.
- [36] M. Lemmon, Networked Control Systems, Springer London, 2010.
- [37] H. K. Khalil, Nonlinear Systems, Upper Saddle River, NJ, USA: Prentice Hall, 2002.
- [38] Z. Jiang and Y. Jiang. “Robust adaptive dynamic programming for linear and nonlinear systems: An overview,” European Journal of Control, vol. 19, no. 5, pp. 417-425, Sept. 2013.
- [39] W. Fang, X. Yao, X. Zhao, J. Yin and N. Xiong, “A stochastic control approach to maximize profit on service provisioning for mobile cloudlet platforms,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 4, pp. 522-534, Apr. 2018.
- [40] Y. Liu, B. Jiang, J. Lu, J. Cao and G. Lu, “Event-triggered sliding mode control for attitude stabilization of a rigid spacecraft,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 9, pp. 3290-3299, Sept. 2020.
- [41] B. Li, Y. Liu, K. I. Kou and L. Yu, “Event-triggered control for the disturbance decoupling problem of Boolean control networks,” IEEE Transactions on Cybernetics, vol. 48, no. 9, pp. 2764-2769, Sept. 2018.
- [42] S. Zhu, Y. Liu, Y. Lou, et al., “Stabilization of logical control networks: an event-triggered control approach,” Science China Information Sciences, vol. 63, no. 1, pp. 1-11, 2020.
- [43] Y.-J. Liu, Q. Zeng, S. Tong, C. L. P. Chen and L. Liu, “Adaptive neural network control for active suspension systems with time-varying vertical displacement and speed constraints,” IEEE Transactions on Industrial Electronics, vol. 66, no. 12, pp. 9458-9466, Dec. 2019.
- [44] L. Liu, Y.-J. Liu, A. Chen, S. Tong and C. L. P. Chen, “Integral barrier Lyapunov function-based adaptive control for switched nonlinear systems,” Science China Information Sciences, vol. 63, no. 3, 2020.
- [45] Y.-J. Liu, Q. Zeng, S. Tong, C. L. P. Chen and L. Liu, “Actuator failure compensation-based adaptive control of active suspension systems with prescribed performance,” IEEE Transactions on Industrial Electronics, vol. 67, no. 8, pp. 7044-7053, Aug. 2020.
- [46] Y. Qu and N. Xiong, “RFH: A resilient, fault-tolerant and high-efficient replication algorithm for distributed cloud storage,” 2012 41st International Conference on Parallel Processing, pp. 520-529, 2012.
Xin Jin received the B.S. degree in school of automation from the Guangdong University of Technology, Guangzhou, China, in 2016. He was an exchange Ph.D. student at the University of Victoria, Victoria, Canada from Sept. 2019 to Sept. 2020. He is currently working toward the Ph.D. degree at the East China University of Science and Technology. His research interests include rigid body systems, multi-agent systems, event-triggered control and their applications.

Shuai Mao received the B.S. degree in school of control science and engineering from East China University of Science and Technology in 2017. He is currently pursuing the Ph.D. degree at East China University of Science and Technology. His research interests include multi-agent systems, distributed optimization and their applications in practical engineering.

Ljupco Kocarev (Fellow, IEEE) is currently a member of the Macedonian Academy of Sciences and Arts, a Full Professor with the Faculty of Computer Science and Engineering, Ss. Cyril and Methodius University, Skopje, Macedonia, the Director of the Research Center for Computer Science and Information Technologies, Macedonian Academy, and a Research Professor with the University of California at San Diego. His work has been supported by the Macedonian Ministry of Education and Science, the Macedonian Academy of Sciences and Arts, NSF, AFOSR, DoE, ONR, ONR Global, NIH, STMicroelectronics, NATO, TEMPUS, FP6, FP7, Horizon 2020, and agencies from Spain, Italy, Germany (DAAD and DFG), Hong Kong, and Hungary. His scientific interests include networks, nonlinear systems and circuits, dynamical systems and mathematical modeling, machine learning, and computational biology.

Chen Liang is currently working as a Research Assistant at the Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, and a Faculty Member of the School of Information at East China University of Science and Technology. She received her Master's degree in Computer Applied Technology from Shanghai Normal University in 2013. Her research interests include multi-agent systems, reinforcement learning and networks.

Saiwei Wang received the B.S. degree in school of control science and engineering from East China University of Science and Technology, Shanghai, China, in 2018. He is currently pursuing the M.S. degree at East China University of Science and Technology. His research interests include multi-agent systems, reinforcement learning and their applications.

Yang Tang (Senior Member, IEEE) received the B.S. and Ph.D. degrees in electrical engineering from Donghua University, Shanghai, China, in 2006 and 2010, respectively. From 2008 to 2010, he was a Research Associate with The Hong Kong Polytechnic University, Hong Kong. From 2011 to 2015, he was a Post-Doctoral Researcher with the Humboldt University of Berlin, Berlin, Germany, and with the Potsdam Institute for Climate Impact Research, Potsdam, Germany. Since 2015, he has been a Professor with the East China University of Science and Technology, Shanghai. His current research interests include distributed estimation/control/optimization, cyber-physical systems, hybrid dynamical systems, computer vision, reinforcement learning and their applications. Prof. Tang was a recipient of the Alexander von Humboldt Fellowship and has been named an ISI Highly Cited Researcher by Clarivate Analytics since 2017. He is a Senior Board Member of Scientific Reports, and an Associate Editor of IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Emerging Topics in Computational Intelligence, IEEE Transactions on Circuits and Systems I: Regular Papers, and IEEE Systems Journal.