Mode Consensus Algorithms With Finite Convergence Time
Abstract
This paper studies the distributed mode consensus problem in a multi-agent system, in which the agents each possess a certain attribute and aim to agree upon the mode (the most frequent attribute owned by the agents) via distributed computation. Three algorithms are proposed. The first one directly calculates the frequency of every attribute at every agent, with protocols based on blended dynamics, and then returns the most frequent attribute as the mode. Assuming knowledge at each agent of a lower bound of the mode frequency as a priori information, the second algorithm is able to reduce the number of frequencies to be computed at every agent when the lower bound is large. The third algorithm further eliminates the need for this information by introducing an adaptive updating mechanism. The algorithms find the mode in finite time, and estimates of the convergence time are provided. The first and second proposed algorithms enjoy the plug-and-play property, subject to a dwell time.
Index Terms:
Consensus, Mode computing, Blended dynamics, Plug-and-playI Introduction
Distributed mode consensus, also known as majority voting or multiple voting, allows for the identification of the most frequent choice when dealing with categorical data like movies, car brands, or political candidates. Since it is not possible to directly calculate average or median values for such inherently nonnumerical data, distributed mode consensus provides a way to determine the central tendency. In the existing literature, achieving consensus on functions of interest, known as the $f$-consensus problem [9, 5], has been successful for specific types of functions, typically assuming real values, such as the average, the maximum (or minimum), the median, or the $k$-th smallest element. While distributed convex optimization based protocols can handle consensus on these functions directly, the mode consensus problem seems to be an exception. In addition, the mode function cannot be represented as a composition of the functions mentioned above, presenting a non-trivial challenge for mode consensus.
Achieving mode consensus is not an entirely new problem, of course. In the literature, Ref. [1] introduces a distributed method for computing the mode. In this method, the frequency of each element is aggregated from the leaves to the root along a spanning tree, and only the root node performs the mode calculation. By incorporating hash functions, the algorithm is able to find the mode with high probability and low time complexity. The binary majority voting problem, where there are only two distinct elements in the population, is addressed using the “interval consensus gossip” approach described in [2]. There, each node initially votes for one of the two choices and holds a corresponding state from a small finite state space; when neighboring nodes come into contact, they exchange their states and update them based on a predefined transition rule. When the algorithm reaches convergence, the states of all nodes identify the majority choice. Subsequently, in [3], the Pairwise Asynchronous Graph Automata (PAGA) framework has been used to extend the above idea to the multiple choice voting problem, and sufficient conditions for convergence are stated. In [4], a distributed algorithm for multi-choice voting/ranking is proposed. The interaction between a pair of agents is based solely on intersection and union operations. The optimality of the number of states per node is proven for the ranking problem. Ref. [7] explores distributed mode consensus in an open multi-agent system. Each agent utilizes an average consensus protocol to estimate the frequency of each element and then selects the one with the highest frequency as the mode. Agents are free to join or leave the network, and the mode may vary during the process, but an agent that leaves the network needs to signal this intention to its neighbors beforehand.
In this paper, we present distributed mode consensus algorithms based on the concept of blended dynamics [6, 8]. Blended dynamics have the characteristic that the collective behavior of a multi-agent system can be constructed from the individual vector fields of its agents when there is strong coupling among them. As an example, [6] has demonstrated that individual agents can estimate the number of agents in the network in a distributed manner. The proposed mode consensus algorithms provide two key benefits beyond the inherent contribution. First, the algorithms can be easily implemented in a plug-and-play manner. This means that the system can maintain its mode consensus task without requiring a reset of all agents whenever a new agent joins or leaves the network. Second, we can demonstrate the intuitively satisfying conclusion that the frequency of the mode has an impact on the convergence rate of the mode consensus algorithm, in the sense that a higher mode frequency results in faster convergence of the algorithm.
The paper is organized as follows. In Section II, preliminaries on $f$-consensus are introduced, and the mode consensus problem is then described. The direct mode consensus algorithm is described in Section III, along with a characterization of its convergence rate. Section IV combines the direct algorithm with the $k$-th smallest element consensus, resulting in two mode consensus algorithms that are applicable when the mode appears frequently. The performance of the proposed algorithms is evaluated in Section V. Finally, Section VI concludes the paper.
II Notation and Preliminaries
II-A Underlying network
Consider a group of $N$ agents labeled $1, \dots, N$. Every agent has an attribute, which can be thought of as a label. The attribute could be a positive integer, a real vector, a color, a gender, an age group, a brand of car, etc. Two or more agents may have the same attribute (and indeed, in many situations one might expect the number of distinct attributes to be much less than the number of agents). The attribute of agent $i$ will be denoted by $a_i$. The vertex set $\{1, \dots, N\}$ is part of an undirected graph with which is also associated a set of edges (i.e. vertex pairs), in the usual way. The neighbor set of agent $i$ is the set of vertices which share an edge with agent $i$. The vertex set, the edge set, the attributes, and hence the neighbor sets are assumed to be time-invariant in the bulk of the paper, but at times, to accommodate a “plug and play” capability, we open up the possibility that, following certain rules, they are piecewise constant. The state of each agent is updated by an out-distributed control algorithm; that is, at every time, the update of agent $i$'s state is computed as some function of its own state, its own attribute, and the states of its neighbors. (While we choose a continuous-time setting in this paper, it seems very probable that a discrete-time setting could be used as an alternative, with very little change to the main assumptions, arguments and conclusions.)
Assumption 1
The graph is undirected and connected.
The need for connectivity is intuitively obvious. The need for the graph to be undirected is less so; note though that virtually all blended dynamics developments rest on an assumption of undirectedness of the underlying graph.
II-B $f$-consensus and broad problem setting
To explain the problem setting, we first define the particular generalization of consensus, viz. $f$-consensus, with which we are working, and then indicate the explicit type of $f$-consensus problem of interest in this paper, for which we seek an update algorithm. Following this, we provide two illustrative examples of $f$-consensus which are in some way relevant to the problem considered here. For an introduction to and the development of $f$-consensus, see e.g. [10, 9, 5].
Definition 1 ($f$-consensus)
With $\mathcal{A}$ being the set of all possible distinct attributes, consider a collection of $N$ agents whose attributes $a_1, \dots, a_N$ take values in $\mathcal{A}$. Suppose $f$ is a given function of $(a_1, \dots, a_N)$. An algorithm is said to achieve $f$-consensus asymptotically if it is out-distributed and the solution of the overall system exists and, for every agent, its estimate converges to $f(a_1, \dots, a_N)$ as time goes to infinity.
Average consensus, obtained by setting $f(a_1, \dots, a_N) = \frac{1}{N}\sum_{i=1}^{N} a_i$, is a very well known example. Some others are detailed further below, starting with the problem of interest in this paper. In the meantime, however, we shall make the following assumption:
Assumption 2
The set $\mathcal{A}$ is finite, and there is a bijective mapping from $\mathcal{A}$ to $\{1, 2, \dots, |\mathcal{A}|\}$.
To illustrate the practical effect of this assumption, suppose that we are considering an attribute which is a 2-vector of real numbers, being the height and weight of a group of individuals. Such data is always quantized in the real world, with height for example usually being expressed to the nearest number of centimeters. So the vector entry corresponding to height might be an integer somewhere between 25 and 250, and a number of, say, 180 would indicate that the individual’s height lies in the interval [179.5, 180.5). In effect $\mathcal{A}$ becomes a finite subset of $\mathbb{Z}^2$, and any such finite set can always be bijectively mapped to $\{1, 2, \dots, |\mathcal{A}|\}$. While the possibly unordered set $\mathcal{A}$ is mapped to an ordered set by this mapping, the order can be arbitrary for our purpose of computing the mode of the attributes.
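For concreteness, the following is a minimal sketch of the bijective mapping of Assumption 2, assuming Python as the implementation language; the quantized (height, weight) attribute values shown are hypothetical examples, not data from the paper.

```python
# Minimal sketch: map a finite attribute set A to {1, ..., |A|} and back.
# The attribute values below are hypothetical, purely for illustration.
attributes = [(180, 75), (165, 60), (180, 75), (172, 68)]   # one entry per agent

# Collect the distinct attributes and fix an (arbitrary) order for them.
distinct = sorted(set(attributes))

# Bijective mapping: attribute -> integer in {1, ..., |A|}, and its inverse.
to_int = {a: idx + 1 for idx, a in enumerate(distinct)}
from_int = {idx + 1: a for idx, a in enumerate(distinct)}

# Each agent works with the integer image of its own attribute.
integer_attributes = [to_int[a] for a in attributes]
print(integer_attributes)                 # [3, 1, 3, 2] for this ordering
print(from_int[integer_attributes[0]])    # recovers the original attribute (180, 75)
```

As noted in the text, the particular order induced by the mapping is irrelevant for computing the mode.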
II-C Two relevant examples of $f$-consensus
There are two related problems treatable by $f$-consensus which are similar to the problem just posed, and which have provided the authors of this paper with insights for the formulation of the solution of the mode consensus problem.
II-C1 Distributed computation of network size
In the literature, there exist consensus protocols that accomplish the task of distributed computation of the network size based on blended dynamics, see e.g. [6]. Inspired by [6], the following simple protocol estimates the network size $N$ in finite time under the assumption that $N \le \bar N$, where $\bar N$ is a known upper bound of $N$:
(1)
where one gain sets the coupling strength between neighbors and the other controls the speed of the algorithm. As shown in Theorem 1 of the next section, under the gain conditions stated there, the solution of system (1) allows every agent to recover the network size $N$ by rounding its state after a finite time, and an explicit bound on this estimation time is available.
Remark 1
The upper bound on the estimation time (which may be quite conservative) decreases as the speed gain is increased; if that gain grows linearly with the network size, the bound can even be made independent of the size. However, a large gain could also undermine the robustness of the protocol against high-frequency noise.
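Since the display of protocol (1) is not reproduced above, the following is a minimal simulation sketch of a standard blended-dynamics network size estimator in the spirit of [6]; the specific update law, gains, initialization range, and the Euler discretization are assumptions for illustration, not necessarily the exact protocol (1).

```python
import numpy as np

# Sketch of a blended-dynamics size estimator in the spirit of [6] (assumed form):
# agent 1 injects the constant 1, all others inject 0, and strong diffusive
# coupling drives every state toward 1/N, so N is recovered by rounding.
def estimate_network_size(adjacency, coupling=100.0, speed=1.0, dt=1e-3, T=10.0, seed=0):
    rng = np.random.default_rng(seed)
    N = adjacency.shape[0]
    x = rng.uniform(0.0, 1.0, N)                      # arbitrary local initialization
    L = np.diag(adjacency.sum(axis=1)) - adjacency    # graph Laplacian
    e1 = np.zeros(N); e1[0] = 1.0                     # only agent 1 injects "1"
    for _ in range(int(T / dt)):
        # agent 1: dx = speed*(1 - x) - coupling*(L x);  others: dx = -speed*x - coupling*(L x)
        x = x + dt * (speed * (e1 - x) - coupling * (L @ x))
    return np.round(1.0 / x).astype(int)              # blended dynamics give x -> 1/N

# Example on a 6-agent ring graph.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
print(estimate_network_size(A))    # expected: [6 6 6 6 6 6]
```

The two gains in the sketch play the same roles as described above: the coupling gain keeps the agents close to consensus, while the speed gain governs how fast the blended (averaged) dynamics settle.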
II-C2 $k$-th smallest element consensus
Since $\{1, 2, \dots, |\mathcal{A}|\}$ is a totally ordered set, suppose without loss of generality that the integer images of the attributes are arranged in nondecreasing order; the $k$-th smallest element is then the $k$-th entry of this ordering. The $k$-th smallest element (or $k$-th order statistic) consensus problem is then an $f$-consensus problem in which $f$ returns this element.
Ref. [5] proposed a method to solve the $k$-th smallest element consensus problem with distributed convex optimization algorithms. An example used in the following is
(2)
where the right-hand side of (2) is defined in terms of design parameters whose admissible ranges are given below. (In [5] the corresponding bound on these parameters was stated in a tighter form, which can be loosened here. System (2) admits a unique solution in the sense of Filippov; for more details, refer to, e.g., [11, Proposition S2].) Initial conditions for the differential equations can be arbitrary. It can be shown, following [12], that if
(3)
then the protocol (2)–(3) converges exponentially, and after a finite time the solution of (2)–(3) at every agent identifies the integer corresponding to the $k$-th smallest element. By inverting the bijective map of Assumption 2, every agent can then figure out the $k$-th smallest element itself.
Remark 2
Although increasing the gain yields a smaller convergence time, it also amplifies the other term in the estimate to a very large value when the network is large. To reach a balance between the two, we select the gain at an intermediate value so that both quantities remain moderate.
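The optimization-based protocol (2)–(3) of [5] is not reproduced above. Purely for illustration, the following is a simplified blended-dynamics-style stand-in for $k$-th smallest element consensus (explicitly not the protocol of [5]): each agent combines a local quantile-type term with linear diffusive coupling, and the blended dynamics slide onto the $k$-th order statistic. The local term, gains, graph, and attribute values are all assumptions.

```python
import numpy as np

# Simplified stand-in for k-th smallest element consensus (NOT protocol (2)-(3) of [5]).
# The blended average of the local terms is proportional to (k - 0.5 - #{a_i <= x}),
# which vanishes only at the k-th order statistic; coupling keeps agents together.
def kth_smallest_consensus(adjacency, attrs, k, coupling=40.0, alpha=2.0,
                           dt=1e-3, T=40.0, seed=1):
    rng = np.random.default_rng(seed)
    N = len(attrs)
    a = np.asarray(attrs, dtype=float)              # integer-mapped attributes
    x = rng.uniform(0.0, a.max() + 1.0, N)          # arbitrary local initialization
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    for _ in range(int(T / dt)):
        local = alpha * ((k - 0.5) / N - (a <= x).astype(float))
        x = x + dt * (local - coupling * (L @ x))
    return np.round(x).astype(int)                  # every agent reports the same integer

# Example: ring of 8 agents; the 4th smallest of [1,2,2,3,3,3,5,5] is 3.
attrs = [1, 2, 2, 3, 3, 3, 5, 5]
N = len(attrs)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
print(kth_smallest_consensus(A, attrs, k=4))        # expected: [3 3 3 3 3 3 3 3]
```

As in the size-estimation example, a larger coupling gain shrinks the disagreement between agents, while the local gain sets how fast the blended value moves toward the order statistic.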
Remark 3
Both of the examples above, the distributed computation of the network size and the $k$-th smallest element consensus, exhibit algorithms with a finite convergence time. Likewise, for mode consensus, we seek algorithms which also yield a finite consensus time.
II-D The problem of interest: Mode consensus
Mode consensus is a special class of $f$-consensus problem. Suppose the function $f$ returns the attribute that appears most often among $a_1, \dots, a_N$ (when multiple distinct values appear equally most often, $f$ returns any one of these values, as specified by the user). The mode consensus problem studied in this paper is the $f$-consensus problem with this choice of $f$.
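For concreteness, a minimal (centralized) sketch of the function $f$ defining mode consensus is given below; the tie-breaking rule shown (smallest integer label) is just one possible user choice.

```python
from collections import Counter

# The mode function f: the most frequent attribute, with ties broken by a
# user-specified rule (here: return the smallest integer label among the ties).
def mode(attrs):
    counts = Counter(attrs)
    best = max(counts.values())
    return min(a for a, c in counts.items() if c == best)

print(mode([2, 5, 5, 3, 5, 2]))   # 5 (frequency 3)
print(mode([1, 1, 2, 2, 3]))      # 1 (tie between 1 and 2; smallest is returned)
```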
II-E Plug-and-play
In many circumstances, it may be advantageous to have a plug-and-play capability. Specifically, we are interested in a network that can change over time during its operation. Changes cannot be arbitrary, but rather must be admissible in accordance with certain rules.
First, while the potential vertex set is taken as time-invariant, that is, fixed over time, the actual vertex set, which is an arbitrary subset of the potential vertex set, can be time-varying but piecewise constant; with $N$ denoting its cardinality, $\bar N$ is an upper bound for $N$. In this setting, our admissible changes are as follows:
• The set of edges is time-varying but piecewise constant. Therefore, at certain time instants, a new edge or edges can be created, and some existing edge or edges can be deleted.
• There can be orphan nodes (meaning a node that has no incident edge). When all the edges that are incident to a node are deleted, we say the node leaves the network.
• We assume that there is only one connected component in the network. In fact, even if there is more than one connected component, our concern is with just one of them. If a connected component of interest splits into two connected components (with some edges being deleted), one of these components still receives our attention, and all the nodes belonging to the other component are regarded as leaving the network.
• The attribute of a node is permitted to be time-varying but must be piecewise constant.
• The node dynamics stop integrating whenever the node is an orphan. One example of such node dynamics is
(4)
where $|\cdot|$ applied to a set denotes the cardinality of the set. In this way, malfunctioning agents can also be represented by considering them as orphans.
The control algorithm for updating the states of the agents is said to be plug-and-play ready if, whenever an abrupt admissible change of the network occurs, (a) the $f$-consensus is recovered (to a new value reflecting the new situation) after a transient, and (b) the recovery is achieved by passive manipulation, together with local initialization of a newly joining agent if necessary. By passive manipulation we mean that, when an individual agent detects changes in its incident edge set, or equivalently its neighbor set, associated with immediate neighbors leaving or joining the network, the agent can perform some actions that do not require reactions of neighboring agents. An example is (4): when a change occurs, the manipulation simply resets the incident edge set, or equivalently the neighbor set, and this can be done by the individual agent. (Of course, neighbor agents may have to also reset in order to carry out their own updates.) By local initialization we mean that any initialization following a change must be local only, i.e. it must be global-initialization-free. This implies that the algorithm should not depend on a particular initial condition constraining two or more agents in some linked way. A constraint fixing, say, the sum of the initial states of all agents is an example of global initialization, while the constraint that each agent's initial state lies in a compact set known to every agent is considered an example of local initialization (or equivalently, it is global-initialization-free); such a local initialization is required for a newly joining agent (or, in (4), when the neighbor set becomes non-empty so that the sign function changes from 0 to 1).
The plug-and-play ready property basically requires that, when a change occurs, no further action is required except by the agents whose incident edge sets have changed. In particular, the requirement of passive manipulation is useful in the case when some agent suddenly stops working (without any prior notification) and cannot function properly anymore.
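The following short sketch illustrates only the "stop integrating when orphaned" gating described around (4); the diffusive update rule, gains, and state values are generic stand-ins assumed for illustration, not the paper's protocol.

```python
import numpy as np

# Generic diffusive update with the orphan gating of (4): an agent with an empty
# neighbor set freezes its state, and each agent only reads its current neighbor set
# (passive manipulation: no reaction is required from other agents).
def step(x, neighbors, dt=1e-3, k=10.0):
    x_new = x.copy()
    for i, Ni in enumerate(neighbors):
        if not Ni:                       # orphan node: no incident edges
            continue                     # integration is frozen
        x_new[i] = x[i] + dt * k * sum(x[j] - x[i] for j in Ni)
    return x_new

# Example: agent 2 has just lost all its edges; its state stays constant afterwards.
x = np.array([1.0, 2.0, 5.0])
neighbors = [{1}, {0}, set()]
for _ in range(1000):
    x = step(x, neighbors)
print(x)    # agents 0 and 1 agree (about 1.5); agent 2 is unchanged at 5.0
```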
III Direct mode consensus algorithm
The following protocol, which is motivated by the algorithm for distributed computation of the network size discussed above, lays the foundation of the mode consensus algorithm. It is also inspired by the notion of blended dynamics and calculates, in a distributed manner, the number of agents possessing an arbitrary particular attribute $q \in \mathcal{A}$:
(5)
where
(6)
and, analogous to (1), one gain is the coupling gain and the other controls the speed of the algorithm.
Theorem 1
Proof of Theorem 1 is found in the Appendix.
Remark 4
Remark 5
Remark 6
As is usual for algorithms based on the blended dynamics approach, grounded in singular perturbation ideas, there are broadly speaking two convergence rates, the fast one being associated with the achievement of consensus between the agents (adjusted by the coupling gain), and the slower one being associated with the blended dynamics (adjusted by the speed gain). This behavior is apparent in the simulations discussed later.
1. For each attribute, estimate its frequency of occurrence with the protocol (5)–(6);
2. Return the attribute defined by the mode:
(8)
Based on Theorem 1, Algorithm 1 for distributed computation of the mode can be formulated directly. Evidently, the frequency-estimation step of Algorithm 1 can be executed using parallel computations, one for each attribute. Consider the following modification to (5)–(6):
(9)
where the state of each agent is now a vector with one entry per attribute, and the unit vector selects the entry corresponding to the agent's own attribute. Likewise, the two gains are positive scalars. If each agent holds the attribute set and the bijective map of Assumption 2, then the execution of the final step of the algorithm is straightforward (and achievable by a single agent, or by all agents). Any agent whose own attribute attains the maximum estimated frequency by definition holds the mode.
Remark 7
Indeed, if there is more than one attribute with the highest frequency of occurrence, Algorithm 1 is able to find all of these attributes. In that case the user has the option to return any one or several of them. This issue will not be examined any further in the remainder of the paper.
Algorithm 1 is attractive on several grounds. It offers finite convergence time with a bound available for that time, and it is plug-and-play ready. In particular, if the admissible changes discussed in Section II-E occur with a sufficiently large dwell time (i.e., any two consecutive changes are separated by at least that dwell time), then the mode is recovered within the convergence time of Theorem 1 after each change. This is because, between two consecutive changes, the frequency estimates converge to their new values by Theorem 1, the mode is re-identified from (8), and this repeats after every change. On the other hand, Algorithm 1 has the potential disadvantage that every agent has to run the consensus protocol (5)–(6) multiple times (in fact, once per distinct attribute), which could be computationally burdensome when the number of attributes is large (but may not be, even with a large network). For a small number of distinct attributes, there is in fact no substantive disadvantage.
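Since protocols (5)–(6)/(9) are not reproduced in the text above, the following sketch of Algorithm 1 uses an assumed vectorized blended-dynamics estimator of the relative frequencies (one entry per attribute, in the spirit of (9)); the update law, gains, and the use of relative rather than absolute frequencies are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

# Sketch of the direct mode consensus algorithm: every agent runs one frequency
# estimator per attribute (stacked as a vector) and then takes the argmax.
def mode_consensus(adjacency, attrs, n_attr, coupling=200.0, speed=2.0,
                   dt=1e-3, T=5.0, seed=2):
    rng = np.random.default_rng(seed)
    N = len(attrs)
    L = np.diag(adjacency.sum(axis=1)) - adjacency
    X = rng.uniform(0.0, 1.0, (N, n_attr))          # rows: agents, cols: attributes
    E = np.zeros((N, n_attr))
    E[np.arange(N), np.asarray(attrs) - 1] = 1.0    # unit vector of each agent's attribute
    for _ in range(int(T / dt)):
        X = X + dt * (speed * (E - X) - coupling * (L @ X))
    # every agent's vector converges to the relative frequencies n_q / N,
    # so each agent reads off the mode as the argmax of its own local vector
    return np.argmax(X, axis=1) + 1

attrs = [3, 1, 3, 2, 3, 2, 1, 3]          # integer-mapped attributes; the mode is 3
N = len(attrs)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
print(mode_consensus(A, attrs, n_attr=3))  # expected: [3 3 3 3 3 3 3 3]
```

The two time scales mentioned in Remark 6 are visible in such simulations: the rows of X agree with one another quickly (coupling gain), while the common value drifts more slowly toward the frequency vector (speed gain).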
IV Mode consensus algorithm considering the frequency of mode
In this section, we consider two alternative algorithms based on knowledge, available a priori or acquired early in the algorithm, of a lower bound on the frequency of the mode.
Both algorithms of this second style rely on the following result.
Lemma 1
Let $z_1, \dots, z_N$ denote the integer images of the agents' attributes under the map of Assumption 2, and let $l$ be a positive integer. If the frequency of the mode is at least $\lceil N/l \rceil$, then there is an integer $j \in \{1, \dots, l\}$ such that the integer corresponding to the mode equals the $\lceil jN/l \rceil$-th smallest element of $z_1, \dots, z_N$.
Proof:
The proof uses a typical pigeonhole principle argument. For notational convenience, suppose without loss of generality that $z_1 \le z_2 \le \dots \le z_N$, and let $m$ denote the integer corresponding to the mode. Moreover, introduce a few additional values, temporarily and just for the proof, so that the resulting nondecreasing sequence
(10)
can be partitioned into $l$ subsequences of equal length.
The result is then proved with a contradiction argument. Suppose, to obtain a contradiction, that $m \ne z_{\lceil jN/l \rceil}$ for all $j \in \{1, \dots, l\}$. Then all indices $i$ with $z_i = m$ must lie strictly between two consecutive indices of the form $\lceil jN/l \rceil$. Combined with the fact that the values are ordered (see (10)), it must follow that the number of agents with attribute value equal to $m$ is less than $\lceil N/l \rceil$, which yields a contradiction. ∎
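The following small numerical check illustrates Lemma 1; the candidate-position formula $\lceil jN/l \rceil$ reflects our reading of the statement above, and the attribute list is a hypothetical example.

```python
import math
from collections import Counter

# Check of Lemma 1: if the mode's frequency is at least ceil(N/l), the mode appears
# at one of the positions ceil(j*N/l), j = 1..l, of the sorted attribute list.
def candidates(sorted_attrs, l):
    N = len(sorted_attrs)
    return {sorted_attrs[math.ceil(j * N / l) - 1] for j in range(1, l + 1)}

attrs = [4, 1, 4, 2, 4, 3, 1, 4, 2, 4]     # N = 10; mode 4 with frequency 5
l = 2                                       # frequency 5 >= ceil(10/2) = 5
srt = sorted(attrs)
print(candidates(srt, l))                   # {3, 4}: contains the mode 4
mode = max(Counter(attrs).items(), key=lambda kv: kv[1])[0]
assert mode in candidates(srt, l)
```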
Remark 8
IV-A Algorithm with a priori knowledge of the least frequency of the mode
The message of Lemma 1 is that, if a lower bound $F$ on the frequency of the mode is known, then, choosing the positive integer $l$ such that $\lceil N/l \rceil \le F$, the integer for the mode must be one of the $\lceil jN/l \rceil$-th smallest elements, $j = 1, \dots, l$, of the integer-mapped attributes. This message yields Algorithm 2, in which Step 2) identifies these $\lceil jN/l \rceil$-th smallest elements, and then Step 4) finds the mode by comparing the frequencies among the candidates.
1. Estimate the network size, $N$, with the distributed consensus protocol (1);
2. For each $j = 1, \dots, l$, estimate the $\lceil jN/l \rceil$-th smallest element with the $k$-th smallest element consensus protocol (2)–(3);
3. Estimate the frequency of each candidate obtained in Step 2) with the protocol (5)–(6);
4. Return the mode
(11)
While Algorithm 2 outlines the distributed mode consensus algorithm, it also reveals the significance of the frequency lower bound in reducing the computational load. Depending on this bound, Algorithm 2 may sometimes involve manipulating or storing fewer variables than Algorithm 1, but the reverse may also hold. In fact, Step 2) in Algorithm 2 can be considered as a selection procedure that identifies the attributes to be inspected. By this, it is expected that the number of inspections in (11) (effectively, the number of candidate positions) is smaller than that in (8) (the number of distinct attributes). This is indeed the case when, for example, there are six distinct attributes while only two candidate positions need to be inspected: for (8), at least six attributes are inspected in Step 1) of Algorithm 1, whereas, assuming the frequency bound is known, only two attributes are inspected in Step 3) of Algorithm 2. However, in order to identify the attributes to be inspected, Steps 1)–2) of Algorithm 2 must be performed as an overhead; even including this overhead, the total number of variables to be manipulated is still less than six. As a second example, suppose there are eight distinct attributes and the frequency bound forces five candidate positions. In this case, Algorithm 2 inspects five candidates while Algorithm 1 inspects eight, but the overhead also amounts to five, so that the total count exceeds eight.
A convergence time bound can be established as follows. Suppose Algorithm 2 is executed with parallel computations at Steps 2) and 3); for example, at Step 2), each agent runs the $k$-th smallest element consensus protocol for every candidate position $\lceil jN/l \rceil$, $j = 1, \dots, l$, in parallel. Then, mode consensus is reached within a finite time obtained by adding the convergence time bounds of the individual steps.
It appears from Algorithm 2 that each step can only be implemented after the previous step has converged. This is actually not the case. All steps may start simultaneously, provided that the parameters used in any step are replaced with their estimated values generated by the previous steps. Indeed, the following equations are one alternative implementation of Algorithm 2 for agent $i$:
(12)
where the gains and thresholds involved are predetermined, and the individual states of agent $i$ are: an estimate of the network size $N$, estimates of the $\lceil jN/l \rceil$-th smallest elements for $j = 1, \dots, l$, estimates of the frequencies of these candidates, and the estimated integer for the mode (the estimated mode is then the attribute corresponding to that integer under the map of Assumption 2). In fact, although all the dynamics in (12) run simultaneously, the estimates are obtained sequentially: the candidate estimates start converging to their proper values only after the size estimate has reached $N$, and the frequency estimates start converging only after the candidates have become the correct $\lceil jN/l \rceil$-th smallest elements. Therefore, the required time for obtaining the mode remains the same as before.
Algorithm 2, repeated forever, and the alternative algorithm (12) are plug-and-play ready as long as the admissible changes occur with a sufficiently large dwell time. This is because, if the estimates of the network size, the candidate elements, and their frequencies are correct at the time of one change, the same holds again by the time of the next change.
From (12) we see that the number of state variables needed in Algorithm 2 grows linearly with $l$ (one estimator for the network size, plus estimators for the $l$ candidate positions and their frequencies). Thus Algorithm 2 uses fewer state variables than Algorithm 1 if $l$ is small compared with the number of distinct attributes. In particular, if $l = 1$, the mode consensus problem reduces to max consensus, since the mode is then the $N$-th smallest element, i.e. the maximum; if $l = 2$, the problem reduces to finding the attribute having the larger frequency between the median and the maximum (if $N$ is odd, it is simply the median); if $l$ is comparable to the number of distinct attributes, Algorithm 2 has to perform the $k$-th smallest element consensus roughly as many times as there are attributes, and then it offers no advantage over Algorithm 1. Since every attribute has frequency at least one, it is not necessary to consider $l \ge N$, because the condition $\lceil N/l \rceil \le F$ is then automatically fulfilled. Thus, to choose Algorithm 2 over Algorithm 1, it is preferable to have $l$ small relative to the number of distinct attributes.
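As a rough illustration of this trade-off, the following sketch computes the candidate positions $\lceil jN/l \rceil$ and compares a hypothetical per-agent state count for the two algorithms; the accounting (one state for the size estimate plus two per candidate) is an assumption for illustration and is not taken from the paper.

```python
import math

# Hypothetical accounting of how many parallel estimators each agent runs.
def algorithm2_states(l):
    return 1 + 2 * l              # size estimate + l candidates + l frequencies (assumed)

def algorithm1_states(n_attr):
    return n_attr                 # one frequency estimator per distinct attribute

def candidate_positions(N, l):
    return sorted({math.ceil(j * N / l) for j in range(1, l + 1)})

N, n_attr = 50, 12
for l in (1, 2, 3, 4):
    print(l, candidate_positions(N, l), algorithm2_states(l), algorithm1_states(n_attr))
# Under this accounting, Algorithm 2 is attractive when 2*l + 1 stays below n_attr.
```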
IV-B Algorithm with learned knowledge of the least frequency of the mode
When the lower bound $F$ is unknown, we are not able to employ Algorithm 2. However, once we determine the frequency of some attribute, we can immediately infer that the frequency of the mode is at least that value. Based on this simple fact, we begin with the trivial estimate $\hat F = 1$ of the mode frequency, and update $\hat F$ whenever an attribute with a larger frequency is found. At the same time, we begin with $l = 1$ and repeatedly increase $l$ until it holds that $\lceil N/l \rceil \le \hat F$. Once we have $\lceil N/l \rceil \le \hat F$, it means that the integer corresponding to the mode is among the $\lceil jN/l \rceil$-th smallest elements, $j = 1, \dots, l$, so that the previous algorithm can be applied. Putting all this together, we obtain Algorithm 3.
1. Estimate the network size, $N$, with the distributed consensus protocol (1). (If $N = 1$, stop because the mode is the same as the single agent's attribute.);
2. Set $\hat F = 1$ and $l = 1$;
3. While $\lceil N/l \rceil > \hat F$ do
4. $l \leftarrow l + 1$;
5. For each $j = 1, \dots, l$, estimate the $\lceil jN/l \rceil$-th smallest element with the protocol (2)–(3);
6. Estimate the frequency of each candidate obtained in Step 5) with the protocol (5)–(6);
7. Update $\hat F$ to the largest frequency found so far;
8. End While
9. Return the mode (the candidate with the largest frequency).
In the algorithm, Steps 5) and 6) can be made more efficient by storing data obtained in the previous iterations of the loop, but in the worst-case scenario the “While” loop is run $l^*$ times, where $l^*$ is the smallest positive integer such that $\lceil N/l^* \rceil \le F$. Analogously to the analysis for Algorithm 2, the number of state variables needed in Algorithm 3 grows with $l^*$, so choosing Algorithm 3 over Algorithm 1 is preferable when $l^*$ is small relative to the number of distinct attributes. If, in addition, Algorithm 3 is executed with parallel computations, the time to reach mode consensus is bounded by the sum of the per-iteration convergence times over the $l^*$ iterations.
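The control flow of Algorithm 3 is summarized in the following centralized sketch. In the distributed setting, Steps 1, 5, and 6 would be carried out by the consensus protocols (1), (2)–(3), and (5)–(6); here they are replaced by exact computations purely to illustrate the adaptive "While" loop, and the attribute list is a hypothetical example.

```python
import math
from collections import Counter

# Centralized sketch of the adaptive logic of Algorithm 3 (not the distributed protocols).
def algorithm3(attrs):
    N = len(attrs)                               # Step 1: network size
    if N == 1:
        return attrs[0]
    counts = Counter(attrs)
    F_hat, l = 1, 1                              # Step 2: initial estimates
    best = None
    while math.ceil(N / l) > F_hat:              # Step 3: loop condition
        l += 1                                   # Step 4: enlarge the candidate set
        srt = sorted(attrs)
        cand = {srt[math.ceil(j * N / l) - 1] for j in range(1, l + 1)}   # Step 5
        freqs = {q: counts[q] for q in cand}     # Step 6: candidate frequencies
        best = max(freqs, key=freqs.get)
        F_hat = max(F_hat, freqs[best])          # Step 7: update the frequency bound
    return best                                  # Step 9: return the mode

print(algorithm3([4, 1, 4, 2, 4, 3, 1, 4, 2, 4]))   # expected: 4
```

Once the loop exits, the mode frequency is at least $\hat F \ge \lceil N/l \rceil$, so by Lemma 1 the mode is guaranteed to be among the final candidates, and the candidate with the largest frequency is indeed the mode.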
V Simulations
Consider an undirected loop composed of $N$ agents, where agent $i$ is connected to agent $i+1$ for every $i = 1, \dots, N-1$, and to agent $i-1$ for every $i = 2, \dots, N$. Agents $1$ and $N$ are also connected. The attributes are assigned so that one of them occurs strictly more often than any other; thus that attribute is the mode, which is unique.
Algorithm 1 is examined first. For each attribute, the initial condition of (5)–(6) at each agent (i.e., the initial guess of the frequency at each agent) is randomly and independently selected within a bounded range. The coupling gain is selected according to Theorem 1, and the speed gain is fixed. The simulation results are shown in Figs. 1–2. From Fig. 2 it is observed that consensus is reached rapidly, while the trajectories converge to the frequency of the mode at a relatively slow rate. This is in accord with expectations, given the use of blended dynamics ideas.
Second, Algorithm 2 is examined. Suppose that a lower bound on the frequency of the mode is known a priori. With protocol (1), where the gains are chosen as before and the initial condition at each agent (i.e., the initial guess of the network size at each agent) is randomly and independently selected within a bounded range, each agent asymptotically estimates the network size, as shown in Fig. 3. Next, a value of $l$ consistent with this lower bound is used in Algorithm 2, so that only two candidate positions need to be estimated. Fig. 4 shows the corresponding estimation result obtained by protocol (2), where the initial condition at each agent (i.e., the initial guess of the candidate element at each agent) is randomly and independently selected; it is seen that the two candidate estimates converge to their respective elements. Then, Fig. 5 shows that the frequency estimates of the two candidates converge to their true values by running protocol (5)–(6), where the gains and initial conditions are selected following the rules of the first simulation. Finally, Fig. 6 shows the mode estimated at each agent.
Lastly, Algorithm 3 is examined. Parameter settings follow those of Algorithms 1 and 2, and the iteration starts from the initial values of $\hat F$ and $l$ set in Step 2). The criterion $\lceil N/l \rceil \le \hat F$ is checked periodically. Since it fails at the start, $l$ is increased immediately, so that two candidate positions need to be estimated; the corresponding result is shown in Figs. 7–8, in which the candidate attribute estimates and the associated frequency estimates converge to their true values. Then $\hat F$ is updated to the largest frequency found, as shown in Fig. 9. As the situation at this value of $l$ is the same as in the second simulation, the simulation details are omitted. Note that the termination condition is satisfied, since $\lceil N/l \rceil$ no longer exceeds $\hat F$. Finally, Fig. 10 shows the mode estimated at each agent during the iterations.







.



VI Conclusion
It is shown in this paper that, by employing a blended-dynamics based protocol, distributed mode consensus can be reached in an undirected network. It is also shown that, if the frequency of the mode is no less than $\lceil N/l \rceil$ for some positive integer $l$ that is small compared with the number of distinct attributes, the number of distinct frequencies computed at every agent can be reduced.
To design the mode consensus algorithms, an upper bound on the network size, $\bar N$, is required as a priori information. The convergence time is always finite.
A nice feature of the proposed algorithms is the plug-and-play property. When a new agent plugs in or leaves the network, only passive manipulations are needed for the agents in order to restore consensus to a new mode.
Future work along this direction includes extending the present results to the case of directed graphs, developing second (or higher) order algorithms to reach faster convergence, and adaptively adjusting the gains so that the need for knowledge of $\bar N$ can be removed. While a piecewise-constant interaction graph is considered in this paper, gossip or other stochastic kinds of interaction are also of interest for future research.
We shall prove Theorem 1 following the idea of [6]. By stacking the agents' states, the overall system (5)–(6) is rewritten as
(13)
where the quantities appearing in (5)–(6) are written in compact (stacked) notation for simplicity.
Let $\lambda_2$ denote the second smallest eigenvalue of the symmetric Laplacian matrix of the graph, which is nonzero under Assumption 1. Then, the following lemma is a key to the proof.
Lemma 2 (Lemma 1 in [6])
Suppose that Assumption 1 holds. If the coupling gain is sufficiently large, then the matrix appearing in the analysis is positive definite; moreover, its smallest eigenvalue admits an explicit lower bound in terms of $\lambda_2$ and the gains.
Lemma 3
Let the equilibrium of (13) be considered, and denote its $i$-th element accordingly. Under Assumption 1,
1. the average of the elements of the equilibrium equals the quantity that each agent seeks to estimate;
2. for each $i$, the deviation of the $i$-th element from that quantity satisfies the bound
(14)
Proof:
Observe that because and . Then,
which proves the first claim. Under Assumption 1, the matrix has exactly one zero eigenvalue and there exists an orthogonal matrix such that
where is orthogonal, and is real and diagonal. Then, with a coordinate change
the equilibrium can be expressed by
(15) |
where and whose existence follows from the fact that . In fact, we observe that
where which is positive definite. Therefore, we see that . Now, by (15) and by , we have that
from which is obtained. With the expressions for and , equation (15) yields that
Therefore,
Noting that, for ,
we finally obtain
Now, it follows from (13) that
Here we note that, since the matrix is symmetric,
Recalling that the assumption of Theorem 1 implies by the fact that [13] so that Lemma 2 guarantees . The assumption also implies that by (14), with which we note that for all because . This implies that where that is the size of the interval since by the assumption. Putting together, we obtain
Therefore, when
we have that for all .
References
- [1] Kuhn, F., Locher, T. and Schmid, S. Distributed computation of the mode. Proceedings of the Twenty-Seventh ACM Symposium on Principles of Distributed Computing, 2008, 15–24.
- [2] Bénézit, F. and Thiran, P. and Vetterli, M. Interval consensus: From quantized gossip to voting. Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP’09), 2009, 3661–3664.
- [3] Bénézit, F. and Thiran, P. and Vetterli, M. The distributed multiple voting problem. IEEE Journal of Selected Topics in Signal Processing, 2011, 5: 791–804.
- [4] Salehkaleybar, S. and Sharif-Nassab, A. and Golestani, S. J. Distributed voting/ranking with optimal number of states per node. IEEE Transactions on Signal and Information Processing over Networks, 2015, 1: 259–267.
- [5] Huang, C., Anderson, B. D. O., Zhang, H. and Yan, H. Distributed convex optimization as a tool for solving $f$-consensus problems. Automatica, 2023, 115: 111087.
- [6] Lee, D. and Lee, S. and Kim, T. and Shim, H. Distributed algorithm for the network size estimation: Blended dynamics approach. 2018 IEEE Conference on Decision and Control (CDC), 2018, 4577–4582.
- [7] Dashti, Zoreh Al Zahra Sanai and Oliva, Gabriele and Seatzu, Carla and Gasparri, Andrea and Franceschelli, Mauro. Distributed Mode Computation in Open Multi-Agent Systems. IEEE Control Systems Letters, 2022, 6: 3481–3486.
- [8] Lee, Jin Gyu and Shim, Hyungbo. Design of Heterogeneous Multi-agent system for Distributed Computation. Trends in Nonlinear and Adaptive Control: A Tribute to Laurent Praly for his 65th Birthday, Springer, 2021, 83–108.
- [9] Cortés J. Distributed algorithms for reaching consensus on general functions. Automatica, 2008, 44: 726–737.
- [10] Olfati-Saber R. and Fax J. and Murray R. Consensus and Cooperation in Networked Multi-Agent Systems. Proceedings of the IEEE, 2007, 95: 215–233
- [11] Cortés J. Discontinuous dynamical systems. IEEE Control systems magazine, 2008, 28: 36–73.
- [12] Li, Weijian and Zeng, Xianlin and Liang, Shu and Hong, Yiguang. Exponentially Convergent Algorithm Design for Constrained Distributed Optimization via Nonsmooth Approach. IEEE Transactions on Automatic Control, 2022, 67: 934-940.
- [13] Mohar, Bojan. Eigenvalues, diameter, and mean distance in graphs. Graphs and Combinatorics, 1991, 7: 53–64.