Semantic Communication in Multi-team Dynamic Games: A Mean Field Perspective
Abstract
Coordinating communication and control is a key component of the stability and performance of networked multi-agent systems. While single-user networked control systems have received considerable attention in this domain, in this work we address the more challenging problem of large-population multi-team dynamic games. In particular, each team comprises two decision makers (namely, the sensor and the controller) who coordinate over a shared network to control a dynamically evolving state of interest under costs on both actuation and sensing/communication. Due to the shared nature of the wireless channel, the overall cost of each team depends on the other teams' policies, thereby leading to a noncooperative game setup. Due to the presence of a large number of teams, we compute approximate decentralized Nash equilibrium policies for each team using the paradigm of (extended) mean-field games, where the mean-field interaction is governed by (1) the mean traffic flowing over the channel, and (2) the value of information at the sensor, which highlights the semantic nature of the ensuing communication. In the process, we compute optimal controller policies and approximately optimal sensor policies for each representative team of the mean-field system to alleviate the problem of general non-contractivity of the mean-field fixed-point operator associated with the finite cardinality of the sensor action space. Consequently, we also prove the $\epsilon$-Nash property of the mean-field equilibrium solution, which essentially characterizes how well the solution derived using mean-field analysis performs on the finite-team system. Finally, we provide extensive numerical simulations, which corroborate the theoretical findings and lead to additional insights on the properties of the results presented.
I Introduction
Games among teams have attracted considerable interest in the recent past. These systems consist of cooperating decision-makers (within a team) who aim to collectively optimize a shared team objective, while being affected by the actions of other competing teams. Applications of the same include multi-player perimeter defense games [1], electricity markets [2], oligopolies [3], and networked robotics [4], to name a few.
In this paper, we address the problem of finding equilibrium policies for multiple dynamic teams competing to access a shared wireless channel. A prototype of such a system is shown in Fig. 1. Each team comprises a dynamically evolving plant controlled by a sensor-controller pair. The controller relies on state information from the sensor, which communicates over the shared wireless channel. Costs arise from (1) the sensor transmitting data over the network, and (2) the controller implementing control policies for plant stabilization. An effective control policy requires consistent state measurements, which increase communication costs. Conversely, reducing the number of communications lowers these costs, but degrades control performance, leading to a classic communication-control trade-off for each team. Consequently, the sensor and controller act as two active decision-makers within a team, optimizing a common objective to find a team-optimal policy pair.
Furthermore, since the channel is shared, the communication cost is coupled with the transmission activity of other users. We model this coupling effect as the product of sensor action and the average number of communication instances of the other teams. Therefore, if the channel is highly congested, each sensor will seek to reduce its communication, whereas if the channel is underutilized, each sensor will increase its transmissions to improve control performance. This dynamic interaction suggests that a (Nash) equilibrium channel utilization may emerge for each team. However, with a large number of teams, calculating exact Nash policies is challenging because each team needs to be aware of other teams’ policies. To address this, we employ the mean-field game (MFG) framework to compute approximate decentralized equilibrium policies for each team, which depend only on the local information. This approach involves analyzing the limiting team game problem (with infinite teams), which we refer to as the mean-field team (MFT) system. In the MFT system, individual team deviations become insignificant due to the infinite number of teams, and hence, it can be characterized by a representative team interacting with a mass distribution. This allows us to determine the optimal policy for a generic team, referred to as the mean-field team equilibrium (MFTE) policy, which aligns with the population’s behavior. We provide conditions for the existence and uniqueness of an approximate MFTE and demonstrate when the derived equilibrium policies constitute approximate Nash policies for the finite-team system.
I-A Related Works
I-A1 Team optimal control & Information structures
Team decision problems, which were first investigated in [5, 6], involve multiple decision makers (DMs), each of whom has access to different information variables, who jointly choose policies so as to incur a common cost/reward. Since each DM acts independently, and the DMs do not necessarily share the same information, the joint optimal policy design is highly dependent on the information available to each DM [7, 8, 9], and its derivation can be challenging particularly when the information is dynamic, as demonstrated by Witsenhausen [10], Feldbaum [11], and Başar [12]. The single-team communication-control trade-off problem is an instance of such a decision problem, where the sensor and the controller form a 2-DM team, each having access to different information signals within the system. Initial investigations for solving the single-team problem in this context were undertaken within the realm of event-driven control, where the primary objective is to merely stabilize the dynamical plant, without any optimality guarantees [13, 14, 15, 16]. Multi-team extensions of the same have also been considered in [17, 18].
On the other hand, single-team settings from the viewpoint of an optimal control problem have also been widely dealt with in the literature [19, 20, 21, 22, 23, 24, 25] to study the impact of resource constraints on estimator and controller designs, where the resource is the frequency of usage of the transmission channels for estimation or control purposes. Specifically, papers [19] and [20] have obtained optimal threshold-type (symmetric) policies for the sensor under a hard constraint on the number of transmissions, with extensions provided in [21]; [22] has placed resource constraints on the interaction between the controller and the actuator in a plant; [23] has introduced a tradeoff between two options for a controller, namely transmission of control signals to the actuator and reception of measurements, where optimality is again based on threshold-type policies; [24] has provided jointly optimal policies for control and sensing; and [25] has derived suboptimal sensor and control policies based (only) on the statistics of the estimation error. The latest works in this direction are [26, 27], where the authors show that for a multi-dimensional Gauss-Markov process, the optimal estimator is linear and is unaffected by the no-communication events (as was derived for scalar systems earlier in [20] with its optimality proven in [21]), leading to a globally optimal policy design for a single-team problem.
I-A2 Finite & Mean-field team games
As opposed to the above frameworks, which consider a single-team problem, there are only a handful of works considering games among multiple teams [28, 29, 30, 31, 32]. Specifically, the paper [28] has investigated the properties of the saddle-point equilibrium in a two-team zero-sum game; [29] has proposed a learning algorithm for zero-sum Markov games between teams of multiple agents, with extensions to general-sum 2-team games in [30]; [31] has provided only the Nash controller policies for 2-team general-sum games without explicit sensor policy computation, which becomes challenging even for a 2-team problem; and [32] has solved for best-response sensor policies for one player with asymmetric sensing in a zero-sum game setup.
To alleviate the above issue of equilibrium policy computation, the framework of mean-field games (MFGs) (which forms a subset of the mean-field team games with each team constituting a single DM) was introduced simultaneously in [33] and [34]. The idea behind the same is to compute approximate decentralized Nash equilibrium policies for agents by passing to an infinite agent system (under suitable assumptions of anonymity between agents), in which case the effect of actions of any specific agent becomes insignificant on that of the other agents, thereby leading to an aggregation effect. Under this scenario, one can solve for a representative agent’s equilibrium policy within the infinite agent system which is consistent with that of the population and then characterize its approximation with respect to the centralized Nash equilibrium policies for the finite-agent system.
While many works have considered both theoretical and application aspects of standard MFGs [35, 36, 37, 38, 39, 40], only a few results exist pertaining to MFTGs [41, 2]. A similar paradigm also worth mentioning is that of mean-field type games [42, 43], where the number of teams is finite, but the number of agents within each team goes to infinity. As opposed to standard MFGs, an additional challenge within the MFTG setup is to handle the incoming interactions from other teams whilst solving for the team-optimal solution for each team. While the work [2] considers a discrete state and action setup, the work [41] considers a linear-quadratic setting to compute an asymptotic mixed-equilibrium optimum for the team-game problem. In this work, however, we consider a mixed discrete-continuous action space where the controller action lies in a Euclidean space while the sensor action is binary-valued (either 0 denoting no transmission or 1 denoting full state transmission). Further, as we will see, the cost function is non-quadratic and the estimation-error dynamics, which serve as the state of the reduced problem, are bilinear due to the presence of no-transmission events, which makes the problem significantly more challenging.
I-A3 Semantic communication
With the proliferation of Internet-of-Things (IoT) technology, there has been an increasing interest in the efficient use of the shared wireless spectrum [37, 44, 45]. To enable the same, semantic communication is an emerging paradigm that aims to transmit only relevant information through shared resources rather than just the raw data, leading to more intelligent and context-aware communication systems, especially in bandwidth-constrained environments [46]. For instance, in [47], it has been argued that with semantic communication it is possible to reduce the transmission rate without significant loss in system performance; the paper [48] discusses the potential energy savings resulting from semantic interaction in connected autonomous vehicles; and the paper [49] demonstrates the effectiveness of semantics-powered sampling for remote real-time source reconstruction. We refer the interested reader to [50, 51], and the references therein, for additional details. In this work, we capture the semantics of information using the value of information (VoI) metric [26]. This metric essentially measures the (excess) payoff derived by a team from the information communicated by the sensor to the controller as opposed to no communication, thereby providing a quantitative basis for evaluating the effectiveness of semantic communication in our proposed approach.
Thus, we list the main contributions of our work as follows.
1. We consider a multi-team dynamic networked game problem with a costly uplink, where each team operates in a noncooperative game setting due to interactions over a shared resource whilst coordinating its optimal communication and control actions to ensure stabilization of the underlying dynamical system. The setups described in Subsection I-A1 constitute special cases of our work up to appropriate modifications.
2. In view of the challenge in finding Nash equilibrium policies due to the presence of a large number of teams, we use the framework of mean-field team games to compute approximate decentralized Nash equilibrium controller-sensor policies for each team. In the process, we first notice the general non-contractivity of the mean-field operator due to the finite cardinality of the sensor action space, which precludes establishing the existence of a mean-field team equilibrium (MFTE) via standard fixed-point arguments. To alleviate this issue, we employ probabilistic Boltzmann-type policies to approximate the optimal sensor policy, which are dependent on the average utilization of the wireless channel (or the mean traffic flow over the network). The same is then used to prove the existence and uniqueness of an approximate MFTE.
3. Finally, we demonstrate that the MFTE obtained above constitutes an approximate Nash equilibrium for the finite-team system, where the approximation is induced by (1) the number of teams in the system, and (2) the Boltzmann parameter characterizing the approximate sensor policy.
We would also like to note that our current work is closest in spirit to [26]. Here, however, we consider a multi-team problem as opposed to the single-team problem considered in [26]. Nonetheless, we utilize the team-optimal policy notions and heuristics employed in [26] in the current work as well. The added non-trivial challenge is to solve for the Nash equilibrium policies on top of the team-optimal policies for each team. This has required us to frame the problem as an MFTG and consequently compute approximate Nash policies between the finite number of teams.
I-B Organization of the paper
The paper is organized as follows. We formulate the finite-team game problem in Section II and the mean-field team game in Section III. The analysis of the existence and uniqueness of the MFTE is presented in Section IV, its approximation performance is studied in Section V, and the approximate Nash equilibrium analysis is included in Section VI. Numerical results are presented in Section VII, and the paper concludes with some major highlights and ensuing discussions in Section VIII. Proofs of main propositions and the supporting lemmas are presented in Appendices.
Notations: The set of teams is denoted by $[N] := \{1, \dots, N\}$ and the set of time instants by $[T] := \{0, \dots, T\}$. We denote the Euclidean norm by $\|\cdot\|$. For a positive definite (resp. semi-definite) matrix $S$, we use the notation $S \succ 0$ (resp. $S \succeq 0$). The trace of a matrix is denoted by $\mathrm{Tr}(\cdot)$, and for a vector $x$ and a matrix $S \succeq 0$, we define $\|x\|^2_S := x^\top S x$. For two real functions $f$ and $g$, $f = O(g)$ means that there exists a constant $C > 0$ such that $|f(x)| \le C |g(x)|$. Finally, $\mathbb{1}_A$ denotes the indicator function of a measurable set $A$.
II Finite-Team Game Problem
Consider the networked multi-agent system shown in Fig. 2, comprised of $N$ teams. Each team $i \in [N]$ is modeled by a triple consisting of a dynamically evolving plant, a sensor, and a controller, where the sensor and the controller can be viewed as the two active agents of team $i$. The plant evolves according to the stochastic linear difference equation
$x^i_{t+1} = A(\theta_i)\, x^i_t + B(\theta_i)\, u^i_t + w^i_t, \qquad (1)$
where $x^i_t$ denotes the state and $u^i_t$ denotes the control input, both for the $i$th team. The system noise $w^i_t$ is assumed to be Gaussian with zero mean and finite covariance. Further, $A(\theta_i)$ and $B(\theta_i)$ are matrices of appropriate dimensions which depend on the type $\theta_i$ of the team, chosen according to an empirical probability mass function (pmf) over a finite type set. The initial state $x^i_0$ has a symmetric density with a given mean and positive definite covariance for each $i$, and is assumed to be independent of the noise process.
Next, for each team $i$, full-state information of the plant is relayed by the sensor to the controller over a shared two-hop wireless channel via a central base station (BS), as shown in Fig. 2, which coordinates the flow of information. The hop from the sensor to the BS is called the uplink and the one from the BS to the controller is called the downlink. The sensory information is relayed to the controller subject to a one-step delay, i.e., if $z^i_t$ and $y^i_t$ denote the uplink input and the downlink output, respectively, then we have that
$y^i_{t+1} = z^i_t, \qquad z^i_t = \begin{cases} x^i_t, & \text{if } \gamma^i_t = 1, \\ \varnothing, & \text{otherwise}, \end{cases} \qquad (4)$
for all $i$ and $t$. The quantity $\varnothing$ denotes an empty symbol, and $\gamma^i_t$ denotes the transmission request of the sensor of team $i$, given as
$\gamma^i_t = \begin{cases} 1, & \text{if the sensor of team } i \text{ transmits at time } t, \\ 0, & \text{otherwise}. \end{cases} \qquad (7)$
We further assume that the downlink has enough bandwidth to accommodate all the sensor requests. (Footnote: The case where the downlink suffers from a capacity constraint with an ideal uplink is dealt with in the works [37, 44], where the authors use a weighted age-of-information-based metric to devise optimal downlink transmission strategies for the base station. However, the case where both the uplink and the downlink suffer from non-idealities is a significantly challenging problem since one would need to jointly design each team's policies and the scheduling policy of the base station. We will leave this as an open and promising research direction.) Thus, whenever a transmission request is generated, the sensor is connected to the controller by the base station. However, for each packet communicated by the sensor to the controller, team $i$ incurs a cost which is proportional to the average number of users communicating on the uplink. More precisely, the associated time-average communication cost over a finite horizon $T$ is given as
$J^i_S(\mu^i, \mu^{-i}) := \frac{1}{T}\, \mathbb{E}\Big[ \sum_{t=0}^{T-1} \gamma^i_t \Big( \frac{1}{N} \sum_{j=1}^{N} \gamma^j_t \Big) \Big], \qquad (8)$
where we refer to the term $\frac{1}{N} \sum_{j=1}^{N} \gamma^j_t$ as the average channel utilization, which models the mean traffic flow through the wireless channel. Further, $\mu^i$ denotes the transmission policy of sensor $i$ and $\mu^{-i}$ denotes the transmission policies of the sensors other than $i$. In the absence of any communication, the controller creates the best estimate of the state, which resets to the latest received information in the event of a communication.
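To make the channel coupling concrete, the following minimal Python sketch evaluates the time-average communication cost of one team for a given realization of all the sensors' transmission decisions, in the spirit of (8). The variable names (gamma, lam) and the normalization of the utilization term are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def communication_cost(gamma: np.ndarray, lam: float, i: int) -> float:
    """Time-average communication cost of team i for one realization.

    gamma : (N, T) array of binary transmission decisions, one row per team.
    lam   : assumed per-packet price weighting the congestion term.
    """
    utilization = gamma.mean(axis=0)              # average channel utilization at each t
    return lam * float(np.mean(gamma[i] * utilization))

# Example: 50 teams over a horizon of 100 steps, each transmitting w.p. 0.3.
rng = np.random.default_rng(0)
gamma = rng.binomial(1, 0.3, size=(50, 100))
print(communication_cost(gamma, lam=0.5, i=0))
```

As the sketch makes explicit, a team pays more when it transmits during periods of heavy channel usage, which is precisely the coupling that the mean-field trajectory will summarize in Section III.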
Let us define the information set of the sensor as
$\mathcal{I}^{s,i}_t = \big\{ x^i_{0:t},\, u^i_{0:t-1},\, \gamma^{1:N}_{0:t-1} \big\}, \qquad (9)$
with $\mathcal{I}^{s,i}_0 = \{x^i_0\}$. Then, we can define the space of admissible centralized sensor policies for team $i$ as the set $\mathcal{M}^i$ of policies $\mu^i$ that are measurable with respect to the $\sigma$-algebra generated by $\mathcal{I}^{s,i}_t$. (Note that here we use the term centralized to emphasize that the information set of sensor $i$ contains the transmission instants of the other agents as well.) Finally, the expectation in (8) is taken with respect to the stochasticity induced by the (possibly) randomized policies, the system noise, and the initial state distribution.
Next, the controller of team $i$ observes the incoming message from the sensor as in (4). Between communication instants, it computes the best estimate of the state (calculated later) based on its information history, given by
(10)
For each control input applied by the controller, the team incurs a time-average control cost given as
$J^i_C(\pi^i) := \frac{1}{T}\, \mathbb{E}\Big[ \sum_{t=0}^{T-1} \big( \|x^i_{t+1}\|^2_Q + \|u^i_t\|^2_R \big) \Big], \qquad (11)$
where $Q \succeq 0$, $R \succ 0$, and $\pi^i$ denotes an admissible policy of the control agent of team $i$. More precisely, given the information structure in (10), we define the space of admissible control policies for team $i$ as the set $\Pi^i$ of policies $\pi^i$ that are measurable with respect to the $\sigma$-algebra generated by the controller's information set. Finally, the expectation in (11) is taken with respect to the noise statistics and the initial state distribution.
As a result of the cost functions in (8) and (11), each team is faced with the following stochastic team-game problem.
Problem 1 (Team-level game problem).
$\inf_{\mu^i \in \mathcal{M}^i,\, \pi^i \in \Pi^i} \; J^i(\mu^i, \pi^i; \mu^{-i}) := J^i_C(\pi^i) + \lambda\, J^i_S(\mu^i, \mu^{-i}),$
where $\lambda > 0$ is the weighting parameter which trades off the costs between uplink communication and control performance.
Henceforth, we refer to the pair $(\mu^i, \pi^i)$ as the $i$th team's policy. We note here that each team's cost depends on the strategies of the other teams through their sensor policies, which leads to a noncooperative game setup. Further, the works [26, 27] constitute a special case of our setup (with $N = 1$). Thus, the objective of this work is to compute Nash equilibrium policies for the teams. That is, we are looking for an $N$-tuple of team policies $(\mu^{i,*}, \pi^{i,*})_{i=1}^N$ such that
$J^i(\mu^{i,*}, \pi^{i,*}; \mu^{-i,*}) \le J^i(\mu^i, \pi^i; \mu^{-i,*})$ for all admissible $(\mu^i, \pi^i)$ and all $i \in [N]$. We note that for team $i$, the above objective depends on the sensing policies of the other teams. Due to the presence of a large number of such teams within a typical networked system, it becomes unrealistic for each team to have access to (and store) this information on the sensing policies of the other teams in the population. Thus, we would like to compute team equilibrium policies based on only the local information accessible to each team. Toward this end, we will use the mean-field game framework [35, 52, 53] to characterize approximate decentralized Nash equilibrium solutions in the next three sections.
III Networked Mean-Field Team Game
In this section, we model the $N$-team game using the mean-field team game (MFTG) framework. In this regard, we first consider the limiting system (with infinitely many teams). Under this setting, the average channel utilization ratio in (8) (and hence in Problem 1) can be approximated by a pre-computable deterministic mean-field trajectory by using the Nash certainty equivalence principle [33]. This reduces Problem 1 to a team optimal control problem for a representative team, the solution of which is consistent with the other teams in the population. Thus, we first compute a team-optimal policy for the representative team. Consequently, we will observe that the finite cardinality of the sensing action set (which is given as $\{0, 1\}$) leads to non-contractivity of the mean-field operator (as also noted in [54]). To alleviate this issue, we construct a Boltzmann-type sensor policy and prove the existence of a unique approximate mean-field team equilibrium (MFTE). Finally, we will show that the solution obtained constitutes an approximate Nash equilibrium for the finite $N$-team game problem, with the notion of approximation defined precisely.
III-A Mean-Field Team Game
Consider a representative team (within the infinite-team system) of type $\theta$, characterized by the tuple of a plant, a sensor, and a controller. The plant dynamics evolve according to the stochastic linear difference equation
$x_{t+1} = A(\theta)\, x_t + B(\theta)\, u_t + w_t, \qquad (12)$
where $(x_t, u_t)$ denotes the state-control input pair of the representative team. The noise $w_t$ is assumed to be Gaussian with zero mean and positive definite covariance. The sensor generates transmission requests $\gamma_t \in \{0, 1\}$ over the uplink, which are then coordinated by the base station to update the state of the plant to the controller. As a result of network usage, the team incurs a communication cost given as
$J_S(\mu; g) := \frac{1}{T}\, \mathbb{E}\Big[ \sum_{t=0}^{T-1} \gamma_t\, g_t \Big], \qquad (13)$
where $\mu$ denotes the sensor policy of the representative team, chosen from the space $\mathcal{M}$ of admissible decentralized sensor policies, i.e., policies measurable with respect to the local information of the representative team's sensor. We note here that this information set constitutes only the local information of the representative team, as opposed to the centralized information set defined in (9). We refer to the term $g = (g_t)_{t=0}^{T-1}$ as the mean-field trajectory, which denotes the infinite-team approximation to the channel utilization ratio term in (8). Finally, the controller applies a control policy $\pi$, which incurs a cost of
$J_C(\pi) := \frac{1}{T}\, \mathbb{E}\Big[ \sum_{t=0}^{T-1} \big( \|x_{t+1}\|^2_Q + \|u_t\|^2_R \big) \Big], \qquad (14)$
The policy $\pi$ is chosen from the space $\Pi$ of admissible control policies, defined similarly to before. Thus, the representative team's optimization problem is defined as follows:
Problem 2 (Representative team problem). For a given mean-field trajectory $g$, solve
$\inf_{\mu \in \mathcal{M},\, \pi \in \Pi} \; J(\mu, \pi; g) := J_C(\pi) + \lambda\, J_S(\mu; g). \qquad (15)$
In order to construct a solution to Problem 2, we invoke the following assumption [33, 55] on the mean-field system.
Assumption 1.
1. The pair $(A(\theta), B(\theta))$ is controllable.
2. The MF trajectory $g$ belongs to the set of bounded $T$-length sequences.
Next, we define the mean-field team equilibrium (MFTE) by introducing the following operators:
1. $\Lambda$, which maps a given MF trajectory $g$ to a team-optimal policy pair $(\mu^*, \pi^*)$ solving Problem 2, and
2. $\Phi$, which maps a policy pair $(\mu, \pi)$ to the MF trajectory it generates.
Then, the MFTE can be defined as follows.
Definition 1 (Mean-Field Team Equilibrium).
A mean-field team equilibrium (MFTE) is a policy-trajectory pair $((\mu^*, \pi^*), g^*)$ which is a fixed point of the composite map $\Phi \circ \Lambda$, i.e., $(\mu^*, \pi^*) = \Lambda(g^*)$ and $g^* = \Phi(\mu^*, \pi^*)$.
In the sequel, we first compute the team-optimal policy pair $(\mu^*, \pi^*)$ solving Problem 2 for a given MF trajectory $g$, and then provide sufficient conditions for the existence of a unique approximate MFTE.
IV Mean-Field Team Equilibrium Analysis
We start this section by solving Problem 2. In this regard, let us start by determining the optimal controller policy. Using completion of squares, we can equivalently write the cost function in (15) as
(16)
where $P_t \succeq 0$ is the unique solution to the Riccati equation
$P_t = Q + A(\theta)^\top P_{t+1} A(\theta) - A(\theta)^\top P_{t+1} B(\theta) \big( R + B(\theta)^\top P_{t+1} B(\theta) \big)^{-1} B(\theta)^\top P_{t+1} A(\theta),$
with the terminal condition given by the terminal cost weight. Thus, from (IV), the optimal controller can be obtained as
$u^*_t = -K_t\, \hat{x}_t, \qquad K_t := \big( R + B(\theta)^\top P_{t+1} B(\theta) \big)^{-1} B(\theta)^\top P_{t+1} A(\theta), \qquad (17)$
where $\hat{x}_t$ is the conditional mean of the state given the information available to the controller. Then, we have the following result.
Proposition 1.
The conditional mean $\hat{x}_t$ given by the MMSE estimator at the controller follows the dynamics
(18)
with $\hat{x}_0$ initialized at the mean of the initial state.
The proof of the above proposition readily follows by taking expectation in (12) conditioned on the controller information structure and is thus not included here.
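For concreteness, a minimal sketch of the certainty-equivalent control law in (17) is given below: it runs the backward Riccati recursion implied by a standard quadratic cost and propagates the controller-side estimate between communications. The stage weights Q, R, the terminal weight, and the omission of the one-step downlink delay are simplifying assumptions for illustration only.

```python
import numpy as np

def riccati_gains(A, B, Q, R, QT, T):
    """Backward Riccati recursion; returns the feedback gains K_0, ..., K_{T-1}."""
    P = QT
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # K_t as in (17)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K               # Riccati update for P_t
        gains.append(K)
    return gains[::-1]

def estimate_step(xhat, u, received, A, B):
    """Controller-side estimate: propagate the model when nothing is received,
    reset to the received state otherwise (downlink delay ignored here)."""
    return received if received is not None else A @ xhat + B @ u

# Scalar, open-loop-unstable example in the spirit of Section VII.
A = np.array([[1.2]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
K = riccati_gains(A, B, Q, R, QT=Q, T=10)
xhat = np.array([1.0])
u = -K[0] @ xhat          # certainty-equivalent input u_t = -K_t * xhat_t
```

The numerical values above are placeholders; only the structure (linear feedback on the conditional mean, with gains from a Riccati recursion) reflects the development in this section.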
Let us define the estimation error $e_t := x_t - \hat{x}_t$. Then, by substituting the optimal controller (17) in (IV), we arrive at
(19)
where the weighting matrices above are determined by the Riccati solution and the controller gains, and the estimation error $e_t$ satisfies the following stochastic difference equation:
(20)
Thus, the sensor's objective is to solve for the optimal policy minimizing (19) subject to (20). To obtain it, let us first define the optimal cost-to-go $V_t$ of the sensor as follows:
for all $t$, with the convention that the terminal cost-to-go is zero. Then, we invoke the following definition [26] of the value of information (VoI).
Definition 2.
The VoI at instant $t$ for the sensor of type $\theta$ is defined as a function of the estimation error $e_t$ as
$\mathrm{VoI}_t(e_t) := V^0_t(e_t) - V^1_t(e_t), \qquad (21)$
where $V^a_t$ denotes the sensor's optimal cost-to-go under the sensor action $\gamma_t = a \in \{0, 1\}$.
We now have the following proposition, which characterizes the optimal sensor policy $\mu^*$.
Proposition 2.
The optimal sensor policy for the team of type $\theta$ is given as
$\gamma^*_t = \mu^*_t(e_t) = \mathbb{1}\{\mathrm{VoI}_t(e_t) > 0\}, \qquad (22)$
where
(23)
Proof.
We start by noting that Problem 2 is a single-team stochastic optimal control problem in which the decision policies ($\mu$ and $\pi$) depend only on the local information and the mean-field trajectory $g$, which is fixed. Hence, for each representative team of type $\theta$, the proof follows from [26, Theorem 1], and is thus complete. ∎
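Since the VoI in Definition 2 is defined through the sensor's optimal cost-to-go, it can be computed numerically by backward dynamic programming. The sketch below does this on a discretized scalar error grid, under the assumed simplifications that the error resets to the noise upon transmission and evolves as A*e + w otherwise, and that the stage cost consists of a quadratic error penalty plus the congestion price lam*g_t when transmitting; the names (Theta, lam, g) are illustrative and not the paper's notation.

```python
import numpy as np

def sensor_dp(A, Theta, lam, g, noise_std, grid, T, n_mc=200, seed=0):
    """Backward DP for the sensor's cost-to-go and the VoI on an error grid."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, noise_std, size=n_mc)          # Monte Carlo noise samples
    V = np.zeros((T + 1, len(grid)))                   # terminal cost-to-go is zero
    voi = np.zeros((T, len(grid)))
    for t in range(T - 1, -1, -1):
        nxt = lambda e: np.interp(np.clip(e, grid[0], grid[-1]), grid, V[t + 1])
        for k, e in enumerate(grid):
            q0 = Theta * e**2 + nxt(A * e + w).mean()          # stay silent
            q1 = Theta * e**2 + lam * g[t] + nxt(w).mean()     # transmit
            voi[t, k] = q0 - q1                                # cf. Definition 2
            V[t, k] = min(q0, q1)
    return V, voi

grid = np.linspace(-5.0, 5.0, 101)
V, voi = sensor_dp(A=1.2, Theta=1.0, lam=0.5, g=np.full(10, 0.3),
                   noise_std=1.0, grid=grid, T=10)
```

With such a computation in hand, the threshold rule of Proposition 2 amounts to transmitting exactly when the computed VoI is positive.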
Let us now denote the optimal policy pair for the representative team of type $\theta$ as $(\mu^*, \pi^*)$, which of course depends on the mean field $g$. Then, we proceed toward computing an MFTE using this policy. In this regard, we first make a crucial observation. The optimal sensor policy computed in Proposition 2 is discontinuous at the point where the VoI equals zero. This makes it prohibitive to employ contraction-type arguments to establish the existence of a unique MFTE [33, 54]. Thus, our objective in what follows is to construct an approximation of the policy $\mu^*$ by restricting it to a special class of policies, leading to a unique (approximate) MFTE within that class.
Let $\beta > 0$, and define a new policy $\mu^\beta$ from which an action is chosen as follows:
$\mu^\beta_t(\gamma_t = a \mid e_t) = \frac{\exp\big( -\beta\, V^a_t(e_t) \big)}{\exp\big( -\beta\, V^0_t(e_t) \big) + \exp\big( -\beta\, V^1_t(e_t) \big)}, \qquad a \in \{0, 1\}. \qquad (24)$
We note that the above policy is indeed an approximation of $\mu^*$ since $\mu^\beta \to \mu^*$ as $\beta \to \infty$, under a suitable tie-breaking convention whenever the two action values coincide. Further, it is also measurable with respect to the sensor's local information. Let the policy $\mu^\beta$ belong to the policy class $\mathcal{M}^\beta$. Then, we prove the existence of a unique approximate MFTE in the sense above, which is defined in precise terms below.
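A minimal sketch of the Boltzmann relaxation in (24) follows. Writing the softmax over the two action values in terms of their difference, the transmission probability becomes a logistic function of the VoI scaled by beta, and it approaches the threshold rule of Proposition 2 as beta grows; the logistic form is an equivalent rewriting of the assumed softmax, not an additional modeling choice.

```python
import numpy as np

def boltzmann_transmit_prob(voi: float, beta: float) -> float:
    """P(gamma_t = 1) under a softmax over {silent, transmit} with value gap VoI."""
    return float(1.0 / (1.0 + np.exp(-beta * voi)))

for beta in (1.0, 10.0, 100.0):
    print(beta, boltzmann_transmit_prob(0.2, beta), boltzmann_transmit_prob(-0.2, beta))
# As beta grows, the probabilities approach the indicator 1{VoI > 0} in (22).
```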
Definition 3 ($\beta$-Approximate MFTE).
A $\beta$-approximate MFTE is a policy-trajectory pair $((\mu^\beta, \pi^*), g^\beta)$ which is a fixed point of the composite map $\Phi \circ \Lambda^\beta$, where
1. $\Lambda^\beta$ is defined similarly to $\Lambda$ using (24), and outputs the pair $(\mu^\beta, \pi^*)$ for a given MF trajectory $g$, and
2. $\Phi$ maps this pair to the MF trajectory it generates.
Also, let us define the mean-field operator $\mathcal{T}^\beta := \Phi \circ \Lambda^\beta$ as
$\mathcal{T}^\beta(g)_t := \mathbb{E}\big[ \gamma^\beta_t \big], \qquad t = 0, \dots, T-1, \qquad (25)$
where $\gamma^\beta_t$ denotes the transmission decision generated by the policy $\Lambda^\beta(g)$ under the mean-field trajectory $g$.
We now first state and prove the following result on the existence of an approximate MFTE.
Proposition 3.
Under Assumption 1, there exists a $\beta$-approximate MFTE for any $\beta > 0$.
Proof.
A detailed proof is provided in Appendix I. In summary, the proof relies on first proving that the mean-field operator $\mathcal{T}^\beta$, as defined in (25), is a continuous function of the MF trajectory. Subsequently, Brouwer's fixed point theorem is applied to show the existence of at least one $\beta$-approximate MFTE for any $\beta > 0$. ∎
In addition, we can guarantee the contraction of the MF operator for sufficiently low values of $\beta$, which allows us to prove a stronger result on the uniqueness of the approximate MFTE, as follows.
Theorem 1.
Suppose that Assumption 1 holds and that the Boltzmann parameter $\beta$ satisfies the sufficient condition (26). Then, there exists a unique $\beta$-approximate MFTE.
Proof.
The detailed proof is provided in Appendix I. Briefly, we use the contractivity of $\mathcal{T}^\beta$ under the condition (26) to employ Banach's fixed point theorem to show the existence of a unique $\beta$-approximate MFTE. ∎
The above results show the existence of an approximate MFTE, which is unique if the parameter $\beta$ is small enough. Further, we would also like to note from the proof of Theorem 1 that the contraction condition in (26) is only a sufficient one, and one may be able to obtain a more precise characterization of the 'smallness' of $\beta$. Furthermore, in the numerical simulation section, we will compute an approximate MFTE even when (26) is not satisfied.
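In practice, an approximate MFTE can be computed by simply iterating the composite map of Definition 3, optionally with damping to aid convergence when the contraction condition (26) cannot be verified. The sketch below shows this fixed-point iteration with a stand-in best response; in the actual procedure, the best response would be produced by the Riccati and sensor computations of this section.

```python
import numpy as np

def solve_mfte(best_response_prob, T, damping=0.5, tol=1e-6, max_iter=500):
    """Damped fixed-point iteration on the mean-field operator of (25)."""
    g = np.full(T, 0.5)                         # initial mean-field guess
    for _ in range(max_iter):
        g_new = best_response_prob(g)           # E[gamma_t] under the policy for this g
        if np.max(np.abs(g_new - g)) < tol:
            return g_new
        g = (1.0 - damping) * g + damping * g_new
    return g

# Toy stand-in: heavier congestion lowers the transmission probability.
toy_best_response = lambda g: 1.0 / (1.0 + 2.0 * g)
print(solve_mfte(toy_best_response, T=10))
```

The damping factor and the toy best response are illustrative; they only demonstrate the structure of the equilibrium computation.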
Now that we have characterized the approximate MFTE for the mean-field game, we next proceed towards proving that this MFTE satisfies the approximate Nash property (as formalized later in Section VI) for the original finite-team game problem.
V Performance of Mean-Field Approximation
In this and the subsequent section, we return to the finite-team setup of Section II and show that the $\beta$-approximate MFTE as computed in the previous section constitutes an $(\epsilon_1 + \epsilon_2)$-Nash equilibrium for the finite-team game problem. The term $\epsilon_1$ arises as a result of the approximation parameter $\beta$ in the sensor policy, and the term $\epsilon_2$ arises due to the infinite-team approximation.
Suppose now that team $i$ deviates toward using a sensor policy different from the $\beta$-approximate MFTE policy $\mu^\beta$. We call the former policy $\tilde{\mu}^i$, and an action chosen from it at instant $t$ is denoted by $\tilde{\gamma}^i_t$. Also, let us define the quantity $\bar{\gamma}^N_t := \frac{1}{N} \sum_{j=1}^{N} \gamma^j_t$, which denotes the empirical average of the actions chosen by all the sensors. We start by proving the following lemma, which bounds the mean deviation between the empirical average $\bar{\gamma}^N_t$ and the approximate equilibrium mean-field trajectory $g^\beta$.
Lemma 1.
Suppose that Assumption 1 holds. Let team $i$ employ the sensor policy $\tilde{\mu}^i$ while all the other teams employ the equilibrium policy $\mu^\beta$. Then, we have that
$\mathbb{E}\big[\, \big| \bar{\gamma}^N_t - g^\beta_t \big| \,\big] = O\big( 1/\sqrt{N} \big), \qquad \forall t, \qquad (27)$
where the constant implicit in the $O(\cdot)$ term is independent of $N$ and $t$.
Proof.
Consider the following inequality:
(28)
We now bound each of the three terms in (V) below. To that end, recall that the action chosen by the sensor of the deviating team $i$ is $\tilde{\gamma}^i_t$. Then, Term 1 in (V) can be bounded as follows:
(29)
where the second equality follows since teams other than the $i$th one are still following the policy $\mu^\beta$.
We next bound Term 2 in (V). To that end, we have that
(30)
where the third equality follows since the summands corresponding to different teams are uncorrelated, because the system noise is independent across teams, which leads to decoupling between the corresponding sensor policies.
Thus, it follows from (30) that:
(31)
Finally, since each sensor action takes values in $\{0, 1\}$, Term 3 in (V) can be bounded as follows:
(32)
The above lemma demonstrates that the deviation of the empirical average from the approximate mean-field trajectory vanishes as the number of teams increases, and hence the latter 'reasonably' approximates the former if the number of teams is large.
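The scaling in Lemma 1 can be sanity-checked with a quick Monte Carlo experiment: if the teams' transmission decisions are modeled as independent Bernoulli draws with success probability equal to the mean-field value (an idealization of the equilibrium population), the gap between the empirical average and the mean-field value indeed shrinks at the rate 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)
g_t = 0.3                                   # illustrative mean-field value at some t
for N in (10, 100, 1000, 10000):
    gaps = [abs(rng.binomial(1, g_t, size=N).mean() - g_t) for _ in range(2000)]
    print(N, np.mean(gaps), np.mean(gaps) * np.sqrt(N))   # last column is roughly constant
```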
VI Approximate Nash Equilibrium Analysis
In this section, we prove the approximate-Nash property of the MF solution for the finite-team game problem, which is made precise below.
Definition 4.
The set of team policies $\{(\mu^{i,*}, \pi^{i,*})\}_{i=1}^N$ constitutes an $\epsilon$-Nash equilibrium for the cost functions $J^i$, $i \in [N]$, if there exists $\epsilon > 0$ such that
$J^i(\mu^{i,*}, \pi^{i,*}; \mu^{-i,*}) \le \inf_{(\mu^i, \pi^i)} J^i(\mu^i, \pi^i; \mu^{-i,*}) + \epsilon$
for all $i \in [N]$.
We also note that the infimum on the RHS of the above inequality is taken over the set of centralized equilibrium policies as defined before, hence bringing in an element of conservatism.
Now, we define the following cost functions:
(33a)
(33b)
(33c)
(33d)
The cost (33a) is the one incurred by team $i$ in the finite-team setting when all the players follow the $\beta$-approximate MFTE policy. The cost in (33b) is the generic team's cost under the equilibrium policy and equilibrium mean-field. The cost in (33c) is incurred by team $i$ when it deviates toward using a different policy while all other teams still employ the MF policy. Finally, the cost in (33d) is incurred when an agent follows a decentralized policy under the approximate equilibrium MF trajectory. The notations of the states and actions in the above costs are under the respective policies and trajectory distributions, and are self-explanatory.
We next state and prove another main result of the paper in the following theorem, which essentially characterizes how ‘well’ the MFTE policy performs.
Theorem 2.
Suppose that Assumption 1 holds and that $\beta > 0$ is given. Then, the set of decentralized team policies obtained from the $\beta$-approximate MFTE constitutes an $(\epsilon_1 + \epsilon_2)$-Nash equilibrium for the finite-team uplink scheduling game, i.e., the inequality in Definition 4 holds with $\epsilon = \epsilon_1 + \epsilon_2$, where $\epsilon_1$ depends on the Boltzmann parameter $\beta$ of the approximate sensor policy and $\epsilon_2 = O(1/\sqrt{N})$.
Proof.
To establish the inequality in Theorem 2, consider the following:
(34a)
(34b)
(34c)
The proofs for bounding each of the above terms are provided in the Appendices. The term in (34a) is bounded by Lemma 3 as in Appendix II. Next, to bound the term in (34b), we first prove the boundedness of the state action value function as in Lemma 5 in Appendix III and consequently use Proposition 5 as in Appendix III. Finally, the term in (34c) is bounded by Lemma 4 as in Appendix II.
Combining the results from Lemmas 3, 4 and Proposition 5, we arrive at
(35)
The proof is thus complete.
∎
Theorem 2 states that the $\beta$-approximate MFTE policy derived using the infinite-agent system behaves as an approximate Nash equilibrium solution for the original finite-team game, where the approximation results from 1) using the Boltzmann policy $\mu^\beta$ instead of the optimal sensing policy $\mu^*$, and 2) using the infinite-team system as a proxy for the $N$-team system.
VII Numerical Experiments
In this section, we provide simulation results on the finite-team game and demonstrate the performance of the approximate MFTE as computed in Section IV. Throughout, we consider the type set to be a singleton, i.e., all the teams are homogeneous. Further, we consider open-loop unstable scalar agents over a finite time horizon. Next, we note that the approximate sensing policy in (24) depends on the value of information, which in turn depends on the conditional value function of the state. Since we do not have a closed-form expression for the latter, we will use supervised learning with function approximation [56] to approximate it. In particular, we start by parametrizing the state value function using an approximation that is quadratic in the estimation error and quadratic in the mean-field trajectory, with weighting coefficients to be learned. Subsequently, we run value iteration using the parametrized form of the value function. Finally, we perform supervised learning using a mean-squared error loss to learn the weighting coefficients. The procedure terminates whenever the coefficients converge to within a prescribed tolerance. Later, we also study the effect of the degree of the approximating polynomial on the total average cost per team.
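As a rough illustration of the supervised-learning step described above, the sketch below fits a parametrization that is quadratic in the estimation error and quadratic in the mean-field value by ordinary least squares. The particular feature set and the synthetic regression targets are assumptions for illustration; in the actual procedure, the targets come from the value-iteration (Bellman) backup.

```python
import numpy as np

def features(e, g):
    """Quadratic-in-error, quadratic-in-mean-field feature map (assumed form)."""
    return np.stack([np.ones_like(e), e**2, g, g**2], axis=1)

rng = np.random.default_rng(2)
e = rng.normal(0.0, 2.0, size=500)           # sampled estimation errors
g = rng.uniform(0.0, 1.0, size=500)          # sampled mean-field values
targets = 3.0 + 1.5 * e**2 + 0.2 * g + 0.8 * g**2 + rng.normal(0.0, 0.1, size=500)

X = features(e, g)
w, *_ = np.linalg.lstsq(X, targets, rcond=None)   # fitted weighting coefficients
print(w)                                          # approximately [3.0, 1.5, 0.2, 0.8]
```

Higher-degree polynomial features can be appended in the same way, which is how the degree study reported later in this section can be reproduced in spirit.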
In the first study, we plot the variation of the average cost per team versus the importance weight $\lambda$ in Figure 3, for four different values of $\lambda$ with the remaining parameters held fixed. The figure shows a box plot depicting the median (red line) and spread (box) of the average cost per team over 100 runs for each value of $\lambda$ and a fixed number of teams, under the equilibrium controller-sensor policy developed using the mean-field game paradigm. From the same, we observe that the average cost increases with the increase in $\lambda$. This is explained by an increase in overall estimation error due to the increase in the price of communication.
For the same set of parameter values, we also plot (in Figure 4) the approximate equilibrium mean-field trajectory corresponding to the $\beta$-approximate MFTE computed in Section IV, which denotes the average utilization of the wireless channel. From the same, we see that the channel utilization decreases as the price of communication increases, which aligns with intuition. The corresponding state, error, and control input trajectories under the approximate equilibrium policies for all the teams are shown in Figures 5, 6, and 7, respectively.
Next, in Figure 8 we study the variation of the average cost per team under the equilibrium controller-sensor policy for a fixed value of $\lambda$ and varying values of $\beta$. The figure shows a box plot of the average cost per team over 100 runs for each value of $\beta$ and a fixed number of teams. From the same, we observe that the average cost shows a decreasing trend with increasing values of $\beta$. This is explained by the fact that increasing $\beta$ decreases the level of approximation in the construction of the policy $\mu^\beta$ from the optimal policy $\mu^*$, which in turn leads to a lower cost.
Next, in Figure 9 we provide the box plots of the average cost per team over 100 runs for varying numbers of teams. The price parameter and the Boltzmann parameter were fixed at 0.055 and 10, respectively. We then observe that the spread of the box plot decreases with increasing $N$, indicating that the mean-field approximation gets better with increasing $N$.
Finally, we study the effect of the degree of the polynomial used for approximation of the value function on the average cost of the teams. For this, we fix the remaining parameters as above. We plot in Figure 10 the average cost per team as a function of the degree of the polynomial. As the degree increases from quadratic to cubic to quartic, we do not observe significant changes in the average cost per team. The median costs (red line) for the three cases are obtained as 3640, 3650, and 3666. This demonstrates that the quadratic polynomial approximates the value function reasonably well.
VIII Conclusion & Discussions
In this work, we have solved for approximate Nash equilibrium team policies for a multi-team dynamic game problem where each team coordinates its communication and control policies while utilizing a shared wireless channel. Specifically, we have formulated a noncooperative (general-sum) game between $N$ teams, each of which is comprised of a local sensor and a local controller which communicate over a network and incur an overall cost that is coupled with the (sensing) policies of the other teams as a result of communication over the shared channel. Due to the unavailability of the sensing policies of the other teams (which are challenging to obtain, especially when the population is large), we have employed the mean-field team game framework to compute approximate decentralized Nash equilibria between the teams. Toward this end, we have first noticed that the finite cardinality of the sensing action space prohibits the use of a contraction argument to establish the existence of a mean-field equilibrium. To overcome this difficulty, we have constructed a Boltzmann-type sensing policy, which we then employed to explicitly prove the existence of a unique approximate mean-field team equilibrium. Subsequently, we showed that the same constitutes an approximate Nash equilibrium for the finite-team game, where the approximation depends vanishingly on the number of teams $N$ and on the Boltzmann parameter $\beta$. We have also validated the theoretical results with extensive numerical simulations.
We list here a few future research directions. First, for the numerical analysis, we assumed that the price parameter $\lambda$ was fixed and provided an extensive numerical analysis of its effect on the average costs, estimation errors, and control inputs of the teams. Thus, one direction would be to formulate the finite-team game within a Stackelberg game setting, where the leader (possibly a telecommunication/network operator) solves an optimization problem to distribute 'optimal' prices to the follower teams. This necessitates substituting the policies of the followers into the leader's optimization problem, and subsequently deriving the leader's optimal policy. However, this (at the moment) seems challenging since each team's policy is not attainable in closed form; rather, we only know it as a function of the conditional value function. Thus, it would be interesting to see if one can derive analytical results for the Stackelberg game setting.
Another direction that deserves mention here is that, in the current work, we did not account for the no-transmission instants within the information structure of the controller. This was due to the fact that the no-transmission instants make the estimator dynamics nonlinear and couple the design of the optimal estimator and the optimal sensing policy. Although it has been shown in [27] that the optimal estimator turns out to be linear and decoupled from the sensor design, this is true only at optimality. However, upon careful observation, one may notice that in the current work we have employed a Boltzmann sensing policy (which is an approximation to the optimal policy) to establish the existence of a unique and tractable equilibrium. Hence, the decoupling argument between the estimator and the scheduler no longer holds. Thus, it would be interesting to consider whether results similar to those in this paper can be generalized to the case where the information set of the controller also includes the no-transmission instants.
References
- [1] D. Shishika and V. Kumar, “A review of multi agent perimeter defense games,” in Decision and Game Theory for Security: 11th International Conference, GameSec 2020, College Park, MD, USA, October 28–30, 2020, Proceedings 11. Springer, 2020, pp. 472–485.
- [2] J. Subramanian, A. Kumar, and A. Mahajan, “Mean-field games among teams,” arXiv preprint arXiv:2310.12282, 2023.
- [3] G. Y. Weintraub, C. L. Benkard, and B. Van Roy, “Markov perfect industry dynamics with many firms,” Econometrica, vol. 76, no. 6, pp. 1375–1411, 2008.
- [4] T. Hatanaka, N. Chopra, M. Fujita, and M. W. Spong, Passivity-based Control and Estimation in Networked Robotics. Springer, 2015.
- [5] J. Marschak, “Elements for a theory of teams,” Management Science, vol. 1, no. 2, pp. 127–137, 1955.
- [6] R. Radner, “Team decision problems,” The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 857–881, 1962.
- [7] S. Yüksel and T. Başar, Stochastic networked control systems: Stabilization and optimization under information constraints. Springer Science & Business Media, 2013.
- [8] A. Dave and A. A. Malikopoulos, “Decentralized stochastic control in partially nested information structures,” IFAC-PapersOnLine, vol. 52, no. 20, pp. 97–102, 2019.
- [9] A. Dave, N. Venkatesh, and A. A. Malikopoulos, “Decentralized control of two agents with nested accessible information,” in 2022 American Control Conference (ACC). IEEE, 2022, pp. 3423–3430.
- [10] H. S. Witsenhausen, “A counterexample in stochastic optimum control,” SIAM Journal on Control, vol. 6, no. 1, pp. 131–147, 1968.
- [11] A. A. Feldbaum, “Dual control theory,” in Control Theory: Twenty-Five Seminal Papers (1932-1981), T. Başar, Ed. Wiley-IEEE Press, 2001, ch. 10, pp. 874–880.
- [12] T. Başar, “Variations on the theme of the Witsenhausen counterexample,” in Proceedings of the 47th IEEE Conference on Decision and Control, December 2008, pp. 1614–1619.
- [13] K. J. Astrom and B. M. Bernhardsson, “Comparison of Riemann and Lebesgue sampling for first order stochastic systems,” in Proceedings of the 41st IEEE Conference on Decision and Control, 2002., vol. 2. IEEE, 2002, pp. 2011–2016.
- [14] P. Tabuada, “Event-triggered real-time scheduling of stabilizing control tasks,” IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1680–1685, 2007.
- [15] W. P. Heemels, K. H. Johansson, and P. Tabuada, “An introduction to event-triggered and self-triggered control,” in Proceedings of the 51st IEEE Conference on Decision and Control, 2012, pp. 3270–3285.
- [16] K. Nar and T. Başar, “Sampling multidimensional Wiener processes,” in Proc. 53rd IEEE Conference on Decision and Control (CDC’14, Dec 15-17, 2014; Los Angeles, CA), pp. 3426–3431.
- [17] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson, “Distributed event-triggered control for multi-agent systems,” IEEE Transactions on Automatic Control, vol. 57, no. 5, pp. 1291–1297, 2011.
- [18] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, “Event-based broadcasting for multi-agent average consensus,” Automatica, vol. 49, no. 1, pp. 245–252, 2013.
- [19] O. C. Imer and T. Başar, “Optimal estimation with limited measurements,” in Proceedings of the 44th IEEE Conference on Decision and Control. IEEE, 2005, pp. 1029–1034.
- [20] O. C. Imer and T. Başar, “Optimal estimation with limited measurements,” International Journal of Systems, Control and Communications, vol. 2, no. 1-3, pp. 5–29, 2010.
- [21] G. M. Lipsa and N. C. Martins, “Remote state estimation with communication costs for first-order LTI systems,” IEEE Transactions on Automatic Control, vol. 56, no. 9, pp. 2013–2025, 2011.
- [22] O. C. Imer and T. Başar, “Optimal control with limited controls,” in 2006 American Control Conference. IEEE, 2006, pp. 298–303.
- [23] O. C. Imer and T. Basar, “To measure or to control: optimal control with scheduled measurements and controls,” in 2006 American Control Conference. IEEE, 2006, pp. 1003–1008.
- [24] A. Molin and S. Hirche, “On LQG joint optimal scheduling and control under communication constraints,” in Proceedings of the 48th IEEE Conference on Decision and Control held jointly with 28th Chinese Control Conference. IEEE, 2009, pp. 5832–5838.
- [25] D. Maity and J. S. Baras, “Minimal feedback optimal control of linear-quadratic-Gaussian systems: No communication is also a communication,” IFAC-PapersOnLine, vol. 53, no. 2, pp. 2201–2207, 2020.
- [26] T. Soleymani, J. S. Baras, and S. Hirche, “Value of information in feedback control: Quantification,” IEEE Transactions on Automatic Control, vol. 67, no. 7, pp. 3730–3737, 2022.
- [27] T. Soleymani, J. S. Baras, S. Hirche, and K. H. Johansson, “Value of information in feedback control: Global optimality,” IEEE Transactions on Automatic Control, vol. 68, no. 6, pp. 3641–3647, 2023.
- [28] I. Hogeboom-Burr and S. Yüksel, “Zero-sum games involving teams against teams: Existence of equilibria, and comparison and regularity in information,” Systems & Control Letters, vol. 172, p. 105454, 2023.
- [29] M. G. Lagoudakis and R. Parr, “Learning in zero-sum team Markov games using factored value functions,” Advances in Neural Information Processing Systems, vol. 15, 2002.
- [30] M. Ghimire, L. Zhang, W. Zhang, Y. Ren, and Z. Xu, “Solving two-player general-sum games between swarms,” available on arXiv:2310.01682, 2023.
- [31] D. Maity and J. S. Baras, “Linear quadratic stochastic differential games under asymmetric value of information,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 8957–8962, 2017.
- [32] S. Aggarwal, T. Başar, and D. Maity, “Linear quadratic zero-sum differential games with intermittent and costly sensing,” IEEE Control Systems Letters, 2024.
- [33] M. Huang, P. E. Caines, and R. P. Malhamé, “Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized -Nash equilibria,” IEEE Transactions on Automatic Control, vol. 52, no. 9, pp. 1560–1571, 2007.
- [34] J.-M. Lasry and P.-L. Lions, “Mean field games,” Japanese Journal of Mathematics, vol. 2, no. 1, pp. 229–260, 2007.
- [35] M. Huang, R. P. Malhamé, P. E. Caines et al., “Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash Certainty Equivalence principle,” Communications in Information & Systems, vol. 6, no. 3, pp. 221–252, 2006.
- [36] J. Moon and T. Başar, “Linear quadratic risk-sensitive and robust mean field games,” IEEE Transactions on Automatic Control, vol. 62, no. 3, pp. 1062–1077, 2016.
- [37] S. Aggarwal, M. A. U. Zaman, M. Bastopcu, and T. Başar, “Weighted age of information based scheduling for large population games on networks,” IEEE Journal on Selected Areas in Information Theory, vol. 4, pp. 682–697, 2023.
- [38] F. Bagagiolo and D. Bauso, “Mean-field games and dynamic demand management in power grids,” Dynamic Games and Applications, vol. 4, pp. 155–176, 2014.
- [39] S. Y. Olmez, S. Aggarwal, J. W. Kim, E. Miehling, T. Başar, M. West, and P. G. Mehta, “Modeling presymptomatic spread in epidemics via mean-field games,” in 2022 American Control Conference (ACC). IEEE, 2022, pp. 3648–3655.
- [40] S. Aggarwal, M. A. uz Zaman, M. Bastopcu, S. Ulukus, and T. Başar, “A mean field game model for timely computation in edge computing systems,” available on arXiv:2404.02898, 2024.
- [41] J. Huang, Z. Qiu, S. Wang, and Z. Wu, “Linear quadratic mean-field game-team analysis: A mixed coalition approach,” Automatica, vol. 159, p. 111358, 2024.
- [42] B. Djehiche, A. Tcheukam, and H. Tembine, “Mean-field-type games in engineering,” available on arXiv:1605.03281, 2016.
- [43] M. A. U. Zaman, A. Koppel, M. Laurière, and T. Başar, “Independent RL for cooperative-competitive agents: A mean-field perspective,” available on arXiv:2403.11345, 2024.
- [44] S. Aggarwal, M. A. uz Zaman, M. Bastopcu, and T. Başar, “Large population games on constrained unreliable networks,” in 2023 62nd IEEE Conference on Decision and Control (CDC), 2023, pp. 3480–3485.
- [45] F. Mason, F. Chiariotti, A. Zanella, and P. Popovski, “Multi-agent reinforcement learning for coordinating communication and control,” IEEE Transactions on Cognitive Communications and Networking, 2024.
- [46] Q. Lan, D. Wen, Z. Zhang, Q. Zeng, X. Chen, P. Popovski, and K. Huang, “What is semantic communication? A view on conveying meaning in the era of machine intelligence,” Journal of Communications and Information Networks, vol. 6, no. 4, pp. 336–371, 2021.
- [47] E. Uysal, O. Kaya, A. Ephremides, J. Gross, M. Codreanu, P. Popovski, M. Assaad, G. Liva, A. Munari, B. Soret et al., “Semantic communications in networked systems: A data significance perspective,” IEEE Network, vol. 36, no. 4, pp. 233–240, 2022.
- [48] H. Du, J. Wang, D. Niyato, J. Kang, Z. Xiong, M. Guizani, and D. I. Kim, “Rethinking wireless communication security in semantic internet of things,” IEEE Wireless Communications, vol. 30, no. 3, pp. 36–43, 2023.
- [49] M. Kountouris and N. Pappas, “Semantics-empowered communication for networked intelligent systems,” IEEE Communications Magazine, vol. 59, no. 6, pp. 96–102, 2021.
- [50] O. Ayan, P. Kutsevol, H. Y. Özkan, and W. Kellerer, “Semantics-and task-oriented scheduling for networked control systems in practice,” IEEE Access, vol. 10, pp. 115 673–115 690, 2022.
- [51] W. Yang, H. Du, Z. Q. Liew, W. Y. B. Lim, Z. Xiong, D. Niyato, X. Chi, X. Shen, and C. Miao, “Semantic communications for future Internet: Fundamentals, applications, and challenges,” IEEE Communications Surveys & Tutorials, vol. 25, no. 1, pp. 213–250, 2022.
- [52] J.-M. Lasry and P.-L. Lions, “Jeux à champ moyen. i–le cas stationnaire,” Comptes Rendus Mathématique, vol. 343, no. 9, pp. 619–625, 2006.
- [53] ——, “Jeux à champ moyen. ii–horizon fini et contrôle optimal,” Comptes Rendus Mathématique, vol. 343, no. 10, pp. 679–684, 2006.
- [54] K. Cui and H. Koeppl, “Approximately solving mean field games via entropy-regularized deep reinforcement learning,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2021, pp. 1909–1917.
- [55] J. Moon and T. Başar, “Discrete-time LQG mean field games with unreliable communication,” in Proceedings of the 53rd IEEE Conference on Decision and Control (CDC), December 2014, pp. 2697–2702.
- [56] D. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming. Athena Scientific, 1996.
Appendix I
Proof of Proposition 3.
We prove the proposition by inductively establishing the continuity of the operator and consequently employing Brouwer’s fixed point theorem.
To this end, let us define the following set
$\mathcal{G} := \big\{ g = (g_t)_{t=0}^{T-1} : g_t \in [0, 1] \;\; \forall t \big\}. \qquad (36)$
Then, we note from (25) that $\mathcal{T}^\beta(g)_t \in [0, 1]$ for every $t$, and hence $\mathcal{T}^\beta(\mathcal{G}) \subseteq \mathcal{G}$. Further, the interval $[0, 1]$ is compact and convex, and hence $\mathcal{G}$ is also compact and convex. Thus, it only remains to prove the continuity of $\mathcal{T}^\beta$, which then, by using Brouwer's fixed point theorem, would prove the statement of the proposition. From now on, we will omit explicit arguments wherever they clutter the notation.
To establish continuity, using (25), it suffices to show the continuity of the Boltzmann policy with respect to the MF trajectory, for which we first show the continuity of the state-action value function, defined as:
(37)
for all $t$. The above denotes the value received by the generic team when action $a$ is chosen at the current instant and the future actions are chosen optimally from the admissible space, for a given MF trajectory $g$. Now, we will use backward induction on $t$ to prove the result. Fix an action $a \in \{0, 1\}$. Consider the base case at the final decision instant. Then, by recalling that the terminal cost is 0, we have that
(38)
Then, we can compute the error covariance dynamics using Proposition 1 as
(39)
Substituting (39) into (38), we arrive at an equation
which is linear (and thus clearly continuous) in the mean-field term.
Next, we take the induction hypothesis to be the assertion that the state-action value function at instant $t+1$ is continuous in the MF trajectory. Then, we prove the assertion for instant $t$. In this regard, consider the following:
From the induction hypothesis, we have that the optimal cost-to-go at instant $t+1$ is continuous in the MF trajectory, since the minimum of finitely many continuous functions is continuous. Further, the running cost at instant $t$ is also continuous in the MF trajectory. The desired continuity of the state-action value function thus follows from the fact that the sum of continuous functions is continuous. Consequently, the result is established for all $t$ and any chosen action by invoking the induction principle. The continuity of $\mathcal{T}^\beta$ then follows by noting that the Boltzmann policy in (24) is a continuous function of the state-action values, whose exponent is the difference of two continuous functions. The proof is thus complete. ∎
Next, to prove Theorem 1, we first establish the following auxiliary lemma.
Lemma 2.
Let $g, g' \in \mathcal{G}$, where $\mathcal{G}$ is defined in (36). Then, the following is true for all $t$:
Proof.
We will use backward induction on $t$ to prove the result. In this regard, first consider the base case at the terminal instant. Then, we have that:
The base case is thus true. Assume next that
(40)
Then, consider the following:
(41)
where the two error trajectories are under the respective noise realizations. Then, we consider the following cases:
We are now ready to prove Theorem 1.
Proof of Theorem 1.
We will use Banach's fixed point theorem to prove the result. In this regard, we start by noting from (25) that $\mathcal{T}^\beta(g)_t \in [0, 1]$, and hence $\mathcal{T}^\beta$ maps the set $\mathcal{G}$ in (36) into itself. Further, the interval $[0, 1]$ is a closed subset of a complete metric space, and hence complete, which further implies that $\mathcal{G}$ is complete under the sup norm. Finally, we prove that $\mathcal{T}^\beta$ is a contraction.
Let $g, g' \in \mathcal{G}$, and further define the value of information terms at instant $t$ corresponding to $g$ and $g'$, respectively. Then, consider the following:
(43)
Next, we prove that the value of information is Lipschitz continuous in the MF trajectory. To this end, we invoke the definition of the value of information to arrive at:
(44)
where the last inequality follows by using Lemma 2. Then, by substituting (44) in (43), we arrive at
A similar inequality can be proven with the roles of $g$ and $g'$ interchanged, which would then imply the above inequality for the absolute difference as well. Thus, the operator $\mathcal{T}^\beta$ is a contraction if the resulting Lipschitz constant is strictly less than one, which is implied by the sufficient condition (26). The proof is then completed by invoking Banach's fixed point theorem to conclude that there exists a unique fixed point $g^\beta$ of the operator $\mathcal{T}^\beta$. ∎
Appendix II
Lemma 3.
It holds that
Proof.
Consider the following:
where the equality follows since we use the approximate MF policy pair $(\mu^\beta, \pi^*)$ on the finite-team system, and the inequality follows as a result of Lemma 1. The proof is thus complete. ∎
Lemma 4.
The following inequality holds:
Proof.
Let us fix a deviating policy for team $i$ while the other teams play using the policy $(\mu^\beta, \pi^*)$. Then, we have that:
which gives
which by taking infimum on both sides implies that
(45)
The proof is thus complete. ∎
Appendix III
Lemma 5.
The following inequality holds:
(46)
where $Q_t$ denotes the state-action value function and is defined recursively as:
(47)
with a terminal value of zero. Further, the coefficients appearing in the bound are time-dependent functions, and the remaining term is a constant.
We note that the same inequality can also be established for the corresponding value function under the deviating policy, using the same proof technique as presented below.
Proof.
We prove the above lemma by induction on $t$. In this regard, consider the base case at the terminal instant. Then, we have
(48)
where we have used the error dynamics in (20) to arrive at the last inequality. The base case is therefore proved. Assume now that the inequality in the statement of the lemma holds for instant $t+1$, i.e.,
(49)
Then, we prove the above for instant $t$. Consider the following:
(50)
The proof now follows by the induction principle, and is thus complete. ∎
Proposition 4.
We have that
(51)
for all $t$, where the constant appearing in the bound is independent of $t$.
Proof.
Let us fix the MF trajectory. We will use induction on $t$ to prove the result.
Let us start with the base case at the terminal instant. Let us define the estimation errors
where the superscripts denote the errors at instant $k$ under the corresponding sensor actions.
We recall that the terminal cost-to-go is zero. Then, we have that
(52)
The base case is thus proved. Assume now that the following is true:
(53)
Then, we prove the inequality in the statement of the proposition for instant $t$. In this regard, let us define the set of optimal actions for a given MF trajectory and information set, and let us denote a suboptimal action accordingly. Then, we have that
(54)
We note that the above is valid when the value of information is nonzero. If it is 0, then both the actions 0 and 1 produce the same cost, and hence there is no suboptimal action. Thus, from (54), we obtain the desired bound.
The proof thus follows by the induction principle for all , and is complete.
∎
Proposition 5.
It holds that
Proof.
Let the information set be given, and let an action be chosen from the Boltzmann policy $\mu^\beta$.
(55)
The proof is thus complete. ∎