LQG Mean Field Games with a Major Agent: Nash Certainty Equivalence versus Probabilistic Approach
Abstract
Mean field game (MFG) systems consisting of a major agent and a large number of minor agents were introduced in (Huang, 2010) in an LQG setup. The Nash certainty equivalence was used to obtain a Markovian closed-loop Nash equilibrium for the limiting system as the number of minor agents tends to infinity. In the past years several approaches to major-minor mean field game problems have been developed, principally (i) the Nash certainty equivalence and analytic approach, (ii) master equations, (iii) asymptotic solvability, and (iv) the probabilistic approach. For the LQG case, the recent work (Huang, 2021) establishes the equivalence of the Markovian closed-loop Nash equilibrium obtained via (i) with those obtained via (ii) and (iii). In this work, we demonstrate that the Markovian closed-loop Nash equilibrium of (i) is equivalent to that of (iv) for the LQG case. Together, these two studies answer the long-standing questions about the consistency of the solutions to major-minor LQG MFG systems derived using different approaches.
keywords:
major-minor LQG mean field games; Nash equilibrium; Nash certainty equivalence; probabilistic approach.
1 Introduction
Mean field game (MFG) systems with major and minor agents were first introduced in [5] in an LQG setting, where a major agent (whose impact does not vanish in the limit of infinite population size) interacts with a large population of minor agents (each of which has an individually asymptotically negligible effect). In this setting the major agent's state appears in both the dynamics and the cost functional of each minor agent. Moreover, each agent interacts with the average state of the minor agents through couplings in its dynamics and cost functional. As a result, the mean field for such systems is a stochastic process progressively measurable with respect to the filtration generated by the major agent's Wiener process. In [5], the author uses the Nash certainty equivalence to establish the existence of Markovian closed-loop ε-Nash equilibria and to derive the individual agents' explicit control laws which together yield an equilibrium. This methodology is extended in [8] to a general nonlinear case where the major agent's state appears in the nonlinear dynamics and cost functional of each minor agent, and all agents are coupled through the empirical distribution of the minor agents' states. The best-response strategy of an agent in the limiting case is formulated as the solution to a set of coupled forward-backward (FB) PDEs, i.e., a Hamilton-Jacobi-Bellman equation and a Fokker-Planck-Kolmogorov equation. Subsequently, it is shown that the set of best-response strategies yields a Markovian closed-loop ε-Nash equilibrium for the system. This methodology, which mainly relies on the dynamic programming principle, is known in part of the literature as the analytic approach.
A probabilistic approach to major-minor (MM) MFG systems based on the stochastic maximum principle is developed in [3], where the authors establish the existence of open-loop ε-Nash equilibria for a general case as the solutions to a set of FBSDEs, and provide explicit solutions for an LQG case. In [3, Section 6], it is discussed in detail that the obtained open-loop equilibrium differs from the Markovian closed-loop equilibrium derived in [5] for the LQG case. Following this work, an alternative probabilistic formulation is proposed in [2], where the stochastic maximum principle is used and the search for Nash equilibria in the infinite-population limit is formulated as the search for fixed points in the space of best-response control maps for the major and minor agents. Using this method, the authors retrieve the same set of FBSDEs as in [3] characterizing the open-loop equilibrium without explicitly solving it, and also derive a Markovian closed-loop equilibrium. However, the paper does not present any comparison between the obtained Markovian closed-loop Nash equilibrium and that of the existing work [5], and is therefore inconclusive on this important point.
The MFG master equation methodology, which encapsulates the MFG system in an infinite-dimensional backward nonlinear PDE, is used in [7, 1] to characterize Nash equilibria for a general MM MFG system. Moreover, [1] shows that the solution of the finite-population MM MFG Nash system converges to the solution of the system of master equations as the number of minor agents tends to infinity.
The solutions to MM LQG MFG systems obtained using the methods discussed above are seemingly different. Consequently, in order to address the questions raised in the MFG community about the consistency of the Nash certainty equivalence solutions [5] with the ones obtained via other methods in the literature, the following works have emerged. [4] uses a variational analysis to retrieve the Markovian solutions of [5], where no assumption is imposed a priori on the mean field evolution. Moreover, [6] establishes that the Nash certainty equivalence solutions [5] are equivalent to the Markovian closed-loop solutions obtained via the master equations [7] and via asymptotic solvability. The current work serves as the last piece of the long-standing puzzle about MM LQG MFG systems. We demonstrate that the Markovian closed-loop Nash equilibria obtained through the Nash certainty equivalence [5] and through the probabilistic approach [2] for the limiting MM LQG MFG systems are equivalent. (For the detailed analysis related to the derivation of the consistency equations, the best-response strategies, and the ε-Nash property, we refer the reader to [5], [4], and [2].)
In this paper we first introduce finite-population MM LQG MFGs in Section 2 and their infinite-population counterparts in Section 3. Next, we present the Nash certainty equivalence solutions ([4], [5]) in Section 4. We then present the Markovian closed-loop solutions obtained via the probabilistic approach ([2]) in Section 5. Finally, we show that the two solutions are equivalent in Section 6.
2 Finite-Population MM LQG MFG Systems
We consider a large population of $N$ minor agents, where agent $i$, $i \in \{1,\dots,N\}$, is denoted by $\mathcal{A}_i$, together with a major agent denoted by $\mathcal{A}_0$. To capture the essence of the two approaches we consider a simple LQG case (for a general case with heterogeneous minor agents see [4, 6, 5]), where the major and minor agents' states, respectively, satisfy
$$dx_t^0 = \big(A_0\, x_t^0 + F_0\, x_t^{(N)} + B_0\, u_t^0\big)\,dt + D_0\, dw_t^0, \tag{1}$$
$$dx_t^i = \big(A\, x_t^i + F\, x_t^{(N)} + G\, x_t^0 + B\, u_t^i\big)\,dt + D\, dw_t^i, \tag{2}$$
for $t \in [0,T]$ and $i \in \{1,\dots,N\}$. Here $x_t^0 \in \mathbb{R}^n$ and $x_t^i \in \mathbb{R}^n$ are the states, $u_t^0 \in \mathbb{R}^m$ and $u_t^i \in \mathbb{R}^m$ are the control inputs, and $\{w_t^j,\ 0 \le j \le N\}$ denotes $(N{+}1)$ independent standard Wiener processes. Moreover, $x_t^{(N)} = \tfrac{1}{N}\sum_{i=1}^{N} x_t^i$ denotes the average state of the minor agents. All matrices in (1) and (2) are constant and of appropriate dimension.
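For concreteness, the following minimal simulation sketch illustrates the finite-population dynamics (1)-(2); the scalar states, the zero controls, and all parameter values are our own illustrative assumptions, not part of the model.

```python
import numpy as np

# Euler-Maruyama sketch of the finite-population dynamics (1)-(2).
# Scalar states and all parameter values are illustrative assumptions;
# controls are set to zero here, whereas in the game they are strategies.
rng = np.random.default_rng(0)
N, T, dt = 500, 1.0, 1e-3
A0, F0, B0, D0 = -0.5, 0.2, 1.0, 0.3        # major agent parameters (assumed)
A, F, G, B, D = -1.0, 0.1, 0.4, 1.0, 0.2    # minor agent parameters (assumed)

x0 = rng.normal()                  # major agent's initial state
x = rng.normal(size=N)             # minor agents' initial states
for _ in range(int(T / dt)):
    xbar = x.mean()                # average state x_t^(N) of the minor agents
    u0, u = 0.0, np.zeros(N)       # placeholder (zero) control inputs
    dx0 = (A0 * x0 + F0 * xbar + B0 * u0) * dt + D0 * np.sqrt(dt) * rng.normal()
    dx = (A * x + F * xbar + G * x0 + B * u) * dt + D * np.sqrt(dt) * rng.normal(size=N)
    x0, x = x0 + dx0, x + dx

print(f"terminal major state: {x0:.3f}, terminal average minor state: {x.mean():.3f}")
```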
Assumption 1.
The initial states $\{x_0^j,\ 0 \le j \le N\}$ defined on $(\Omega, \mathcal{F}, \mathbb{P})$ are identically distributed, mutually independent, and also independent of the Wiener processes $\{w^j,\ 0 \le j \le N\}$, with $\mathbb{E}[x_0^i] = \mu_0$ for $1 \le i \le N$, and $\mathbb{E}\|x_0^j\|^2 \le C < \infty$, $0 \le j \le N$, where $C$ is independent of $N$. Moreover, each agent $\mathcal{A}_i$, $1 \le i \le N$, observes the major agent's state in addition to its own state.
We denote $\Phi(\cdot) := H_0\, x_t^{(N)} + \eta_0$, where $H_0$ and $\eta_0$ are a matrix and a vector of appropriate dimension. We also denote $\Psi(\cdot) := H_1\, x_t^0 + H_2\, x_t^{(N)} + \eta$. Then the cost functionals for the major agent and a minor agent $\mathcal{A}_i$, $1 \le i \le N$, are given by
$$J_0^N(u^0, u^{-0}) = \mathbb{E}\int_0^T \Big( \big\|x_t^0 - \Phi(\cdot)\big\|_{Q_0}^2 + \big\|u_t^0\big\|_{R_0}^2 \Big)\,dt, \tag{3}$$
$$J_i^N(u^i, u^{-i}) = \mathbb{E}\int_0^T \Big( \big\|x_t^i - \Psi(\cdot)\big\|_{Q}^2 + \big\|u_t^i\big\|_{R}^2 \Big)\,dt, \tag{4}$$
$$\|z\|_S^2 := z^\top S\, z, \tag{5}$$
where (5) defines the weighted (semi)norm for a vector $z$ and a matrix $S \ge 0$, and $u^{-j}$ denotes the set of control inputs of all agents except agent $\mathcal{A}_j$.
Assumption 2.
(Convexity) $Q_0 \ge 0$, $Q \ge 0$, $R_0 > 0$, $R > 0$.
3 Infinite-Population MM LQG MFG Systems
The dynamics and cost functionals of the major agent and a generic minor agent $\mathcal{A}_i$ in the infinite-population case are given by
$$dx_t^0 = \big(A_0\, x_t^0 + F_0\, \bar{x}_t + B_0\, u_t^0\big)\,dt + D_0\, dw_t^0, \tag{6}$$
$$dx_t^i = \big(A\, x_t^i + F\, \bar{x}_t + G\, x_t^0 + B\, u_t^i\big)\,dt + D\, dw_t^i, \tag{7}$$
$$J_0^\infty(u^0) = \mathbb{E}\int_0^T \Big( \big\|x_t^0 - (H_0\, \bar{x}_t + \eta_0)\big\|_{Q_0}^2 + \big\|u_t^0\big\|_{R_0}^2 \Big)\,dt, \tag{8}$$
$$J_i^\infty(u^i) = \mathbb{E}\int_0^T \Big( \big\|x_t^i - (H_1\, x_t^0 + H_2\, \bar{x}_t + \eta)\big\|_{Q}^2 + \big\|u_t^i\big\|_{R}^2 \Big)\,dt, \tag{9}$$
where the mean field is defined as $\bar{x}_t := \lim_{N \to \infty} \frac{1}{N}\sum_{i=1}^{N} x_t^i$, if the limit exists. It is equivalently defined as the expected value of the state of a generic minor agent given the major agent's information set defined below, i.e. $\bar{x}_t = \mathbb{E}[x_t^i \,|\, \mathcal{F}_t^0]$.
Information Sets. We define (i) the major agent's information set $\mathcal{F}_t^0$ as the filtration generated by $(w_s^0,\ s \le t)$, and (ii) a generic minor agent $\mathcal{A}_i$'s information set $\mathcal{F}_t^i$ as the filtration generated by $(w_s^0, w_s^i,\ s \le t)$.
Assumption 3.
(Admissible Controls) (i) For the major agent $\mathcal{A}_0$, the set of admissible control inputs $\mathcal{U}_0$ is defined to be the collection of Markovian linear closed-loop control laws $u^0$ adapted to $\mathcal{F}_t^0$ such that $\mathbb{E}\int_0^T \|u_t^0\|^2\,dt < \infty$. More specifically, $u_t^0 = L_1(t)\, x_t^0 + L_2(t)\, \bar{x}_t + l(t)$ for some deterministic functions $L_1$, $L_2$, and $l$. (ii) For each minor agent $\mathcal{A}_i$, $1 \le i \le N$, the set of admissible control inputs $\mathcal{U}_i$ is defined to be the collection of Markovian linear closed-loop control laws $u^i$ adapted to $\mathcal{F}_t^i$ such that $\mathbb{E}\int_0^T \|u_t^i\|^2\,dt < \infty$. More specifically, $u_t^i = \Theta_1(t)\, x_t^i + \Theta_2(t)\, x_t^0 + \Theta_3(t)\, \bar{x}_t + \theta(t)$ for some deterministic functions $\Theta_1$, $\Theta_2$, $\Theta_3$, and $\theta$.
4 Nash Certainty Equivalence Approach
In the Nash certainty equivalence approach, first an a priori dynamics for the mean field is derived. Then the idea is to Markovianize (i) the major agent’s limiting system by extending its state with the mean field, and (ii) a generic minor agent’s limiting system by extending its state with the major agent’s state and the mean field. This state extension leads to a set of decoupled classical optimal control problems for individual agents, which are linked with each other through the major agent’s state and the mean field. Given the individual information sets, each agent can solve its own stochastic optimal control problem to obtain a best-response strategy. Subsequently, a Nash equilibrium is defined as the set of the best-response Markovian closed-loop strategies of individual agents such that they collectively generate the same mean field that was used in the first step to obtain the best response strategies. This yields a set of consistency equations, the fixed-point solution of which characterizes the Nash equilibrium. ([4] uses a variational analysis and obtains the same Nash equilibrium without assuming an a priori mean field evolution.)
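To make the fixed-point structure concrete, the following toy sketch runs a Nash-certainty-equivalence-style loop for a scalar LQ tracking problem without a major agent; this simplification, the parameter values, and all derived coefficients are our own illustrative assumptions rather than the paper's construction.

```python
import numpy as np

# Toy Nash-certainty-equivalence loop: a scalar, minor-agents-only LQ problem
# (no major agent), used ONLY to illustrate the fixed-point structure.
# A generic agent with dx = (a x + b u) dt + sigma dw minimizes
# E int_0^T [ q (x - h*xbar)^2 + r u^2 ] dt against an assumed mean field
# xbar(.), and consistency requires the closed-loop mean to regenerate xbar.
a, b, q, r, h, xi = -0.3, 1.0, 1.0, 1.0, 0.6, 1.0   # illustrative values
T, M = 1.0, 2000
dt = T / M

xbar = np.full(M + 1, xi)                  # initial guess for the mean field
for it in range(200):
    # Backward sweep: Riccati equation for pi and tracking offset s (Euler),
    # with value function V(t,x) = pi x^2 + s x + c, so u* = -(b/r)(pi x + s/2).
    pi, s = np.zeros(M + 1), np.zeros(M + 1)
    for k in range(M, 0, -1):
        pi[k - 1] = pi[k] + dt * (2 * a * pi[k] - (b**2 / r) * pi[k] ** 2 + q)
        s[k - 1] = s[k] + dt * ((a - (b**2 / r) * pi[k]) * s[k] - 2 * q * h * xbar[k])
    # Forward sweep: the mean of the closed-loop dynamics regenerates xbar.
    new = np.empty(M + 1)
    new[0] = xi
    for k in range(M):
        new[k + 1] = new[k] + dt * ((a - (b**2 / r) * pi[k]) * new[k]
                                    - (b**2 / (2 * r)) * s[k])
    if np.max(np.abs(new - xbar)) < 1e-12:  # consistency condition met
        break
    xbar = new

print(f"fixed point reached after {it + 1} iterations; xbar(T) = {xbar[-1]:.4f}")
```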
Mean Field Evolution. According to [5, 6], if a generic minor agent adopts a Markovian linear closed-loop strategy as in Assumption 3, the mean field $\bar{x}_t$ satisfies
$$d\bar{x}_t = \big(\bar{A}(t)\, \bar{x}_t + \bar{G}(t)\, x_t^0 + \bar{m}(t)\big)\,dt, \tag{10}$$
where $\bar{A}$, $\bar{G}$, and $\bar{m}$ are functions of the fixed-point solutions to the consistency equations (18)-(26). Now we present the agents' Markovianized systems.
Major Agent. From (6) and (10) the major agent's extended state $X_t^0 := [x_t^{0\top},\ \bar{x}_t^\top]^\top$ satisfies
$$dX_t^0 = \big(\mathbb{A}_0(t)\, X_t^0 + \mathbb{B}_0\, u_t^0 + \mathbb{M}_0(t)\big)\,dt + \mathbb{D}_0\, dw_t^0, \tag{11a}$$
$$\mathbb{A}_0 = \begin{bmatrix} A_0 & F_0 \\ \bar{G} & \bar{A} \end{bmatrix}, \quad \mathbb{B}_0 = \begin{bmatrix} B_0 \\ 0 \end{bmatrix}, \quad \mathbb{M}_0 = \begin{bmatrix} 0 \\ \bar{m} \end{bmatrix}, \quad \mathbb{D}_0 = \begin{bmatrix} D_0 \\ 0 \end{bmatrix}. \tag{11b}$$
The major agent's cost functional in terms of $X_t^0$ is given by
$$J_0^\infty(u^0) = \mathbb{E}\int_0^T \Big( \big\|\mathbb{H}_0\, X_t^0 - \eta_0\big\|_{Q_0}^2 + \big\|u_t^0\big\|_{R_0}^2 \Big)\,dt, \tag{12a}$$
$$\mathbb{H}_0 := \begin{bmatrix} I & -H_0 \end{bmatrix}. \tag{12b}$$
According to [4, Thm 5] (the finite-horizon version of [5, Thm 10] for general MM LQG MFG systems), the best-response strategy for the major agent is given by
$$u_t^{0,*} = -R_0^{-1}\, \mathbb{B}_0^\top \big(\Pi_0(t)\, X_t^0 + s_0(t)\big), \tag{13a}$$
$$-\dot{\Pi}_0 = \Pi_0\, \mathbb{A}_0 + \mathbb{A}_0^\top\, \Pi_0 - \Pi_0\, \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top\, \Pi_0 + \mathbb{H}_0^\top Q_0\, \mathbb{H}_0, \tag{13b}$$
$$-\dot{s}_0 = \big(\mathbb{A}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top \Pi_0\big)^\top s_0 + \Pi_0\, \mathbb{M}_0 - \mathbb{H}_0^\top Q_0\, \eta_0, \tag{13c}$$
$$\Pi_0(T) = 0, \qquad s_0(T) = 0. \tag{13d}$$
Minor Agent. From (6)-(7) and (10), for a generic minor agent $\mathcal{A}_i$, the extended state $X_t^i := [x_t^{i\top},\ x_t^{0\top},\ \bar{x}_t^\top]^\top$ is governed by the dynamics
$$dX_t^i = \big(\mathbb{A}(t)\, X_t^i + \mathbb{B}\, u_t^i + \mathbb{M}(t)\big)\,dt + \mathbb{D}\, dW_t^i, \tag{14a}$$
$$\mathbb{A} = \begin{bmatrix} A & \begin{bmatrix} G & F \end{bmatrix} \\ 0 & \mathbb{A}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top \Pi_0 \end{bmatrix}, \qquad \mathbb{B} = \begin{bmatrix} B \\ 0 \end{bmatrix}, \tag{14b}$$
$$\mathbb{M} = \begin{bmatrix} 0 \\ \mathbb{M}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top s_0 \end{bmatrix}, \qquad \mathbb{D} = \begin{bmatrix} D & 0 \\ 0 & \mathbb{D}_0 \end{bmatrix}, \qquad W_t^i := \begin{bmatrix} w_t^i \\ w_t^0 \end{bmatrix}. \tag{14c}$$
The cost functional for $\mathcal{A}_i$, in terms of $X_t^i$, is given by
$$J_i^\infty(u^i) = \mathbb{E}\int_0^T \Big( \big\|\mathbb{H}\, X_t^i - \eta\big\|_{Q}^2 + \big\|u_t^i\big\|_{R}^2 \Big)\,dt, \tag{15a}$$
$$\mathbb{H} := \begin{bmatrix} I & -H_1 & -H_2 \end{bmatrix}. \tag{15b}$$
According to [4, Thm 5] and [5, Thm 10], the best-response strategy for a generic minor agent is given by
$$u_t^{i,*} = -R^{-1}\, \mathbb{B}^\top \big(\Pi(t)\, X_t^i + s(t)\big), \tag{16a}$$
$$-\dot{\Pi} = \Pi\, \mathbb{A} + \mathbb{A}^\top\, \Pi - \Pi\, \mathbb{B} R^{-1} \mathbb{B}^\top\, \Pi + \mathbb{H}^\top Q\, \mathbb{H}, \qquad \Pi(T) = 0, \tag{16b}$$
$$-\dot{s} = \big(\mathbb{A} - \mathbb{B} R^{-1} \mathbb{B}^\top \Pi\big)^\top s + \Pi\, \mathbb{M} - \mathbb{H}^\top Q\, \eta, \qquad s(T) = 0. \tag{16c}$$
Mean Field Consistency Equations. We first define the block partition
$$\Pi = \begin{bmatrix} \Pi_{11} & \Pi_{12} & \Pi_{13} \\ \Pi_{21} & \Pi_{22} & \Pi_{23} \\ \Pi_{31} & \Pi_{32} & \Pi_{33} \end{bmatrix}, \qquad s = \begin{bmatrix} s_1 \\ s_2 \\ s_3 \end{bmatrix}, \tag{17}$$
where the blocks $\Pi_{kl}$, $1 \le k, l \le 3$, and $s_k$, $1 \le k \le 3$, are conformable with $X_t^i = [x_t^{i\top},\ x_t^{0\top},\ \bar{x}_t^\top]^\top$. Then according to the Nash certainty equivalence approach [5], the consistency equations are obtained by effectively equating (10) with the mean field equation resulting from the collective action of the mass of minor agents. Subsequently the consistency equations determining $\Pi_0$, $s_0$, $\Pi$, $s$, $\bar{A}$, $\bar{G}$, and $\bar{m}$ are given by
$$-\dot{\Pi}_0 = \Pi_0\, \mathbb{A}_0 + \mathbb{A}_0^\top\, \Pi_0 - \Pi_0\, \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top\, \Pi_0 + \mathbb{H}_0^\top Q_0\, \mathbb{H}_0, \qquad \Pi_0(T) = 0, \tag{18}$$
$$-\dot{\Pi} = \Pi\, \mathbb{A} + \mathbb{A}^\top\, \Pi - \Pi\, \mathbb{B} R^{-1} \mathbb{B}^\top\, \Pi + \mathbb{H}^\top Q\, \mathbb{H}, \qquad \Pi(T) = 0, \tag{19}$$
$$-\dot{s}_0 = \big(\mathbb{A}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top \Pi_0\big)^\top s_0 + \Pi_0\, \mathbb{M}_0 - \mathbb{H}_0^\top Q_0\, \eta_0, \qquad s_0(T) = 0, \tag{20}$$
$$\mathbb{A}_0 = \begin{bmatrix} A_0 & F_0 \\ \bar{G} & \bar{A} \end{bmatrix}, \qquad \mathbb{M}_0 = \begin{bmatrix} 0 \\ \bar{m} \end{bmatrix}, \tag{21}$$
$$\mathbb{A} = \begin{bmatrix} A & \begin{bmatrix} G & F \end{bmatrix} \\ 0 & \mathbb{A}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top \Pi_0 \end{bmatrix}, \qquad \mathbb{M} = \begin{bmatrix} 0 \\ \mathbb{M}_0 - \mathbb{B}_0 R_0^{-1} \mathbb{B}_0^\top s_0 \end{bmatrix}, \tag{22}$$
$$\bar{A} = A + F - B R^{-1} B^\top \big(\Pi_{11} + \Pi_{13}\big), \tag{23}$$
$$-\dot{s} = \big(\mathbb{A} - \mathbb{B} R^{-1} \mathbb{B}^\top \Pi\big)^\top s + \Pi\, \mathbb{M} - \mathbb{H}^\top Q\, \eta, \qquad s(T) = 0, \tag{24}$$
$$\bar{G} = G - B R^{-1} B^\top \Pi_{12}, \tag{25}$$
$$\bar{m} = -B R^{-1} B^\top s_1. \tag{26}$$
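Numerically, the Riccati-type equations above are integrated backward from their terminal conditions. The following sketch does this for a generic equation of the form (19); the coefficient matrices are illustrative stand-ins and are held constant for brevity, whereas in the actual fixed-point loop they are time-varying.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Backward integration of a matrix Riccati ODE of the form appearing in the
# consistency equations: -dPi/dt = Pi A + A' Pi - Pi B R^{-1} B' Pi + Q,
# with Pi(T) = 0.  The 2x2 coefficient matrices below are assumed values.
n, T = 2, 1.0
A = np.array([[-0.5, 0.2],
              [0.4, -1.0]])
B = np.array([[1.0],
              [0.0]])
Q = np.eye(n)
R_inv = np.array([[1.0]])

def rhs(t, p):
    Pi = p.reshape(n, n)
    dPi = -(Pi @ A + A.T @ Pi - Pi @ B @ R_inv @ B.T @ Pi + Q)
    return dPi.ravel()

# solve_ivp accepts a decreasing time span, so we integrate from T down to 0.
sol = solve_ivp(rhs, [T, 0.0], np.zeros(n * n), rtol=1e-8, atol=1e-10)
print("Pi(0) =\n", sol.y[:, -1].reshape(n, n))
```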
Theorem 4.
Suppose Assumptions 1-3 hold. Then the consistency equations (18)-(26) reduce to a set of equations, (27)-(36), determining $\Pi_0$, $s_0$, the first block row $[\Pi_{11}\ \ \Pi_{12}\ \ \Pi_{13}]$ of $\Pi$, the first block $s_1$ of $s$, and $\bar{A}$, $\bar{G}$, $\bar{m}$.
[Equations (27)-(36): the reduced consistency equations for $\Pi_0$, $s_0$, $\Pi_{11}$, $\Pi_{12}$, $\Pi_{13}$, $s_1$, $\bar{A}$, $\bar{G}$, and $\bar{m}$.]
Proof. Given the block partition (17) and $\mathbb{B} = [B^\top\ 0\ 0]^\top$, the optimal control (16) of the minor agent is given by
$$u_t^{i,*} = -R^{-1} B^\top \big(\Pi_{11}\, x_t^i + \Pi_{12}\, x_t^0 + \Pi_{13}\, \bar{x}_t + s_1\big). \tag{37}$$
Hence only the first block row of $\Pi$ and the first block of $s$ appear in a generic minor agent's optimal control, and the other blocks are irrelevant. Therefore we use (19) and (24) to derive the equations that $[\Pi_{11}\ \ \Pi_{12}\ \ \Pi_{13}]$ and $s_1$ satisfy. To this end, we first treat the terms in (19) one by one. Block multiplications for the first and the second terms on the right-hand side of (19) yield
$$\big[\Pi\,\mathbb{A}\big]_{1\bullet} = \Big[\ \Pi_{11} A \quad\ \Pi_{11}\begin{bmatrix} G & F \end{bmatrix} + \begin{bmatrix} \Pi_{12} & \Pi_{13} \end{bmatrix}\big(\mathbb{A}_0 - \mathbb{B}_0 R_0^{-1}\mathbb{B}_0^\top \Pi_0\big)\ \Big], \tag{38}$$
$$\big[\mathbb{A}^\top\Pi\big]_{1\bullet} = A^\top \begin{bmatrix} \Pi_{11} & \Pi_{12} & \Pi_{13} \end{bmatrix}, \tag{39}$$
$$\mathbb{A}_0 - \mathbb{B}_0 R_0^{-1}\mathbb{B}_0^\top \Pi_0 = \begin{bmatrix} A_0 - B_0 R_0^{-1} B_0^\top \Pi_0^{11} & F_0 - B_0 R_0^{-1} B_0^\top \Pi_0^{12} \\ \bar{G} & \bar{A} \end{bmatrix}, \tag{40}$$
where $\Pi_0^{11}$ and $\Pi_0^{12}$ denote the blocks of the first block row of $\Pi_0$. For the third and fourth terms in (19) we have
$$\big[-\Pi\,\mathbb{B} R^{-1} \mathbb{B}^\top \Pi + \mathbb{H}^\top Q\,\mathbb{H}\big]_{1\bullet} = -\Pi_{11} B R^{-1} B^\top \begin{bmatrix} \Pi_{11} & \Pi_{12} & \Pi_{13} \end{bmatrix} + Q \begin{bmatrix} I & -H_1 & -H_2 \end{bmatrix}. \tag{41}$$
From (19) and (38)-(41), and through block-by-block correspondence, we obtain the ODEs that $[\Pi_{11}\ \ \Pi_{12}\ \ \Pi_{13}]$ satisfies, and, repeating the same steps for (24), the ODE that $s_1$ satisfies, as in (36) and (28), respectively.
5 Probabilistic Approach
In [2], the search for Nash equilibria for MM MFGs is formulated as the search for fixed points in the space of best response control maps for the major and minor agents in the infinite-population limit. In this section, we present the approach of [2] for obtaining a Markovian closed-loop equilibrium for MM LQG MFG systems.
To keep the presentation self-contained, Table 1 matches the notation used in the current work with that of [2] for the model parameters and the processes; all other notation is common to the two works.
In [2], Markovian linear closed-loop control actions as in Assumption 3 are considered for the major agent and a representative minor agent, denoted, respectively, by $u^0$ and $u^i$. The mean field dynamics are then obtained by forming the closed-loop system for the representative agent using $u^i$ and taking the conditional expectation of its state given the major agent's information set $\mathcal{F}_t^0$. Subsequently, to obtain the solutions to the major agent's problem, its state is extended with the mean field in the same manner as in [5]. Then, using the stochastic maximum principle for the extended system, the major agent's optimal control action is obtained as
$$u_t^{0,*} = -R_0^{-1}\, \mathbb{B}_0^\top\, Y_t^0, \tag{43}$$
and a set of McKean-Vlasov FBSDEs is derived which solves for the major agent's extended state $X_t^0$ and the decoupling field (adjoint process) $Y_t^0$. To solve the FBSDEs, an ansatz is adopted for $Y_t^0$ as in
$$Y_t^0 = P_0(t)\, X_t^0 + p_0(t), \tag{44}$$
where $P_0$ and $p_0$ are a deterministic matrix and vector of appropriate dimension. Then a set of ODEs that $P_0$ and $p_0$ satisfy is derived. Subsequently the notion of a deviating minor agent is introduced as an extra virtual minor agent who deviates from the strategy of its peers and aims to optimize in response to the major agent and the rest of the minor agents. However, when dealing with the optimal control problem for the deviating minor agent, the major agent's state and the mean field are considered as exogenous stochastic coefficients, which are determined offline by solving a set of SDEs, i.e. the major agent's closed-loop extended system. Then, using the stochastic maximum principle, the deviating minor agent's optimal control is obtained as
$$u_t^{i,*} = -R^{-1}\, B^\top\, Y_t^i, \tag{45}$$
and a set of FBSDEs with random coefficients (not of McKean-Vlasov type), which the minor agent's state $x_t^i$ and decoupling field (adjoint process) $Y_t^i$ satisfy, is derived. To solve the FBSDEs, an ansatz is considered for $Y_t^i$ as in
$$Y_t^i = P_1(t)\, x_t^i + P_2(t)\, x_t^0 + P_3(t)\, \bar{x}_t + p(t), \tag{46}$$
where $P_1$, $P_2$, $P_3$ are matrices and $p$ is a vector of appropriate dimension.
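For intuition, the following sketch, written in our notation and under the simplifying assumption that the adjoint equation has the standard linear-quadratic form $dY_t^i = -\big(A^\top Y_t^i + Q\,(x_t^i - H_1 x_t^0 - H_2 \bar{x}_t - \eta)\big)\,dt + Z_t\, dW_t$, indicates how the ODEs for $P_1$, $P_2$, $P_3$, and $p$ arise from the ansatz (46): one computes $dY_t^i$ from (46) by the product rule, substitutes the closed-loop dynamics of $x_t^i$, $x_t^0$, and $\bar{x}_t$, and matches coefficients,
$$\begin{aligned} dY_t^i &= \big(\dot{P}_1\, x_t^i + \dot{P}_2\, x_t^0 + \dot{P}_3\, \bar{x}_t + \dot{p}\big)\,dt + P_1\, dx_t^i + P_2\, dx_t^0 + P_3\, d\bar{x}_t \\ &\overset{!}{=} -\big(A^\top Y_t^i + Q\,(x_t^i - H_1 x_t^0 - H_2 \bar{x}_t - \eta)\big)\,dt + Z_t\, dW_t. \end{aligned}$$
Equating the drift coefficients of $x_t^i$, $x_t^0$, $\bar{x}_t$, and the constant term on both sides produces four ODEs; for instance, the coefficient of $x_t^i$ gives the Riccati-type equation $\dot{P}_1 + P_1 A + A^\top P_1 - P_1 B R^{-1} B^\top P_1 + Q = 0$, $P_1(T) = 0$.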
where , , and are matrices of appropriate dimension. The mean field equation resulting from a minor agent using must match the one obtained using to solve for the exogenous stochastic coefficients in the minor agent’s optimal control problem. Subsequently the consistency equations whose fixed-point solutions determine and , are given by ([2, eq. (31)-(32)])
[Equations (47)-(52): the Riccati-type ODE system of [2, eqs. (31)-(32)] satisfied by $P_0$, $p_0$, $P_1$, $P_2$, $P_3$, and $p$, with terminal conditions at time $T$,]
where
[equations (53)-(54) define the closed-loop coefficient matrices appearing in (47)-(52)].
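For orientation, the first-order condition behind controls of the form (43) and (45) can be sketched as follows; we write it for the minor agent in our notation, with the conventional factor $\tfrac{1}{2}$ in the running cost, as an illustration rather than a reproduction of the computation in [2]:
$$H(t, x, u, y) = y^\top\big(A\, x + B\, u + F\, \bar{x}_t + G\, x_t^0\big) + \tfrac{1}{2}\big\|x - \Psi(\cdot)\big\|_Q^2 + \tfrac{1}{2}\|u\|_R^2,$$
$$\partial_u H = R\, u + B^\top y = 0 \quad \Longrightarrow \quad u^* = -R^{-1} B^\top y,$$
which is (45) evaluated along the adjoint process $Y_t^i$; the corresponding computation with the extended matrices $(\mathbb{A}_0, \mathbb{B}_0)$ gives (43).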
6 Comparison of the Two Approaches
We start with the following theorem.
Theorem 5.
The Markovian closed-loop Nash equilibrium obtained via the Nash certainty equivalence approach [5] coincides with the one obtained via the probabilistic approach [2] for MM LQG MFG systems.
Proof. By inspection, the Markovian linear optimal control laws obtained through the Nash certainty equivalence (see (13a), (37)) have the same structure as the ones obtained through the probabilistic approach (see (43)-(46)). It remains to show the equivalence of the sets of consistency equations, the fixed-point solutions of which yield the coefficients in the above control laws. More specifically, we show that the reduced consistency equations (27)-(36) obtained via the Nash certainty equivalence are the same as the consistency equations (47)-(53) obtained via the probabilistic approach. To this end, we first define a block elementary operator $E$, which operates on a matrix to produce the desired interchange of block rows, as in
$$E = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}, \tag{55}$$
where the identity matrices are of appropriate dimension. Then we correspond the processes in (27)-(36) with those in (47)-(53) as shown in Table 2.
| Current Work | $\Pi_0$ | $s_0$ | $\Pi_{11}$ | $\Pi_{12}$ | $\Pi_{13}$ | $s_1$ |
|---|---|---|---|---|---|---|
| Reference [2] | $P_0$ | $p_0$ | $P_1$ | $P_2$ | $P_3$ | $p$ |
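As a quick sanity check on the role of $E$, the following toy snippet, with $1 \times 1$ blocks standing in for the identity blocks of (55) (which is itself a reconstruction of the operator's block structure), verifies that left and right multiplication by $E$ interchange block rows and block columns, respectively:

```python
import numpy as np

# Toy check of the block elementary operator E in (55), using 1x1 blocks.
# E is its own inverse, so conjugation M -> E M E reorders both the block
# rows and the block columns of M consistently with a state reordering.
I = np.eye(1)
Z = np.zeros((1, 1))
E = np.block([[Z, I],
              [I, Z]])

M = np.arange(4.0).reshape(2, 2)
print(E @ M)      # block rows of M interchanged
print(M @ E)      # block columns of M interchanged
print(E @ M @ E)  # full conjugation: rows and columns reordered
```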
Using Tables 1-2, we can retrieve (53) from (36) or vice versa. Now we correspond the terms in (27)-(28) to those in (47)-(48). First we use Table 1 to replace the system parameters in (27) with those considered in [2]. Then, as per Table 2, to obtain the equation for $P_0$ we multiply both sides of (27) from the right by $E$ and from the left by $E^\top$, which by inspection gives the same equation as (47); in particular,
$$P_0 = E^\top\, \Pi_0\, E. \tag{56}$$
Next we use Tables 1-2 to match (28) and (48). We first replace the system parameters in (28) with their counterparts in [2], and then right-multiply both sides by $E$, which gives equation (48). Now we show that (50) can be retrieved from (32). From Tables 1-2, we first replace the parameters and processes in (32) with their counterparts in [2], and then left-multiply both sides by $E$, which yields
[equation (57): the $E$-transformed version of (32), coinciding with (50)].
Finally, we replace the remaining parameters and processes in (33) with their counterparts from Tables 1-2, which yields (51). This matches (33) and (51), and completes the proof.
Discussions. To obtain Markovian closed-loop Nash equilibria for MM LQG MFG systems, both the Nash certainty equivalence and the probabilistic approach assume that a generic minor agent adopts a Markovian linear closed-loop strategy. With this assumption the mean field equation is derived using an ansatz for the minor agent's control action. Then, to solve the major agent's limiting optimal control problem, its state is extended with the mean field in both approaches. In [5], the optimal linear state-feedback control for the major agent's extended system is obtained using the known results for single-agent LQG systems. By contrast, in [2] the stochastic maximum principle is used, with a linear ansatz for the adjoint process in terms of the major agent's state and the mean field. The two methods are equivalent and result in the same optimal control for the major agent. For a generic minor agent's optimal control problem, [5] Markovianizes the minor agent's system by extending its state with the major agent's state and the mean field. Then, again using the known results for single-agent LQG systems, the minor agent's optimal control is obtained, which is a linear function of its own state, the major agent's state, and the mean field. By contrast, in [2] the major agent's state and the mean field are considered as exogenous stochastic coefficients in the minor agent's system. Then, using the stochastic maximum principle, an optimal control is obtained for the minor agent by adopting an ansatz for the adjoint process which is a linear function of the minor agent's state, the major agent's state, and the mean field. Although the optimal control actions for a generic minor agent derived in [5] and [2] do not look the same at first glance, Theorem 5 establishes that they are indeed equivalent. Hence both approaches yield the same Markovian closed-loop Nash equilibrium for MM LQG MFGs.
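In our (partly reconstructed) notation, the juxtaposition reads as follows: the Nash certainty equivalence control (37) and the probabilistic-approach control (45)-(46) for a generic minor agent are
$$u_t^{i,*} = -R^{-1} B^\top\big(\Pi_{11}\, x_t^i + \Pi_{12}\, x_t^0 + \Pi_{13}\, \bar{x}_t + s_1\big), \qquad u_t^{i,*} = -R^{-1} B^\top\big(P_1\, x_t^i + P_2\, x_t^0 + P_3\, \bar{x}_t + p\big),$$
so their equivalence amounts precisely to the block identifications $(\Pi_{11}, \Pi_{12}, \Pi_{13}, s_1) \leftrightarrow (P_1, P_2, P_3, p)$ of Table 2, which Theorem 5 establishes through the corresponding consistency equations.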
The fact that the set of consistency equations of [5] reduces to that of [2] stems from an interaction asymmetry in the minor agent's extended system in the former. In the minor agent's extended system, the individual minor agent's state and control action do not affect the joint system of the major agent and the mean field (i.e., the major agent's extended system). However, the major agent's state and the mean field affect the dynamics and the cost functional of the individual minor agent. At its core, the minor agent's extended system (as modelled in [5]) works in the same manner as the individual minor agent's system with exogenous stochastic coefficients solving the major agent's extended system (as modelled in [2]). Such asymmetric interactions do not occur within the major agent's extended system. This is due to the mutual interactions there: the major agent's state appears in the mean field dynamics, and the mean field appears in both the major agent's dynamics and its cost functional.
References
- [1] P. Cardaliaguet, M. Cirant, and A. Porretta. Remarks on Nash equilibria in mean field game models with a major player. Proceedings of the American Mathematical Society, 148(10):4241–4255, 2020.
- [2] R. Carmona and P. Wang. An alternative approach to mean field game with major and minor players, and applications to herders impacts. Applied Mathematics & Optimization, 76(1):5–27, 2017.
- [3] R. Carmona and X. Zhu. A probabilistic approach to mean field games with major and minor players. Annals of Applied Probability, 26(3):1535–1580, 2016.
- [4] D. Firoozi, S. Jaimungal, and P. E. Caines. Convex analysis for LQG systems with applications to major–minor LQG mean–field game systems. Systems & Control Letters, 142:104734, 2020.
- [5] M. Huang. Large-population LQG games involving a major player: The Nash certainty equivalence principle. SIAM Journal on Control and Optimization, 48(5):3318–3353, 2010.
- [6] M. Huang. Linear-quadratic mean field games with a major player: Nash certainty equivalence versus master equations. Communications in Information and Systems, 21(3):441–471, 2021.
- [7] J. M. Lasry and P. L. Lions. Mean-field games with a major player. Comptes Rendus Mathematique, 356(8):886–890, 2018.
- [8] M. Nourian and P. E. Caines. ε-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents. SIAM Journal on Control and Optimization, 51(4):3302–3331, 2013.