Two system transformation data-driven algorithms for linear quadratic mean-field games †
Abstract
This paper studies a class of continuous-time linear quadratic (LQ) mean-field game problems. We develop two system transformation data-driven algorithms to approximate the decentralized strategies of the LQ mean-field games. The main feature of the obtained data-driven algorithms is that they eliminate the requirement of knowing all system matrices. First, we transform the original stochastic system into an ordinary differential equation (ODE). Subsequently, we construct some Kronecker product-based matrices from the input/state data of the ODE. By virtue of these matrices, we implement a model-based policy iteration (PI) algorithm and a model-based value iteration (VI) algorithm in a data-driven fashion. In addition, we establish the convergence of these two data-driven algorithms under some mild conditions. Finally, we illustrate the practicality of our algorithms via two numerical examples.
keywords:
Linear quadratic (LQ) mean-field game; decentralized strategy; system transformation; policy iteration (PI); value iteration (VI)
1 Introduction
Game theory is the study of decision making in an interactive environment and has been widely used in industry, management and other fields [1, 2, 3, 4]. In game systems with a large number of agents, i.e., large-population systems, the influence of an individual agent's behavior on the whole system is negligible, but the collective behavior of the population has a significant impact on each individual agent. Specifically, the states or cost functionals of the agents are highly coupled through a state-average term, which brings the curse of dimensionality and considerable computational complexity. Therefore, classical game theory is no longer applicable to large-population systems.
Different from classical game theory, Huang et al. [5] and Lasry and Lions [6] independently proposed a mean-field method to overcome the difficulties caused by high coupling. Huang et al. [5] studied the collective behavior caused by individual interactions, utilized a mean-field term to represent the complex interactions between agents, and proposed decentralized strategies. Independently, the method proposed by Lasry and Lions [6] entailed solving coupled forward-backward partial differential equations. Since then, mean-field game theory has developed rapidly and the game framework has been extended to various settings. For instance, Huang et al. [7] considered an LQ game system where each agent is weakly coupled with the other agents only through its cost functional. Based on a fixed-point method, they developed the Nash certainty equivalence and designed an ε-Nash equilibrium for the infinite-horizon mean-field LQ game. Huang [8] further considered decentralized controls of a game system with a major player and a large number of minor players, where the major player has a significant influence on the others. Utilizing a mean-field approximation, they decomposed the game problem in the population limit into a set of localized limiting two-player games and derived all players' decentralized strategies by the Nash certainty equivalence approach. Readers may also refer to [9, 10, 11] for mean-field Stackelberg differential games, [12, 13] for risk-sensitive mean-field games, [14, 15] for mean-field differential games with partial information, and [16] for a mean-field cooperative differential game.
On the other hand, reinforcement learning (RL), a well-known class of numerical algorithms for studying LQ control and game problems, has received increasing attention from researchers. The main feature of RL algorithms is that they require no, or only partial, system matrix information, and thus may have higher practical value. See [17, 18, 20, 19] for some classical RL algorithms for deterministic LQ control problems and [21, 22, 23, 24] for RL algorithms for stochastic LQ control problems. For mean-field game problems, uz Zaman et al. [25] designed a stochastic optimization-based RL algorithm to solve a discrete-time LQ mean-field game problem. Fu et al. [26] established an actor-critic RL algorithm for an LQ mean-field game in discrete time. Subramanian and Mahajan [27] proposed two RL algorithms for a stationary mean-field game problem. Carmona et al. [28] obtained a policy gradient method for a mean-field control problem, and Angiuli et al. [29] introduced a unified Q-learning scheme for mean-field game and control problems. Carmona et al. [30] developed a Q-learning algorithm for mean-field Markov decision processes and presented its convergence proof. However, it is worth mentioning that the aforementioned RL literature focuses on LQ control problems or discrete-time mean-field game problems, and there are few RL algorithms for continuous-time LQ mean-field games. To the authors' best knowledge, Xu et al. [31] is the first attempt to solve continuous-time LQ mean-field game problems using data-driven RL algorithms. The ε-Nash equilibrium of their problem is closely related to two algebraic Riccati equations (AREs). They first designed a model-based PI algorithm to solve these two AREs, in which all system coefficients are indispensable. Then, they implemented the model-based PI algorithm by collecting the state and input data of a given agent, thereby removing the requirement of knowing the system parameters.
Inspired by the above works, this paper focuses on the same LQ mean-field game problem as Xu et al. [31], but proposes two novel data-driven RL methods to address it. In particular, we provide a different way to implement the model-based PI algorithm and a model-based VI algorithm, which reduces the computational complexity of the resulting data-driven algorithms. The main contributions of this paper are summarized as follows.
1. We propose a system transformation idea to implement the model-based algorithms. Specifically, we transform the original stochastic system into an ODE and then carry out the model-based algorithms using the input/state data of the ODE. Compared with algorithms that directly use stochastic data [21, 22, 23, 24, 31], algorithms adopting this idea have smaller computational complexity. In addition, the idea may also be applicable to other data-driven RL algorithms for stochastic problems, especially those whose system dynamics have a diffusion term that does not involve the control and state variables.
2. We develop a data-driven PI algorithm to solve the LQ mean-field game problem. By virtue of the proposed system transformation idea, the algorithm circumvents the need for the system coefficients. The simulation results show that it successfully obtains an ε-Nash equilibrium with errors similar to those of [31, Section 4], and it may be more computationally efficient than their algorithm (see Remark 1).
3. We develop a data-driven VI algorithm to deal with the LQ mean-field game problem. It is worth mentioning that PI algorithms require a priori knowledge of two Hurwitz matrices, which are closely related to the system coefficients. When the system matrices are unavailable, it may be difficult to obtain two matrices satisfying this condition. Consequently, we develop a data-driven VI algorithm to overcome this difficulty; it requires neither the system parameter information nor two initial Hurwitz matrices.
The rest of this paper is organized as follows. In Section 2, we give some standard notations and terminologies, and formulate the continuous-time LQ mean-field game problem. In Section 3, we design the data-driven PI algorithm to work out the game problem. In Section 4, we develop the data-driven VI algorithm to solve the problem. In Section 5, we validate the obtained algorithms by means of two simulation examples. In Section 6, we give some concluding remarks and outlooks.
2 Problem formulation and preliminaries
2.1 Notations
Let us denote and the set of positive integers and nonnegative integers. Let , and be the collection of real numbers, -dimensional real matrices and -dimensional real vectors, respectively. Denote the identity matrix as . For notational simplicity, we denote any zero vector or zero matrix by . Let be a diagonal matrix whose diagonal is the vector . represents the induced matrix norm. Let , and denote the sets of symmetric matrices, positive semidefinite matrices and positive definite matrices in . A positive semidefinite (respectively, positive definite) matrix is denoted by (respectively, ). Given , let , , denote the real part of the -th eigenvalue of . Let the superscript denote the transpose of a vector or matrix. Furthermore, represents the Kronecker product. Given a matrix , denotes its pseudoinverse and , in which , , denotes the -th column of the matrix . Given a symmetric matrix , we define , where , , is the -th element of the matrix . For a given vector , let , where , , denotes the -th element of . For any matrix and vector , we denote by , , , the -th element of and , , the -th element of .
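To make the vectorization and Kronecker-product operators above concrete, here is a minimal Python sketch; the doubling of off-diagonal entries in the symmetric vectorization is a common ADP/RL convention and is only an assumption about the paper's exact definition.

```python
import numpy as np

def vecs(S):
    """Stack the upper-triangular part of a symmetric matrix into a vector.
    Doubling the off-diagonal entries is a common convention in ADP/RL
    papers; the paper's exact scaling is an assumption here."""
    n = S.shape[0]
    return np.array([S[i, j] if i == j else 2.0 * S[i, j]
                     for i in range(n) for j in range(i, n)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.kron(A, np.eye(2)).shape)               # Kronecker product: (4, 4)
print(vecs(np.array([[1.0, 0.5], [0.5, 2.0]])))  # [1. 1. 2.]
```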
2.2 LQ mean-field game problems
We study a stochastic differential game with agents in a large-population framework. The state of agent satisfies
(1)
where and represent the state and the control strategy of agent , respectively. are independent standard -dimensional Brownian motions defined on a filtered complete probability space , where . The system coefficients , and are three unknown constant matrices of proper sizes. The initial states are independent and identically distributed with the same expectation. It is assumed that the initial states are independent of and their second moments are finite. Moreover, let be the information available to agent , .
The admissible control set of agent is
and all agents’ admissible control set is , where is a discount parameter.
The cost functional of agent is
where , are two weighting matrices, and .
Then the infinite-horizon LQ mean-field game problem is as follows.
Problem (LQG) Find satisfying
where .
Similar to [8, 31], we are interested in designing decentralized control strategies and the ε-Nash equilibrium of Problem (LQG), which are closely related to two AREs
(2)
and
(3)
In order to guarantee the existence of solutions to these two equations, the following assumption is required.
Assumption 1.
, , and is stabilizable.
By virtue of AREs (2) and (3), the next lemma presents the ε-Nash equilibrium of Problem (LQG); its proof follows from [31, Proposition 2.1, Remark 3.1] and [8, Theorem 11] and is therefore omitted here.
Lemma 1.
Let Assumption 1 hold. Then we have
(i) ARE (2) admits a solution such that is Hurwitz, and ARE (3) admits a solution such that is Hurwitz, where and ;
(ii) The decentralized strategies of Problem (LQG) are
where is governed by the aggregate quantity
(4)
Moreover, construct an ε-Nash equilibrium of Problem (LQG).
Lemma 1 illustrates that AREs (2) and (3) play a core role in constructing the ε-Nash equilibrium. Notably, in view of the nonlinear nature of (2) and (3), it is hard to solve the two equations directly, and it becomes even more difficult when the system coefficients are unavailable in practice. In the next two sections, we will propose two algorithms to solve AREs (2) and (3) without knowing all system matrices.
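As a point of reference, when the system matrices are known, a standard undiscounted ARE of the form A^T P + P A - P B R^{-1} B^T P + Q = 0 can be solved numerically, e.g. with SciPy. The sketch below is only this baseline; it ignores the discount parameter and the mean-field coupling that enter (2) and (3), and all matrices are illustrative placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative matrices only (not the paper's example).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solves A^T P + P A - P B R^{-1} B^T P + Q = 0 with the model known.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # corresponding feedback gain
print(np.linalg.eigvals(A - B @ K).real)   # all negative: closed loop is stable
```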
3 A system transformation data-driven PI algorithm
In this section, we devote ourselves to designing a novel data-driven PI algorithm to tackle Problem (LQG) without knowing all system coefficients.
Our algorithm is based on a model-based PI algorithm developed in [31, Lemmas 3.1 and 3.2]. We summarize the model-based PI algorithm in Algorithm 1 and present its convergence results in the next lemma for future use.
Lemma 2.
Suppose that Assumption 1 holds. Then the sequences and obtained by Algorithm 1 have the following properties:
(i) and are Hurwitz matrices;
(ii) , .
(5a)
(5b)
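Although Algorithm 1 is not reproduced above, the policy-evaluation/policy-improvement structure behind (5a)-(5b) mirrors the classical Kleinman iteration for a single ARE. The following Python sketch illustrates that structure under this assumption, again without the discount parameter and the mean-field term:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, n_iter=20):
    """Model-based PI for the ARE A^T P + P A - P B R^{-1} B^T P + Q = 0:
    policy evaluation via a Lyapunov equation, then policy improvement.
    K0 must make A - B K0 Hurwitz (the stabilizing initial gain)."""
    K = K0
    for _ in range(n_iter):
        Ak = A - B @ K
        # Policy evaluation: solve Ak^T P + P Ak = -(Q + K^T R K).
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement.
        K = np.linalg.solve(R, B.T @ P)
    return P, K
```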
It is worth pointing out that Algorithm 1 requires full knowledge of the system matrices, which may be hard to obtain in the real world. In the sequel, we remove this requirement.
To this end, we first handle system (1) of any given agent . By defining , and , we transform system (1) into
(6)
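The exact construction behind (6) is not displayed above, but since the diffusion term does not involve the state or control, one natural data-driven realization is to average independent Monte Carlo sample paths of the stochastic system driven by a deterministic exploration input; the averaged state then approximates an ODE trajectory (this is also how system (6) is constructed in the simulations of Section 5). A hedged sketch, with all names and arguments as placeholders:

```python
import numpy as np

def averaged_state(A, B, D, u_fn, x0, T, dt, n_samples, seed=0):
    """Euler-Maruyama simulation of dx = (A x + B u(t)) dt + D dW for a single
    agent, averaged over independent sample paths.  Because the noise is
    additive, the averaged state approximates the ODE trajectory driven by the
    same deterministic input u(t)."""
    rng = np.random.default_rng(seed)
    steps = int(round(T / dt))
    t_grid = np.arange(steps + 1) * dt
    x_bar = np.zeros((steps + 1, A.shape[0]))
    for _ in range(n_samples):
        x = np.array(x0, dtype=float)
        for k, t in enumerate(t_grid):
            x_bar[k] += x / n_samples
            dW = rng.normal(scale=np.sqrt(dt), size=D.shape[1])
            x = x + (A @ x + B @ u_fn(t)) * dt + D @ dW
    return t_grid, x_bar, np.array([u_fn(t) for t in t_grid])
```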
Next, we give a lemma to reveal a relationship between system (6) and the matrices to be solved in Algorithm 1, which will play a central role in deriving our data-driven PI algorithm.
Lemma 3.
Proof.
Since the derivations of (7) and (8) are similar, we only provide the proof of (7). According to system (6), we know
(9)
With the help of (5a), it is clear that
(10)
Inserting (10) into (9), we obtain
(11)
Then, integrating the above equation from to , (11) becomes
Thus, it follows from Kronecker product theory that
which yields equation (7). This completes the proof. ∎
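The "Kronecker product theory" step above relies on the standard identity vec(AXB) = (B^T ⊗ A) vec(X) for the column-major vectorization. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
X = rng.normal(size=(3, 4))
Bm = rng.normal(size=(4, 2))

vec = lambda M: M.flatten(order="F")   # column-major vectorization
lhs = vec(A @ X @ Bm)
rhs = np.kron(Bm.T, A) @ vec(X)
print(np.allclose(lhs, rhs))           # True
```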
To use the lemma proposed above, we define
(12)
Thanks to the matrices in (12), equations (7) and (8) imply that
(13)
(14)
where and are defined by
By virtue of the above analysis, we present our data-driven PI algorithm in Algorithm 2 and provide its convergence results in Theorem 1.
(15)
(16)
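Since (12) and the precise forms of (15)-(16) are not displayed above, the sketch below only illustrates the generic pattern behind such data-driven PI updates (in the spirit of [18]): integrate Kronecker products of the ODE input/state data over sampling intervals, stack them into a regression matrix, and recover the unknown parameters by least squares. All function and variable names here are assumptions rather than the paper's notation.

```python
import numpy as np

def build_data_matrices(t, x, u, n_intervals):
    """On each sampling interval [t_k, t_{k+1}], accumulate
    x(t_{k+1}) (x) x(t_{k+1}) - x(t_k) (x) x(t_k),  int x (x) x dt  and
    int x (x) u dt, using the trapezoidal rule."""
    idx = np.linspace(0, len(t) - 1, n_intervals + 1).astype(int)
    d_xx, I_xx, I_xu = [], [], []
    for a, b in zip(idx[:-1], idx[1:]):
        d_xx.append(np.kron(x[b], x[b]) - np.kron(x[a], x[a]))
        xx = np.array([np.kron(xi, xi) for xi in x[a:b + 1]])
        xu = np.array([np.kron(xi, ui) for xi, ui in zip(x[a:b + 1], u[a:b + 1])])
        dt = np.diff(t[a:b + 1])[:, None]
        I_xx.append(np.sum(0.5 * (xx[1:] + xx[:-1]) * dt, axis=0))
        I_xu.append(np.sum(0.5 * (xu[1:] + xu[:-1]) * dt, axis=0))
    return np.array(d_xx), np.array(I_xx), np.array(I_xu)

# A regression of the form Theta z = Xi is then solved by least squares:
def solve_least_squares(Theta, Xi):
    z, *_ = np.linalg.lstsq(Theta, Xi, rcond=None)
    return z
```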
Theorem 1.
Suppose Assumption 1 holds and there exists a positive integer such that
(17)
holds for any . Then and generated by Algorithm 2 satisfy and , respectively.
Proof.
Step 1: Given , we first show that matrices and have full column rank under condition (17).
We prove this result by contradiction. If the column ranks of and are not full, then there exist two nonzero vectors and such that and . Moreover, we can take four matrices , , and by and .
Similar to (9), we have
(18)
and
(19)
Following the derivations of (13) and (14) and combining (18) and (19), we derive
(20)
where
(21)
Noting that and are symmetric matrices, it follows that
(22)
where
(23)
Keeping (22) in mind, (20) implies
(24)
Under condition (17), it is evident that the matrix has full column rank. Combining this fact with (24), the definitions of and imply that . Thus, (21) yields
(25)
According to Kronecker product theory, (25) shows
(26)
By virtue of (i) of Lemma 2, we know that the matrices and are Hurwitz. Then it follows from (26) that . Combining this with , we obtain , which means that . However, this contradicts the assumption that and are nonzero vectors.
Step 2: We now prove the convergence of Algorithm 2. For any , since and have full column rank, (15) and (16) each admit a unique solution. Furthermore, Lemma 3 shows that is a solution of (15) and is a solution of (16). Therefore, (15) admits the unique solution and (16) admits the unique solution . Thus, the convergence of Algorithm 2 follows from Lemma 2. This proves the theorem. ∎
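In implementations, the rank requirement in condition (17) can be checked directly on the collected data before running the iterations; a minimal sketch, where the block names are placeholders:

```python
import numpy as np

def rank_condition_holds(data_blocks, required_rank):
    """Stack the integral data matrices column-wise and check that the result
    has the rank demanded by condition (17)."""
    stacked = np.hstack(data_blocks)
    return np.linalg.matrix_rank(stacked) >= required_rank
```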
Remark 1.
To further analyze our algorithm, we now consider the computational complexity (i.e., the time complexity and the space complexity ) of Algorithm 2. If each integral in (12) is computed by summation over discrete points and the number of Monte Carlo samples is , then the time complexity of calculating the matrices , and is . In this case, the time complexity of computing the corresponding matrices in Xu et al. [31] is . Moreover, it is easy to verify that the space complexity of Algorithm 2 is the same as that of the algorithm in Xu et al. [31]. Since is always large in practice, the computational complexity of our algorithm is much smaller than that of the algorithm in Xu et al. [31].
Remark 2.
In practical implementations, rank condition (17) can be met by adding an exploration signal to the input. There are many types of exploration signals commonly used in RL algorithms, such as Gaussian signals [32, 23, 22], random signals [34, 33] and signals generated by sums of sinusoidal functions [18, 31]. In the simulation section of this paper, we adopt exploration signals mainly generated by combinations of sinusoidal functions.
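For concreteness, a sum-of-sinusoids exploration signal of the kind mentioned above can be generated as follows; the number of frequencies, their range and the amplitude are arbitrary choices rather than the values used in Section 5.

```python
import numpy as np

def exploration_signal(t, n_freq=50, amplitude=1.0, seed=0):
    """Sum-of-sinusoids exploration input; the frequencies are drawn once from
    a fixed seed so that the signal is a fixed function of time."""
    rng = np.random.default_rng(seed)
    freqs = rng.uniform(-50.0, 50.0, size=n_freq)
    return amplitude * np.sum(np.sin(freqs * t)) / n_freq
```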
By virtue of a system transformation idea, we have developed a data-driven PI algorithm to solve Problem (LQG) without needing all system coefficients. However, this algorithm requires two matrices and such that and are Hurwitz. When the system matrices are unknown, it may be difficult to obtain two matrices that meet this condition. Thus, we develop a data-driven VI algorithm in the next section to resolve this difficulty.
4 A system transformation data-driven VI algorithm
In this section, we aim to propose a data-driven VI algorithm that removes the need for two initial Hurwitz matrices. Moreover, the data-driven VI algorithm does not require knowledge of the system coefficients either.
In addition to Assumption 1, this section also needs the following assumption.
Assumption 2.
is Hurwitz.
Under Assumptions 1-2, it is easy to verify that and . Thus, we only need to deal with ARE (2) and the corresponding . To proceed, we define a constant sequence and a set of bounded collections , which satisfy
(27)
and
(28)
Based on the above symbols, we propose a model-based VI algorithm in Algorithm 3, which can be initialized from any positive definite matrix. Since the model-based VI algorithm is a direct extension of [20, Lemma 3.4, Theorem 3.3], we present its convergence results in the next lemma but omit its proof due to space limitations.
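Since Algorithm 3 extends the continuous-time VI scheme of [20], its recursion is essentially an explicit step along the ARE residual with step sizes as in (27), combined with a projection onto the bounded sets in (28). A hedged sketch of the core recursion, omitting the projection and any discount-related terms:

```python
import numpy as np

def value_iteration(A, B, Q, R, P0, step_sizes):
    """VI recursion P <- P + eps_k * (A^T P + P A + Q - P B R^{-1} B^T P),
    started from any P0 > 0.  step_sizes plays the role of the sequence in
    (27); the projection onto the bounded sets (28) is omitted here."""
    P = P0
    for eps in step_sizes:
        residual = A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T @ P)
        P = P + eps * residual
    return P
```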
Although Algorithm 3 does not need the assumption of two Hurwitz matrices, it still requires the information of system coefficients and . In the sequel, we will develop a data-driven VI algorithm to solve Problem (LQG) without depending on all system matrices. To this end, we give a lemma to construct a relationship between system (6) and the matrices in Algorithm 3.
Lemma 5.
For any , generated by Algorithm 3 satisfies
(29)
where , , is a set of real numbers satisfying and is a predefined positive integer.
Proof.
Now we can summarize our data-driven VI algorithm in Algorithm 4. It should be noted that this algorithm does not require the assumption of Hurwitz matrices and can be implemented in the setting of unknown system matrices.
Theorem 2.
Proof.
Given , it follows from Lemma 5 that is a solution of (30). When rank condition (31) is guaranteed, we know (30) admits a unique solution. Therefore, the solution matrices of Algorithm 4 are the same as those of Algorithm 3. Thus, Lemma 4 implies the convergence of Algorithm 4. This completes the proof. ∎
5 Simulations
This section demonstrates the applicability of the proposed data-driven algorithms through two numerical examples.
5.1 The first numerical example
The parameters in the performance index are , and . In order to obtain the true values of and , we first run the model-based Algorithm 1 (cf. Lemma 2) with a large number of iterations and regard its solution as the true values. The true solutions are
Now we devote ourselves to solving the above problem by Algorithm 2, which does not rely on the information of matrices , and . The parameters of Algorithm 2 are set as , , , , and . The initial state is set as . During the simulation, we choose any agent and randomly collect samples of its state and control policy data. We set , where is a set of real numbers randomly selected in . Then the collected information is used to construct system (6) by the Monte Carlo method. The input and state trajectories of system (6) used in the simulation are depicted in Figs. 1 and 2, respectively.
Figs. 3 and 4 display the convergence of Algorithm 2. After 6 iteration steps, Algorithm 2 gives the approximate values of and . The results of our algorithm are
Clearly, the matrices solved by Algorithm 2 are close to the true values. This is in good agreement with our theoretical results. For comparison purposes, the numerical results of Xu et al. [31] are presented in Table 1. It is evident from this table that our algorithm has similar performance to theirs. However, as mentioned in Remark 1, the computational complexity of our algorithm is smaller than that of the algorithm in Xu et al. [31].
| | The algorithm in Xu et al. [31] | Algorithm 2 |
|---|---|---|
| Final iteration numbers | 6 | 6 |
| Relative errors | 0.0013 | 0.0012 |
| Relative errors | 0.0014 | 0.0042 |
To close this subsection, we focus on the decentralized strategies constructed from the solution of Algorithm 2. Suppose that there are agents and they adopt the decentralized strategies , , where denotes the trajectory of the aggregate quantity (4) simulated by Monte Carlo with samples. All agents' initial states are randomly generated from . Figs. 5 and 6 illustrate the behaviors of the population. Furthermore, we plot all agents' average state, denoted by , together with the trajectory of in Fig. 7. The lines in Fig. 7 show that is close to , which effectively demonstrates the consistency condition.
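For readers who wish to reproduce a consistency check of this kind, the sketch below simulates a population of agents under a given state-feedback gain and returns the empirical average state, which can then be compared with the mean-field trajectory as in Fig. 7; the gain and all dimensions are placeholders, and the coupling through the aggregate quantity is omitted for brevity.

```python
import numpy as np

def average_state(A, B, D, K, x0s, T, dt, seed=0):
    """Euler-Maruyama simulation of N agents under the feedback u_i = -K x_i
    (the mean-field coupling term is omitted for brevity); returns the time
    grid and the empirical average state."""
    rng = np.random.default_rng(seed)
    steps = int(round(T / dt))
    N, n = x0s.shape
    x = x0s.astype(float).copy()
    t_grid = np.arange(steps + 1) * dt
    avg = np.zeros((steps + 1, n))
    for k in range(steps + 1):
        avg[k] = x.mean(axis=0)
        u = -(K @ x.T).T
        dW = rng.normal(scale=np.sqrt(dt), size=(N, D.shape[1]))
        x = x + (x @ A.T + u @ B.T) * dt + dW @ D.T
    return t_grid, avg
```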
5.2 The second numerical example
Evidently, the first example does not satisfy Assumption 2. In this subsection, we consider an example that satisfies Assumptions 1-2. The system coefficients are
The parameters in the cost functional are , and . Since and , we only focus on calculating the approximate solutions of and . For comparison purposes, we present the true values of and , which are
Next, we will solve this example by Algorithm 2 and Algorithm 4, respectively. To implement Algorithm 2, we set , , and collect data samples of a given agent . For Algorithm 4, we set , , , , and collect data samples of agent . The other parameters are set the same as those in Section 5.1. The convergence of Algorithms 2 and 4 is shown in Figs. 8 and 9.
After 4 iteration steps, Algorithm 2 generates the following solution matrices
with a relative error . Moreover, Algorithm 4 converges after 172 iterations and gives
with its relative error . It is worth pointing out that the the straight dashed lines in the first half of Fig. 9 indicate that in the early iteration steps. This is one of the characteristics of continuous-time VI algorithms.
The above simulation results indicate that the algorithms developed in this paper may be useful for LQ mean-field game problems with completely unknown system coefficients. Therefore, the proposed algorithms may be better suited to practical mean-field game applications.
6 Conclusions and future works
This paper is concerned with a continuous-time LQ mean-field game problem. We develop a system transformation method to implement a model-based PI algorithm and a model-based VI algorithm. The obtained data-driven algorithms permit the construction of decentralized control strategies without system coefficient information and have smaller computational complexity. Moreover, we simulate two numerical examples to verify the effectiveness of our algorithms. The simulation results show that our algorithms successfully find the ε-Nash equilibria with small errors. Thus, the algorithms proposed in this paper may be promising tools for solving continuous-time LQ mean-field games with unknown system parameters. In future work, we plan to explore data-driven RL algorithms for more general LQ mean-field games, such as those with jumps, delays and partial information.
Acknowledgements
X. Li acknowledges the financial support by the Hong Kong General Research Fund, China under Grant Nos. 15216720, 15221621 and 15226922. G. Wang acknowledges the financial support from the National Key R&D Program of China under Grant No. 2022YFA1006103, the National Natural Science Foundation of China under Grant Nos. 61925306, 61821004 and 11831010, and the Natural Science Foundation of Shandong Province under Grant No. ZR2019ZD42. J. Xiong acknowledges the financial support from the National Key R&D Program of China under Grant No. 2022YFA1006102 and the National Natural Science Foundation of China under Grant No. 11831010.
References
- [1] J. von Neumann, O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1944.
- [2] T. Başar, G. Olsder, Dynamic Noncooperative Game Theory, London, U.K.: Academic Press, 1982.
- [3] O. Guéant, J. M. Lasry, P. L. Lions, Paris-Princeton Lectures on Mathematical Finance, Mean field Games and Applications, Heidelberg, Germany: Springer-Verlag, 2011.
- [4] A. Bensoussan, J. Frehse, P. Yam, Mean Field Games and Mean Field Type Control Theory, New York: Springer, 2013.
- [5] M. Huang, R. P. Malhamé, P. E. Caines, Large population stochastic dynamic games: Closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle, Commun. Inf. Syst. 6 (3) (2006) 221-252.
- [6] J. M. Lasry, P. L. Lions, Mean field games, Jap. J. Math. 2 (1) (2007) 229-260.
- [7] M. Huang, P. E. Caines, R. P. Malhamé, Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized ε-Nash equilibria, IEEE Trans. Autom. Control 52 (9) (2007) 1560-1571.
- [8] M. Huang, Large-population LQG games involving a major player: The Nash certainty equivalence principle, SIAM J. Control Optim. 48 (5) (2010) 3318-3353.
- [9] A. Bensoussan, M. H. M. Chau, S. C. P. Yam, Mean field games with a dominating player, Appl. Math. Optim. 74 (2016) 91-128.
- [10] A. Bensoussan, M. H. M. Chau, Y. Lai, S. C. P. Yam, Linear-quadratic mean field Stackelberg games with state and control delays, SIAM J. Control Optim. 55 (4) (2017) 2748-2781.
- [11] J. Moon, T. Başar, Linear quadratic mean field Stackelberg differential games, Automatica 97 (2018) 200-213.
- [12] J. Moon, T. Başar, Linear quadratic risk-sensitive and robust mean field games, IEEE Trans. Autom. Control 62 (3) (2017) 1062-1077.
- [13] H. Tembine, Q. Zhu, T. Başar, Risk-sensitive mean-field stochastic differential games, IFAC Proceedings 44 (1) (2011) 3222-3227.
- [14] J. Huang, S. Wang, Dynamic optimization of large-population systems with partial information, J. Optim. Theory Appl. 168 (2015) 231-245.
- [15] P. Huang, G. Wang, W. Wang, A linear-quadratic mean-field game of backward stochastic differential equation with partial information and common noise, Appl. Math. Comput. 446 (2023) 127899.
- [16] B. Wang, H. Zhang, J. Zhang, Mean field linear-quadratic control: uniform stabilization and social optimality, Automatica 121 (2020) 109088.
- [17] D. Vrabie, O. Pastravanu, M. Abu-Khalaf, F. L. Lewis, Adaptive optimal control for continuous-time linear systems based on policy iteration, Automatica 45 (2009) 477-484.
- [18] Y. Jiang, Z. -P. Jiang, Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics, Automatica 48 (10) (2012) 2699-2704.
- [19] J. Y. Lee, J. B. Park, Y. H. Choi, Integral Q-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems, Automatica 48 (11) (2012) 2850-2859.
- [20] T. Bian, Z. -P. Jiang, Value iteration and adaptive dynamic programming for data-driven adaptive optimal control design, Automatica 71 (2016) 348-360.
- [21] N. Li, X. Li, J. Peng, Z. Xu, Stochastic linear quadratic optimal control problem: a reinforcement learning method, IEEE Trans. Autom. Control 67 (2022) 5009-5016.
- [22] H. Zhang, N. Li, Data-driven policy iteration algorithm for continuous-time stochastic linear-quadratic optimal control problems, Asian J. Control (2023).
- [23] H. Zhang, An adaptive dynamic programming-based algorithm for infinite-horizon linear quadratic stochastic optimal control problems, J. Appl. Math. Comput. 69 (2023) 2741-2760.
- [24] G. Wang, H. Zhang, Model-free value iteration algorithm for continuous-time stochastic linear quadratic optimal control problems, arXiv:2203.06547, 2022.
- [25] M. A. uz Zaman, E. Miehling, T. Başar, Reinforcement learning for non-stationary discrete-time linear-quadratic mean-field games in multiple populations, Dyn. Games Appl. 13 (2023) 118-164.
- [26] Z. Fu, Z. Yang, Y. Chen, Z. Wang, Actor-critic provably finds Nash equilibria of linear-quadratic mean-field games, In 8th International Conference on Learning Representations, 2020.
- [27] J. Subramanian, A. Mahajan, Reinforcement learning in stationary mean-field games, In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems, 2019, pp. 251-259.
- [28] R. Carmona, M. Laurière, Z. Tan, Linear-quadratic mean-field reinforcement learning: Convergence of policy gradient methods, arXiv:1910.04295, 2019.
- [29] A. Angiuli, J-P. Fouque, M. Laurière, Unified reinforcement Q-learning for mean field game and control problems, Math. Control Signals Syst. 34 (2022) 217-271.
- [30] R. Carmona, M. Laurière, Z. Tan, Model-free mean-field reinforcement learning: Mean-field MDP and mean-field Q-learning, Ann. Appl. Probab. 33 (2023) 5334-5381.
- [31] Z. Xu, T. Shen, M. Huang, Model-free policy iteration approach to NCE-based strategy design for linear quadratic Gaussian games, Automatica 155 (2023) 111162.
- [32] S. J. Bradtke, Reinforcement learning applied to linear quadratic regulation, Adv. Neural Inf. Process. Syst. 5 (1993) 295-302.
- [33] H. Xu, S. Jagannathan, F. L. Lewis, Stochastic optimal control of unknown linear networked control system in the presence of random delays and packet losses, Automatica 48 (6) (2012) 1017-1030.
- [34] A. Al-Tamimi, F. L. Lewis, M. Abu-Khalaf, Model-free Q-learning designs for linear discrete-time zero-sum games with application to H-infinity control, Automatica 43 (3) (2007) 473-481.