Exact solutions of the simplified March model for organizational learning
Abstract
James G. March’s celebrated agent-based simulation model for organizational learning [March, Organization Science 2, 71 (1991)] has been extensively studied over the past few decades. Yet the model has not been fully understood, owing to the lack of analytical solutions. We simplify the March model so as to take an analytical approach using master equations. We then derive exact solutions for some of the simplest yet nontrivial cases and perform numerical estimation of the master equations for more complicated cases. Both analytical and numerical results are in good agreement with agent-based simulations. These results are also compared to those of the original March model. Our approach enables us to rigorously understand the results of the simplified model as well as those of the original model to a large extent.
I Introduction
James G. March introduced an agent-based simulation model for organizational learning in his seminal paper in 1991 [1]. Since then, the original March model, its variants, and other similar models have been extensively studied over the past few decades [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. March’s model considers an external reality, an organizational code (code hereafter), and individual members of the organization. The code represents a set of norms, rules, etc., that is updated using the individuals’ knowledge about the reality, while the individuals in turn learn about the reality from the code. In this way, organizational knowledge about the reality is collected from individuals and disseminated to them at the same time. March studied the effects of the learning rates of individuals and of the code on their achieved knowledge about the reality. In order to consider more realistic situations, he took personnel turnover and environmental turbulence into account in his model, finding that there may exist an optimal turnover rate that maximizes the achieved knowledge, depending on the learning rates.
We remark that the March model can be considered within the framework of opinion dynamics in networks [16, 17, 18, 19, 20, 21, 22, 23, 24]. That is, the code plays the role of a hub node in a hub-and-spoke network, while individuals are dangling nodes [25]. Nodes update their opinions or beliefs according to their neighbors’ opinions or beliefs, while the external reality acts as an external field or source affecting all nodes. This implies that various analytical approaches developed for opinion dynamics can be applied to the March model.
The March model and its variants have provided insights into management and business administration [8], but mostly by means of computer simulations [14]. In general, for a rigorous understanding of models, the derivation of their exact, analytical solutions is of utmost importance. In our work, we simplify March’s original model so that master equations describing the dynamics of the model can be written down explicitly. We then derive exact solutions of the simplified model for some of the simplest yet nontrivial cases. Numerical estimation of the master equations is performed for more complicated cases. Both analytical and numerical results are shown to be in good agreement with agent-based simulation results. Our approach enables us to rigorously understand the results of not only the simplified model but also the original model to a large extent.
The paper is organized as follows. In Sec. II, we describe the original March model and our simplified version. In Sec. III, analytical, numerical, and simulation results of the simplified model are presented and compared to the results of the original March model. Finally, we conclude our work in Sec. IV.
II Models
II.1 Original March model
As mentioned, the original model by March considers the external reality, the code, and individual members of the organization [1]. We remark that in this Subsection we use the mathematical symbols of March’s original paper; they should not be confused with the symbols used in the next Subsection and throughout the rest of the paper. The model is based on the following assumptions.
(i) The reality is characterized in terms of an m-dimensional vector, each element of which may have the value of 1 or -1 with equal probabilities.
(ii) The code and individuals in the organization have beliefs about the reality. A belief is also represented by an m-dimensional vector, each element of which may have the value of 1, 0, or -1 with equal probabilities. These beliefs may change over time.
(iii) At each time step, each individual may change elements of its belief that are different from those of the code, unless the code’s element is 0. Each such element of the individual’s belief changes to that of the code with a probability p_1, independently of the other elements.
(iv) At the same time, the code updates its belief based on the beliefs of some individuals. For this, the individuals whose beliefs are closer to the reality than the code’s belief is are identified; they are called the superior group. Then each element of the code’s belief changes to the dominant element within the superior group with a probability p_2, independently of the other elements.
So far, the reality has been assumed to be fixed and the individuals are not replaced by new ones. Thus, this setting is called a closed system. March first considered a homogeneous population in the closed system, in which all individuals are assigned the identical learning probability. He then considered a heterogeneous population in the closed system, such that some individuals have a higher learning probability than the others. Finally, a homogeneous population in an open system was also considered; in the open system individuals may be replaced by new ones (turnover) and/or the reality changes over time (turbulence). The turnover probability is denoted by p_3 and the turbulence probability by p_4. That is, with a probability p_3 each individual is replaced by a new one having a random belief vector at each time step. Also, each element of the reality shifts to the other value, i.e., from 1 to -1 or from -1 to 1, with a probability p_4.
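To make the above rules concrete, a minimal Python sketch of one time step is given below. It follows only the summary given in this Subsection: the function and variable names are ours, ties in the superior-group majority are broken arbitrarily, and elements with no belief (0) are ignored when determining the dominant element, so further implementation details of Ref. [1] are not reproduced.

import random

def march_original_step(reality, code, beliefs, p1, p2, p3, p4, rng):
    """One time step of the original March model as summarized above.
    reality: list of +1/-1; code and each belief: lists of +1/0/-1."""
    m = len(reality)

    def score(b):
        return sum(b[d] == reality[d] for d in range(m))

    # (iii) individuals learn from the code on dimensions where the code is nonzero
    new_beliefs = [[code[d] if (code[d] != 0 and b[d] != code[d]
                                and rng.random() < p1) else b[d]
                    for d in range(m)] for b in beliefs]
    # (iv) the code learns from the superior group, evaluated at the same time step
    superior = [b for b in beliefs if score(b) > score(code)]
    new_code = list(code)
    for d in range(m):
        votes = [b[d] for b in superior if b[d] != 0]    # ignore "no belief" entries
        if votes:
            dominant = max(set(votes), key=votes.count)  # ties broken arbitrarily
            if dominant != code[d] and rng.random() < p2:
                new_code[d] = dominant
    # turnover and turbulence (open system); both are inactive if p3 = p4 = 0
    new_beliefs = [[rng.choice((-1, 0, 1)) for _ in range(m)]
                   if rng.random() < p3 else b for b in new_beliefs]
    new_reality = [-x if rng.random() < p4 else x for x in reality]
    return new_reality, new_code, new_beliefs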
II.2 Simplified March model
Let us simplify the March model. As in the original model, we consider an external reality, a code, and N agents. At a time step t, the external reality, denoted by a variable x(t), can have a value of 0 or 1. Beliefs of the code and agents about the reality are respectively represented by binary variables, namely, c(t) for the code and s_i(t) for the ith agent with i = 1, ..., N. For a given initial condition of x(0), c(0), and {s_i(0)}, each time step consists of four stages:
(i) Every agent i independently updates its belief by learning from the code with a socialization probability p_i:
s_i(t+1) = c(t) with probability p_i, and s_i(t+1) = s_i(t) otherwise.  (1)
(ii) Each agent is replaced by a new agent with a turnover probability u, and the new agent is assigned a belief randomly drawn from {0, 1}.
(iii) The code learns from agents who are superior to the code. Here superior agents are those whose beliefs are closer to the reality than the code’s belief is. For example, if x(t) = 1, the code learns from superior agents only when c(t) = 0 and there is at least one agent with s_i(t) = 1. Denoting a superior agent to the code by j, the code updates its belief with a codification probability q:
c(t+1) = s_j(t) with probability q if δ_{s_j(t), x(t)} > δ_{c(t), x(t)} for some agent j, and c(t+1) = c(t) otherwise,  (2)
where δ is a Kronecker delta. Note that all superior agents share the belief s_j(t) = x(t), so Eq. (2) does not depend on the choice of j.
(iv) With a turbulence probability v, the reality is assigned a new value randomly drawn from {0, 1}, which closes the time step.
Since the reality, the code, and the agents update their value or beliefs synchronously, the order of the four stages does not affect the result, except for the socialization and turnover of agents. Note that the parameters p_i, q, u, and v in our simplified model correspond to p_1, p_2, p_3, and p_4 in the original March model [1], respectively.
Our simplified model is called a closed system if u = v = 0; otherwise it is an open system. In the open system, the reality can vary over time (turbulence) and/or agents can be replaced by new agents (turnover). In contrast, in the closed system the reality is assumed to have the fixed value of 1 for the entire period of time, i.e., x(t) = 1 for all t, without loss of generality, and there is no turnover of agents.
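The four stages translate directly into a short simulation routine. The following Python sketch is only an illustration: the function name, the default arguments, and the way the initial beliefs are drawn are our own choices rather than the setup used for the figures below.

import random

def simplified_march_run(p, q, u, v, N, T, seed=None):
    """One run of the simplified model; p may be a single value or a sequence
    of per-agent socialization probabilities p_i. Returns the fraction of
    agents whose belief matches the reality after T time steps."""
    rng = random.Random(seed)
    p = [p] * N if isinstance(p, (int, float)) else list(p)
    x = 1                                       # reality (x(0) = 1 without loss of generality)
    c = rng.randint(0, 1)                       # code's belief
    s = [rng.randint(0, 1) for _ in range(N)]   # agents' beliefs
    for _ in range(T):
        # (i) socialization: agent i adopts the code's belief with probability p_i
        s_new = [c if rng.random() < p[i] else s[i] for i in range(N)]
        # (ii) turnover: each agent is replaced by a new one with probability u
        s_new = [rng.randint(0, 1) if rng.random() < u else si for si in s_new]
        # (iii) codification: the code adopts the correct belief with probability q
        #       if a superior agent exists; all stages use the values at the step start
        if c != x and x in s and rng.random() < q:
            c = x
        # (iv) turbulence: the reality is redrawn with probability v
        if rng.random() < v:
            x = rng.randint(0, 1)
        s = s_new
    return sum(si == x for si in s) / N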
III Results

III.1 Homogeneous learning in a closed system
We consider the homogeneous learning model in a closed system. That is, x(t) = 1 for all t, there is no turnover of agents, and every agent has the same learning probability p, i.e.,
p_i = p for all i = 1, ..., N.  (3)
See also Fig. 1(a). The state of the system at each time step can be summarized in terms of the code’s belief c and the number of agents whose belief matches the reality, which we denote by n. Precisely, n(t) is defined as
n(t) ≡ Σ_{i=1}^{N} δ_{s_i(t), x(t)}.  (4)
In our case with x(t) = 1, one simply has n(t) = Σ_i s_i(t). Then the expected density of agents with the belief matching the reality is given by
⟨s(t)⟩ ≡ ⟨n(t)⟩ / N,  (5)
which can be interpreted as the expected belief of a randomly chosen agent, or the average individual knowledge [3].
Depending on the initial belief of the code, two scenarios are possible. Firstly, if c(0) = 1, the code does not change its belief because it already coincides with the reality, and the agents’ beliefs will eventually converge to the value of 1 by Eq. (1). This implies an absorbing state in which the code and all agents share the same value as the reality, which is denoted by (c, n) = (1, N). Secondly, if c(0) = 0, n will decrease until the code’s belief changes to 1 by Eq. (2), which can happen as long as there is at least one agent with the belief of 1. Once the code’s belief becomes 1, n will increase to reach the absorbing state (1, N). However, this is not always the case; n may reach 0 before c changes to 1, implying that both the code and the agents have the belief of 0 without further dynamics. This indicates another absorbing state (0, 0). Figure 1(b) shows the transition structure between states, with the two absorbing states emphasized in red.
For the analysis, let us denote by P_t(c, n) the probability that at time step t the code’s belief is c and there are exactly n agents with the belief of 1. These probabilities satisfy the normalization condition
Σ_{c ∈ {0,1}} Σ_{n=0}^{N} P_t(c, n) = 1.  (6)
They evolve according to the following master equation in discrete time:
P_{t+1}(c, n) = Σ_{c', n'} T(c, n | c', n') P_t(c', n'),  (7)
where the transition probabilities read [see Fig. 1(b)]
T(1, n | 1, n') = C(N - n', n - n') p^{n - n'} (1 - p)^{N - n} for n ≥ n',
T(1, n | 0, n') = q C(n', n' - n) p^{n' - n} (1 - p)^{n} for n ≤ n' and n' ≥ 1,
T(0, n | 0, n') = (1 - q) C(n', n' - n) p^{n' - n} (1 - p)^{n} for n ≤ n' and n' ≥ 1,
T(0, 0 | 0, 0) = 1.  (8)
Here we have used C(·, ·) to denote the binomial coefficient, and all transition probabilities other than those in Eq. (8) are zero. Calculating Eq. (7) recursively with any initial condition P_0(c, n), one can in principle obtain P_t(c, n) for any c, n, and t, hence ⟨s(t)⟩ in Eq. (5), i.e.,
⟨s(t)⟩ = (1/N) Σ_{c, n} n P_t(c, n).  (9)
We focus on the steady states of the model. It is obvious that all initial probabilities eventually end up in the two absorbing states, i.e., (1, N) and (0, 0), implying P_∞(1, N) + P_∞(0, 0) = 1 with P_∞(c, n) ≡ lim_{t→∞} P_t(c, n). Thus, the average individual knowledge in Eq. (9) reads
⟨s⟩_∞ ≡ lim_{t→∞} ⟨s(t)⟩ = P_∞(1, N).  (10)
As the simplest yet nontrivial case, let us consider the initial condition that c(0) = 0 and n(0) = 1, namely, P_0(0, 1) = 1 and P_0(c, n) = 0 for all other states (c, n). Using
T(0, 1 | 0, 1) = (1 - p)(1 - q), T(0, 0 | 0, 1) = p(1 - q), and T(0, 0 | 0, 0) = 1,  (11)
the master equations for P_t(0, 1) and P_t(0, 0) [Eq. (7)] are written as follows:
P_{t+1}(0, 1) = (1 - p)(1 - q) P_t(0, 1),  P_{t+1}(0, 0) = P_t(0, 0) + p(1 - q) P_t(0, 1).  (12)
Master equations for all states other than (0, 1) and (0, 0) are irrelevant to the calculation of ⟨s⟩_∞ in Eq. (10). Since P_t(0, 1) = [(1 - p)(1 - q)]^t, one obtains
P_∞(0, 0) = p(1 - q) Σ_{t=0}^{∞} [(1 - p)(1 - q)]^t = p(1 - q) / [1 - (1 - p)(1 - q)],  (13)
leading to
⟨s⟩_∞ = 1 - P_∞(0, 0) = q / (p + q - pq).  (14)
This solution is not a function of N due to the choice of the initial condition that n(0) = 1.
We observe that ⟨s⟩_∞ in Eq. (14) is a decreasing function of p but an increasing function of q [Fig. 1(c)], already partly implying behavior qualitatively similar to the simulation results of the original March model, i.e., Fig. 1 in Ref. [1]. A larger q leads to a more correct belief of agents about the reality, which is easily understood by considering that q is the learning probability of the code from superior agents. On the other hand, the effect of p on ⟨s⟩_∞ is not straightforward to understand. It is because a large value of p speeds up not only the probability flow to the state (1, N) but also that to the state (0, 0). That is, a large p always helps spread the code’s belief to agents whether the code’s belief is correct or not. When the code’s belief is incorrect, a large p increases the amount of flow to the state (0, 0) [Fig. 1(a)]. In contrast, when the code’s belief is correct, a large p does not increase the amount of flow to the state (1, N), but only speeds up the flow. As we focus on the steady behavior, such an asymmetric role of p leads to the decreasing behavior of ⟨s⟩_∞ as a function of p. Such behavior has been interpreted as slow socialization allowing for longer exploration, resulting in better organizational learning [1].
For general initial conditions, one can estimate ⟨s(t)⟩ for a sufficiently large t by iterating the master equation in Eq. (7) for a given initial condition P_0(c, n). For a demonstration, we consider a system of N agents and the initial condition that P_0(c, n) = 1/[2(N+1)] for each state (c, n). We estimate the value of ⟨s(t)⟩ at the first time step at which |⟨s(t)⟩ - ⟨s(t-1)⟩| falls below a small threshold. From the results shown in Fig. 1(d), we find that ⟨s⟩_∞ is a decreasing function of p but an increasing function of q, showing the same tendency as the solution ⟨s⟩_∞ in Eq. (14) for the simpler initial condition.
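The numerical procedure described above can be sketched in a few lines of Python; the transition matrix implements Eq. (8), while the convergence tolerance and the iteration cap are illustrative choices.

import numpy as np
from math import comb

def master_equation_knowledge(N, p, q, tol=1e-10, max_steps=10**6):
    """Iterate the master equation (7) with the transition probabilities (8),
    starting from the uniform initial condition P_0(c, n) = 1 / [2(N + 1)],
    and return <s(t)> once it has converged within tol."""
    size = 2 * (N + 1)                  # states (c, n) indexed as c * (N + 1) + n
    T = np.zeros((size, size))
    for n0 in range(N + 1):
        # code correct (c = 1): each of the N - n0 incorrect agents adopts 1 w.p. p
        for k in range(N - n0 + 1):
            T[(N + 1) + n0 + k, (N + 1) + n0] = (
                comb(N - n0, k) * p**k * (1 - p)**(N - n0 - k))
        # code incorrect (c = 0): each of the n0 correct agents adopts 0 w.p. p,
        # and the code becomes correct with probability q if n0 >= 1
        for k in range(n0 + 1):
            move = comb(n0, k) * p**k * (1 - p)**(n0 - k)
            if n0 >= 1:
                T[(N + 1) + (n0 - k), n0] = q * move
                T[n0 - k, n0] = (1 - q) * move
            else:
                T[0, 0] = 1.0
    P = np.full(size, 1.0 / size)
    s_old = -1.0
    for _ in range(max_steps):
        P = T @ P
        s_new = sum(n * (P[n] + P[(N + 1) + n]) for n in range(N + 1)) / N
        if abs(s_new - s_old) < tol:
            break
        s_old = s_new
    return s_new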
These exact and numerical results are supported by agent-based simulations. We perform simulations of the model using the rules in Eqs. (1) and (2) together with Eq. (3) for a system of N agents with the mentioned initial conditions. Firstly, the initial condition with n(0) = 1 used for the analysis is realized in the simulation such that only one agent has an initial belief of 1, while all other agents and the code have the belief of 0. Secondly, as for the initial condition with P_0(c, n) = 1/[2(N+1)] for each state, we set s_i(0) = 1 for n(0) randomly chosen agents, with n(0) drawn uniformly from {0, ..., N}, and s_i(0) = 0 for the rest of agents, while the value of c(0) is randomly chosen from {0, 1} with equal probabilities. Eventually every run ends up in one of the absorbing states, implying that the final value of n/N in each run is either 1 or 0. For each pair of p and q, we take the average of n/N over a large number of independent runs to get the value of ⟨s⟩_∞ in Eq. (5). Such averages are shown with symbols in Fig. 1(c, d), which are indeed in good agreement with the analytical and numerical solutions, respectively.
III.2 Heterogeneous learning in a closed system
Next, we study the heterogeneous version of the model in a closed system with x(t) = 1 by using two distinct values of the learning probability, i.e., by setting
p_i = p_a for i = 1, ..., N_a and p_i = p_b for i = N_a + 1, ..., N,  (15)
where N_a (with N_b ≡ N - N_a) denotes the number of agents with the learning probability p_a (p_b) [Fig. 2(a)]. Agents with the larger (smaller) learning probability among p_a and p_b can be called fast (slow) learners [1]. The state of the system at each time step can be summarized in terms of the code’s belief c, the number of agents with p_a whose belief is 1, which we denote by n_a, and the number of agents with p_b whose belief is 1, which we denote by n_b. Here 0 ≤ n_a ≤ N_a and 0 ≤ n_b ≤ N_b. Then the expected density of agents with the belief of 1 is given as
⟨s(t)⟩ = [⟨n_a(t)⟩ + ⟨n_b(t)⟩] / N,  (16)
which can also be interpreted as the expected belief of a randomly chosen agent.
Similarly to the homogeneous version of the model, the master equation reads
P_{t+1}(c, n_a, n_b) = Σ_{c', n_a', n_b'} T(c, n_a, n_b | c', n_a', n_b') P_t(c', n_a', n_b'),  (17)
where the transition probabilities are written as
T(1, n_a, n_b | 1, n_a', n_b') = C(N_a - n_a', n_a - n_a') p_a^{n_a - n_a'} (1 - p_a)^{N_a - n_a} C(N_b - n_b', n_b - n_b') p_b^{n_b - n_b'} (1 - p_b)^{N_b - n_b} for n_a ≥ n_a' and n_b ≥ n_b',
T(1, n_a, n_b | 0, n_a', n_b') = q C(n_a', n_a' - n_a) p_a^{n_a' - n_a} (1 - p_a)^{n_a} C(n_b', n_b' - n_b) p_b^{n_b' - n_b} (1 - p_b)^{n_b} for n_a ≤ n_a', n_b ≤ n_b', and n_a' + n_b' ≥ 1,
T(0, n_a, n_b | 0, n_a', n_b') = (1 - q) C(n_a', n_a' - n_a) p_a^{n_a' - n_a} (1 - p_a)^{n_a} C(n_b', n_b' - n_b) p_b^{n_b' - n_b} (1 - p_b)^{n_b} for n_a ≤ n_a', n_b ≤ n_b', and n_a' + n_b' ≥ 1,
T(0, 0, 0 | 0, 0, 0) = 1.  (18)
Here C(·, ·) again denotes the binomial coefficient, and all transition probabilities other than those in Eq. (18) are zero. It is obvious that there are two absorbing states, i.e., (1, N_a, N_b) and (0, 0, 0), implying that ⟨s⟩_∞ = P_∞(1, N_a, N_b). Calculating Eq. (17) recursively with any initial condition P_0(c, n_a, n_b), one can in principle obtain P_t(c, n_a, n_b) for any c, n_a, n_b, and t, hence ⟨s(t)⟩ in Eq. (16).

As the simplest yet nontrivial case, let us consider the initial condition that c(0) = 0 and n_a(0) = n_b(0) = 1, namely, P_0(0, 1, 1) = 1. Denoting
P^{11}_t ≡ P_t(0, 1, 1), P^{10}_t ≡ P_t(0, 1, 0), P^{01}_t ≡ P_t(0, 0, 1), and P^{00}_t ≡ P_t(0, 0, 0),  (19)
the master equations for P^{11}_t, P^{10}_t, P^{01}_t, and P^{00}_t [Eq. (17)] are written as follows:
P^{11}_{t+1} = (1 - q)(1 - p_a)(1 - p_b) P^{11}_t,
P^{10}_{t+1} = (1 - q)(1 - p_a) p_b P^{11}_t + (1 - q)(1 - p_a) P^{10}_t,
P^{01}_{t+1} = (1 - q) p_a (1 - p_b) P^{11}_t + (1 - q)(1 - p_b) P^{01}_t,
P^{00}_{t+1} = P^{00}_t + (1 - q) p_a p_b P^{11}_t + (1 - q) p_a P^{10}_t + (1 - q) p_b P^{01}_t,  (20)
where the coefficients are the corresponding transition probabilities from Eq. (18). After some algebra, one obtains
P^{00}_∞ = [(1 - q) p_a p_b / (1 - (1 - q)(1 - p_a)(1 - p_b))] × [1 / (1 - (1 - q)(1 - p_a)) + 1 / (1 - (1 - q)(1 - p_b)) - 1],  (21)
leading to
⟨s⟩_∞ = 1 - P^{00}_∞ = 1 - [(1 - q) p_a p_b / (1 - (1 - q)(1 - p_a)(1 - p_b))] × [1 / (1 - (1 - q)(1 - p_a)) + 1 / (1 - (1 - q)(1 - p_b)) - 1].  (22)
This result is not a function of N_a and N_b due to the choice of the initial condition that n_a(0) = n_b(0) = 1. It is straightforward to prove that setting p_a = p_b = p reduces the solution in Eq. (22) to the solution of the homogeneous model with the initial condition that n(0) = 2.
To demonstrate the effect of heterogeneous learning on ⟨s⟩_∞ in Eq. (22), we parameterize p_a = p + Δ and p_b = p - Δ with non-negative p and Δ. Here Δ controls the degree of heterogeneity of agents. As shown in Fig. 2(c), a larger Δ leads to higher values of ⟨s⟩_∞ in Eq. (22) for the entire range of p, which is consistent with the simulation results of the original March model, i.e., Fig. 2 in Ref. [1]. Such behaviors can be essentially understood by comparing the transition probability T(0, 0, 0 | 0, 1, 1) in the heterogeneous model to its counterpart T(0, 0 | 0, 2) in the homogeneous model [Eq. (8)] to get
T(0, 0, 0 | 0, 1, 1) / T(0, 0 | 0, 2) = p_a p_b / p^2 = 1 - (Δ / p)^2.  (23)
It implies that for positive Δ the probability flow to the absorbing state (0, 0, 0) in the heterogeneous model is always smaller than the flow to the absorbing state (0, 0) in the homogeneous model, hence the larger ⟨s⟩_∞ for the heterogeneous model than for the homogeneous model. We also remark that the ratio in Eq. (23) gets closer to 1 for larger values of p, hence the smaller gap between the heterogeneous and homogeneous models. Such an expectation is indeed the case, as depicted in Fig. 2(c).
For general initial conditions, we numerically estimate ⟨s(t)⟩ for a sufficiently large t by iterating the master equation in Eq. (17) for a given initial condition P_0(c, n_a, n_b). For a demonstration, we consider a system of N agents divided into the two groups and the initial condition that P_0(c, n_a, n_b) = 1/[2(N_a + 1)(N_b + 1)] for each state (c, n_a, n_b). We estimate the value of ⟨s(t)⟩ at the first time step at which |⟨s(t)⟩ - ⟨s(t-1)⟩| falls below a small threshold. From the results shown in Fig. 2(d), we find that ⟨s⟩_∞ has higher values for more heterogeneous systems.
We also perform agent-based simulations of the heterogeneous model for the system with N agents divided into the two groups with the mentioned initial conditions. Firstly, the initial condition with n_a(0) = n_b(0) = 1 is realized in the simulation such that one agent with p_a and one agent with p_b have an initial belief of 1, while all other agents as well as the code have the belief of 0. Secondly, as for the initial condition with P_0(c, n_a, n_b) = 1/[2(N_a + 1)(N_b + 1)] for each state, we set s_i(0) = 1 for randomly chosen n_a(0) agents of the first group and n_b(0) agents of the second group, with n_a(0) and n_b(0) drawn uniformly, and s_i(0) = 0 for the rest of agents, while the value of c(0) is randomly chosen from {0, 1} with equal probabilities. Eventually every run ends up in one of the absorbing states, implying that the final value of (n_a + n_b)/N in each run is either 1 or 0. For each combination of p, Δ, and q, we take the average of (n_a + n_b)/N over a large number of independent runs to get the value of ⟨s⟩_∞ in Eq. (16). Such averages are shown with symbols in Fig. 2(c, d), which are indeed in good agreement with the analytical and numerical solutions, respectively.
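As a cross-check of Eq. (22), the absorption probability for the initial condition with one correct agent in each group can also be estimated by a short Monte-Carlo sketch; the number of runs and the random seed below are arbitrary choices.

import random

def heterogeneous_success_probability(pa, pb, q, runs=100_000, seed=1):
    """Monte-Carlo estimate of the probability that the code ends up with the
    correct belief, starting from c(0) = 0 with exactly one correct agent in
    each group; for pa, pb > 0 this equals <s>_inf of Eq. (22)."""
    rng = random.Random(seed)
    success = 0
    for _ in range(runs):
        sa, sb, c = 1, 1, 0             # the two initially correct agents and the code
        while c == 0 and (sa == 1 or sb == 1):
            learn = rng.random() < q    # codification uses the beliefs at the step start
            if rng.random() < pa:
                sa = 0                  # group-a agent adopts the incorrect code belief
            if rng.random() < pb:
                sb = 0                  # group-b agent adopts the incorrect code belief
            if learn:
                c = 1
        success += c
    return success / runs

For p_a, p_b > 0 every run is eventually absorbed, so the loop terminates with probability one.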
Finally, we note that our setup for heterogeneous agents is different from that in the original March model [1]. In the original paper, the heterogeneity was controlled by the number of agents having the higher learning probability, while the two learning probabilities themselves were fixed. We test such an original setup using our simplified model, both by estimating ⟨s(t)⟩ from the master equation in Eq. (17) and by performing agent-based simulations up to a sufficiently large time. For a system of N agents, we consider several values of N_a with the initial condition that P_0(c, n_a, n_b) = 1/[2(N_a + 1)(N_b + 1)] for each state. In addition to the expected belief of all agents in Eq. (16), we measure the expected belief of fast-learning agents, that of slow-learning agents, and that of the code. Results from the numerical estimation of the master equations and from the agent-based simulations are in good agreement with each other, as depicted in Fig. 2(e). These results show qualitatively the same behaviors as in the original March model, i.e., Fig. 3 in Ref. [1].
III.3 Homogeneous learning in an open system

We finally study the effects of turnover of agents and turbulence of the external reality on organizational learning. For this, we focus on the simplest yet nontrivial case with N = 1, indicating that there is only one agent in the system. This agent’s belief is denoted by s(t). The case with general N can be studied too within our framework.
We first consider the system with turnover of agents only, while x(t) = 1 for all t, namely, u > 0 and v = 0. Let us denote by P_t(c, s) the probability that at time step t the code’s belief is c and the agent’s belief is s. These probabilities satisfy the normalization condition
Σ_{c, s ∈ {0,1}} P_t(c, s) = 1.  (24)
They evolve according to the following master equation in discrete time:
P_{t+1}(c, s) = Σ_{c', s'} T(c, s | c', s') P_t(c', s'),  (25)
where the nonzero transition probabilities read as follows:
T(1, 1 | 1, 1) = 1 - u/2,  T(1, 0 | 1, 1) = u/2,
T(1, 1 | 1, 0) = (1 - u) p + u/2,  T(1, 0 | 1, 0) = (1 - u)(1 - p) + u/2,
T(1, 1 | 0, 1) = q [(1 - u)(1 - p) + u/2],  T(1, 0 | 0, 1) = q [(1 - u) p + u/2],
T(0, 1 | 0, 1) = (1 - q) [(1 - u)(1 - p) + u/2],  T(0, 0 | 0, 1) = (1 - q) [(1 - u) p + u/2],
T(0, 1 | 0, 0) = u/2,  T(0, 0 | 0, 0) = 1 - u/2,  (26)
and all other transition probabilities are zero. Note that due to u > 0, both (1, 1) and (0, 0) are no longer absorbing states.
For the steady state, we derive the analytical solution of ⟨s⟩_∞ as
⟨s⟩_∞ = [(1 - u) p + u/2] / [(1 - u) p + u].  (27)
This solution is independent of the initial condition, and it is not a function of q because the change of the code’s belief from 0 to 1 is irreversible; as long as u > 0, all initial probabilities end up in states with c = 1. We also find that ⟨s⟩_∞ → 1 as u → 0 and ⟨s⟩_∞ = 1/2 for u = 1, both of which are irrespective of p. That is, ⟨s⟩_∞ is a decreasing function of u for u > 0, whereas for u = 0 it can have a finite value less than one, e.g., that given in Eq. (14), as long as p > 0 and q < 1. Thus one can conclude that ⟨s⟩_∞ shows an “increasing” and then decreasing behavior in the range of 0 ≤ u ≤ 1. This argument is important for discussing the optimal turnover of agents that maximizes the effectiveness of organizational learning.
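The steady state behind Eq. (27) can be checked numerically by building the transition matrix of Eq. (26) and iterating the master equation (25); the parameter values in the example lines are arbitrary.

import numpy as np

def steady_state_turnover(p, q, u, steps=100_000):
    """Steady-state probabilities P_inf(c, s) for N = 1, fixed reality x = 1,
    turnover probability u > 0, and no turbulence, from Eqs. (25) and (26);
    states ordered as (c, s) = (0,0), (0,1), (1,0), (1,1)."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    T = np.zeros((4, 4))
    for j, (c, s) in enumerate(states):
        a1 = (1 - u) * (p * c + (1 - p) * s) + u / 2    # prob. agent ends with belief 1
        c1 = 1.0 if c == 1 else (q if s == 1 else 0.0)  # prob. code ends with belief 1
        for i, (c2, s2) in enumerate(states):
            T[i, j] = (c1 if c2 == 1 else 1 - c1) * (a1 if s2 == 1 else 1 - a1)
    P = np.full(4, 0.25)
    for _ in range(steps):
        P = T @ P
    return dict(zip(states, P))

# example: <s>_inf = P(0,1) + P(1,1), to be compared with Eq. (27)
P = steady_state_turnover(p=0.3, q=0.5, u=0.2)
print(P[(0, 1)] + P[(1, 1)])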
Next, we focus on the transient dynamics instead of the steady state. Starting from the initial condition that P_0(c, s) = 1/4 for each state (c, s), we obtain, e.g., at t = 2,
(28) |
It turns out that ⟨s(2)⟩ is a quadratic function of u, meaning that it can have either a maximum or a minimum value in the range of 0 ≤ u ≤ 1, or it may be a monotonically increasing or decreasing function of u, depending on the choice of p and q. Figure 3(a) summarizes such behaviors in the plane of (p, q), and Fig. 3(b) depicts ⟨s(2)⟩ as a function of u for several cases of p and q. For example, for sufficiently large q, ⟨s(2)⟩ is a monotonically decreasing function of u irrespective of p. It implies that if the code learns fast from the superior agent, the maximal organizational learning is achieved when there is no turnover. This result can be understood by considering the fact that turnover introduces randomness or new information from outside of the system. In contrast, if the code learns slowly from the superior agent but the agent learns fast from the code, the maximal organizational learning is achieved for the largest turnover. In such a case, without turnover, both the code and the agent are likely to be stuck in a suboptimal situation, and strong turnover may help the system to escape from it. Precisely, we find both the increasing and then decreasing behavior and the monotonically decreasing behavior of ⟨s(2)⟩ for suitable choices of p and q [Fig. 3(b)], which are consistent with the results of the original March model, e.g., Fig. 4 in Ref. [1].
We now consider the effect of turbulence on organizational learning in the presence of turnover of agents. For this, we define an extended system consisting of both the system and the reality, whose states can be denoted by (x, c, s). Let us denote by P_t(x, c, s) the probability that at time step t the reality is x, the code’s belief is c, and the agent’s belief is s. These probabilities satisfy the normalization condition
Σ_{x, c, s ∈ {0,1}} P_t(x, c, s) = 1.  (29)
They evolve according to the following master equation in discrete time:
P_{t+1}(x, c, s) = Σ_{x', c', s'} T(x, c, s | x', c', s') P_t(x', c', s').  (30)
Using the transition probabilities in Eq. (26) together with the turbulence rule in stage (iv), we get the transition probabilities for Eq. (30) as follows:
(31) |
where we have used
(32) |
As x(t) is no longer constant, the average individual knowledge is obtained as
⟨s(t)⟩ = Σ_{x, c ∈ {0,1}} P_t(x, c, x).  (33)
After some algebra, we derive an exact solution of ⟨s⟩_∞ for the steady state as follows:
(34) |
This analytical solution is depicted as a heatmap in Fig. 3(c) for a fixed pair of p and q. We find that for each value of the turbulence probability v there exists an optimal turnover probability that maximizes the effectiveness of organizational learning. Such an optimal turnover probability u* for a given v is obtained as
(35) |
which is an increasing function of v. It implies that the system is required to have a larger turnover to adapt to a more turbulent external reality. Yet the value of ⟨s⟩_∞ at the optimal turnover tends to decrease with v [Fig. 3(c)].
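A numerical counterpart of this analysis is sketched below: the master equation (30) is iterated for the extended state (x, c, s), and the turnover probability is scanned to locate the optimum for a given turbulence. The parameter values (p = q = 0.5, v = 0.02) are arbitrary illustrations, not those used for Fig. 3(c).

import numpy as np

def knowledge_with_turbulence(p, q, u, v, steps=20_000):
    """Steady-state probability that the agent's belief equals the reality for
    N = 1, obtained by iterating the master equation (30) for states (x, c, s)."""
    states = [(x, c, s) for x in (0, 1) for c in (0, 1) for s in (0, 1)]
    idx = {st: k for k, st in enumerate(states)}
    T = np.zeros((8, 8))
    for j, (x, c, s) in enumerate(states):
        a1 = (1 - u) * (p * c + (1 - p) * s) + u / 2   # agent ends with belief 1
        flip = q if (c != x and s == x) else 0.0       # code adopts the reality
        for x2 in (0, 1):
            px = 1 - v / 2 if x2 == x else v / 2       # turbulence redraws the reality
            for c2 in (0, 1):
                pc = flip if c2 != c else 1 - flip
                for s2 in (0, 1):
                    ps = a1 if s2 == 1 else 1 - a1
                    T[idx[(x2, c2, s2)], j] = px * pc * ps
    P = np.full(8, 1 / 8)
    for _ in range(steps):
        P = T @ P
    return sum(P[idx[(x, c, x)]] for x in (0, 1) for c in (0, 1))

# locate the optimal turnover for a given turbulence (illustrative parameter values)
v, grid = 0.02, np.linspace(0.0, 1.0, 101)
u_opt = max(grid, key=lambda u: knowledge_with_turbulence(0.5, 0.5, u, v))
print(u_opt)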
Finally, we look at the transient dynamics of ⟨s(t)⟩ for different values of u when p, q, and v are given. We numerically obtain ⟨s(t)⟩ by iterating the master equation in Eq. (30) using the initial condition that P_0(x, c, s) = 1/8 for each state. Numerical results are depicted as solid lines in Fig. 3(d). The agent-based simulations are also performed using the initial condition that each of x(0), c(0), and s(0) is randomly and independently drawn from {0, 1}. Simulation results are shown as dotted lines in Fig. 3(d), which are in good agreement with the numerical results. These results are also qualitatively similar to those in the original March model, e.g., Fig. 5 in Ref. [1].
IV Conclusion
In our work, the celebrated organizational learning model proposed by March [1] has been simplified, enabling us to explicitly write down the master equations for the dynamics of the model. We have derived exact solutions for the simplest yet nontrivial cases and numerically estimated quantities of interest using the master equations; both kinds of results are found to be in good agreement with agent-based simulation results. Our results help to rigorously understand not only the simplified model but also the original March model to a large extent.
Our theoretical framework for the simplified March model can be applied to the original March model as well as to variants of March’s model that incorporate other relevant factors, such as forgetting of beliefs [3, 11] and direct interaction and communication between agents in the organization [4, 5, 6]. For modeling the interaction structure between agents, various network models might be deployed [26, 25, 27, 28]. In conclusion, we expect to gain deeper insights into organizational learning using our analytical approach.
Acknowledgements.
H.-H.J. acknowledges financial support by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1007358).
References
- March [1991] J. G. March, Exploration and Exploitation in Organizational Learning, Organization Science 2, 71 (1991).
- Rodan [2005] S. Rodan, Exploration and exploitation revisited: Extending March’s model of mutual learning, Scandinavian Journal of Management 21, 407 (2005).
- Blaschke and Schoeneborn [2006] S. Blaschke and D. Schoeneborn, The forgotten function of forgetting: Revisiting exploration and exploitation in organizational learning, Soziale Systeme 12, 99 (2006).
- Miller et al. [2006] K. D. Miller, M. Zhao, and R. J. Calantone, Adding Interpersonal Learning and Tacit Knowledge to March’s Exploration-Exploitation Model, Academy of Management Journal 49, 709 (2006).
- Kane and Alavi [2007] G. C. Kane and M. Alavi, Information Technology and Organizational Learning: An Investigation of Exploration and Exploitation Processes, Organization Science 18, 796 (2007).
- Kim and Rhee [2009] T. Kim and M. Rhee, Exploration and exploitation: Internal variety and environmental dynamism, Strategic Organization 7, 11 (2009).
- Fang et al. [2010] C. Fang, J. Lee, and M. A. Schilling, Balancing Exploration and Exploitation Through Structural Design: The Isolation of Subgroups and Organizational Learning, Organization Science 21, 625 (2010).
- Sachdeva [2013] M. Sachdeva, Encounter with March’s Organizational Learning Model, Review of Integrative Business and Economics Research 2, 602 (2013).
- Schilling and Fang [2014] M. A. Schilling and C. Fang, When hubs forget, lie, and play favorites: Interpersonal network structure, information distortion, and organizational learning, Strategic Management Journal 35, 974 (2014).
- Chanda and Ray [2015] S. S. Chanda and S. Ray, Optimal exploration and exploitation: The managerial intentionality perspective, Computational and Mathematical Organization Theory 21, 247 (2015).
- Miller and Martignoni [2016] K. D. Miller and D. Martignoni, Organizational learning with forgetting: Reconsidering the exploration–exploitation tradeoff, Strategic Organization 14, 53 (2016).
- Chanda [2017] S. S. Chanda, Inferring final organizational outcomes from intermediate outcomes of exploration and exploitation: The complexity link, Computational and Mathematical Organization Theory 23, 61 (2017).
- Chanda et al. [2018] S. S. Chanda, S. Ray, and B. Mckelvey, The Continuum Conception of Exploration and Exploitation: An Update to March’s Theory, M@n@gement 21, 1050 (2018).
- Chanda and Miller [2019] S. S. Chanda and K. D. Miller, Replicating agent-based models: Revisiting March’s exploration–exploitation study, Strategic Organization 17, 425 (2019).
- Marín-Idárraga et al. [2022] D. A. Marín-Idárraga, J. M. Hurtado González, and C. Cabello Medina, Factors affecting the effect of exploitation and exploration on performance: A meta-analysis, BRQ Business Research Quarterly 25, 312 (2022).
- Castellano et al. [2009] C. Castellano, S. Fortunato, and V. Loreto, Statistical physics of social dynamics, Reviews of Modern Physics 81, 591 (2009).
- Acemoglu and Ozdaglar [2011] D. Acemoglu and A. Ozdaglar, Opinion dynamics and learning in social networks, Dynamic Games and Applications 1, 3 (2011).
- Sen and Chakrabarti [2014] P. Sen and B. K. Chakrabarti, Sociophysics: An Introduction (Oxford University Press, Oxford, 2014).
- Sîrbu et al. [2017] A. Sîrbu, V. Loreto, V. D. P. Servedio, and F. Tria, Opinion dynamics: Models, extensions and external effects, in Participatory Sensing, Opinions and Collective Awareness, edited by V. Loreto, M. Haklay, A. Hotho, V. D. Servedio, G. Stumme, J. Theunis, and F. Tria (Springer International Publishing, Cham, 2017) pp. 363–401.
- Proskurnikov and Tempo [2017] A. V. Proskurnikov and R. Tempo, A tutorial on modeling and analysis of dynamic social networks. Part I, Annual Reviews in Control 43, 65 (2017).
- Proskurnikov and Tempo [2018] A. V. Proskurnikov and R. Tempo, A tutorial on modeling and analysis of dynamic social networks. Part II, Annual Reviews in Control 45, 166 (2018).
- Baronchelli [2018] A. Baronchelli, The emergence of consensus: A primer, Royal Society Open Science 5, 172189 (2018).
- Anderson and Ye [2019] B. D. O. Anderson and M. Ye, Recent advances in the modelling and analysis of opinion dynamics on influence networks, International Journal of Automation and Computing 16, 129 (2019).
- Noorazar [2020] H. Noorazar, Recent advances in opinion propagation dynamics: A 2020 survey, The European Physical Journal Plus 135, 521 (2020).
- Barabási and Pósfai [2016] A.-L. Barabási and M. Pósfai, Network Science (Cambridge University Press, Cambridge, 2016).
- Borgatti et al. [2009] S. P. Borgatti, A. Mehra, D. J. Brass, and G. Labianca, Network analysis in the social sciences, Science 323, 892 (2009).
- Newman [2018] M. E. J. Newman, Networks, 2nd ed. (Oxford University Press, Oxford, 2018).
- Menczer et al. [2020] F. Menczer, S. Fortunato, and C. A. Davis, A First Course in Network Science (Cambridge University Press, Cambridge, 2020).