Agent mental models and Bayesian rules as a tool to create opinion dynamics models
Abstract
Traditional models of opinion dynamics provide a simple approach to understanding human behavior in basic social scenarios. However, when it comes to issues such as polarization and extremism, we require a more nuanced understanding of human biases and cognitive tendencies. In this paper, we propose an approach to modeling opinion dynamics by integrating mental models and assumptions of individual agents using Bayesian-inspired methods. By exploring the relationship between human rationality and Bayesian theory, we demonstrate the efficacy of these methods in describing how opinions evolve. Our analysis leverages the Continuous Opinions and Discrete Actions (CODA) model, applying Bayesian-inspired rules to account for key human behaviors such as confirmation bias, motivated reasoning, and our reluctance to change opinions. Through this, we obtain update rules that offer deeper insights into the dynamics of extreme opinions. Our work sheds light on the role of human biases in shaping opinion dynamics and highlights the potential of Bayesian-inspired modeling to provide more accurate predictions of real-world scenarios.
Keywords: Opinion dynamics, Bayesian methods, Cognition, CODA, Agent-based models
1 Introduction: The need for general methods
Opinion dynamics [1, 2, 3, 4, 5, 6, 7, 8, 9] is a fascinating area of research that seeks to understand how opinions spread through society. A plethora of models have been developed to describe this process, ranging from simple to complex, and covering topics such as the formation of consensus [10], the emergence of polarization [11, 12, 13, 14], the different ways we can define it [15], and the spread of extreme opinions [16, 17, 18, 19, 20, 21, 22, 23, 24]. Extremism can be defined as one end of the range of a continuous variable [7, 25], as inflexibles who do not change their minds [26], or using mixed models [8, 9, 27]. To explore the problem of extremism in the real world, not only opinions matter [28]; we must also consider actions as part of what defines an extremist [29, 30].
However, despite the wealth of knowledge already gathered, there are still many aspects of opinion dynamics that require greater attention. Community efforts are necessary to fill gaps in research and promote progress in the field [31]. Currently, most models are only comparable to similar implementations, with a lack of translation between different types. While attempts to propose general frameworks and universal formulas do exist [9, 32, 33], they are, so far, isolated efforts. To achieve greater understanding, we need to explore how different models relate to each other [34] and develop methods to incorporate new effects and assumptions.
To gain a deeper understanding of the spread of polarization and extremism, we must also consider actions, not just opinions. Incorporating decision-making and behavioral aspects is crucial in modeling opinions [35, 36], as it allows for a more accurate depiction of how individuals perceive and react to complex information [37]. One promising avenue of exploration is the use of Bayesian-inspired models [9].
Bayesian rules to model opinions have been introduced in the opinion dynamics community, as extensions of the Continuous Opinions and Discrete Actions (CODA) model [8, 9] and similar opinion models [9, 21, 27, 34, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 28], through the use of Bayesian belief networks [68], as well as, independently, in models associated with economic reasoning [69, 70, 71, 72, 73, 74]. Despite their popularity, two aspects of Bayesian-inspired models for opinion dynamics have not been properly debated so far: how to turn assumptions about how the agents reason into dynamical model equations, and the relationship between Bayesianism and rationality.
In this paper, I will explore both aspects of the problem. First, I will briefly explain why the use of Bayesian-inspired rules is both supported by experimental evidence [75, 76, 77, 78, 73] and not the same as assuming rationality [79, 80]. Then, to illustrate how Bayesian rules can be used in a general problem, I will explain how we can create mental models for the agents and how those models can include many kinds of bias and bounded-rationality effects [81, 82], turning those assumptions into update rules. More exactly, I will show how to introduce agents who distrust opinions opposed either to a certain choice (a direct bias) or to their current preference. Update rules for both kinds of agents will be calculated, and the effects of those biases on how extreme the positions of the agents become will be studied.
2 Bayesian models and rationality
Bayesian methods are one of our gold standards for rationality and inductive arguments [80]. If we start with simple rules about how induction over plausibilities must be performed, we can show that plausibilities should be updated using Bayes' theorem [83, 84]. The same theorem can be obtained from other axioms, such as the maximization of entropy [85] or the much weaker basis of "Dutch books". However, using those ideas to describe how people reason has been considered problematic [86]. On the other hand, Bayesian ideas can be used in a hard way, strictly following its rules, or in a soft version, where its basic ideas are used to represent aspects such as the updating of subjective opinions [87]. That poses the question of whether Bayesian rules can describe our reasoning well. Of course, we should also ask which requirements we impose on our models of rationality. Even the definition of bounded rationality can be challenged, as it assumes there is someone able to judge whether any behavior is entirely rational [88]. Indeed, applying Bayesian methods perfectly is impossible, as they require unlimited abilities [80]. We can only approximate them by considering a limited set of possibilities, and that is compatible with how our brains work.
There is good evidence that we reason in ways that are similar to Bayesian methods [75, 76, 77, 78, 73]. But, if we want to use them in mathematical and computational models, we need to go further than mere similarity. Indeed, it is clear that humans are neither perfectly rational nor good at statistics. We sometimes fail at easy problems [89, 90], and we tend to be too confident about our mental skills [91]. At first, experiments about our cognitive abilities seemed to point at a remarkable amount of incompetence.
But that is not the whole story. When we look closer, some of our mistakes are not as serious as they look. While we are not good abstract logicians, when the same problems are presented associated with normal day-to-day circumstances, we answer them correctly [92]. And there is evidence that many of our mistakes can be described as the use of reasonable heuristics [93], short-cuts that allow us to arrive at answers fast and with less effort [94, 95]. As simplified rules, heuristics fail under some circumstances. If we go looking for those cases, we will undoubtedly find them. But they are not a sign of complete incompetence.
That does not explain our overconfidence problems, of course. But we have also observed that our reasoning skills might not have evolved to find the best answers, even if we can use them for that purpose. Instead, humans show a tendency to defend their identity-defining beliefs [96, 97]. More than that, our ancestors had solid reasons to be good at fitting inside their groups and, if possible, ascend socially inside those. Our reasoning and argumentative skills were more valuable from an evolutionary perspective if they worked towards agreeing or convincing our peers. Group belonging mattered more than being right about things that would not affect direct survival [98, 99]. Being sure and defending ideas from our social group would have been more important than looking for better but unpopular answers.
And there is one more issue. In many laboratory problems where we observed humans make mistakes, scientists used questions that would never appear in real life [79]. Take the case of the observation of weighting functions, the finding that we seem to alter the probability values we hear [100]. Using changed values might seem to serve no purpose at first. However, the scientists who performed those experiments assumed the values they presented were known with certainty. But there is no such certainty in real life. If someone tells you there is a 30% chance of rain tomorrow, even if based on very well-calibrated models, you know that is an estimate. As with any estimate, at best, the actual value is close to 30%. A Bayesian assessment of the problem would combine our previous estimate of rain with the information from the forecast to obtain a final opinion. If we do that with the many experiments that showed we use probability values wrong, we see that our behavior is compatible with Bayesian rules. The observed changes match reasonable assumptions for everyday situations where we hear uncertain estimates [76]. Doing that would be wrong in artificial cases, such as those in the laboratory experiments, where there is no (or very little) uncertainty about the probability values. Even our tendency to change our opinions far less than we should, called conservatism (no relationship to politics is implied in the technical term) [101], can easily be explained. We just need to include in a Bayesian model a tendency to be skeptical about the data we hear [76]. Our brains might just have heuristics that mimic Bayesian estimates for an uncertain and unreliable world.
That is, we are not perfect, but our bounded abilities are not those of incompetents. We make reasonable approximations. We are motivated reasoners, more interested in defending ideas than looking for better answers. Given the right preferences, that can even be described as rational, despite ethical considerations. Even when it seemed we were making mistakes, we might have been behaving closer to Bayesian rules than it was initially assumed.
3 Update rules from the agent mental models
We are not perfectly rational, but we can still be described by Bayesian rules. Therefore, it makes sense to try Bayesian methods as a way to represent human opinions. The next question we must answer is how we can include our biases and cognitive characteristics in our models. To do that, we must consider how the agents think, what they show to others, and what they expect to see from their neighbors. That is, we need to describe their mental models. But, first, it is worth reviewing how Bayesian methods work.
3.1 A very short introduction to Bayesian methods
While Bayesian statistics, done correctly, can become complicated fast, it rests on an elementary, almost trivial basis, Bayes' theorem. It works like this. We have an issue we want to learn about, and we represent it by a random variable $A$, where each value $a$ represents one possibility. Here, $a$ can be a quantity, but it can also be nothing more than a label. We start with a probability distribution, our initial guess, on how likely each possible $a$ is, represented by a probability distribution $P(A=a)$, called the prior opinion. Once we observe data $X$, we must change our opinions on $A$. To do that, we need to know, for each possible value $a$, how likely it would have been that we would observe $X$. That is, we need the likelihood, $P(X \mid A=a)$. From that, calculating the posterior estimate is done by a simple multiplication, $P(A=a \mid X) \propto P(A=a)\,P(X \mid A=a)$. The proportionality constant is calculated by imposing that the final distribution must add (or integrate) to one. Everything in Bayesian methods is a consequence of that update rule and of considerations on how to use it. The basic idea, already using an opinion dynamics problem, is represented in Figure 1.
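As a minimal worked example (the numbers are ours, chosen only for illustration), suppose an agent is initially undecided between two options, $P(A) = P(B) = 0.5$, and hears a neighbor endorse $A$, an observation it considers 70% likely if $A$ is better and 30% likely if $B$ is. Then

$$P(A \mid O_A) = \frac{P(A)\,P(O_A \mid A)}{P(A)\,P(O_A \mid A) + P(B)\,P(O_A \mid B)} = \frac{0.5 \times 0.7}{0.5 \times 0.7 + 0.5 \times 0.3} = 0.7 .$$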

To illustrate how it is done in practice, let us look at the derivation of the CODA model rules. In CODA, the agents try to decide between two possible choices, $A$ or $B$ (sometimes represented as the values of a spin, $+1$ or $-1$). Each agent $i$ has, at time $t$, a probabilistic opinion $p_i(t)$ that $A$ is better than $B$ (and, consequently, $1 - p_i(t)$ that $B$ is better). But, instead of expressing their probabilistic opinion, they only show their neighbors the option they consider more likely to be better. They also assume their neighbors have a larger than 50% chance, $\alpha$, of picking the best option. In principle, there could be asymmetric chances, $\alpha$ to choose $A$ when $A$ is better and $\beta$ to choose $B$ when $B$ is better. As a first example, we will assume symmetry here, $\alpha = \beta$. From Bayes' rule, when agent $i$ observes a neighbor who prefers $A$, we get $p_i(t+1) \propto p_i(t)\,\alpha$ and, similarly, $1 - p_i(t+1) \propto (1 - p_i(t))(1-\alpha)$. As the probabilities must add to one, we divide by their sum and get the update rule

$$p_i(t+1) = \frac{p_i(t)\,\alpha}{p_i(t)\,\alpha + (1 - p_i(t))(1-\alpha)}. \qquad (1)$$
At this step, we have an update rule and we could use it as it is. In this case, however, it is trivial to make a change of variables that will provide us a much more computationally-light rule. The update rule becomes much simpler if we make the transformation
$$\nu_i(t) = \ln\left(\frac{p_i(t)}{1 - p_i(t)}\right). \qquad (2)$$
The denominators cancel and we get
$$\nu_i(t+1) = \nu_i(t) \pm C, \qquad (3)$$
where $C = \ln\left(\frac{\alpha}{1-\alpha}\right)$ and the sign of the added term depends on whether the neighbor prefers $A$ ($+$) or $B$ ($-$). We can get an even simpler model by renormalizing the step size and making $C = 1$.
And that is it. We start from the initial opinion, use Bayes' theorem, and we get an update rule. In this case, the final rule is to add one to $\nu_i$ when a neighbor prefers $A$ and to subtract one when it prefers $B$, with the expressed choice flipping when $\nu_i$ crosses zero.
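A minimal sketch of this rule in R (the environment used for the simulations in Section 4) follows; the function names and the tiny demonstration are ours and are not taken from the published code:

```r
# Log-odds (nu) representation of the CODA opinion, Equation (2)
to_nu <- function(p) log(p / (1 - p))
to_p  <- function(nu) 1 / (1 + exp(-nu))

# One CODA interaction, Equation (3) with the renormalized step C = 1:
# neighbor_choice is +1 if the observed neighbor prefers A, -1 if it prefers B
coda_update <- function(nu, neighbor_choice) {
  nu + neighbor_choice
}

# Tiny illustration: an agent starting at p = 0.6 observes two neighbors who prefer B
nu <- to_nu(0.6)
nu <- coda_update(nu, -1)
nu <- coda_update(nu, -1)
to_p(nu)   # the probabilistic opinion after the two observations
sign(nu)   # the choice the agent now expresses (+1 = A, -1 = B)
```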
3.2 Agent communication rules and their mental assumptions
There were two major assumptions in the CODA model. One was how agents communicate. While they have a probabilistic estimate of which option is better, everyone else observes only their best estimate. The second assumption is the mental model of the agents. They think their neighbors are more likely than not to pick the best choice. And all those neighbors are assumed to have the same chance, $\alpha$, to get it right. Of course, we could introduce some agents who think their neighbors are more likely to be wrong, that is, $\alpha < 0.5$. If we do that, we have just included contrarians [40] in our model, that is, agents who tend to disagree [102].
Making the model assumptions explicit makes it easier to investigate what happens if agents behave or communicate differently. For example, we could have a situation where agents still look for the best choice between $A$ and $B$, but they communicate their probability estimates $p_i$ that $A$ is the better choice. In that case, while we can keep the probability $p_i$ as the measure of the opinion of the agents, those agents no longer state a binary choice but a specific continuous probability value.
3.2.1 Changing what is communicated
Mental models become crucial, including which question the agents are trying to answer. Agents may still want to determine which option is better, $A$ or $B$, sharing information on their uncertainty. Or they might see the probability values as an ideal mixture: they might accept, for example, that the perfect position is 60% of $A$ and 40% of $B$. First, let us consider the case where they just want to pick the best choice.
In that case, $p_i$ is still a simple value associated with the probability that $A$ is better, while, trivially, $1 - p_i$ gives us the probability that $B$ is better. But we must evaluate the chance that another agent will hold any continuous value $p_j$ if $A$ is better (or if $B$ is). That is, we need a probability distribution over probabilities to represent the agent's mental model. In mathematical terms, we need a model that says, assuming $A$ is better, how likely it is that the neighbor would hold an opinion $p_j$, that is, a likelihood $f(p_j \mid A)$. Of course, we also need $f(p_j \mid B)$, but in many situations of interest, that can be obtained by symmetry assumptions. This model was implemented originally by assuming $f(p_j \mid A)$ was a Beta density, that is,

$$f(p_j \mid A) = \frac{p_j^{\,a-1}\,(1-p_j)^{\,b-1}}{B(a,b)},$$

where the normalization constant $B(a,b)$ is obtained from Gamma functions by $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$. Here, $a$ and $b$ are the traditional parameters of the Beta distribution. Interestingly, the update rule can once more be written in terms of the log-odds variable $\nu_i$, and that leads once more to an additive model. However, the term to be added depends on the probability $p_j$ communicated by the neighbor, and as the agents become more certain, the size of the additive term explodes. Consequently, extreme opinions become much stronger than the already extreme opinions in the original CODA model [53].
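To see why that term explodes, assume for illustration (our choice here) the symmetry $f(p_j \mid B) = f(1 - p_j \mid A)$, so that the likelihood under $B$ is the Beta density with the parameters exchanged. The additive term in the log-odds then becomes

$$\ln\frac{f(p_j \mid A)}{f(p_j \mid B)} = \ln\frac{p_j^{\,a-1}(1-p_j)^{\,b-1}}{p_j^{\,b-1}(1-p_j)^{\,a-1}} = (a - b)\,\ln\!\left(\frac{p_j}{1-p_j}\right),$$

which is proportional to the neighbor's own log-odds and therefore diverges as $p_j$ approaches 0 or 1.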
3.2.2 Changing the mental variables
While in that example we did need a new likelihood (the Beta function that tells us how likely agents think others are to hold each opinion), that need was caused by a change in the communication. But it is not only what is communicated that can be changed. The inner assumptions agents make can also be changed, including the questions they want to answer. That is, we can have very different assumptions in their mental models.
Assume that, instead of having “wisher” agents looking for the best option between $A$ and $B$, each “mixer” agent has an estimate of the best mixture of $A$ and $B$. In this case, the opinion is the percentage of $A$ in the best blend of the two options and, as such, each value between 0 and 1 must have a probability. We need probability densities as prior and posterior opinion distributions. The easiest way to implement such a model is to look for conjugate distributions [103]. Conjugate distributions correspond to those cases where, for a certain likelihood, the prior and the posterior are represented by the same family of functions. In that case, update rules can simply update the parameters of that distribution instead of complete distributions. However, this is not the general case for an arbitrary application of the Bayes rule. As we build more detailed models, finding conjugate distributions might not be possible for a given mental model.
Luckily, for the more straightforward cases we are interested in, conjugate options do exist. Before proceeding to a model, we need to decide how the agents communicate. Here, once more, we can have discrete communication, where each agent just tells the others which option, $A$ or $B$, should appear in a larger amount, or the communication could include the average estimate for the proportion of $A$.
In the first case, with discrete communication, we can find a natural conjugate family by using Beta distributions for the opinions and a Binomial distribution for the likelihood, that is, for the chance that a neighbor will choose $A$ or $B$ depending on its average estimate. Of course, other choices of distributions would be possible, and they would correspond to a similar thought structure but with different dynamics. Under the Binomial-Beta option, interestingly, the dynamics of the preferred choice, given the more straightforward choice of proper functions for the prior and the likelihood, mimics the original CODA dynamics. However, while the evolution of the preferred option in the mixture is the same, the probability (or proportion) values never reach the same extreme values [53].
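A short sketch of that conjugate structure helps make the point; the notation is ours, with a Beta$(a,b)$ prior over the proportion of $A$ and the neighbor's stated choice treated as a single Binomial (Bernoulli) observation:

$$\mathrm{Beta}(a, b) \;\xrightarrow{\;\text{neighbor states } A\;}\; \mathrm{Beta}(a+1,\, b), \qquad \mathrm{Beta}(a, b) \;\xrightarrow{\;\text{neighbor states } B\;}\; \mathrm{Beta}(a,\, b+1).$$

Under this sketch, the stated preference depends only on whether the posterior mean $a/(a+b)$ exceeds $1/2$, so the difference $a - b$ performs the same $\pm 1$ walk as the renormalized CODA variable, while the mean itself approaches 0 or 1 only asymptotically, which is why the proportions never become as extreme as the CODA probabilities.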
3.2.3 Other mental models already explored
Other variations are possible and have been explored. An initial approach to trust was implemented in a fully connected setting by adding one assumption to the agent mental model [47]. Instead of assuming that every other agent had a probability $\alpha$ of getting the best answer, each agent assumed there were two types of agents. Agent $i$ assumed there was a probability $\lambda_{ij}$ that agent $j$ was a reliable source who would pick the best option with chance $\alpha$. But other agents could also be untrustworthy (or useless) and pick the best option with a probability $\theta$ such that $\theta < \alpha$, possibly even 0.5 or lower. That is, if $A$ was the best choice, instead of a chance $\alpha$ that neighbor $j$ would prefer $A$, there was a chance given by $\lambda_{ij}\alpha + (1-\lambda_{ij})\theta$. Applying Bayes' theorem with this new likelihood led to update rules for both $p_i$ and $\lambda_{ij}$. Each agent updated its opinion on whether $A$ or $B$ would be better, and it also changed its estimate of the trustworthiness of the observed agent. The update rule could not be simplified by a transformation of variables because no exact way to uncouple the evolution of the opinion and how much agents trusted their neighbors was found.
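In that notation (the symbols here are ours; [47] states the rules in its own terms), the joint update after observing neighbor $j$ state $A$ has the sketched form

$$p_i \leftarrow \frac{p_i\,[\lambda_{ij}\alpha + (1-\lambda_{ij})\theta]}{Z}, \qquad \lambda_{ij} \leftarrow \frac{\lambda_{ij}\,[p_i\,\alpha + (1-p_i)(1-\alpha)]}{Z},$$

with $Z = p_i[\lambda_{ij}\alpha + (1-\lambda_{ij})\theta] + (1-p_i)[\lambda_{ij}(1-\alpha) + (1-\lambda_{ij})(1-\theta)]$. Both rules share the normalization $Z$, which illustrates why no CODA-like uncoupling transformation was found.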
A similar idea was used for a Bayesian version with continuous communication and the “mixer” type of agents. That model [38] led to an evolution of opinions qualitatively equivalent to what we observe in Bounded Confidence models [7, 25]. That continuous model was later extended to study the problem of several independent issues, where agents adjusted their trust based not only on the debated subject but also on their neighbor's positions on other matters [66]. Interestingly, that did cause opinions to become more clustered and aligned, similar to the irrational consistency we observe in humans [104].
Even the agent's influence on its neighbors can be incorporated into their mental models. That was introduced in a simple version by assuming that the chance a neighbor would prefer $A$, in the case $A$ was indeed better, differed depending on whether the observing agent had chosen $A$ or not [34]. That actually weakened the reinforcement effects of agreement, as the other agent could think $A$ was better not because it was, but because the observer also thought so. In the limit of strong influence, the dynamics of the voter model [105, 106] – or of other types of discrete models, such as majority [107, 108] or Sznajd [6] rules, depending on the interaction rules – was recovered. That shows that Bayesian-inspired models are much more general than the traditional discrete versions.
3.3 Introducing other behavioral questions
Bayesian rules can help us explain how humans reason [76, 79]. And we just saw a few examples of introducing extra details in the agent mental models so that new assumptions can be included. In this subsection, I will discuss how we can move ahead and model some biases we observe in human behavior by applying those concepts to create an original model for a specific human tendency.
Let us start by considering an easy one, confirmation bias [109], as it does not even need new mental models. Confirmation bias is simply our tendency to look for information from sources who agree with us. As such, it is better modeled by introducing rules that rewire the influence network so that agents are more likely to be surrounded by those who agree with them. The co-evolution of CODA opinions with a network that evolved based on the agreement or disagreement of the agents, their physical location, and thermal noise has been studied. Depending on the noise and on the strength of the agreement term in the network rewiring, the tendency to polarization and confirmation bias was apparent [110].
Motivated reasoning [96], on the other hand, is not only about who we learn from but how we interpret information depending on whether we agree with it or not. That can be implemented in more than one way. A simple version is an approach where trust was introduced in the CODA model [47]. In that model, depending on the initial conditions, as agents become more confident about their estimates, they would eventually distrust those who disagreed with them, even when they met for the first time.
3.3.1 Direct bias
Of course, there are other possibilities, including more heavy-handed approaches. For example, agents might think that being untrustworthy is associated with one of the two options. Instead of a trust matrix signaling how much each agent $i$ trusts each agent $j$, we can introduce trust based on the possible choices. For the two options, $A$ and $B$, we can assume that each agent has a prior preference. Each agent believes that untrustworthy people will defend only the side the agent is biased against. That can be represented by a small addition to the CODA model. Assume agent $i$ prefers $A$ and thinks that people would only go wrong to defend $B$. One way to describe that is to assume that there is a proportion $\gamma$ of reasonable agents who behave as in CODA: they pick the better alternative with chance $\alpha$. The remaining $1-\gamma$, however, choose $B$ more often, regardless of whether it is true or not, with probability $\omega$. That is, for agents biased towards $A$, the chance a neighbor would choose $A$, represented by $P(O_A \mid A)$ or $P(O_A \mid B)$ depending on whether $A$ or $B$ is better, is given by the equations

$$P(O_A \mid A) = \gamma\,\alpha + (1-\gamma)(1-\omega)$$

and

$$P(O_A \mid B) = \gamma\,(1-\alpha) + (1-\gamma)(1-\omega).$$

Also, for the chance the neighbor would choose $B$,

$$P(O_B \mid A) = \gamma\,(1-\alpha) + (1-\gamma)\,\omega$$

and

$$P(O_B \mid B) = \gamma\,\alpha + (1-\gamma)\,\omega.$$
In principle, we could introduce an update rule for both the probability that $A$ is better and for $\gamma$. However, for this exercise, let us assume there is an initial fixed value for $\gamma$. For example, to illustrate how this bias can change the CODA model, the agents can suppose that honest people are the majority, that honest people get the better answer with chance $\alpha$, and that biased people state $B$ with probability $\omega$ regardless of which option is better. So, if agent $i$ observes someone who prefers $A$, it will multiply its odds in favor of $A$ by $P(O_A \mid A)/P(O_A \mid B)$; that is, for the CODA transformed variable $\nu_i$, the term $\ln\!\big(P(O_A \mid A)/P(O_A \mid B)\big)$ is added. On the other hand, if $O_B$ is observed, the update rule subtracts (again for an agent biased towards $A$) the term $\ln\!\big(P(O_B \mid B)/P(O_B \mid A)\big)$. Since the dishonest agents state $B$ more often than not ($\omega > 1/2$), the added term is larger than the subtracted one: steps in favor of $A$ are larger than steps towards $B$ for such an agent. While that agent can still be convinced by a majority, it will move towards its preference if there is a tie among its neighbors. Depending on the exact values of the parameters, the ratio between the step sizes can become large.
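A small R sketch (parameter values below are arbitrary placeholders, not the ones used in the original illustration) makes the asymmetry concrete:

```r
# Step sizes for an agent biased towards A, from the likelihoods above.
# gamma: assumed proportion of honest agents
# alpha: chance an honest agent states the better option
# omega: chance a dishonest agent states B, regardless of the truth
bias_steps <- function(gamma, alpha, omega) {
  pOA_A <- gamma * alpha       + (1 - gamma) * (1 - omega)
  pOA_B <- gamma * (1 - alpha) + (1 - gamma) * (1 - omega)
  pOB_A <- gamma * (1 - alpha) + (1 - gamma) * omega
  pOB_B <- gamma * alpha       + (1 - gamma) * omega
  agreement    <- log(pOA_A / pOA_B)  # added to nu when the neighbor states A
  disagreement <- log(pOB_B / pOB_A)  # subtracted from nu when the neighbor states B
  c(agreement = agreement, disagreement = disagreement,
    ratio = agreement / disagreement)
}

bias_steps(gamma = 0.8, alpha = 0.7, omega = 0.9)  # illustrative values only
```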
We will assume a renormalization of the additive term to implement this bias, as is normally done in CODA applications. Assuming agent $i$ is biased in favor of $A$ (the equations for the case where the agent is biased in favor of $B$ are symmetric and will not be printed here), we have, for the variable $\nu_i$, as defined in Equation 2, when the neighbor also prefers $A$,

$$\nu_i(t+1) = \nu_i(t) + \ln\left(\frac{\gamma\,\alpha + (1-\gamma)(1-\omega)}{\gamma\,(1-\alpha) + (1-\gamma)(1-\omega)}\right). \qquad (4)$$

When the neighbor prefers $B$, we have

$$\nu_i(t+1) = \nu_i(t) - \ln\left(\frac{\gamma\,\alpha + (1-\gamma)\,\omega}{\gamma\,(1-\alpha) + (1-\gamma)\,\omega}\right). \qquad (5)$$

These equations trivially revert to the standard CODA model in Equation 3 when all agents are considered honest, that is, when $\gamma = 1$. For ease of further manipulations, we can define the sizes of the steps in Equations 4 and 5 as

$$a = \ln\left(\frac{\gamma\,\alpha + (1-\gamma)(1-\omega)}{\gamma\,(1-\alpha) + (1-\gamma)(1-\omega)}\right) \qquad \text{and} \qquad d = \ln\left(\frac{\gamma\,\alpha + (1-\gamma)\,\omega}{\gamma\,(1-\alpha) + (1-\gamma)\,\omega}\right)$$

for the agreement and the disagreement steps, respectively.
The first question we can ask about those steps is how they relate to each other and to the original step size $C = \ln\left(\alpha/(1-\alpha)\right)$. Simple algebraic manipulations show that $a < C$ and that $d < C$, as long as $\gamma < 1$, in both cases. That is, as soon as some distrust is introduced, both step sizes become smaller. That was to be expected: introducing a chance that the neighbor might not know what it is talking about should, indeed, decrease the information content of its opinion.
[Figure 2: Agreement and disagreement step sizes (top) and their ratio (bottom) as functions of the estimated proportion of honest agents.]
Figure 2 shows how the step sizes change as a function of the estimated proportion $\gamma$ of honest agents among those each agent is biased against. In the top panel, we can see both the step size when the neighbor agrees with the opinion favored by the bias, $a$, and the step size when the neighbor disagrees with it, $d$. Notice that when $\gamma$ tends to 1.0, both steps become equal. That corresponds to the scenario where everyone is honest; we recover the original CODA model with identical steps. The apparent equality when $\gamma$ tends to zero is not real and is only a product of visualizing very small steps. Indeed, the bottom panel shows that the ratio between the steps, $a/d$, increases continuously as $\gamma$ gets close to zero.
The changes in the two step sizes are obviously not of the same magnitude. If we want to normalize the steps, as in CODA, we can choose either $a$ or $d$ as the step we make equal to 1.0. For the implementations, we will initially set the smaller one, $d$, to 1.0. And, as a simplification, instead of carrying the dependencies on $\gamma$, $\alpha$, and $\omega$, we will simply assume there is a ratio $r$ such that $a = r\,d$. That is, we will assume that disagreement with the bias corresponds to a step of size 1.0 and agreement with the bias to a step of size $r$, where $r = 1$ corresponds to the situation where there is no bias. Finally, we have elementary update rules we can implement, given by
$$\nu_i(t+1) = \begin{cases} \nu_i(t) + r, & \text{if the neighbor prefers } A,\\ \nu_i(t) - 1, & \text{if the neighbor prefers } B, \end{cases} \qquad (6)$$

for an agent biased in favor of $A$, with the symmetric rule for an agent biased in favor of $B$.
3.3.2 Conservatism bias
As a related example, let us introduce the effect called conservatism [101], where people change their opinions less than they should. That can be quickly introduced using a mental model where the agent thinks there is a chance the data is reliable and a chance that the information is only non-informative noise. When that happens, it is only natural that the update’s size will be much smaller. The larger the chance associated with noisy data, the smaller the update’s size.
We have the same mathematical problem we had with the direct bias, except that now, instead of believing that defenders of one specific side might be lying, the agents think that defenders of the side that disagrees with them might be dishonest. Suppose an agent changes its preference from $A$ to $B$. In that case, it will change its assessment of where there might be dishonesty from $B$ to $A$. That means we have rules very similar to those in Equation 6. However, the bias now always coincides with the current opinion, so the rules depend directly on the sign of $\nu_i(t)$. That is, we have
$$\nu_i(t+1) = \begin{cases} \nu_i(t) + r\,\operatorname{sign}(\nu_i(t)), & \text{if the neighbor agrees with agent } i\text{'s current choice},\\ \nu_i(t) - \operatorname{sign}(\nu_i(t)), & \text{if the neighbor disagrees with it.} \end{cases} \qquad (7)$$
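A sketch of both update rules in R (function and argument names are ours) could look like this:

```r
# Direct bias, Equation (6): the bias of each agent is fixed.
# nu: current log-odds opinion; neighbor: +1 (prefers A) or -1 (prefers B)
# bias: +1 if the agent is biased towards A, -1 if towards B; r >= 1
direct_bias_update <- function(nu, neighbor, bias, r) {
  step <- ifelse(neighbor == bias, r, 1)   # larger step when agreeing with the bias
  nu + neighbor * step
}

# Conservatism bias, Equation (7): the bias always follows the current choice.
# (An undecided agent, nu == 0, simply follows the neighbor with a unit step.)
conservatism_update <- function(nu, neighbor, r) {
  choice <- sign(nu)
  step <- ifelse(neighbor == choice, r, 1) # larger step when the neighbor agrees
  nu + neighbor * step
}
```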
4 Results
To observe how the system might evolve under each rule, we implemented the models using the R software environment [111]. The code is available at https://www.comses.net/codebase-release/d4ab2a25-4233-4e6e-a8c5-a3b919cfd6e2/
All cases shown here correspond to an initial neighborhood structure defined as a square, bi-dimensional lattice of agents, with no periodic boundary conditions and interactions up to second-level neighbors. As commented below, in some situations, we let the network evolve into a polarized configuration before allowing opinions to change over this polarized and quenched network. Once that initial network, lattice or rearranged, is established, agents interact by observing the choice of one neighbor and updating their own opinion based on that observation, according to the update rules of each case. There was an average of 50 interactions per agent. For all simulations, initial opinions were drawn from a continuous uniform distribution. All the results for the distribution of opinions correspond to averages over 20 realizations of each case.
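As an illustration of the neighborhood structure (assuming, for concreteness, that second-level neighbors means all sites within Chebyshev distance 2, which is our reading, and an arbitrary lattice side), a sketch in R:

```r
# Indices of the neighbors of site (x, y) on an L x L lattice with no periodic
# boundaries, taking all sites within Chebyshev distance 2 as the neighborhood.
lattice_neighbors <- function(x, y, L) {
  xs <- max(1, x - 2):min(L, x + 2)
  ys <- max(1, y - 2):min(L, y + 2)
  nb <- expand.grid(x = xs, y = ys)
  nb[!(nb$x == x & nb$y == y), ]   # exclude the site itself
}

head(lattice_neighbors(1, 1, L = 64))  # corner site: fewer neighbors, as expected
```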
4.0.1 Simulating a direct bias
Results for the distribution of opinions in the direct bias case, with no initial rewiring, for three distinct values of the ratio $r$ can be seen in Figure 3. Each curve corresponds to a different ratio between the agreement and the disagreement step. The top graphic shows how extreme the opinions are, measured in the number of disagreement steps (following the algorithm implementation). The bottom picture shows the renormalized distribution if we measure opinions in terms of agreement steps.
[Figure 3: Distribution of opinions for the direct bias case with no initial rewiring, measured in disagreement steps (top) and in agreement steps (bottom), for three values of the ratio $r$.]
It is easy to observe in the upper figure, with the distribution measured in disagreement steps, that as $r$ increases, the opinions spread further away from the central position. That suggests that the opinions become more extreme. While the peaks associated with the more extreme opinions become softer (and seem to disappear for the largest value of $r$), that happens because the existing extremists get distributed over an extensive range of even stronger opinions.
A different picture emerges if we look at a renormalized step size, using the agreement step, of size $r$, as the unit instead of the implemented disagreement step. The distribution of opinions for such a case can be seen in the bottom graphic in Figure 3. What we observe, in this case, is that when we measure the strength of opinions using the agreement step as the measuring unit, the distributions become much more similar. As $r$ increases, we observe an increase in the number of agents around the weaker opinions and smaller peaks of extremists.
The apparently contradictory conclusions we can arrive at by looking at only one graphic are another example of the problem with adequately defining what an extremist is [28]. In this model, we actually have two different ways to define extremism. One of those definitions would arise if we were to transform the number of steps back into probability values by inverting the renormalizations and transformations of variables. It is worth noticing that, as can be seen in the upper graphic in Figure 2, both step sizes, for agreement and disagreement, become smaller when bias is introduced, compared to the unbiased case $\gamma = 1$. That would lead to the strange conclusion that when agents think others might be biased, their final opinion becomes less extreme. That makes sense if we only care about the agents' confidence in absolute terms. After all, other agents become less reliable. Their information should convince less.
But there is another definition of extremism that is also natural and reasonable. That definition comes from asking how easy (or hard) it would be to change the choice of a specific agent. This does correspond to the number of steps away from the central opinion. More precisely, since it is necessary to move to the opposite view to change one’s choice, using the disagreement step as the unit is the best choice.
That would be the case if we were studying results from the conservatism bias, as we will do in Subsection 4.1.1. Here, however, the bias was fixed, corresponding to the initial choice of the agent. And that means that there is a sizable proportion of agents whose biases do not conform to their final opinions. While the proportion of agreement between bias and final opinion increases with $r$ (observed averages were 57.7%, 66.4%, and 74.5% for the three increasing values of $r$), we never obtained a perfect match between bias and opinion. Even for the largest $r$, about 1 in 4 agents would move towards the opposite choice using agreement steps.
That might seem unrealistic. While there may be a bias toward initial opinions, people usually defend their current position. What size of step humans would use when returning to a previously held belief is an interesting question, but that is beyond the scope of this paper. On the other hand, confirmation bias is described not as an agreement with initial views but as the tendency to look for sources of information that agree with our current beliefs.
That bias was already studied before, with a network that evolved simultaneously with the opinion updates [63]. As agents stopped interacting with those they disagreed with, we had a case of traditional confirmation bias. That also means that the results we have analyzed so far correspond to a direct bias independent of current opinions. Realistic or not, it was introduced here as an exercise and as an example that we can model different modes of thinking using Bayesian tools, regardless of whether they correspond to reality.
4.1 Rewiring the network
Of course, we want to explore more realistic cases. We will do that in two steps, first, by getting closer to a confirmation bias by introducing rewired networks, and second, by implementing the model of conservatism bias, as defined in Equation 7.
Here, we will follow the rewiring algorithm previously used to study the simultaneous evolution of networks and opinions to generate an initial, quenched network before the opinion updates start. At each step, the algorithm tries to destroy an existing link between two agents (1 and 2) and create a new one between two other agents (3 and 4). The decision of whether to accept that change depends on the Euclidean distance between agents 1 and 2, $d_{12}$, and between agents 3 and 4, $d_{34}$, measured in a coordinate system over the square lattice where first-neighbor distances correspond to 1. That way, there is a tendency to preserve the initial square lattice. We also use a term that makes it more likely to accept the change when the old link was between disagreeing agents and the new one is between agreeing agents. That is, each rewiring is accepted with probability
(8)
The acceptance probability depends on a parameter controlling the relative importance of physical proximity compared to opinion agreement. Notice that, if we want a reasonable chance that distant agents will connect, that parameter should be comparable to the initial side of the square lattice.
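As a sketch of how one such rewiring attempt can be organized in R: the logistic acceptance function used below, the name `lambda` for the proximity-versus-agreement parameter, and the numerical values are illustrative assumptions of ours, not the expression in Equation 8.

```r
# One rewiring attempt: remove the link (1,2), create the link (3,4).
# d12, d34: Euclidean distances in lattice units; s1..s4: choices (+1/-1)
# lambda: weight of physical proximity relative to opinion agreement
# NOTE: the acceptance function below is an illustrative placeholder with the
# qualitative properties described in the text, not the exact Equation (8).
rewire_prob <- function(d12, d34, s1, s2, s3, s4, lambda) {
  proximity <- (d34 - d12) / lambda          # penalizes creating longer links
  agreement <- (s1 * s2) - (s3 * s4)         # favors breaking disagreeing links
  1 / (1 + exp(proximity + agreement))       # assumed logistic acceptance
}

# Accept the proposed rewiring with that probability
accept <- runif(1) < rewire_prob(d12 = 1, d34 = 10, s1 = 1, s2 = -1,
                                 s3 = 1, s4 = 1, lambda = 32)
```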
[Figure 4: Typical network obtained after rewiring an initially square lattice; colors indicate the two choices and darker hues stronger final opinions.]
Figure 4 shows, as an example, a typical case of how an initially square lattice is altered after an average of 20 rewirings per agent. In the figure, red and blue correspond to the two choices, and darker hues show a stronger final opinion, obtained after the opinion update phase. As the network did not change during the opinion update phase, its final shape after implementing the rule defined in Equation 8 is preserved.
[Figure 5: Distribution of opinions for the direct bias rules over an initially rewired, quenched network, in disagreement steps (top) and agreement steps (bottom).]
Figure 5 shows the distribution of opinions when we use the direct bias rules after obtaining a quenched network generated with the same parameters as the networks in Figure 4. We can see significant differences between the results with no initial rewiring (Figure 3) and the new ones (Figure 5). With the initial rewiring, we find almost no moderates in the final opinions, and the extremist peaks, when we use the disagreement step for normalization, move to even stronger values as $r$ increases. That displacement also corresponds to smaller peaks distributed over a more extensive range. That, however, is an artifact of using the disagreement step as the unit. When renormalized to agreement steps of size $r$, we see that the three curves for different values of $r$ match almost perfectly. That happens because, with the implemented rewiring, most interactions happen between agents who already share the same opinions.
However, the problem of defining extremism under these circumstances becomes much more straightforward. We no longer have a situation where the agent bias does not agree with its opinion. Indeed, the observed averages for the proportion of agreement between bias and opinion were 99.8%, 100%, and 100% for the three values of $r$. That means that, to change position, essentially all agents would move one disagreement step at a time. While observing how the curves match when we measure opinions in agreement steps is interesting, the disagreement step case is more informative. And here, as expected, the more bias one introduces, the harder it becomes for agents to change their opinions.
4.1.1 Simulating the conservatism bias
We also implemented the conservatism model defined in Equation 7, using the same parameter values as in the previous cases. As, in this scenario, there is no distinction between opinion and bias, no initial rewiring phase is needed to guarantee that most agents are aligned with their own biases, and the simulations presented here do not include one.
[Figure 6: Distribution of opinions under the conservatism bias, measured in disagreement steps (top) and agreement steps (bottom).]
Figure 6 shows the distribution of opinions measured in disagreement steps (top panel) and in agreement steps (bottom panel). As we have discussed, in this case, the measure that tells us how hard it would be for an agent to change its choice is the one associated with the disagreement steps. And we can see in the top figure that, as we had observed in the direct bias case with no initial rewiring, the peaks of extreme opinions move to more distant values and become less pronounced. That happens because, contrary to the rewired situation, where there were few links between disagreeing agents, we still have extensive borders where we can find agents whose neighbors made distinct choices. Despite that, moderate agents, close to zero, become rarer as the conservatism bias increases. While not as relevant for understanding how extreme opinions are in this model, the graphic in the bottom figure, renormalized so that agreement steps have size one, is still interesting. It shows the same tendency of preserving the general shape for different values of $r$ that we observed before, except that it is clear that, when no conservatism is expected ($r = 1$), there are still more agents in the moderate region.
5 Conclusion
Approximating the complete but impossible Bayesian rules provides a reasonable description of human behavior, especially when we account for individuals' imperfect trust in others and their tendency to reason in a motivated manner. By modeling these approximations, we can create more realistic models of human behavior, as demonstrated in this paper. Our analysis focuses on introducing variations of the CODA model where agents exhibit biases in favor of a particular opinion. Through our investigation, we find that the conservatism implementation, where agents distrust information that goes against their current beliefs, results in more extreme opinions and greater resistance to change.
Furthermore, our research examines how to obtain update rules from assumptions about the agents’ mental models, using both previously published cases and our new examples. Another crucial aspect of using Bayesian-inspired rules is that it allows for a clearer understanding of the relationship between distinct models. By exploring which assumptions lead to our current opinion models, we gain insight into their interrelatedness and identify situations where each model might be more applicable. Overall, our work sheds light on the potential of Bayesian-inspired modeling to offer a more nuanced description of agent behavior and its impact on opinion dynamics.
6 Acknowledgments
This work was supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) under grant 2019/26987-2.
References
- [1] C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81:591–646, 2009.
- [2] Serge Galam. Sociophysics: A Physicist’s Modeling of Psycho-political Phenomena. Springer, 2012.
- [3] B. Latané. The psychology of social impact. Am. Psychol., 36:343–365, 1981.
- [4] S. Galam, Y. Gefen, and Y. Shapir. Sociophysics: A new approach of sociological collective behavior: Mean-behavior description of a strike. J. Math. Sociol., 9:1–13, 1982.
- [5] S. Galam and S. Moscovici. Towards a theory of collective phenomena: Consensus and attitude changes in groups. Eur. J. Soc. Psychol., 21:49–74, 1991.
- [6] K. Sznajd-Weron and J. Sznajd. Opinion evolution in a closed community. Int. J. Mod. Phys. C, 11:1157, 2000.
- [7] G. Deffuant, D. Neau, F. Amblard, and G. Weisbuch. Mixing beliefs among interacting agents. Adv. Compl. Sys., 3:87–98, 2000.
- [8] André C. R. Martins. Continuous opinions and discrete actions in opinion dynamics problems. Int. J. of Mod. Phys. C, 19(4):617–624, 2008.
- [9] André C. R. Martins. Bayesian updating as basis for opinion dynamics models. AIP Conf. Proc., 1490:212–221, 2012.
- [10] Hendrik Schawe, Sylvain Fontaine, and Laura Hernández. The bridges to consensus: Network effects in a bounded confidence opinion dynamics model, 2021.
- [11] Paul DiMaggio, John Evans, and Bethany Bryson. Have Americans’ social attitudes become more polarized? American Journal of Sociology, 102(3):690–755, 1996.
- [12] Delia Baldassarri and Andrew Gelman. Partisans without constraint: Political polarization and trends in american public opinion. American Journal of Sociology, 114(2):408–446, 2008.
- [13] Charles S. Taber, Damon Cann, and Simona Kucsova. The motivated processing of political arguments. Political Behavior, 31(2):137–155, Jun 2009.
- [14] Philipp Dreyer and Johann Bauer. Does voter polarisation induce party extremism? the moderating role of abstention. West European Politics, 42(4):824–847, 2019.
- [15] Aaron Bramson, Patrick Grim, Daniel J. Singer, William J. Berger, Graham Sack, Steven Fisher, Carissa Flocken, and Bennett Holman. Understanding polarization: Meanings, measures, and model evaluation. Philosophy of Science, 84(1):115–159, 2017.
- [16] G. Deffuant, F. Amblard, G. Weisbuch, and T. Faure. How can extremism prevail? A study based on the relative agreement interaction model. JASSS - The Journal of Artificial Societies and Social Simulation, 5(4):1, 2002.
- [17] F. Amblard and G. Deffuant. The role of network topology on extremism propagation with the relative agreement opinion dynamics. Physica A, 343:725–738, 2004.
- [18] S. Galam. Heterogeneous beliefs, segregation, and extremism in the making of public opinions. Physical Review E, 71:046123, 2005.
- [19] G. Weisbuch, G. Deffuant, and F. Amblard. Persuasion dynamics. Physica A, 353:555–575, 2005.
- [20] Daniel W. Franks, Jason Noble, Peter Kaufmann, and Sigrid Stagl. Extremism propagation in social networks with hubs. Adaptive Behavior, 16(4):264–274, 2008.
- [21] André C. R. Martins. Mobility and social network effects on extremist opinions. Phys. Rev. E, 78:036104, 2008.
- [22] L. Li, A. Scaglione, A. Swami, and Q. Zhao. Consensus, polarization and clustering of opinions in social networks. IEEE Journal on Selected Areas in Communications, 31(6):1072–1083, June 2013.
- [23] S. E. Parsegov, A. V. Proskurnikov, R. Tempo, and N. E. Friedkin. Novel multidimensional models of opinion dynamics in social networks. IEEE Transactions on Automatic Control, 62(5):2270–2285, May 2017.
- [24] V. Amelkin, F. Bullo, and A. K. Singh. Polar opinion dynamics in social networks. IEEE Transactions on Automatic Control, 62(11):5650–5665, Nov 2017.
- [25] R. Hegselmann and U. Krause. Opinion dynamics and bounded confidence models, analysis and simulation. Journal of Artificial Societies and Social Simulations, 5(3):3, 2002.
- [26] S. Galam and F. Jacobs. The role of inflexible minorities in the breaking of democratic opinion dynamics. Physica A, 381:366–376, 2007.
- [27] André C. R. Martins and Serge Galam. The building up of individual inflexibility in opinion dynamics. Phys. Rev. E, 87:042807, 2013. arXiv:1208.3290.
- [28] André C.R. Martins. Extremism definitions in opinion dynamics models. Physica A: Statistical Mechanics and its Applications, 589:126623, 2022.
- [29] Cristian Tileaga. Representing the ’other’: A discursive analysis of prejudice and moral exclusion in talk about Romanies. Journal of Community & Applied Social Psychology, 16:19–41, 2006.
- [30] Joseph Bafumi and Michael C. Herron. Leapfrog representation and extremism: A study of american voters and their members in congress. American Political Science Review, 104(3):519–542, 2010.
- [31] Pawel Sobkowicz. Whither now, opinion modelers? Frontiers in Physics, 8:461, 2020.
- [32] Lucas Böttcher, Jan Nagler, and Hans J. Herrmann. Critical behaviors in contagion dynamics. Physical Review Letters, 118:088301, 2017.
- [33] Serge Galam and Taksu Cheon. Tipping points in opinion dynamics: A universal formula in five dimensions. Frontiers in Physics, 8:446, 2020.
- [34] André C. R. Martins. Discrete opinion models as a limit case of the coda model. Physica A, 395:352–357, 2014.
- [35] Anna Kowalska-Pyzalska, Katarzyna Maciejowska, Karol Suszczyński, Katarzyna Sznajd-Weron, and Rafał Weron. Turning green: Agent-based modeling of the adoption of dynamic electricity tariffs. Energy Policy, 72:164–174, 2014.
- [36] F. Müller-Hansen, M. Schlüter, M. Mäs, J. F. Donges, J. J. Kolb, K. Thonicke, and J. Heitzig. Towards representing human behavior and decision making in earth system models – an overview of techniques and approaches. Earth System Dynamics, 8(4):977–1007, 2017.
- [37] Nika Haghtalab, Matthew O. Jackson, and Ariel D. Procaccia. Belief polarization in a complex world: A learning theory perspective. Proceedings of the National Academy of Sciences, 118(19), 2021.
- [38] André C. R. Martins. Bayesian updating rules in continuous opinion dynamics models. Journal of Statistical Mechanics: Theory and Experiment, 2009(02):P02017, 2009. arXiv:0807.4972v1.
- [39] André C. R. Martins, Carlos de B. Pereira, and R. Vicente. An opinion dynamics model for the diffusion of innovations. Physica A, 388:3225–3232, 2009.
- [40] André C. R. Martins and Cleber D. Kuba. The importance of disagreeing: Contrarians and extremism in the coda model. Adv. Compl. Sys., 13:621–634, 2010.
- [41] R. Vicente, André C. R. Martins, and N. Caticha. Opinion dynamics of learning agents: Does seeking consensus lead to disagreement? Journal of Statistical Mechanics: Theory and Experiment, 2009:P03015, 2009. arXiv:0811.2099.
- [42] Xia-Meng Si, Yun Liu, Fei Xiong, Yan-Chao Zhang, Fei Ding, and Hui Cheng. Effects of selective attention on continuous opinions and discrete decisions. Physica A, 389(18):3711–3719, 2010.
- [43] Xia-Meng Si, Yun Liu, Hui Cheng, and Yan-Chao Zhang. An opinion dynamics model for online mass incident. In 2010 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE), volume 5, pages V5–96–V5–99, 2010.
- [44] André C. R. Martins. A middle option for choices in the continuous opinions and discrete actions model. Advances and Applications in Statistical Sciences, 2:333–346, 2010.
- [45] André C. R. Martins. Modeling scientific agents for a better science. Adv. Compl. Sys., 13:519–533, 2010.
- [46] Lei Deng, Yun Liu, and Fei Xiong. An opinion diffusion model with clustered early adopters. Physica A: Statistical Mechanics and its Applications, 392(17):3546 – 3554, 2013.
- [47] André C. R. Martins. Trust in the coda model: Opinion dynamics and the reliability of other agents. Physics Letters A, 377(37):2333–2339, 2013. arXiv:1304.3518.
- [48] Su-Meng Diao, Yun Liu, Qing-An Zeng, Gui-Xun Luo, and Fei Xiong. A novel opinion dynamics model based on expanded observation ranges and individuals’ social influences in social networks. Physica A: Statistical Mechanics and its Applications, 415:220–228, 2014.
- [49] Gui-Xun Luo, Yun Liu, Qing-An Zeng, Su-Meng Diao, and Fei Xiong. A dynamic evolution model of human opinion as affected by advertising. Physica A, 414:254–262, 2014.
- [50] Nestor Caticha, Jonatas Cesar, and Renato Vicente. For whom will the bayesian agents vote? Frontiers in Physics, 3:25, 2015.
- [51] André C. R. Martins. Opinion particles: Classical physics and opinion dynamics. Physics Letters A, 379(3):89–94, 2015. arXiv:1307.3304.
- [52] Xi Lu, Hongming Mo, and Yong Deng. An evidential opinion dynamics model based on heterogeneous social influential power. Chaos, Solitons & Fractals, 73:98 – 107, 2015.
- [53] André C. R. Martins. Thou shalt not take sides: Cognition, logic and the need for changing how we believe. Frontiers in Physics, 4(7), 2016.
- [54] N. R. Chowdhury, I.-C. Morarescu, S. Martin, and S. Srikant. Continuous opinions and discrete actions in social networks: A multi-agent system approach. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 1739–1744, Dec 2016.
- [55] Zhichao Cheng, Yang Xiong, and Yiwen Xu. An opinion diffusion model with decision-making groups: The influence of the opinion’s acceptability. Physica A: Statistical Mechanics and its Applications, 461:429–438, 2016.
- [56] Chuanchao Huang, Bin Hu, Guoyin Jiang, and Ruixian Yang. Modeling of agent-based complex network under cyber-violence. Physica A: Statistical Mechanics and its Applications, 458:399–411, 2016.
- [57] Leandro M. T. Garcia, Ana V. Diez Roux, André C. R. Martins, Yong Yang, and Alex A. Florindo. Development of a dynamic framework to explain population patterns of leisure-time physical activity through agent-based modeling. International Journal of Behavioral Nutrition and Physical Activity, 14:111, 2017.
- [58] Ruoyan Sun and David Mendez. An application of the continuous opinions and discrete actions (coda) model to adolescent smoking initiation. PLOS ONE, 12(10):1–11, 10 2017.
- [59] Pawel Sobkowicz. Opinion dynamics model based on cognitive biases of complex agents. Journal of Artificial Societies and Social Simulation, 21(4):8, 2018.
- [60] Hyun Keun Lee and Yong Woon Kim. Public opinion by a poll process: model study and bayesian view. Journal of Statistical Mechanics: Theory and Experiment, page 053402, 2018.
- [61] Leandro M. T. Garcia, Ana V. Diez Roux, André C. R. Martins, Yong Yang, and Alex A. Florindo. Exploring the emergence and evolution of population patterns of leisure-time physical activity through agent-based modelling. International Journal of Behavioral Nutrition and Physical Activity, 15(1):112, Nov 2018.
- [62] Tanzhe Tang and Caspar G. Chorus. Learning opinions by observing actions: Simulation of opinion dynamics using an action-opinion inference model. Journal of Artificial Societies and Social Simulation, 22(3):2, 2019.
- [63] André C. R. Martins. Network generation and evolution based on spatial and opinion dynamics components. International Journal of Modern Physics C, 30(9):1950077, 2019.
- [64] André C. R. Martins. Discrete opinion dynamics with choices. The European Physical Journal B, 93(1):1, 2020. arXiv:1905.10878.
- [65] F.J. León-Medina, J. Tena-Sánchez, and F.J. Miguel. Fakers becoming believers: how opinion dynamics are shaped by preference falsification, impression management and coherence heuristics. Quality and Quantity, 54:385–412, 2020.
- [66] Marcelo V. Maciel and André C. R. Martins. Ideologically motivated biases in a multiple issues opinion model. Physica A, page 124293, 2020. https://arxiv.org/abs/1908.10450.
- [67] Aili Fang, Kehua Yuan, Jinhua Geng, and Xinjiang Wei. Opinion dynamics with bayesian learning. Complexity, page 8261392, 2020.
- [68] Zhanli Sun and Daniel Müller. A framework for modeling payments for ecosystem services with agent-based models, bayesian belief networks and opinion dynamics models. Environmental Modelling and Software, 45:15–28, 2013. Thematic Issue on Spatial Agent-Based Models for Socio-Ecological Systems.
- [69] André Orléan. Bayesian interactions and collective dynamics of opinion: Herd behavior and mimetic contagion. Journal of Economic Behavior and Organization, 28:257–274, 1995.
- [70] Matthew Rabin and Joel L. Schrag. First Impressions Matter: A Model of Confirmatory Bias*. The Quarterly Journal of Economics, 114(1):37–82, 02 1999.
- [71] James Andreoni and Tymofiy Mylovanov. Diverging opinions. American Economic Journal: Microeconomics, 4(1):209–32, February 2012.
- [72] Ryosuke Nishi and Naoki Masuda. Collective opinion formation model under bayesian updating and confirmation bias. Phys. Rev. E, 87:062123, Jun 2013.
- [73] Víctor M. Eguíluz, Naoki Masuda, and Juan Fernández-Gracia. Bayesian decision making in human collectives with binary choices. PLOS ONE, 10(4):e0121332, 2015.
- [74] Yunlong Wang, Lingqing Gan, and Petar M. Djurić. Opinion dynamics in multi-agent systems with binary decision exchanges. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4588–4592, 2016.
- [75] David C. Knill and Alexandre Pouget. The bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences, 27(12):712–719, December 2004.
- [76] André C. R. Martins. Probabilistic biases as bayesian inference. Judgment And Decision Making, 1(2):108–117, 2006.
- [77] J. B. Tenenbaum, C. Kemp, and P. Shafto. Theory-based bayesian models of inductive reasoning. In A. Feeney and E. Heit, editors, Inductive reasoning. Cambridge University Press., 2007.
- [78] Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279–1285, 2011.
- [79] André C. R. Martins. Arguments, Cognition, and Science: Need and consequences of probabilistic induction in science. Rowman & Littlefield, 2020.
- [80] André C. R. Martins. Embracing undecidability: Cognitive needs and theory evaluation, 2020. arXiv:2006.02020.
- [81] H. A. Simon. Rational choice and the structure of environments. Psych. Rev., 63:129–138, 1956.
- [82] R. Selten. What is bounded rationality? In G. Gigerenzer and R. Selten, editors, Bounded rationality: The adaptive toolbox, Dahlem Workshop Report, pages 147–171. Cambridge, Mass, MIT Press, 2001.
- [83] R. T. Cox. The Algebra of Probable Inference. John Hopkins University Press, 1961.
- [84] E.T. Jaynes. Probability Theory: The Logic of Science. Cambridge, Cambridge University Press, 2003.
- [85] Ariel Caticha and Adom Giffin. Updating probabilities. In A. Mohammad-Djafari, editor, Bayesian Inference and Maximum Entropy Methods in Science and Engineering, volume 872 of AIP Conf. Proc., page 31, 2007.
- [86] Frederick Eberhardt and David Danks. Confirmation in the cognitive sciences: The problematic case of bayesian models. Minds and Machines, 21(3):389–410, 2011.
- [87] Shira Elqayam and Jonathan St. B. T. Evans. Rationality in the new paradigm: Strict versus soft bayesian approaches. Thinking & Reasoning, 19(3-4):453–470, 2013.
- [88] Nick Chater, Teppo Felin, David C. Funder, Gerd Gigerenzer, Jan J. Koenderink, Joachim I. Krueger, Denis Noble, Samuel A. Nordli, Mike Oaksford, Barry Schwartz, Keith E. Stanovich, and Peter M. Todd. Mind, rationality, and cognition: An interdisciplinary debate. Psychonomic Bulletin and Review, 25(2):793–826, 2018.
- [89] P. C. Wason and P. Johnson-Laird. Psychology of Reasoning: Structure and Content. Harvard University Press, 1972.
- [90] A. Tversky and D. Kahneman. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psych. Rev., 90:293–315, 1983.
- [91] Stuart Oskamp. Overconfidence in case-study judgments. Journal of Consulting Psychology, 29(3):261–265, 1965.
- [92] P. N. Johnson-Laird, Paolo Legrenzi, and Maria Sonino Legrenzi. Reasoning and a sense of reality. British Journal of Psychology, 6(3):395–400, 1972.
- [93] Gerd Gigerenzer, Peter M. Todd, and The ABC Research Group. Simple Heuristics That Make Us Smart. Oxford University Press, 2000.
- [94] Amos Tversky and Daniel Kahneman. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2):207–232, 1973.
- [95] G. Gigerenzer and D. G. Goldstein. Reasoning the fast and frugal way: Models of bounded rationality. Psych. Rev., 103:650–669, 1996.
- [96] Dan M. Kahan. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making, 8:407–424, 2013.
- [97] Dan M. Kahan. The expressive rationality of inaccurate perceptions. Behavioral and Brain Sciences, 40:e6, 2017.
- [98] Hugo Mercier and Dan Sperber. Why do humans reason? arguments for an argumentative theory. Behavioral and Brain Sciences, 34:57–111, 2011.
- [99] Hugo Mercier and Dan Sperber. The Enigma of Reason. Harvard University Press, 2017.
- [100] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47:263–291, 1979.
- [101] W. Edwards. Conservatism in human information processing. In B. Kleinmuntz, editor, Formal Representation of Human Judgment. John Wiley and Sons, 1968.
- [102] S. Galam. Contrarian deterministic effect: the hung elections scenario. Physica A, 333:453–460, 2004.
- [103] Anthony O’Hagan. Kendall’s Advanced Theory of Statistics: Bayesian Inference, volume 2B. Arnold, 1994.
- [104] Robert Jervis. Perception and Misperception in International Politics. Princeton University Press, 1976.
- [105] P. Clifford and A. Sudbury. A model for spatial conflict. Biometrika, 60:581–588, 1973.
- [106] R. Holley and T. M. Liggett. Ergodic theorems for weakly interacting systems and the voter model. Ann. Probab., 3:643–663, 1975.
- [107] S. Galam. Modelling rumors: The no plane Pentagon French hoax case. Physica A, 320:571–580, 2003.
- [108] S. Galam. Opinion dynamics, minority spreading and heterogeneous beliefs. In B. K. Chakrabarti, A. Chakraborti, and A. Chatterjee, editors, Econophysics & Sociophysics: Trends & Perspectives., pages 363–387. Wiley, 2006.
- [109] Raymond S. Nickerson. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2):175–220, 1998.
- [110] André C. R. Martins. Network generation and evolution based on spatial and opinion dynamics components. International Journal of Modern Physics C, 30(9):1950077, 2019.
- [111] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2008. ISBN 3-900051-07-0.