Misinformation Regulation in the Presence of Competition between Social Media Platforms (Extended Version)
Abstract
Social media platforms have diverse content moderation policies, with many prominent actors hesitant to impose strict regulations. A key reason for this reluctance could be the competitive advantage that comes with lax regulation. A popular platform that starts enforcing content moderation rules may fear that it could lose users to less-regulated alternative platforms. Moreover, if users continue harmful activities on other platforms, regulation ends up being futile. This article examines the competitive aspect of content moderation by considering the motivations of all involved players (platformer, news source, and social media users), identifying the regulation policies sustained in equilibrium, and evaluating the information quality available on each platform. Applied to simple yet relevant social networks such as stochastic block models, our model reveals the conditions for a popular platform to enforce strict regulation without losing users. The effectiveness of regulation depends on the diffusive properties of news posts, the quality of friend interactions on social media, the sizes and cohesiveness of communities, and how much sympathizers appreciate surprising news from influencers.
Index Terms:
Content moderation, Network game theory, Platform competition, Social media
I Introduction
In recent years, social media platforms have significantly changed their content moderation policies. For instance, Jhaver et al. [1] reported that Pinterest recently started actively blocking some search results for controversial or debunked queries, while Reddit has taken action against certain toxic communities by banning or quarantining them. Meta/Facebook has also identified and flagged groups that engage in hate speech on its platform. Different platforms have implemented widely varying policies, resulting in different standards for what is considered acceptable speech. One of the most visible examples of this was the different treatment of President Trump’s May 2020 messages by Twitter and Facebook, as reported by [2] and [3].
The difference in content moderation policies between social media platforms is likely influenced by a combination of factors, including each company’s interpretation of §230 of the US Communications Decency Act and the founders’ views on freedom of speech. However, it is also probable that economic incentives play a role in shaping these policies. In fact, a platform’s regulation, or lack thereof, can be perceived as a competitive advantage, as users may choose to switch to a competitor if they feel that the moderation rules in place amount to censorship and limit their access to the news and content they value. This effect can be amplified in social networking markets, where the positive externalities of network economics are typically present. As more users join a particular platform, it becomes even more socially advantageous for others to join as well, resulting in a winner-takes-all effect, as has been well documented in the literature, such as [4]. Such a threat of seeing its users leave en masse may play a role in reducing a platformer’s interest in policing misinformation on its network. At the same time, other actors, such as companies advertising on the network, may value regulation (e.g., as a form of so-called ’brand safety’ [5, 6]) and introduce incentives that, in contrast, may push the platformer towards more oversight.
The aim of this article is to investigate the competitive nature of content moderation on simplified platform models by considering the motivations of different players such as the platform owner, news sources, and social media users. We purposefully omit the kind of pro-regulation stakeholders mentioned above in this first study. This is because, in the context of our model, they only make it easier for the platform to implement strict regulation, and we are interested in characterizing the most restrictive such policy that can be sustained in equilibrium.
More precisely, in our model, we examine the effect of regulation when multiple users and a news source (also referred to as an influencer or sender) choose platforms to maximize their individual utility. A user’s utility is determined by their social interaction payoff and news consumption payoff. Platform owners have the option to deplatform a deceptive news source based on their policies. As a result, the news source must decide whether to comply with the strict regulation of a popular platform or relocate to an alternative platform to spread more biased information to a smaller audience.
This study is part of a larger body of research on platform governance, as discussed in reviews [7] and [8]. We specifically examine how platforms can prevent the spread of misinformation, such as through the preclusion of accounts. Twitter account suspension following the 2020 U.S. presidential election is examined in [9]. Our model also takes into account how users and news sources react to the regulations. The way users perceive and interpret content moderation is studied closely in [10]. Meanwhile, Horta Ribeiro et al. [11] investigate how moderation policies affect harmful activity in the wider web ecosystem, and whether this activity becomes more radicalized on alternative platforms. Additionally, Innes and Innes [12] examine the intended and unintended consequences of deplatforming interventions, such as the emergence of “minion accounts” of a banned account.
Other recent works concerned with regulation in information diffusion include [13, 14, 15], all of which place a heavier emphasis on the process underlying dynamic propagation and/or the formation of opinions. Little research has been conducted, however, on the intensity of effective regulation that mainstream platforms can implement without losing users. Understanding this fundamental aspect can advance our conversation on the social responsibilities of tech companies.
This article improves on our previous work [16] in that 1) we give tighter sufficient conditions in the propositions for simple social network structures and 2) more realistic network structures (finite networks and random networks with community structures) are investigated. In addition, 3) our analysis is extended to networks consisting of heterogeneous users (sympathizers and non-sympathizers). These improvements allow our model to analyze, e.g., how regulation should be adjusted depending on community sizes, in-community cohesiveness, inter-community structures, and sympathizers near an influencer.
The rest of the paper proceeds as follows. First, we introduce our model of information diffusion and regulation on a single platform. Next, we describe the competitive process of platform migration among the sender and users, and characterize the strictest regulation that can be imposed in equilibrium. We then relate its properties to various network structures. Lastly, the analysis scope is extended to heterogeneous users.
II Model
II-A News information from sender
The game players are multiple users and a sender, each of whom chooses one of two platforms, A or B; the sender's (respectively, each user's) platform choice is denoted accordingly.
The sender’s goal is to persuade as many users as possible to adopt a specific belief about the state of the world, while accounting for users’ platform choices and updates of beliefs in response to its signaling policy. The sender thus acts as a propagandist, whose decision-making process is best described using the framework of Bayesian persuasion [17], as is done in [18].
The world state is represented by a random variable with . User estimates the world state as and gets payoff if , payoff if , or no payoff if . The assumption is made that , implying that the user’s optimal estimation is in absence of any news information. The sender is the only one who can observe the state of the world, and their objective is to have the user estimate regardless of the world state. To persuade users, the sender sends signal . However, the choice of signal is not entirely arbitrary, and the sender selects a level of deceitfulness, denoted as . The signal is generated probabilistically, such that . This means that the sender reports the true news when the state of the world is , but occasionally lies when the world state is unfavorable. User in platform receives the signal with probability of , and then makes their estimation based on the sender’s strategy and the realized signal .
Now let us examine the optimal strategy for user with respect to their estimation . If the user does not receive any signal, they should choose the default optimal estimation of . If they receive signal , the biased sender is reporting unfavorable news and the world state is without a doubt . However, if they receive signal , which could potentially be misleading, a more careful analysis is needed. In this case, the expected payoff for the user when choosing is given by
(1)
while for , it is given by
(2)
Therefore, the user should trust signal and select if the sender’s deceitfulness satisfies Otherwise, they should distrust signal and select . Overall, the user’s optimal strategy when receiving a signal is to trust any signal () if , and to ignore any signal () otherwise. Therefore, their expected payoff (for estimating the world state correctly) in platform is given by
(3)
We call news consumption payoff. The sender’s expected utility is the total number of users who choose , which can be expressed as
(4)
if . Otherwise, the sender fails to persuade the users, and therefore .
Before introducing social-network aspects of our model, let us consider a scenario with only one user and one platform, , and see the implications of the setting so far. If the sender chooses , their utility will be . Since the utility function increases monotonically for , the sender’s optimal strategy is . They prefer a deceitful strategy as long as the user trusts them.
Suppose the platform sets a regulation that prohibits the sender from using a value greater than . This, for example, could be done by setting a content moderation guideline to de-platform a news source that reports misinformation often. In the absence of an alternative platform, the sender must comply with the regulation and restrict their deceitfulness to the range instead of , because a de-platformed sender’s utility would be zero. A relatively strict regulation works effectively in reducing the sender’s optimal strategy from to , thus improving the quality of news information on the platform. Conversely, a loose regulation with is meaningless, as the sender’s optimal strategy remains at .
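The single-user logic above can be sketched numerically. This is a minimal sketch under stated assumptions (the paper's exact payoff values are not reproduced here): a symmetric unit payoff for a correct estimate, a prior mu0 = P(state = 1) < 1/2, and the signal rule P(s=1 | state 1) = 1, P(s=1 | state 0) = lambda. Under these standard Bayesian-persuasion assumptions, the user trusts s=1 exactly when lambda <= mu0 / (1 - mu0), so the sender's unregulated optimum sits at that threshold, and only a cap below it changes the sender's behavior:

```python
def trust_threshold(mu0):
    # User trusts signal s=1 iff posterior P(state=1 | s=1) >= 1/2.
    # P(state=1 | s=1) = mu0 / (mu0 + (1 - mu0) * lam) >= 1/2
    #   <=>  lam <= mu0 / (1 - mu0)
    return mu0 / (1.0 - mu0)

def optimal_deceitfulness(mu0, cap=1.0):
    # Sender utility grows with lam as long as the user still trusts,
    # so the sender picks the largest lam allowed by trust and regulation.
    return min(trust_threshold(mu0), cap)

mu0 = 0.3
lam_star = optimal_deceitfulness(mu0)          # unregulated optimum
lam_reg = optimal_deceitfulness(mu0, cap=0.2)  # under a binding cap of 0.2
```

A cap above `trust_threshold(mu0)` leaves the optimum unchanged, matching the observation that loose regulation is meaningless.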
II-B Social Interaction and Platform Adoption
Users and the sender are connected via an undirected graph, where edges represent friend relationships. Given choices ’s and , this graph induces two subgraphs containing actors that are on the same platform. We assume that the signal originating from the sender travels along the edges of the subgraph corresponding to the sender’s platform choice and that user receives that message with probability if and only if , and is edges away from in that subgraph. We call parameter information diffusiveness. Works such as [19] have shown that news dissemination follows tree-like broadcast patterns. To account for this, we assume that the receiving probability depends on the shortest path length in the subgraph.
Note that the probability depends on other users’ choices of platforms, and a user in platform does not receive the signal. See Figure 1 for an illustrated example.
[Figure 1]
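The diffusion rule can be made concrete with a small sketch. It assumes that a user at shortest-path distance d from the sender, within the subgraph of actors on the sender's platform, receives the signal with probability delta**d; the paper's exact exponent convention may differ, and all names here are illustrative:

```python
from collections import deque

def receive_probabilities(adj, sender, platform_of, delta):
    """BFS over the subgraph of actors sharing the sender's platform.

    A user at shortest-path distance d from the sender receives the signal
    with probability delta**d (assumed convention). Users on the other
    platform are unreachable and receive nothing."""
    p = platform_of[sender]
    dist = {sender: 0}
    queue = deque([sender])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            # Only traverse actors on the sender's platform.
            if v not in dist and platform_of[v] == p:
                dist[v] = dist[u] + 1
                queue.append(v)
    return {u: delta ** d for u, d in dist.items() if u != sender}
```

Note how a user who moves to the other platform not only stops receiving the signal but can also lengthen (or cut) the paths to users behind them, exactly the externality driving the competition in the model.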
The number of user ’s friends in platform (i.e., the size of their neighborhood in the corresponding subgraph) is denoted by . The payoff derived from social interactions with these friends is , where is a parameter, controlled by , which determines the reward obtained for each social interaction in platform . The value of should be thought of as increasing with every service or feature (e.g., Story/Fleet feature, birthday notification…) introduced by the platform to enhance a user’s experience interacting with peers. We call the quality of social interaction. In real life, some friends may be more important than others, but the uniform for all neighbors is a typical assumption in the literature on networked agents’ adoption of a new option with positive externality (e.g., [20]). All in all, user ’s utility in platform is given by
(5)
Given the sender’s decision on and , user should choose such that for all . We say the users’ choices of platforms are in equilibrium if this condition is satisfied for all users. A trivial equilibrium is for all , but there can be other equilibria.
Since we are interested in situations where a pre-existing, originally “dominant”, platform (A) imposes regulations in the presence of a competing one (B), it makes sense to specifically look for equilibria that occur as the limit of an “adoption process” carried out by the users. The process is described as follows. First, the sender chooses . Then, from the initial state with all users in , users repeatedly update their choices. In each iteration, users choose simultaneously, based on other users’ previous choices. Iterations continue until an equilibrium is reached, if any. We will discuss the convergence of this process next, noting for now that it is essentially a fictitious-play iteration (see, e.g., [20]) for the non-cooperative game played by the users with utilities , once the sender’s choice is enacted.
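The adoption process can be sketched as a synchronous best-response iteration. The sketch below simplifies the news-consumption term to a flat payoff earned only on the sender's platform (in the model it depends on the diffusion distance to the sender), breaks ties in favor of the incumbent platform A, and uses illustrative parameter names:

```python
def adoption_process(adj, sender_platform, b, news_payoff, max_iter=100):
    """Synchronous best-response dynamics over platforms 'A' and 'B'.

    All users start on 'A'. In each iteration, every user picks the platform p
    maximizing b[p] * (number of friends currently on p) plus a flat news
    payoff earned only on the sender's platform (a simplification of the
    model's distance-dependent news term). Ties go to 'A', the incumbent."""
    choice = {u: "A" for u in adj}
    for _ in range(max_iter):
        def utility(u, p):
            social = b[p] * sum(1 for v in adj[u] if choice[v] == p)
            news = news_payoff if p == sender_platform else 0.0
            return social + news
        new = {u: max(("A", "B"), key=lambda p, u=u: utility(u, p))
               for u in adj}
        if new == choice:  # fixed point: an equilibrium of the user game
            return choice
        choice = new
    return choice
```

On a three-user line with equal social quality, a large enough news payoff on platform B pulls the whole population over one step at a time, mirroring the cascade behavior analyzed in the proof of Proposition 1.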
Proposition 1.
If 1) the social network is acyclic and platform A provides higher social interaction quality than platform B, i.e., , or 2) the social network is a finite graph, then the adoption process converges to an equilibrium in a finite number of iterations.
This is the sole equilibrium we will consider going forward, when saying that a property holds “at (the) equilibrium.”
Proof.
We will prove the result for Condition 1) only, as the proof for 2) is similar. If , the initial state is the equilibrium. So we only need to consider
As shown in the Appendix, users can switch from platform to but not from to . It is also shown that for an acyclic graph, user that switches to platform at iteration is edges away from the sender and has , where denotes the value of at iteration . (This implies that such user ’s neighbor who is edges away from the sender has to switch to at iteration . It also implies that if a user edges away does not switch to at iteration , then the user and its downstream users do not switch to forever.)
Suppose user switches to at an iteration of sufficiently large . Since , we have for small . Since user ’s neighbors farther from the sender are still in platform and the neighbor closer to the sender is in , . Since , we have if . Therefore, if . Since user adopts at iteration (i.e., ), this means . Therefore, user is a leaf node. Hence, no user switches to at iteration .
∎
III Strictest Effective Regulation
We now focus on the scenario where platforms have the ability to regulate the level of deceitfulness of the sender. Specifically, we assume that platform enforces a restriction on the sender’s choice of , limiting the range to . In contrast, has no such restriction, i.e., , and the sender can choose any if they choose . In practice, such regulation can be implemented if the source has been present on the platform long enough to enable it to fact-check its messages over a period of time, thus obtaining an empirical estimate of .
Definition 1.
Platform ’s regulation is said to be effective if it decreases the sender’s deceitfulness in platform at equilibrium. The strictest effective regulation is the minimum of all effective regulations .
In other words, Definition 1 states that is effective if in the presence of the constraint , (1) the sender stays in and (2) the chosen value is less than if were equal to 1 (i.e., ). Our primary interest lies in identifying the strictest effective regulation . With this regulation, all users remain in platform .
Assuming that either condition 1) or 2) in Proposition 1 holds, we let be the sender’s expected utility in platform . When users take the optimal strategies, the sender considers as a function of . We denote as the maximum value of for .
If the sender chooses platform , all users should choose at the equilibrium. If , according to (4), increases linearly with . Thus, the sender’s optimal level of deceitfulness is . On the other hand, if , then for any .
This observation allows us to characterize an effective regulation. Firstly, if , then the regulation has no impact on the sender’s behavior as the sender would have chosen a lower level of deceitfulness. In other words, the regulation is too lenient to be effective. Secondly, if regulation satisfies , then the sender should remain in platform since it would not gain more utility in platform . Note that when takes the lowest value that satisfies the inequality, we have
(6)
where corresponds to . Lastly, if regulation satisfies , then the sender should switch to for higher utility. The regulation is too strict, and the sender would prefer to move to the alternative platform with some of its followers.
Rearranging these cases from the viewpoint of designing the strictest effective regulation in platform , we have the following proposition.
Proposition 2.
If , effective regulation does not exist. If , platform can enforce any strict regulation effectively, i.e., . If , regulation should be moderate and
(7)
where is for .
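The case analysis behind Proposition 2 suggests a simple numeric procedure: since the sender's utility on platform A is nondecreasing in the (binding) cap, the strictest effective regulation is the smallest cap at which staying on A is still at least as good as the sender's best outside option. A hedged sketch with generic callables (not the paper's closed forms):

```python
def strictest_effective_regulation(U_A, U_B_star, lam_star, tol=1e-9):
    """Numeric counterpart of Proposition 2.

    U_A(cap): sender's utility on platform A under cap, assumed
              nondecreasing on [0, lam_star].
    U_B_star: sender's best achievable utility on platform B.
    Returns None when no effective regulation exists (the sender's outside
    option beats even the unregulated utility on A)."""
    if U_A(lam_star) < U_B_star:
        return None          # sender leaves A even without regulation
    if U_A(0.0) >= U_B_star:
        return 0.0           # any strict cap is sustainable
    lo, hi = 0.0, lam_star   # bisect for U_A(cap) = U_B_star
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if U_A(mid) >= U_B_star:
            hi = mid
        else:
            lo = mid
    return hi
```

The three return branches correspond exactly to the three cases of the proposition: no effective regulation, any strict regulation, and a moderate cap solving the indifference condition.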
IV Discussion of model assumptions
Having now presented the main features of our model, and before particularizing its high-level predictions to specific networks, we are in a good position to discuss its central assumptions and simplifications in more detail.
IV-A Multiple platforms
The main insights of our model remain relevant in situations where more than two platforms are competing, with one of them being initially dominant. In fact, the two-platform model can be considered as the worst case for this dominant platform because the key factor of platform competition is the positive externality. If users can disperse to many alternative platforms, it is difficult for each one to become competitive against the dominant one. Therefore, the dominant platform might be able to enforce stricter regulation.
IV-B Multihoming
In its current form, our model assumes that users adopt one platform at a time. In reality, users do not necessarily have to leave one platform to join another and can, a priori, consume news and interact with peers on multiple sites contemporaneously. If such multihoming occurs, the sender’s departure to another platform may not necessarily cause users to leave the dominant one, and the latter can in turn enforce any regulation without any risk of losing its user base.
However, if user has a finite time and/or attention budget to allocate between the various sites they join, and if they do not explicitly value diversity of platforms, they will de facto spend all that budget on the platform bringing them highest utility, i.e., move to such that for all , as we have assumed in Section III.
IV-C Sender strategy and multiple senders
Using a similar argument as the one we described for multihoming, one can contend that users do not have to get their news from a single source. In turn, if a specific source leaves the dominant platform, a user can just either continue relying on other sources or decide to adopt a new one. For this reasoning to hold, however, it is necessary that all news sources be seen as substitutable by the users. Assuming otherwise (as we do) and, in particular, that the news consumption payoff enters a user’s utility according to (5) thus implicitly posits that (a) they have already committed to a specific sender of interest, to the exclusion of all others and, (b) that this commitment is independent from the messages sent by the source.
These two points are consistent with the behavior of at least some categories of social media users, whose main motivations for following an influencer are “entertainment” and “out of habit”, as uncovered by recent research in the field of social media studies [21]. Conditions (a) and (b) are also likely to hold more generally if the news source has unique features that make it especially appealing or salient to users, even before receiving any message from it (e.g., if the source is a particularly visible public figure, a media outlet, or a charismatic influencer).
Another way in which the users’ ex-ante devotion to the source (encapsulated by (a) and (b)) manifests itself in our model is in the assumption that they know the parameter . This could be the result of users having interacted with the source for a long enough time, and hence “knowing what to expect” from it, and/or being one of the salient features of the source that make it attractive to the users. This is also consistent with the literature on Bayesian Persuasion, which has been used before to model interactions between platforms and news consumers [17, 18, 22]. Note, however, that users do not necessarily heed the source’s signal but, rather, use their knowledge of to interpret the signals they receive.
V Applications to Prototypical Networks
In this section, we apply the main result of Section III to prototypical network structures. To maintain analytical tractability and be able to interpret results qualitatively, we focus on simple network structures with some relevance to actual social media platforms. In particular, we study linear networks, star-chain networks, regular trees, and stochastic block models for the following reasons.
We analyze a linear network as it presents an intuitive analytical representation of our model. In many social media settings, news dissemination takes place with a tree-like hub-spoke structure, often embedded within a larger social interaction network [23, 19]. A star-chain network is built on this structure and possesses cohesive blocking clusters, a crucial element for platform adoption as defined below.
Definition 2.
A cluster, or a set of users, has cohesion if each member has at least a fraction of their neighbors within the set. The cohesion of a cluster is the maximum such .
Without news consumption payoffs, the platform adoption process would be governed by cluster cohesion. As shown in [20], a “blocking cluster,” the cluster with the highest cohesion in a graph, would determine whether adoption by the whole population occurs or not. A regular tree network exhibits, in addition to cohesive blocking clusters, an exponentially spreading communication pattern, making it a good benchmark to gauge the relative importance of both characteristics (especially when compared to the star chain).
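Definition 2 translates directly into code. A small sketch, with the cluster given as a collection of node ids over an adjacency-list graph:

```python
def cohesion(adj, cluster):
    """Cohesion of a user set: the largest alpha such that every member has
    at least an alpha fraction of its neighbors inside the set, i.e. the
    minimum over members of (#neighbors in cluster) / (#neighbors)."""
    cluster = set(cluster)
    fractions = []
    for u in cluster:
        degree = len(adj[u])
        if degree == 0:
            continue  # isolated users impose no constraint
        inside = sum(1 for v in adj[u] if v in cluster)
        fractions.append(inside / degree)
    return min(fractions) if fractions else 1.0
```

For example, a triangle with one outside pendant attached to one of its corners has cohesion 2/3: the corner with the pendant keeps only two of its three neighbors inside the set.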
A large literature (e.g., [24], [25]) has demonstrated that homophily is encoded in common social networks. The stochastic block model is a random network model suited to expressing such networks with community structures. Analysis of this parameterized random network is useful because players may not know the exact structure of a large network, and even if they do, computing exact solutions is prohibitively expensive. Instead, an approximate solution based on a few primary factors may be more practical. Coordination games on stochastic block models have been explored in prior works (e.g., [26], [27]).
V-A Linear Network
An infinite linear network has users with edges between and . The news source can send a signal to user 0. If the sender and user are in the same platform, user receives a signal with probability . We will use index instead of to imply the user is edges away from the sender, although coincides with in the linear network.
If the sender opts for platform , all users remain in since . The sender’s utility function is
(8)
If , some users may switch platforms. Let user be the last user who switches to . We have for , and for , where
(9)
for . The sender thus maximizes
(10)
with satisfying the condition
(11)
It is worth noting that if , then .
Condition (11) can be rewritten equivalently in two ways,
(12)
(13)
where
(14)
Note that are decreasing functions, and the equalities in the conditions hold at the equilibrium. We thus have
(15)
Using these and , the strictest effective regulation can be designed. In particular, (7) becomes
(16)
The application of Proposition 2 to this situation is illustrated in Figure 2. For all subsequent numerical experiments, we use the following parameter values unless otherwise specified: . (The value of does not affect the claims of this article qualitatively, and we will study different values of the other parameters in the following sections.) When the sender chooses platform , the utility attains maximum at . If the sender selects , the signal is biased towards the sender’s preference, but few users move to due to the poor news quality. If , many users switch to to follow the sender, but the signal is not biased in favor of the sender. On the other hand, when the sender chooses , all users remain in the platform. The sender’s utility, , increases for , and therefore achieves the maximum value at . Proposition 2 indicates, with these parameters, the strictest effective regulation is such that . If platform enforces a stricter regulation , then the sender’s utility becomes lower than in . In other words, the sender would rather send more biased information to fewer people in the alternative platform than comply with the regulation to gain more potential receivers, and thus platform would lose some users. On the other hand, regulation works effectively, making the sender less deceptive, and the platform does not lose any users.
[Figure 2]
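The sender's utility on platform A in the linear network can be illustrated with a sketch. It assumes (consistent with, but not verbatim from, the model's formulas) that user i receives the signal with probability delta**i, that a received signal equals 1 with probability mu0 + (1 - mu0) * lam, and that a trusting user who receives s=1 is persuaded; summing delta**i over all users of the infinite line gives 1/(1 - delta):

```python
def sender_utility_line_A(lam, mu0, delta):
    """Expected number of persuaded users on an infinite line when all users
    stay on platform A (a sketch under stated assumptions: user i receives
    the signal with probability delta**i, and a received signal is s=1 with
    probability mu0 + (1 - mu0) * lam)."""
    if lam > mu0 / (1.0 - mu0):
        return 0.0                   # users distrust every signal
    reach = 1.0 / (1.0 - delta)      # sum of delta**i over i >= 0
    return (mu0 + (1.0 - mu0) * lam) * reach
```

The function is linear and increasing up to the trust threshold and drops to zero beyond it, which is why the utility on A attains its maximum exactly at the threshold, as described above.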
We will now inspect for which parameter values platform can enforce the strict regulation . Our previous work [16] proved a sufficient condition for :
Proposition 3.
[16] For an infinite linear network, if or , then the strictest effective regulation is .
We will improve this proposition with a tighter bound.
Let . Suppose that the sender chooses . At the equilibrium, the sender’s best strategy satisfies . The strictest effective regulation takes the value of zero if and only if , i.e.,
(17)
or
(18)
Therefore, using defined here, we have the following.
Proposition 4.
For an infinite linear network, the strictest effective regulation is if .
In Figure 3(a), the strictest effective regulation is calculated for different and , fixing . The white curve shows the bound of Proposition 4. Above this curve, the platformer can enforce any regulation without losing users.
[Figure 3]
We now consider a finite linear network with users . Note that does not take the value of since user always chooses the same platform as user . Following the argument in the Appendix, we let
Then Proposition 4 becomes
Proposition 5.
For a finite linear network, the strictest effective regulation is if .
Panel (d) in Figure 3 shows the strictest effective regulation for a finite linear network. As a rule, and as is evident from the figure, in the case of linear networks, platform can impose any regulation on the sender as long as its quality of social interaction is sufficiently larger than that of platform .
V-B Star-chain Network
In this section, we consider a star-chain network, where each hub user has leaf nodes and two other hub users as their neighbors. The news source sends the signal to hub user 0. The cohesion of blocking clusters is . If hub users are in the same platform as the sender, hub user receives the signal with probability (and their leaf nodes receive with probability ).
The news source’s utility in platform is
(19)
Suppose the sender chooses platform . Let be defined as an integer such that at the equilibrium, hub user chooses platform and hub user chooses . Note that a peripheral user must always follow their hub user’s choice of platform. The sender in maximizes their expected utility
(20)
subject to the following condition, obtained by considering hub user :
(21)
Unlike in the linear network case, even if the social interaction quality in platform is lower than , users may prefer the former as it enables them to communicate with many friends. Indeed, Proposition 4 becomes
Proposition 6.
For an infinite star chain, the strictest effective regulation is if .
The proof follows the same argument as the linear network. The proposition indicates that platform needs relatively low quality of social interactions to enforce strict regulations when hub users have many peripheral users. This is because users with many friends have high inertia against switching platforms. Since users have more friends in platform and a higher incentive to stay there, the dominant platform has more power to affect players’ behaviors. The higher the cohesion of blocking clusters, the easier it becomes to enforce strict regulations.
Panel (b) in Figure 3 presents numerical results for . Compared to panel (a), the white curve is lower in the -plane, and the dark area is larger in this panel. This means that platform is more capable of enforcing the strict regulation than in the linear network case.
For a finite star-chain with hub users, the counterpart of Proposition 5 also holds with some modification. (Unlike in a finite linear network, where the farthest user always chooses the same platform as the second-farthest user, the farthest hub user can stay in platform even when the other hub users are in .) Letting
(22)
for , Proposition 5 becomes
Proposition 7.
For a finite star chain, the strictest effective regulation is if .
Panel (e) in the figure presents the result for .
V-C Regular Tree Network
In an -regular tree network, each user is connected to child nodes. The news source sends the signal to the root node (user 0). The blocking cluster cohesion is like for the star-chain network, but as the distance from the sender increases, the number of users grows exponentially. If users along a path are in the same platform as the sender, then the probability that user receives the signal is .
In platform , the sender’s expected utility is
(23)
The sender in platform maximizes the utility
(24)
subject to condition (21). Proposition 6 holds when replacing function by ,
(25)
In addition, we can show the following.
Proposition 8.
For an infinite regular tree, if and , then .
Proof.
When , the expected number of users who receive the sender’s signal is infinite. On the other hand, with , at most finitely many users move to platform . Therefore, the sender’s utility is infinite in platform and finite in . By Proposition 2, since , the sender should always choose platform . Any strict regulation thus works effectively, i.e., .
When a signal is diffusive, even lower quality of social interaction is sufficient for strict regulation because of exponentially many potential receivers. Distant users are relatively important to the sender, and the dominant platform becomes powerful. As long as the social interaction quality satisfies , platform can enforce any regulation if the signal’s reproduction rate is greater than one. Lower degree trees require higher for strict regulation.
Panel (c) in Figure 3 shows the strictest effective regulation for . In the regular tree (c) and the star chain (b), platform is more capable of enforcing strict regulation than in the linear network (a), due to cohesive blocking clusters. In the regular tree (c), platform is even more capable than in the star chain (b) due to the number of potential receivers. If , a regular tree has for any .
The finite case follows the same argument as Proposition 5. Let
Proposition 9.
For a finite regular tree, the strictest effective regulation is if .
Note that Proposition 8 does not hold since the signal does not spread to infinitely many users. Panel (f) shows the result for a finite tree truncated at the fifth generation.
V-D Stochastic Block Model
[Figure 4]
A stochastic block model is a random graph with communities. Community has users, and a pair of users in communities are friends with probability .
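A stochastic block model is straightforward to sample. The sketch below uses an adjacency-list representation and illustrative parameter names (community sizes as a list, connection probabilities as a matrix indexed by community):

```python
import random

def sample_sbm(sizes, p, seed=0):
    """Sample a stochastic block model: sizes[k] users in community k, and an
    edge between a pair of users in communities k and l drawn independently
    with probability p[k][l]."""
    rng = random.Random(seed)
    labels = [k for k, n in enumerate(sizes) for _ in range(n)]
    m = len(labels)
    adj = {u: [] for u in range(m)}
    for u in range(m):
        for v in range(u + 1, m):
            if rng.random() < p[labels[u]][labels[v]]:
                adj[u].append(v)
                adj[v].append(u)
    return adj, labels
```

Setting the diagonal of p high and the off-diagonal entries low produces the tight, sparsely interconnected communities assumed in the analysis below.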
To provide analytical expressions, we restrict our study to stochastic block models for which the platform reallocation process satisfies the following assumptions.
Assumption 1.
If a user moves to platform , then the other users in the same community do so too.
While this assumption may appear restrictive, we show through numerical simulations in the Appendix that it is satisfied for some stochastic block models when is close to one.
Assumption 2.
Communities move to platform in the order of community (if they relocate).
We restrict our attention to network structures for which we can estimate the order of migration. This class includes various community structures, such as a chain, a star, and a complete graph of communities, as we will see in the following subsections.
Assumption 3.
Friendship across communities is rare (i.e., ), and the first-relocating user in a community has a single friend in the previously relocated communities.
The argument extends naturally to first-relocating users with more external friends, provided their number (not necessarily one) is known. Define user as the first (potentially) relocating user in community .
The following proposition is derived in the Appendix.
Proposition 10.
Proposition 10 emphasizes the role played by and in enabling the platformer to apply regulation . These aggregate numbers can further be related to the basic structure and characteristics of the network if, e.g., they satisfy particular scaling laws. In that case, Proposition 10 can be used to gauge the effect of such parameters as well.
V-D1 Community-chain-type Scaling
Our first example is and decaying with according to
(26)
As explained heuristically in the Appendix, such a scaling can be achieved in a so-called “chain of communities” network. Numerical simulations, also presented in the Appendix, indicate that Assumptions 1–3 hold for these networks when the intra-community tightness (i.e., probability ) is high, which makes them good representatives of the class of networks for which (26) holds and Proposition 10 is applicable. Note that this is all our analysis requires of the network.
For example, consider a base model with and given by
(27)
We utilize Proposition 10 to analyze how the regulation depends on the intermediate community’s “tightness.” As variants of the basic parameter set, instead of for the second community, we consider (case (a1)) and (case (a2)).
Figure 4 shows the strictest regulation for these networks. The simulation does not assume Assumptions 1–3 a priori, on which Proposition 10 is based. The white curves indicate that our mathematical analysis via Proposition 10 fits the simulation results (the color map) of the corresponding physical model. In particular, with high , platform can enforce strict regulation more easily for (a2), featuring a tightly connected intermediate community, than for (a1), which contains a loosely connected one. This agrees with our qualitative understanding. When is high, the sender’s influence on distant users becomes relatively important. Since a tightly connected intermediate community blocks their platform migration, it helps platform retain more power to impose regulation.
V-D2 Community-star-type Scaling
Another example is given by and that, in contrast, do not decay with :
(28)
As explained again in the Appendix, such scalings can be obtained for a star and a complete graph of communities (while satisfying Assumptions 1–3), when the sender is in community and for all . We assume, without loss of generality, . This implies that the sender can attract the first users in community more easily than those in community , since they have fewer friends within the community. Assumption 2 is thus satisfied.
Simulations are conducted for complete graphs of communities, with the sender placed in different communities. The model has four communities. In one simulation setting, the sender is in a small community, and . In another, the sender is in a large community, and . The other parameters are
(29)
so that the network structure itself remains identical in the two settings.
In Figure 4 (b1, b2), the white curves indicate that our mathematical analysis via Proposition 10 is again consistent with the simulation results, which do not presume Assumptions 1–3. With low , platform can enforce strict regulation more easily if the sender is in a large community (panel (b2)) than if it is in a small one (panel (b1)). This agrees with our qualitative intuition. When is low, the sender’s influence on nearby agents becomes relatively important. Since it is difficult for the sender to persuade a large community to move to platform , platform can take advantage of this to impose regulation.
VI Influencer and Neighboring Sympathizers
The previous section demonstrated how the strictest effective regulation is affected by network structure. In particular, we saw how the presence of cohesive clusters can preclude new platform adoption (and hence help the platformer impose stricter regulation), and how the location of pivotal users (who determine the power balance between the sender and the platformer) changes depending on information diffusiveness.
In this section, we consider yet another effect, namely heterogeneity in the users’ preference for their received signal. More precisely, from now on, we assume users differ in how much they appreciate the news information received from the sender and in how biased they are toward unorthodox views.
This is captured by the payoff of user (replacing the common value used up to now). A user with low gets high utility when they correctly estimate the surprising world state, i.e., . A user with high gets high utility when they correctly estimate the unsurprising world state, i.e., . Therefore, users with low , having high , tend to trust a deceitful sender and bet on the unlikely world state. Note that, under the standing assumption of , their default estimate is still . We call users with low sympathizers and users with high non-sympathizers.


VI-A Sympathizers in Linear Network
For an infinite linear network, our previous arguments can be reused with minor modifications, with replaced by . Therefore, a counterpart of Proposition 4 holds, with function in (18) replaced by
(30)
In the simulation for an infinite linear network, the three users closest to the sender are sympathizers with , and the rest are non-sympathizers with . For comparison, the simulation is also conducted for the opposite pattern, , and . As can be seen in Figure 6 (a), it is more difficult (compared to the homogeneous case) for the platformer to impose strict regulation when the users closest to the sender are sympathizers and is low. This can be understood intuitively by noting that, when information is not diffusive, nearby users are relatively important for the sender’s choice of strategy, and the nearby sympathizers therefore give it greater power.
When the information is diffusive, on the other hand, users far from the sender are relatively important, and the sender therefore needs to be more truthful to please the distant non-sympathizers (hence the solid curve dips below the dashed one for large in Figure 6 (a)).
The pattern observed in panel (a) is reversed in (c), as sympathizers become helpful to the sender when is high.
VI-B Community of Sympathizers
For stochastic block models, we consider that users in the same community have the same payoff, but that different communities have different payoffs. With a slight abuse of notation, let denote the payoff parameter for community . Then, the counterpart of Proposition 10 holds as follows.
The simulation is conducted for the previous community-chain model, with and (27). The basic model has for all . Keeping all other parameters fixed, a variant model has , meaning that the sender is in a community of sympathizers. For comparison, another variant has , corresponding to non-sympathizers in the sender’s community. Figure 6 indicates that when the sender has sympathizers nearby, it is difficult for platform to enforce strict regulation when is low. When non-sympathizers are nearby, on the other hand, platform can enforce strict regulation easily when is low. As in the linear network case, when the information is not diffusive, nearby users are relatively important for the sender’s choice of strategy, and nearby sympathizers (non-sympathizers) therefore bring an advantage (disadvantage) to the sender.
VII Discussion and Conclusion
This article provided a novel perspective on the design of regulation policies that a dominant platform can enforce without reducing its user base, thus shedding light on the theoretical efficacy of such misinformation-mitigation measures, and the responsibility of mainstream platforms.
More precisely, we identified the strictest level of regulation a platform can enforce on an information source without losing users to a non-regulated competitor. We focused on simple yet relevant network structures, such as stochastic block models, and used the model to make predictions in a variety of scenarios. In particular, we showed that a tightly connected community prevents new platform adoption thereby helping the currently dominant platform impose strict regulation, especially if the information is diffusive. We also showed that a news source in a large community tends to comply with strict regulation in the popular platform if the information is not diffusive, while a news source in a small community is more likely to relocate to an alternative platform. Finally, a news source surrounded by sympathizers who appreciate the information it emits exerts a greater indirect influence over the platform’s content moderation policy than one surrounded by non-sympathizers. However, when information is diffusive, this effect diminishes, allowing the platform to enforce stricter regulations.
Appendices
VII-A Lemmata for Proposition 1
Lemma 1.
Suppose . In the adoption process, users can switch from platform to but not from to .
Proof.
We show by induction. Suppose the statement is true at iteration . Then, for each user , the number of neighbors in platform (resp. ) decreases (resp. increases) or stays the same. This means , . Also, because the shortest path in the subgraph for platform does not become longer. This means . Note that , and therefore . Hence, and . Therefore, if , then . This means that if a user chooses at iteration , then it chooses at iteration too. By induction, the statement is true at any iteration. ∎
Lemma 2.
Suppose and the network is acyclic. User that switches to platform at iteration is edges away from the sender and has .
Proof.
We show by induction. Suppose the statement is true for .
Let be the set of users who switched to at iteration . Since the graph is acyclic, each user has only one neighbor upstream (closer to the sender) and its other neighbors downstream (farther from the sender). The upstream neighbors of are already in at iteration (otherwise, for , which contradicts the assumption). Moreover, by assumption, all users more than edges away from the sender are in at iteration . Combining these observations, the platform adoption of affects user only if is a downstream neighbor of or already in . Therefore, if a user switches to at iteration , it is a downstream neighbor of . A downstream neighbor of is edges away from the sender and has .
By induction, the statement holds for any . ∎
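The monotone adoption dynamic analyzed in Lemmata 1–2 can be illustrated with a simple threshold best-response sketch. The update rule below is an assumption of this illustration (a user adopts platform B once at least a fraction theta of its neighbors are on B), not the paper’s exact utility comparison; consistent with Lemma 1, the set of B-adopters only grows across iterations.

```python
def adoption_process(neighbors, seed_set, theta):
    """Iterate a threshold best response until no user changes platform.

    Illustrative rule (an assumption of this sketch): user i switches to
    platform B when at least a fraction `theta` of its neighbors are on B.
    Returns the list of B-adopter sets after each iteration.
    """
    on_b = set(seed_set)
    history = [set(on_b)]
    while True:
        switched = {
            i for i in neighbors
            if i not in on_b and neighbors[i]
            and sum(1 for j in neighbors[i] if j in on_b) / len(neighbors[i]) >= theta
        }
        if not switched:
            return history
        on_b |= switched
        history.append(set(on_b))

# Path of 8 users; user 0 (next to the sender) seeds platform B.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
history = adoption_process(path, seed_set={0}, theta=0.5)
```

On the path, adoption spreads one user per iteration, and no user ever switches back, mirroring the one-directional switching of Lemma 1.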
VII-B Finite Linear Network
We have
(31)
with satisfying
(32)
or if . Under its best strategy in platform B, the sender takes for and for .
The strictest effective regulation takes the value of zero if and only if , i.e.,
This leads to Proposition 5.
VII-C Derivation of Proposition 10
Suppose the sender chooses platform . The critical situation for community ’s platform adoption is that the first-adopting user chooses a platform when a friend from community is in platform and other friends in community are in . This user compares
Hence, users in community choose if , where
(33)
This is the sender’s best strategy assuming that communities are in platform and are in at equilibrium. Then, the strictest effective regulation satisfies , i.e.,
(34)
Plugging in and the value of , we obtain the following necessary and sufficient condition for at the equilibrium where communities are in Platform and the rest are in :
Considering all possible assignments of communities to platforms at equilibrium, we obtain Proposition 10.
VII-D Numerical Validation of Assumption 1
To justify Assumption 1 for Proposition 10, we conduct simulations, taking community chains and complete graphs of communities as examples.
For variants of the base model of community chain ( and (27)), instead of , we consider . Figure 7 shows the number of users who choose platform when platform sets a regulation that is too strict, . For a large , the number of users in platform is 0, 30, 60, or 90, indicating that all users in a community choose the same platform. On the other hand, for a small , users in a community do not necessarily choose the same platform. The simulation result is for a single sample of the random networks.
Similar results are obtained in simulations for the complete graph of communities.

To discuss this property of community migration, we introduce a new metric. Let be the number of users in community who choose platform . Then, define the number of users’ irregular choices as
(35)
This takes the value zero when all users in each community choose the same platform, and its maximum value when half of the users in each community choose a platform different from the other half.
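The metric (35) can be computed directly from per-community platform counts. In the sketch below, each community contributes its minority count min(n_A, n − n_A); this is an illustrative formulation chosen to match the stated extremes (zero under community-unanimous choices, maximal under half/half splits), not necessarily the paper’s exact expression.

```python
def irregular_choices(counts_a, sizes):
    """Number of users' irregular choices, in the spirit of (35).

    counts_a[c] users in community c chose platform A out of sizes[c]
    users; each community contributes its minority count
    min(n_A, n - n_A), i.e., the distance from unanimity.
    """
    return sum(min(n_a, n - n_a) for n_a, n in zip(counts_a, sizes))

# Three communities of 30 users each (illustrative counts).
print(irregular_choices([30, 0, 30], [30, 30, 30]))   # unanimous communities: 0
print(irregular_choices([15, 15, 15], [30, 30, 30]))  # half/half splits: 45 (maximum)
```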
Figure 8 presents the number of users’ irregular choices. The simulation is conducted for 50 samples, fixing and . As in Figure 7, the results consistently indicate that when users are tightly connected within a community (i.e., is high), all users in a community choose the same platform. This simulation result supports Assumption 1 for Proposition 10.

VII-E Community Structures Applicable to Proposition 10
VII-E1 Community Chain
The chain of communities is introduced as a physical model that satisfies (26).
Heuristically, a possible way to obtain such a scaling is by considering a community chain with
(36)
(37)
(38)
Roughly speaking, (36) means that most user pairs in the same community are friends; (37) means that although communities are connected, most users do not have friends in other communities; (38) means that in most cases, no user in community has friends both in community and community . Under these conditions, it should be intuitive that most users are two edges away from the nearest neighbor in an adjacent community, thus leading to (26).
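To make the scaling conditions (36)–(38) concrete, the snippet below assembles an illustrative connection-probability matrix for a chain of communities: a high intra-community probability on the diagonal, a small probability between adjacent communities, and zero elsewhere. The numeric values are placeholders for illustration, not the paper’s parameters.

```python
def chain_probability_matrix(n_comms, p_in=0.9, p_next=0.01):
    """Connection probabilities for a chain of communities.

    p[c][c] = p_in (tight communities, cf. (36)); p[c][c+1] = p_next
    (rare links between adjacent communities only, cf. (37)-(38));
    all other entries are zero.
    """
    p = [[0.0] * n_comms for _ in range(n_comms)]
    for c in range(n_comms):
        p[c][c] = p_in
        if c + 1 < n_comms:
            p[c][c + 1] = p[c + 1][c] = p_next
    return p

p = chain_probability_matrix(4)
```

This matrix can be fed to any stochastic-block-model sampler to generate networks of the community-chain type.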
VII-E2 Star/Complete Graph of Communities
In order to obtain (28), consider a stochastic block model with the sender in community 1 and for all . In addition to (36), we assume
(39)
(40)
Equation (39) means that in most cases, a user in community (resp. 1) does not have friends in community 1 (resp. ), but there are some inter-community links between 1 and . Equation (40) means that edges between peripheral communities do not exist or are negligible. These assumptions imply that the first-adopting user in community is two edges away from the sender and most others are three edges away; hence (28).
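Analogously, conditions (36), (39), and (40) correspond to a star of communities. The snippet below builds an illustrative probability matrix with the sender’s community as the hub (indexed 0 here for convenience, whereas the text indexes it as community 1); the numeric values are again placeholders.

```python
def star_probability_matrix(n_comms, p_in=0.9, p_hub=0.01):
    """Connection probabilities for a star of communities.

    p[c][c] = p_in (tight communities, cf. (36)); p[0][c] = p_hub for
    c >= 1 (a few links between the hub community and each peripheral
    one, cf. (39)); peripheral-to-peripheral probabilities are zero
    (cf. (40)).
    """
    p = [[0.0] * n_comms for _ in range(n_comms)]
    for c in range(n_comms):
        p[c][c] = p_in
        if c >= 1:
            p[0][c] = p[c][0] = p_hub
    return p

p_star = star_probability_matrix(4)
```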
References
- [1] S. Jhaver, C. Boylston, D. Yang, and A. Bruckman, “Evaluating the effectiveness of deplatforming as a moderation strategy on twitter,” Proc. of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–30, 2021.
- [2] T. Romm and E. Dwoskin, “Twitter adds labels for tweets that break its rules - a move with potentially stark implications for trump’s account,” Washington Post, 2019.
- [3] S. Vaidhyanathan, “Facebook and the folly of self-regulation,” WIRED, 2020.
- [4] C. Shapiro and H. R. Varian, Information Rules: A Strategic Guide to the Network Economy. Boston: Harvard Business School Press, 1999.
- [5] R. Devaux, “Should i stay or should i go? platform advertising in times of boycott,” Preprint, 02 2023. [Online]. Available: https://www.researchgate.net/publication/368386572_Should_I_Stay_or_Should_I_Go_Platform_Advertising_in_Times_of_Boycott
- [6] L. Madio and M. Quinn, “User-generated content, strategic moderation, and advertising,” Preprint, vol. 2, 2020.
- [7] J. Rietveld and M. A. Schilling, “Platform competition: A systematic and interdisciplinary review of the literature,” Journal of Management, vol. 47, no. 6, pp. 1528–1563, 2021.
- [8] R. Gorwa, “What is platform governance?” Information, Communication & Society, vol. 22, no. 6, pp. 854–871, 2019.
- [9] F. A. Chowdhury, D. Saha, M. R. Hasan, K. Saha, and A. Mueen, “Examining factors associated with twitter account suspension following the 2020 us presidential election,” in Proc. of the 2021 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 2021, pp. 607–612.
- [10] S. Myers West, “Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms,” New Media & Society, vol. 20, no. 11, pp. 4366–4383, 2018.
- [11] M. Horta Ribeiro, S. Jhaver, S. Zannettou, J. Blackburn, G. Stringhini, E. De Cristofaro, and R. West, “Do platform migrations compromise content moderation? evidence from r/the_donald and r/incels,” Proc. of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–24, 2021.
- [12] H. Innes and M. Innes, “De-platforming disinformation: conspiracy theories and their control,” Information, Communication & Society, pp. 1–19, 2021.
- [13] V. Tzoumas, C. Amanatidis, and E. Markakis, “A game-theoretic analysis of a competitive diffusion process over social networks,” in Internet and Network Economics: 8th International Workshop, WINE 2012, Liverpool, UK, December 10-12, 2012. Proceedings 8. Springer, 2012, pp. 1–14.
- [14] Y. E. Bayiz and U. Topcu, “Countering misinformation on social networks using graph alterations,” arXiv preprint arXiv:2211.04617, 2022.
- [15] L. Damonte, G. Como, and F. Fagnani, “Targeting interventions for displacement minimization in opinion dynamics,” in 2022 IEEE 61st Conference on Decision and Control (CDC). IEEE, 2022, pp. 7023–7028.
- [16] S. Sasaki and C. Langbort, “A model of information regulation in the presence of competition between social media platforms,” IFAC-PapersOnLine, vol. 55, no. 13, pp. 186–191, 2022.
- [17] E. Kamenica and M. Gentzkow, “Bayesian persuasion,” American Economic Review, vol. 101, no. 6, pp. 2590–2615, 2011.
- [18] G. Egorov and K. Sonin, “Persuasion on networks,” National Bureau of Economic Research, Tech. Rep., 2020.
- [19] D. Ediger, K. Jiang, J. Riedy, D. A. Bader, C. Corley, R. Farber, and W. N. Reynolds, “Massive social network analysis: Mining twitter for social good,” in 2010 39th International Conference on Parallel Processing, 2010, pp. 583–593.
- [20] S. Morris, “Contagion,” The Review of Economic Studies, vol. 67, no. 1, pp. 57–78, 01 2000.
- [21] E. Croes and J. Bartels, “Young adults’ motivations for following social influencers and their relationship to identification and buying behavior,” Computers in Human Behavior, vol. 124, 2021.
- [22] B. Hebert and W. Zhong, “Engagement maximization,” 2022. [Online]. Available: https://arxiv.org/pdf/2207.00685.pdf
- [23] I. Himelboim, M. A. Smith, L. Rainie, B. Shneiderman, and C. Espina, “Classifying twitter topic-networks using social network analysis,” Social Media + Society, vol. 3, no. 1, 2017.
- [24] M. McPherson, L. Smith-Lovin, and J. M. Cook, “Birds of a feather: Homophily in social networks,” Annual review of sociology, vol. 27, no. 1, pp. 415–444, 2001.
- [25] S. Currarini, M. O. Jackson, and P. Pin, “An economic model of friendship: Homophily, minorities, and segregation,” Econometrica, vol. 77, no. 4, pp. 1003–1045, 2009.
- [26] M. O. Jackson and E. C. Storms, “Behavioral communities and the atomic structure of networks,” 2017. [Online]. Available: https://arxiv.org/abs/1710.04656
- [27] F. Parise and A. Ozdaglar, “Graphon games,” in Proceedings of the 2019 ACM Conference on Economics and Computation, ser. EC ’19. New York, NY, USA: Association for Computing Machinery, 2019, p. 457–458.
So Sasaki is working toward the Ph.D. degree at the University of Illinois at Urbana–Champaign, Urbana, IL, USA. He is currently with the Decision and Control Group, Coordinated Science Lab, University of Illinois at Urbana–Champaign. His research interests include game theory, optimization, control, graph theory, and machine learning, with applications to social media.
Cédric Langbort received the Ph.D. degree in Theoretical and Applied Mechanics from Cornell University, Ithaca, NY, USA, in 2005. He is currently a Professor of Aerospace Engineering with the University of Illinois at Urbana–Champaign, Champaign, IL, USA. He is also with the Decision and Control Group, Coordinated Science Lab (CSL), where he works on the applications of control, game, and optimization theory to a variety of fields.