Receiver-Oriented Cheap Talk Design
Abstract
This paper considers the dynamics of cheap talk interactions between a sender and a receiver, departing from conventional models by focusing on the receiver's perspective. We study two models: one with transparent motives and another in which the receiver can filter the information that is accessible to the sender. We give a geometric characterization of the best receiver equilibrium under transparent motives and prove that the receiver does not benefit from filtering information in this case. In general, however, we show that the receiver can strictly benefit from filtering, and we provide efficient algorithms for computing optimal equilibria. This analysis is motivated by user-based platforms in which receivers (users) control the information accessible to senders (sellers). Our findings provide insights into communication dynamics, level the sender's inherent advantage, and offer predictions of strategic interactions.
1 Introduction
The conventional sender-receiver model focuses on the interaction between an informed sender, who is fully aware of the state of the game, and an oblivious receiver. In these interactions, the sender must disclose some of its information to the receiver in order to influence the receiver's behavior, and the receiver must then play an action that affects the utility of both players. Relevant information disclosure models include cheap talk [10, 17], Bayesian persuasion [16], and their extensions to multiple senders and receivers [12, 7, 1, 2, 3, 4]. Traditionally, these interactions are viewed from the point of view of the sender. However, in order to get a complete picture of the communication model, it is also critical to consider the receiver's perspective. Moreover, since the sender, as the informed side, has an inherent advantage in cheap talk models, analyzing the best outcomes for the receiver can level this advantage and provide a more reliable prediction of strategic interactions.
We focus on interactions where the sender and the receiver can communicate via cheap talk and analyze two settings. First, we consider a setting where the sender has transparent motives (i.e., where the sender has a state-independent utility) in a similar fashion as Lipnowski and Ravid [17], and we follow with a setting where there is a pre-play stage before the communication phase in which the receiver can filter the information that the sender receives about the realized state.
Our first result is a geometric characterization of the best sequential equilibrium for the receiver when the sender has transparent motives. It follows from this characterization that (a) the maximum utility that the receiver can get in equilibrium can be computed as a function of the prior by taking the maximum of finitely many linear functions, and (b) the receiver cannot improve her utility by applying filters when the sender has transparent motives. We continue by showing that, in general (i.e., when the sender does not necessarily have transparent motives), the receiver can strictly benefit from filtering information. Moreover, if the receiver can filter, we provide an algorithm that outputs the best sequential equilibrium for the receiver when her action set is binary. Our algorithm runs in $O(n \log n)$ time, where $n$ is the number of possible states. For our final result, we consider the same setting but with two senders instead, and reduce the problem of finding the best sequential equilibrium for the receiver to a linear programming instance whose numbers of variables and constraints are polynomial in $n$. If there are three or more senders, it is easy to check that the receiver does not benefit from filtering information (see Section 4 for more details).
A novelty in the second part of our analysis is giving the receiver the advantage of determining the information that the sender receives. Intuitively, we can see this setting as the dual of Bayesian persuasion [16]. More precisely, in Bayesian persuasion settings, the sender can freely choose the mechanism by which the information about the realized state is disclosed to the receiver. By contrast, in our setting, the receiver can freely choose the mechanism by which the information about the realized state is disclosed to the sender (note, however, that the sender has to disclose this information to the receiver via cheap talk, which means that the sender can still manipulate the information received if it is in her interest to do so). This aspect is motivated by several scenarios, for instance:
• Interacting with search engines like Google, Yahoo! or Bing. Here, the search engine plays the role of a sender that can access the necessary information about a given query on a large database, and the user plays the role of a receiver that can impact the outcome of her interaction with the search engine by using specific keywords that influence how the information in the database is retrieved by the search engine.
• Scenarios in which the state of the game is a function of both the sender's and the receiver's information and the sender should propose an agreement to the receiver (as, for example, in big economic transactions or in diplomacy and politics). In these cases, the receiver should strategically decide how much information it should disclose to the sender in order to influence her proposal.
• Online platforms, such as Amazon, store a significant amount of information about their users. They have the ability to determine which information is disclosed to sellers, who, in turn, can utilize this data to target users through advertisements. Our model essentially examines the scenario in which the platform acts on behalf of the average user, disclosing information to sellers so as to maximize the utility of the average buyer in equilibrium.
Our analysis attempts to model these scenarios more accurately.
1.1 Related Literature
Our work on the setting with transparent motives is closely related to that of Lipnowski and Ravid [17], who studied a cheap talk model where the sender has transparent motives and showed that the best sender equilibrium is the quasi-concave envelope of the sender's utility function. Our result is complementary, since we give a geometric characterization of the best equilibrium for the receiver. In fact, we show that the best equilibrium for the receiver can be computed as the maximum of finitely many affine functions of the prior.
Restricting the information available to the sender has been increasingly gaining traction in the community. Most notably, Bergemann, Brooks and Morris [6] considered a buyer-seller setting and studied possible ways to limit the seller's information. In their work, they characterized the set of all pairs of possible utilities achievable in equilibrium. In particular, they provided an algorithm that computes the optimal way to limit the seller's information so as to maximize the expected buyer's utility. Ichihashi [14] considered a Bayesian persuasion setting and studied how the outcome of the interaction is affected when the sender's information is restricted. One of their results is that, if the receiver restricts the sender's information in a pre-play stage, the best utility that the receiver can get in this setting coincides with the one that the receiver would get in the "flipped game", where the receiver persuades the sender. Other papers have studied similar questions in the context of common-interest coordination games [9, 11] and cheap talk games [15, 13]. The main differences with our work are the facts that (a) we have no restrictions on the utility of the sender or that of the receiver, and (b) we focus on the best equilibrium for the receiver.
The rest of the paper is organized as follows. In Section 2 we introduce the cheap talk setting where the sender has transparent motives. We also provide a geometric characterization of the best equilibrium for the receiver. In Section 3 we introduce filtered information aggregation games, where the receiver can limit the information of the senders. In Section 4 we state our main results regarding the characterization of the best sequential equilibrium for the receiver in filtered information aggregation games, and these are proved in Sections 5, 6 and 7, where we also show how to extend the results of Section 2 to sequential equilibria. We end with a conclusion in Section 8.
2 Cheap Talk with Transparent Motives
2.1 Model
An information transmission game involves a sender $S$ and a receiver $R$, and is defined by a tuple $\Gamma = (A, \Theta, p, M, u)$, where $A$ is the set of actions, $\Theta$ is the set of possible states, $p$ is a commonly known prior distribution over $\Theta$ that assigns a strictly positive probability to each possible state, $M$ is a finite set that contains the messages that the sender can send to the receiver ($M$ is usually assumed to be the set of binary strings of length at most $L$, for some positive integer $L$), and $u$ is a utility function such that $u_i(a, \theta)$ gives the utility of player $i$ (where $i$ is either the sender or the receiver) when action $a$ is played at state $\theta$. Each information transmission game instance is divided into three phases. In the first phase, a state $\theta$ is sampled from $\Theta$ following the prior distribution $p$, and this state is disclosed to the sender $S$. During the second phase, the sender sends a message $m \in M$ to the receiver, and in the third phase the receiver plays an action $a \in A$, after which each player $i$ receives utility $u_i(a, \theta)$.
Given an information transmission game $\Gamma$, a strategy profile $\sigma = (\sigma_S, \sigma_R)$ for $\Gamma$ consists of a pair of strategies for the sender and the receiver, where $\sigma_S$ is a map from $\Theta$ to distributions over $M$ (which we denote by $\Delta(M)$), and $\sigma_R$ is a map from $M$ to $\Delta(A)$. We say that $\sigma$ is a Nash equilibrium if no player can increase its utility by deviating from $\sigma$. More precisely, if we denote by $u_i(\sigma)$ the expected utility that player $i$ gets when the players play $\sigma$, then $\sigma$ is a Nash equilibrium if
$$u_i(\sigma) \ge u_i(\sigma'_i, \sigma_{-i})$$
for all players $i$ and all strategies $\sigma'_i$ for $i$.
For future reference, some of the results in this paper require a stronger notion of equilibrium known as sequential equilibrium. A sequential equilibrium consists of a strategy profile $\sigma$ and a belief system $\mu$ that (in this case) assigns a posterior distribution over $\Theta$ to each possible message in $M$. Intuitively, $\mu(m)$ represents the receiver's beliefs about the posterior distribution over $\Theta$ given the message $m$ that it receives from the sender. The pair $(\sigma, \mu)$ forms a sequential equilibrium if:
• $\mu$ is consistent with $\sigma$, which means that there exists a sequence of completely mixed sender strategies that converges to $\sigma_S$ such that the corresponding sequence of maps from messages to posterior distributions over $\Theta$, obtained by applying Bayes' rule, converges to $\mu$.
• $\sigma_R(m)$ is, according to $\mu(m)$, a best response for the receiver, for all messages $m \in M$.
In this section, we are interested in information transmission games in which the sender has transparent motives. This means that the sender's utility depends only on the action played by the receiver, and not on the state $\theta$. Thus, for simplicity, we will write $u_S(a)$ for the utility of the sender and $u_R(a, \theta)$ for that of the receiver. More precisely, $u_S(a) = u_S(a, \theta)$ for all $a \in A$ and $\theta \in \Theta$.
2.2 Characterizing the Best Equilibrium for the Receiver
In this section, we are interested in finding the best Nash equilibrium for the receiver given an information transmission game with transparent motives. We show how to extend this equilibrium to a sequential equilibrium in Section 7.
We start by considering the case where the state space is binary (i.e., $\Theta = \{\theta_0, \theta_1\}$). We can assume without loss of generality that there exist real numbers $0 = c_0 \le c_1 \le \cdots \le c_k = 1$ such that action $a_i$ is optimal for the receiver if and only if the posterior probability of state $\theta_1$ after her interaction with the sender lies in the interval $[c_{i-1}, c_i]$. To see this, note that the expected utility of the receiver given some action $a$ is a linear function of the posterior that goes from $u_R(a, \theta_0)$ at posterior 0 to $u_R(a, \theta_1)$ at posterior 1. Therefore, the set of points at which an action $a$ gives the maximal utility over all other actions is an interval in $[0, 1]$. If this interval is empty for some action $a$, it means that $a$ will never be played under any circumstances, since there will always be better options for the receiver. This implies that, without loss of generality, we can simply remove $a$ from $A$ and keep only those actions that are maximal at some point.
Using a slight abuse of notation, let $u_R(a, q)$ denote the expected utility of the receiver given action $a$ and posterior probability $q$ of $\theta_1$. Note that $u_R(a, q)$ can be computed by the following expression:
$$u_R(a, q) = (1 - q)\,u_R(a, \theta_0) + q\,u_R(a, \theta_1).$$
For each $0 \le i \le k$, let $I_i = [\min\{u_S(a_i), u_S(a_{i+1})\},\ \max\{u_S(a_i), u_S(a_{i+1})\}]$ (using the convention that $a_0 = a_1$ and $a_{k+1} = a_k$), and let $P$ be the set of pairs $(i, j)$ with $i \le j$ such that $I_i \cap I_j \ne \emptyset$. At a high level, $(i, j) \in P$ if the optimal actions with prior $c_i$ and the optimal actions with prior $c_j$ can be combined in such a way that the utility of the sender is equal in both combinations.

Let
$$f(q) = \max\big\{\, \lambda\,V(c_i) + (1 - \lambda)\,V(c_j) \;:\; (i, j) \in P,\ c_i \le q \le c_j \,\big\},$$
where $V(c_\ell) = u_R(a_\ell, c_\ell)$ denotes the receiver's optimal expected utility at posterior $c_\ell$ and $\lambda$ is the value in $[0, 1]$ that satisfies $q = \lambda\,c_i + (1 - \lambda)\,c_j$. The following theorem states that the best utility for the receiver in an information transmission game with transparent motives is given by $f$.
Theorem 1.
Given an information transmission game $\Gamma$ with $\Theta = \{\theta_0, \theta_1\}$ and transparent motives, the maximum utility that the receiver can get in a Nash equilibrium is given by $f(q)$, where $q$ is the prior probability that the realized state is $\theta_1$.
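For intuition, the characterization can be evaluated directly. The following sketch (with hypothetical helper names; it is quadratic in the number of thresholds, unlike the faster algorithm of Section 2.4) computes $f(q)$ given the thresholds $c_0, \ldots, c_k$, the receiver's optimal utilities $V(c_0), \ldots, V(c_k)$, and a predicate deciding membership in $P$:

def best_receiver_utility(q, c, V, in_P):
    """Evaluate f(q) = max over pairs (i, j) in P with c[i] <= q <= c[j]
    of the affine interpolation of V between c[i] and c[j].
    c and V are parallel lists; in_P(i, j) decides membership in P."""
    best = float("-inf")
    for i in range(len(c)):
        for j in range(i, len(c)):
            if c[i] <= q <= c[j] and in_P(i, j):
                if c[i] == c[j]:
                    val = V[i]
                else:
                    lam = (c[j] - q) / (c[j] - c[i])
                    val = lam * V[i] + (1 - lam) * V[j]
                best = max(best, val)
    return best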
Before proving Theorem 1, we illustrate its use via the following simple example. Consider an information transmission game $\Gamma$ with transparent motives in which $\Theta = \{\theta_0, \theta_1\}$ and the utilities are defined as follows:
[Table of sender and receiver utilities for each action.]
If we plot the utility of the receiver in a bubbling equilibrium in which no information is revealed, we get the following figure, where the dotted line represents the utility of the receiver as a function of the prior probability that the realized state is $\theta_1$.
If we follow the construction of Theorem 1, we obtain the thresholds $c_0, \ldots, c_k$ and the set $P$ of compatible pairs. It then follows from Theorem 1 that the function that represents the receiver's best utility in a Nash equilibrium is obtained by taking the maximum of the bubbling-equilibrium utility and four segments, each connecting a pair of points of the form $(c_i, V(c_i))$ and $(c_j, V(c_j))$ with $(i, j) \in P$, which are represented in the figure below with a solid green line.
Thus, the maximum utility that the receiver can get in a Nash equilibrium of $\Gamma$ is represented by the solid black line in the figure below.
Next, we proceed with the proof of Theorem 1.
Proof of Theorem 1.
First we show that the receiver can guarantee at least $f(q)$ in a Nash equilibrium. Let $q$ be the prior probability that the realized state is $\theta_1$ and suppose that $c_i \le q \le c_j$ for some pair $(i, j) \in P$ attaining the maximum in the definition of $f$. Since $(i, j) \in P$, it follows that $I_i \cap I_j \ne \emptyset$. Therefore, by definition, we can find $\alpha, \beta \in [0, 1]$ such that
$$\alpha\,u_S(a_i) + (1 - \alpha)\,u_S(a_{i+1}) = \beta\,u_S(a_j) + (1 - \beta)\,u_S(a_{j+1}). \tag{1}$$
We next describe a cheap talk equilibrium that gives the receiver a utility of $f(q)$. Using the fact that $q = \lambda\,c_i + (1 - \lambda)\,c_j$ for some $\lambda \in [0, 1]$, the splitting lemma of Aumann and Maschler [5] implies that there exists a communication protocol such that (a short derivation follows the list below):
• The sender sends a message $m_i$ with probability $\lambda$ and another message $m_j$ with probability $1 - \lambda$.
• The posterior probability of $\theta_1$ after the receiver receives message $m_i$ is $c_i$, and the posterior probability of $\theta_1$ after the receiver receives message $m_j$ is $c_j$.
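For completeness, the probabilities used by this protocol can be derived explicitly (a short sketch in the notation above):
$$\lambda\,c_i + (1 - \lambda)\,c_j = q \;\Longrightarrow\; \lambda = \frac{c_j - q}{c_j - c_i},$$
$$\Pr[m_i \mid \theta_1] = \frac{\lambda\,c_i}{q}, \qquad \Pr[m_i \mid \theta_0] = \frac{\lambda\,(1 - c_i)}{1 - q},$$
so that $\Pr[m_i] = \lambda$ and, by Bayes' rule, $\Pr[\theta_1 \mid m_i] = c_i$; the probabilities for $m_j$ are analogous.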
Consider such a communication protocol in which, in the last phase, the receiver plays action $a_i$ with probability $\alpha$ and action $a_{i+1}$ with probability $1 - \alpha$ whenever she receives message $m_i$. If she receives message $m_j$ instead, she plays $a_j$ with probability $\beta$ and $a_{j+1}$ with probability $1 - \beta$. If the receiver receives a message other than $m_i$ or $m_j$, she plays the action that gives the least possible utility to the sender. By construction, since the posterior probability of $\theta_1$ after receiving message $m_i$ is $c_i$, both $a_i$ and $a_{i+1}$ are best responses for the receiver. Similarly, since the posterior probability of $\theta_1$ after receiving message $m_j$ is $c_j$, so are $a_j$ and $a_{j+1}$ in this scenario. Altogether, this implies that the receiver best-responds to the sender's strategy. To see that this strategy profile is also incentive-compatible for the sender, note that, by Equation (1), the sender is indifferent between sending $m_i$ and $m_j$. Moreover, sending any other message would give her less utility than sending $m_i$ or $m_j$. This shows that the receiver can indeed achieve utility $\lambda\,V(c_i) + (1 - \lambda)\,V(c_j) = f(q)$ in equilibrium.
We next show that the receiver's utility is at most $f(q)$ in any Nash equilibrium. It follows from standard extreme-point considerations that, if there exists a Nash equilibrium $\sigma$ in which the sender sends three or more messages with strictly positive probability, there exists another Nash equilibrium $\sigma'$ in which the sender sends at most two different messages and both the sender and the receiver get at least as much utility with $\sigma'$ as with $\sigma$. Thus, for the rest of the proof, we'll restrict our analysis to communication protocols in which the sender sends at most two different messages with positive probability.
Suppose that the sender only sends one message with strictly positive probability. This means that the receiver plays her action with no additional information about the realized state, except for the prior. Let $q$ be the prior probability that the realized state is $\theta_1$ and let $i$ be an index such that $c_{i-1} \le q \le c_i$. Since, in this case, the posterior distribution is the same as the prior distribution, $a_i$ is the receiver's best response, and her expected utility is given by $u_R(a_i, q)$. We can rewrite this as an affine combination of $V(c_{i-1})$ and $V(c_i)$ with the following expression:
$$u_R(a_i, q) = \lambda\,u_R(a_i, c_{i-1}) + (1 - \lambda)\,u_R(a_i, c_i) = \lambda\,V(c_{i-1}) + (1 - \lambda)\,V(c_i),$$
where $\lambda$ is the value in $[0, 1]$ such that $q = \lambda\,c_{i-1} + (1 - \lambda)\,c_i$. Since $(i-1, i) \in P$ (both $I_{i-1}$ and $I_i$ contain $u_S(a_i)$), this shows that, by definition, $u_R(a_i, q) \le f(q)$, and therefore the receiver cannot get more utility than $f(q)$ in Nash equilibria where the sender sends only one message.
Assume instead that the sender sends two messages $m_1$ and $m_2$ such that the posterior probabilities that the realized state is $\theta_1$ after the receiver receives $m_1$ and $m_2$ are $q_1$ and $q_2$, respectively. We first claim that we can assume without loss of generality that $q_1 = c_i$ and $q_2 = c_j$ for some $i$ and $j$. Suppose that $c_{i-1} < q_1 < c_i$ for some $i$. Then, the unique best response for the receiver when receiving message $m_1$ is to play action $a_i$ with probability 1. We can then construct another equilibrium $\sigma'$ in which message $m_1$ is split into two messages inducing posteriors $c_{i-1}$ and $c_i$, while the posterior after receiving message $m_2$ remains unchanged (note that the conditional probabilities of sending each of the new messages are uniquely determined by Bayes' plausibility constraints). By definition, playing $a_i$ is still a best response for the receiver to both new messages in $\sigma'$. Moreover, it is straightforward to check that the sender receives the same utility in the new equilibrium as in the original one. It remains to check that the receiver also (weakly) increases her utility. To see this, note that the new posterior distribution is a mean-preserving spread of the original one. Therefore, the new posterior distribution Blackwell-dominates the original distribution, and by Blackwell's theorem [8] the receiver is (weakly) better off. We can therefore assume from now on that $q_1 = c_i$ and $q_2 = c_j$ for some $i$ and $j$.
We next show that the posterior probabilities $c_i$ and $c_j$ described in the paragraph above satisfy $(i, j) \in P$. To see this, note that the sender's expected utility must be equal when sending $m_1$ and $m_2$ (otherwise, she could increase her utility by sending one of these messages with probability 1 and the other with probability 0). By definition, the only best responses for the receiver given posterior probabilities $c_i$ and $c_j$ are mixtures of $a_i$ and $a_{i+1}$, and of $a_j$ and $a_{j+1}$, respectively. Suppose that the receiver plays $a_i$ with probability $\alpha$ when receiving message $m_1$, and plays $a_j$ with probability $\beta$ when receiving message $m_2$. Since the sender is indifferent between $m_1$ and $m_2$, $\alpha$ and $\beta$ must satisfy
$$\alpha\,u_S(a_i) + (1 - \alpha)\,u_S(a_{i+1}) = \beta\,u_S(a_j) + (1 - \beta)\,u_S(a_{j+1}),$$
which implies that this common value belongs to $I_i \cap I_j$, and therefore that $(i, j) \in P$. This implies that we can represent the receiver's utility as the affine combination
$$\lambda\,V(c_i) + (1 - \lambda)\,V(c_j) \le f(q),$$
where $\lambda$ is the unique value in $[0, 1]$ such that $q = \lambda\,c_i + (1 - \lambda)\,c_j$ (note that this equality must hold because of Bayes' plausibility constraints). This completes the proof of Theorem 1. ∎
2.3 General State Space
In this section we generalize the proof of Theorem 1 to arbitrary information transmission games with transparent motives. At a high level, both the statement and the proof are quite similar to the case with binary states, although they require additional notation.
Let $\Gamma$ be an information transmission game with transparent motives, where $\Theta = \{\theta_1, \ldots, \theta_n\}$ and $A = \{a_1, \ldots, a_k\}$. As in the proof of Theorem 1, we identify the set of posterior distributions over $\Theta$ with $\Delta^{n-1}$ (the $(n-1)$-dimensional simplex) and assume without loss of generality that there exists no strictly dominated action. More generally, we assume without loss of generality that all actions are optimal at some posterior distribution, that is, for all actions $a \in A$, the set
$$\{\, q \in \Delta^{n-1} \;:\; u_R(a, q) \ge u_R(a', q) \text{ for all } a' \in A \,\}$$
is non-empty (where $u_R(a, q)$ denotes the receiver's expected utility from $a$ under posterior $q$).
Let $P_1, \ldots, P_k$ be the polytopes contained in $\Delta^{n-1}$ such that $P_i$ is the set of points $q$ for which action $a_i$ is a best response given that the posterior distribution is $q$. Given a point $q \in \Delta^{n-1}$, let $B(q)$ be the set of indices $i$ such that $q \in P_i$, and let $I(q) = [\min_{i \in B(q)} u_S(a_i),\ \max_{i \in B(q)} u_S(a_i)]$. Intuitively, $I(q)$ is the interval ranging from the minimum to the maximum utility that the sender can get with a receiver's best response for $q$. Let $\mathcal{V}$ be the union of the sets of vertices of $P_1, \ldots, P_k$ (i.e., $\mathcal{V}$ is the set of points that are vertices of at least one of the polytopes), and let $\mathcal{P}$ be the set of tuples of points of $\mathcal{V}$ of size at most $n$ such that the intersection of all of the corresponding intervals is non-empty. More precisely,
$$\mathcal{P} = \Big\{\, (v_1, \ldots, v_m) \in \mathcal{V}^m \;:\; m \le n \text{ and } \bigcap_{\ell = 1}^{m} I(v_\ell) \ne \emptyset \,\Big\}.$$
Moreover, given any tuple $\vec{v} = (v_1, \ldots, v_m) \in \mathcal{P}$, let $C(\vec{v})$ be the convex hull of $\{v_1, \ldots, v_m\}$ (i.e., the set of all convex combinations of these points). Let
$$f(q) = \max\Big\{\, \sum_{\ell = 1}^{m} \lambda_\ell\, u_R\big(a_{i(v_\ell)}, v_\ell\big) \;:\; \vec{v} \in \mathcal{P},\ q \in C(\vec{v}) \,\Big\},$$
where $a_{i(v_\ell)}$ is the action in $B(v_\ell)$ that has the lowest index and $\lambda_1, \ldots, \lambda_m$ are affine coefficients such that $q = \sum_{\ell} \lambda_\ell\, v_\ell$. The generalization of Theorem 1 is as follows.
Theorem 2.
Given an information transmission game $\Gamma$ with transparent motives, the maximum utility that the receiver can get in a Nash equilibrium is given by $f(p)$, where $p$ is the prior distribution.
Proof.
Showing that $f(p)$ is achievable by the receiver in a Nash equilibrium is analogous to the case with two states. However, the converse direction requires a slightly different reasoning. As in the case with binary states, for every Nash equilibrium in which the sender sends $n + 1$ or more messages with positive probability (where $n$ is the number of states), there exists a Nash equilibrium in which the sender sends $n$ or fewer messages and that gives the same utility to the sender and the receiver.
Suppose that $\sigma$ is a Nash equilibrium where the sender sends $t$ different messages $m_1, \ldots, m_t$ with positive probability. For each $\ell$, denote by $q_\ell$ the posterior distribution over states after the receiver receives message $m_\ell$ when the sender plays $\sigma_S$. As in the case of two states, we claim that, without loss of generality, we can assume that $q_\ell \in \mathcal{V}$ for every $\ell$. More precisely, we show next that, for every equilibrium $\sigma$, there exists a possibly different equilibrium $\sigma'$ in which each message induces a posterior $q'_\ell$ such that (a) $q'_\ell \in \mathcal{V}$ for all $\ell$, (b) the sender gets the same utility in $\sigma$ and $\sigma'$, and (c) the receiver gets a (weakly) higher utility in $\sigma'$ than in $\sigma$.
Assume that $q_\ell \notin \mathcal{V}$ for some $\ell$. We claim that we can find a distribution $\eta$ over $\mathcal{V}$ such that (i) the mean of $\eta$ is $q_\ell$, and (ii) if $a$ is a best response for the receiver given the posterior distribution $q_\ell$, then $a$ is also a best response for the receiver at every $v$ in the support of $\eta$. To see this, let $B$ be the set of actions that are a best response for the receiver at $q_\ell$ and let $Q = \bigcap_{a_i \in B} P_i$. Since $Q$ is a convex polytope that contains $q_\ell$, we can write $q_\ell$ as a convex combination of the vertices of $Q$ (where the coefficients represent the probabilities of each vertex). All such vertices satisfy the above property (ii).
Assume that, in $\sigma$, each message $m_\ell$ is sent with total probability $\pi_\ell$. We define $\sigma'$ as follows. The strategy of the receiver is the same as in $\sigma$ (with each new message treated as $m_\ell$). The sender does the following. For all $\ell' \ne \ell$, the sender sends message $m_{\ell'}$ with total probability $\pi_{\ell'}$ (the same probability as in $\sigma$). However, instead of sending message $m_\ell$ with probability $\pi_\ell$, in $\sigma'$ the sender sends a fresh message $m_v$ for each vertex $v$ of $Q$ with total probability $\pi_\ell\,\eta(v)$, in such a way that the posterior induced by $m_v$ is $v$. By property (i), the strategy is well defined and, by property (ii), the receiver's strategy is a best response to it.
By construction, the sender has identical utility in $\sigma$ and in $\sigma'$. To see that the receiver (weakly) increases her utility, note that the posterior distribution induced by $\sigma'$ is a mean-preserving spread of the one induced by $\sigma$, since we split $q_\ell$ into the vertices of $Q$. Repeating this process for every $\ell$ gives the desired equilibrium.
To finish the proof, note that we can express the receiver's expected utility as an affine combination of her utilities at the posteriors $q'_1, \ldots, q'_{t} \in \mathcal{V}$. Since the sender must be indifferent among all messages sent with positive probability, the intervals $I(q'_\ell)$ have a common point, so the tuple of induced posteriors belongs to $\mathcal{P}$. Moreover, by Bayes' plausibility, the affine coefficients of the combination are uniquely determined by the identity $p = \sum_{\ell} \lambda_\ell\, q'_\ell$. This shows that the receiver's utility is at most $f(p)$, as desired.
∎
2.4 Efficiently Computing $f$ for a Binary State Space
In this section we provide an algorithm that, given an information transmission game $\Gamma$ with $\Theta = \{\theta_0, \theta_1\}$, computes the best utility of the receiver in a Nash equilibrium and runs in $O(k \log k)$ time, where $k$ is the number of actions. By Theorem 1, we know that the best utility of the receiver in a Nash equilibrium of $\Gamma$ is a piecewise-linear function of the prior. More precisely, there exist a positive integer $N$, a sequence of real numbers $0 = x_0 < x_1 < \cdots < x_N = 1$, and linear functions $f_1, \ldots, f_N$ such that, if $q$ is the prior probability that the realized state is $\theta_1$, then
$$f(q) = f_t(q) \qquad\text{whenever } q \in [x_{t-1}, x_t].$$
Before describing the algorithm, we need the following additional notation. Given two pairs of indices $(i, j)$ and $(i', j')$ in $P$, we say that $(i, j)$ is dominated by $(i', j')$ if $c_{i'} \le c_i$ and $c_j \le c_{j'}$. Intuitively, $(i, j)$ is dominated by $(i', j')$ if they both belong to $P$ and the interval $[c_{i'}, c_{j'}]$ contains the interval $[c_i, c_j]$. It is easy to check that, if $(i', j')$ dominates $(i, j)$, then, for each point of the segment that goes from $(c_i, V(c_i))$ to $(c_j, V(c_j))$, there exists a point of the segment from $(c_{i'}, V(c_{i'}))$ to $(c_{j'}, V(c_{j'}))$ with the same $x$-coordinate and a $y$-coordinate at least as large (this follows directly from the fact that the optimal utility of the receiver is a convex function of the prior). This means that, in order to find the maximum of all the segments determined by $P$, we can restrict the search to segments that are not dominated by other segments. In particular, if we denote by $\mathrm{dom}(i)$ the maximum index $j$ such that $(i, j) \in P$, we can restrict our search to all segments of the form $(i, \mathrm{dom}(i))$.
Suppose that we have an oracle that gives $\mathrm{dom}(i)$ for each $i$ in constant time. We show next how to construct an algorithm $\mathcal{A}$ that outputs $N$ and the sequences $(x_t)_t$ and $(f_t)_t$ using this oracle as a black box. At a high level, $\mathcal{A}$ computes all the segments of the form $(i, \mathrm{dom}(i))$ and runs a variant of the Bentley-Ottmann algorithm in which (a) we only output the segment intersections that involve the segment with the highest $y$-coordinate, and (b) whenever two segments $s$ and $s'$ intersect, it eliminates the segment with the lowest slope. This guarantees that the number of segment intersections processed by the algorithm is at most $k$, and therefore that the total complexity of the algorithm is $O(k \log k)$. More precisely, $\mathcal{A}$ consists of running a vertical sweep line $\ell$ that moves across the $x$-axis from left to right while keeping a list of all the segments that $\ell$ intersects, sorted by the $y$-coordinate of their intersection with $\ell$. In order to do so, we need two data structures: a binary search tree $T$ whose purpose is to keep track of all the segments that $\ell$ intersects, sorted by $y$-coordinate, and a priority queue $Q$ whose purpose is to keep a list of all the events that occur as $\ell$ moves from left to right, sorted by $x$-coordinate. There are three types of events:
• A segment $s$ begins. In this case, we insert $s$ in $T$ and query from $T$ the segments $s^+$ and $s^-$ that are immediately above and below $s$, respectively. If $s$ intersects $s^+$ or $s^-$, we add these intersections to the event queue.
• A segment $s$ ends. If $s$ is still in $T$, we remove it from $T$. Let $s^+$ and $s^-$ be the segments in $T$ that were immediately above and below $s$. If $s^+$ and $s^-$ intersect, we add their intersection to the event queue.
• There is an intersection between two segments $s$ and $s'$. If either $s$ or $s'$ has been previously deleted from $T$, this event is ignored. Otherwise, we erase from $T$ the segment with the lowest slope among $s$ and $s'$ and proceed as if the erased segment ended (i.e., we check the segments that are immediately above and below the erased segment and, if they intersect, we add their intersection to the event queue). If the erased segment was also the topmost segment of $T$, we append the intersection point to the output.
The full algorithm is as follows. We initialize an empty binary search tree $T$ and an empty priority queue $Q$, and we push events to $Q$ indicating the beginning and end of the segments of the form $(i, \mathrm{dom}(i))$. We then process the events in $Q$ as described above until $Q$ becomes empty, and we output the sequence of points obtained this way, in addition to the leftmost and rightmost points of the upper envelope. The values $x_1, \ldots, x_{N-1}$ correspond to the $x$-coordinates of the points in the output, and the linear functions $f_1, \ldots, f_N$ are the ones that interpolate each pair of consecutive points.
The proof of correctness is equivalent to that of the Bentley-Ottmann algorithm. Note that the only difference between the algorithms is the way we process the intersection events in $Q$. The Bentley-Ottmann algorithm outputs all segment intersections and, to do so, it swaps $s$ and $s'$ inside $T$ whenever $s$ and $s'$ intersect (instead of processing them the way we do). However, since a segment that is surpassed in $y$-coordinate by another segment can never be maximal again, we can optimize the algorithm by removing it right after processing its intersection. Since each intersection event erases one of the segments involved, we process at most $k$ intersection events, which means that we process $O(k)$ events in total ($k$ starting points, $k$ endpoints, and at most $k$ intersections). Since each event is processed in $O(\log k)$ time, the total time complexity is $O(k \log k)$. To see that the output of $\mathcal{A}$ is correct, note that the vertices that define the piecewise-linear function are either (a) intersections of different segments or (b) points at which a segment ends and another segment begins. If we use an implementation trick in which we always process starting-point events before endpoint events, the points described in (b) are also detected as intersections. This means that, using this implementation trick, the desired set of vertices is the subset of all segment intersections that occur at the top layer of the segments, which is precisely what $\mathcal{A}$ outputs.
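As a reference implementation, the following sketch computes the same upper envelope by the naive quadratic method (candidate $x$-coordinates are endpoints and pairwise intersections; between consecutive candidates the topmost segment cannot change). It is useful as a correctness check for the sweep-line variant; all names are illustrative:

from itertools import combinations

def upper_envelope(segments, eps=1e-12):
    """segments: list of ((x1, y1), (x2, y2)) with x1 < x2. Returns a list
    of cells (lo, hi, top) where top is the envelope value at the cell's
    midpoint."""
    def value(seg, x):
        (x1, y1), (x2, y2) = seg
        if x < x1 - eps or x > x2 + eps:
            return None  # segment not defined at x
        return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

    xs = set()
    for (x1, _), (x2, _) in segments:
        xs.update((x1, x2))
    for s, t in combinations(segments, 2):
        (ax1, ay1), (ax2, ay2) = s
        (bx1, by1), (bx2, by2) = t
        da = (ay2 - ay1) / (ax2 - ax1)
        db = (by2 - by1) / (bx2 - bx1)
        if abs(da - db) > eps:  # non-parallel: candidate intersection
            x = ((by1 - db * bx1) - (ay1 - da * ax1)) / (da - db)
            if max(ax1, bx1) - eps <= x <= min(ax2, bx2) + eps:
                xs.add(x)
    xs = sorted(xs)
    cells = []
    for lo, hi in zip(xs, xs[1:]):
        mid = (lo + hi) / 2
        vals = [v for v in (value(s, mid) for s in segments) if v is not None]
        if vals:
            cells.append((lo, hi, max(vals)))
    return cells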
As a final note regarding this part of the algorithm, observe that, as in Theorem 1, we are assuming that $\Gamma$ is given in a form such that there exist real numbers $0 = c_0 \le \cdots \le c_k = 1$ such that action $a_i$ is the receiver's optimal action in the interval $[c_{i-1}, c_i]$. However, even if we can assume that $\Gamma$ has this form without loss of generality, the input of the algorithm may not be reduced to this form yet. We can perform the same variant of the Bentley-Ottmann algorithm in order to reduce $\Gamma$ to such a form and to compute the values $V(c_i)$. To see this, note that the linear function that outputs the utility that the receiver gets by playing $a$ as a function of the prior is a segment going from $(0, u_R(a, \theta_0))$ to $(1, u_R(a, \theta_1))$. Reducing $\Gamma$ to the desired form amounts precisely to computing the upper envelope of this set of segments.
We now have an algorithm that, given $\mathrm{dom}$ as a black box, computes the best utility for the receiver. It remains to show how to compute $\mathrm{dom}$ in $O(k \log k)$ time as well.
Efficient Computation of $\mathrm{dom}$
By definition, $(i, j) \in P$ if and only if $I_i \cap I_j \ne \emptyset$, where $I_i = [\min\{u_S(a_i), u_S(a_{i+1})\},\ \max\{u_S(a_i), u_S(a_{i+1})\}]$. For simplicity, denote by $\ell_i$ and $h_i$ the minimum and maximum of $I_i$, respectively. Consider the following algorithm $\mathcal{B}$:
Step 1: Initialize an array $D$ of $k + 1$ elements in which $D_i = i$ for all $i$.

Step 2: Initialize an empty array $E$. For each $i$, append the events $(\ell_i, 0, i)$ and $(h_i, 1, i)$ to $E$.

Step 3: Sort $E$ in lexicographical order (i.e., sort by the first coordinate, break ties with the second coordinate and then with the third one).

Step 4: Initialize an empty binary search tree $T$. Iterate through the elements of $E$ and do the following. If the current event is a starting event $(\ell_i, 0, i)$, set $D_i$ to the maximum of $D_i$ and the maximal element of $T$, and insert $i$ in $T$. Otherwise, the current event is an ending event $(h_i, 1, i)$, and we remove $i$ from $T$.

Step 5: For each $i$, set $D_i$ to the maximum between $D_i$ and the highest third coordinate among all the elements of $E$ whose first coordinate lies in the interval $[\ell_i, h_i]$.
We claim that $\mathcal{B}$ outputs $\mathrm{dom}$ (stored in $D$) and can be implemented in $O(k \log k)$ time. To see that it gives the correct output, note that, given $i$, the highest $j$ such that $(i, j) \in P$ is either (i) the highest $j$ such that $\ell_j$ or $h_j$ appears between $\ell_i$ and $h_i$ (after sorting all the starting points and endpoints of each interval as in Step 3), or (ii) the highest $j$ such that $\ell_j$ appears before $\ell_i$ and $h_j$ does not appear before $\ell_i$. The values described in (ii) and (i) are calculated in Step 4 and Step 5, respectively. To see that $\mathcal{B}$ can be implemented in $O(k \log k)$ time, note that Steps 1, 2, 3 and 4 can be implemented in $O(k \log k)$ time directly. A naive implementation of Step 5 would be quadratic in $k$, but it can be performed in $O(k \log k)$ time using a segment tree data structure.
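A compact sketch of $\mathcal{B}$ follows (illustrative; the final scan uses a direct range maximum in place of a segment tree, so it is quadratic in the worst case, but the structure of Steps 1-5 is the same):

import bisect

def compute_dom(intervals):
    """intervals[i] = (lo, hi) describing I_i. Returns D with
    D[i] = max j such that I_i and I_j intersect."""
    k = len(intervals)
    events = []  # (coordinate, kind, index) with kind 0 = start, 1 = end
    for i, (lo, hi) in enumerate(intervals):
        events.append((lo, 0, i))
        events.append((hi, 1, i))
    events.sort()
    D = list(range(k))
    # Step 4 / case (ii): intervals j still open when interval i starts.
    open_set = []
    for _, kind, i in events:
        if kind == 0:
            if open_set:
                D[i] = max(D[i], open_set[-1])
            bisect.insort(open_set, i)
        else:
            open_set.pop(bisect.bisect_left(open_set, i))
    # Step 5 / case (i): endpoints of j falling inside [lo_i, hi_i].
    coords = sorted((c, j) for c, _, j in events)
    for i, (lo, hi) in enumerate(intervals):
        a = bisect.bisect_left(coords, (lo, -1))
        b = bisect.bisect_right(coords, (hi, k))
        for _, j in coords[a:b]:  # a segment tree makes this O(log k)
            D[i] = max(D[i], j)
    return D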
3 Filtered Information Aggregation Games
3.1 Model and Analysis
In this section we analyze the effects of filtering on information aggregation games. At a high level, information aggregation games are information transmission games with multiple senders. For a rigorous definition, an information aggregation game $\Gamma$ consists of a receiver $R$ and a tuple $(\mathcal{S}, A, \Theta, p, M, u)$, where $\mathcal{S}$ is the set of senders and $A$, $\Theta$, $p$, $M$, and $u$ are defined as in information transmission games (see Section 2.1), with the only exception that $u$ assigns a utility $u_i(a, \theta)$ to every player $i \in \mathcal{S} \cup \{R\}$ instead of just to a single sender and the receiver.
We are interested in a setting where the receiver can filter potential information from the senders. We model this as follows: at the beginning of the game, there is an additional phase (Phase 0) in which the receiver publicly chooses a function $F$ that maps each state to a distribution over possible signals (we assume that signals are binary strings of arbitrary length). We call this function a filter. The rest of the game follows exactly as in a (standard) information aggregation game, except that, if the realized state is $\theta$, the senders receive a signal sampled from $F(\theta)$ instead of $\theta$ itself. We call this a filtered information aggregation game (FIAG). More formally, a FIAG runs as follows:
• Phase 0: The receiver chooses a filter $F$. The filter is disclosed to the senders.

• Phase 1: A state $\theta$ is sampled according to the distribution $p$, and a signal $s \sim F(\theta)$ is disclosed to the senders.

• Phase 2: Each sender $i$ sends a message $m_i \in M$ to the receiver.

• Phase 3: The receiver plays an action $a \in A$.

• Phase 4: Each player $i$ receives utility $u_i(a, \theta)$.
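The following sketch simulates one run of these phases, with all inputs treated as illustrative stand-ins (a filter is a map from states to signal distributions, and each sender's strategy maps the common signal to a message):

import random

def play_fiag(prior, filt, sender_strategies, receiver_strategy, u):
    """One round of a FIAG. prior and filt[theta] are dicts mapping
    outcomes to probabilities; u[i][(a, theta)] is player i's utility."""
    # Phase 1: nature draws a state; senders only see the filtered signal.
    states = list(prior)
    theta = random.choices(states, weights=[prior[t] for t in states])[0]
    sigs = list(filt[theta])
    signal = random.choices(sigs, weights=[filt[theta][s] for s in sigs])[0]
    # Phase 2: every sender observes the same signal and sends a message.
    messages = [strategy(signal) for strategy in sender_strategies]
    # Phase 3: the receiver chooses an action from the message profile.
    action = receiver_strategy(messages)
    # Phase 4: utilities are determined by the action and the true state.
    return {i: u[i][(action, theta)] for i in u}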
For an illustration of how the receiver may benefit from filtering, consider the following example.
Example 1.
Let $\Gamma$ be an information aggregation game with a single sender and $A = \{0, 1\}$, in which $p$ is the uniform distribution over $\Theta$ and the utilities are defined by the following table, where the first component of each cell is the utility of the sender and the second component is the utility of the receiver:

[Table of sender and receiver utilities.]
We show next that, in Example 1, the receiver can double her maximal utility by filtering the sender’s information.
Proposition 1.
If $\Gamma$ is the game defined in Example 1, then:

(a) In every sequential equilibrium of $\Gamma$, the receiver's utility equals her utility in the bubbling equilibrium, in which no information is revealed.

(b) There exists a sequential equilibrium of the filtered game in which the receiver ends up playing her preferred action in every single state.
Proof.
We claim that, in a standard cheap talk equilibrium, the receiver's utility always equals her utility at the bubbling equilibrium, where no information is revealed. To see this, consider without loss of generality an equilibrium with only two messages that are sent with positive probability. If, by way of contradiction, the equilibrium induces a utility for the receiver that is strictly higher than the bubbling one, then both actions are played with positive probability and the receiver is strictly better off playing one of the actions, say action 1, for at least one of the messages, say $m_1$. This means that the sender is strictly better off sending $m_1$ at the states where she prefers action 1 and the other message at the remaining states. But in this equilibrium, the receiver's utility is again the bubbling one. A contradiction.
In contrast, consider the following equilibrium in the filtered game. First, the receiver filters the sender's information by partitioning $\Theta$ into two sets, so that the sender only observes the chosen partition element. Then, at the second stage, the sender has two messages that are used to fully reveal his information to the receiver, who plays action 0 if the chosen partition element is the first one and action 1 if it is the second one. It is easy to see that the sender maximizes his utility given the realized partition element, as he is indifferent between the two possible messages. The receiver, in contrast, receives her maximal utility under this equilibrium. ∎
However, even if filtering can greatly improve the receiver's utility in general, the following result complements Theorems 1 and 2 by stating that the receiver cannot improve her utility by filtering information if the sender has transparent motives.
Proposition 2.
Let $\Gamma$ be a FIAG where the sender has transparent motives. Then, the receiver cannot increase her utility by filtering information.
Proof.
Given a sequential equilibrium $\sigma$ of $\Gamma$ with some filter $F$, consider the following strategy profile $\sigma'$ in which the receiver does not filter. In Phase 0, the receiver selects the filter that simply forwards the realized state to the senders (i.e., the information that the senders receive is not filtered). In Phase 2, the sender first simulates the signal $s$ that she would have received under $F$ and the message $m$ that she would have sent to the receiver in $\sigma$, and then she sends $m$ to the receiver. In Phase 3, the receiver plays whatever it would have played in $\sigma$ after receiving message $m$.

It is straightforward to check that $\sigma'$ is also a sequential equilibrium (with an appropriate belief system) and that it gives each player the same utility as $\sigma$. ∎
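The simulation step in this proof is purely mechanical; a minimal sketch (hypothetical names) of the sender's Phase 2 behavior in $\sigma'$:

import random

def unfiltered_sender_message(theta, filt, filtered_sender_strategy):
    """With the identity filter, the sender reproduces the filtered
    equilibrium: she samples the signal she *would* have received under
    the original filter and sends the message the filtered strategy
    prescribes for it."""
    dist = filt[theta]
    sigs = list(dist)
    simulated = random.choices(sigs, weights=[dist[s] for s in sigs])[0]
    return filtered_sender_strategy(simulated)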
In the next section we give efficient algorithms that compute the optimal filters for the receiver in arbitrary games with binary actions and one or two senders.
4 Best Sequential Equilibria with Filters and Binary Actions
We next address the question of best receiver equilibrium from a computational aspect. Both of our results restrict attention to the case where the action set of the receiver is binary.
Theorem 3.
Let $\Gamma$ be a filtered information aggregation game with $|\mathcal{S}| = 1$ and $A = \{0, 1\}$. Then, there exists an algorithm that outputs the best sequential equilibrium for the receiver in $\Gamma$ and runs in $O(n \log n)$ time, where $n = |\Theta|$.
Theorem 4.
Let $\Gamma$ be a filtered information aggregation game with $|\mathcal{S}| = 2$, $A = \{0, 1\}$, and $n = |\Theta|$. Then, finding the best sequential equilibrium for the receiver in $\Gamma$ reduces to a linear programming instance whose numbers of variables and constraints are polynomial in $n$.
Note that we only provide results for the cases of one and two senders. If $\Gamma$ is a filtered information aggregation game with $|\mathcal{S}| \ge 3$, there is a trivial strategy profile $\sigma^{maj}$ that gives the receiver the maximal possible utility of the game. Using this strategy, the receiver puts no filters (more precisely, it selects the identity filter), the senders send the signal they get to the receiver, and then the receiver computes the signal $s$ that was sent by the majority of the senders and plays the action that gives her the most utility conditional on the senders having received signal $s$. It is straightforward to check that this is indeed a Nash equilibrium: the senders do not get any additional utility by deviating, since the other senders will still send the true signal (which means that the receiver will be able to compute the true signal as well), and the receiver does not get any additional utility either, since she is always playing the optimal action for each possible state. In Section 7 we show that this strategy profile can be extended to a sequential equilibrium by defining an appropriate belief assessment $\mu$. This gives the following result:
Theorem 5.
Let $\Gamma$ be a filtered information aggregation game such that $|\mathcal{S}| \ge 3$. Then, $\sigma^{maj}$ can be extended to a sequential equilibrium that gives the receiver the maximal possible utility of the game in $\Gamma$.
In particular, we have the following Corollary.
Corollary 1.
If there are three or more senders, the receiver does not get any additional utility by filtering information.
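As an illustration of the strategy profile $\sigma^{maj}$ behind Theorem 5 and Corollary 1, here is a minimal sketch of the receiver's decision rule (illustrative names; with at least three truthful senders, no single deviation can change the majority):

from collections import Counter

def receiver_majority_action(messages, best_action_for_signal):
    """Decode the realized signal by majority vote over the senders'
    messages, then best-respond to it."""
    signal, _ = Counter(messages).most_common(1)[0]
    return best_action_for_signal[signal]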
In the following sections, we give algorithms that output the best Nash equilibria for the settings described in Theorems 3 and 4. Later, in Section 7, we show how to extend these Nash equilibria to sequential equilibria. Section 7 also includes the missing details of the proof of Theorem 5.
5 Proof of Theorem 3
In this section we provide an algorithm that outputs the best Nash equilibrium for the receiver in any filtered information game $\Gamma$ with $|\mathcal{S}| = 1$ and $A = \{0, 1\}$. In Section 7 we show how to extend the resulting Nash equilibrium to a sequential equilibrium.
Consider a game $\Gamma_F$ that is identical to $\Gamma$ except for the fact that the filter $F$ is fixed beforehand (which means that the receiver doesn't get to choose the filter in Phase 0). The following proposition shows that there exists a Pareto-optimal Nash equilibrium of $\Gamma_F$ with a relatively simple form.
Proposition 3.
There exists a Pareto-optimal Nash equilibrium $\sigma$ in $\Gamma_F$ such that, in $\sigma$, either:

• The receiver always plays 0.

• The receiver always plays 1.

• The receiver always plays the best action for the sender.
Before proving Proposition 3 we need additional notation. First, given a filtered information game $\Gamma_F$, let $u_i(s, a)$ be the expected utility of player $i$ on signal $s$ and action $a$. This expected utility can be computed by the following equation:
$$u_i(s, a) = \sum_{\theta \in \Theta} \Pr(\theta \mid s)\, u_i(a, \theta),$$
where $\Pr(\theta \mid s)$ is the probability that the realized state is $\theta$ conditional on the fact that the sender received signal $s$. By Bayes' rule, $\Pr(\theta \mid s)$ has the following expression:
$$\Pr(\theta \mid s) = \frac{p(\theta)\,\Pr(F(\theta) = s)}{\sum_{\theta' \in \Theta} p(\theta')\,\Pr(F(\theta') = s)}.$$
Let $S_0$ be the set of signals $s$ such that $u_S(s, 0) > u_S(s, 1)$ (i.e., the set of signals on which the sender prefers action 0), let $S_1$ be the set of signals such that $u_S(s, 1) > u_S(s, 0)$, and let $S_{=}$ be the set of signals such that $u_S(s, 0) = u_S(s, 1)$. Given a strategy profile $\sigma$ for $\Gamma_F$, denote by $\sigma_S(m \mid s)$ the probability that the sender sends message $m$ given signal $s$, and denote by $\sigma_R(a \mid m)$ the probability that the receiver plays action $a$ given message $m$. Moreover, let $M_0$ denote the set of messages that have a strictly positive probability of being sent on at least one signal on which the sender prefers 0 (i.e., $M_0$ is the set of messages $m$ such that there exists $s \in S_0$ with $\sigma_S(m \mid s) > 0$). We define $M_1$ and $M_{=}$ analogously.
With this notation, the following lemma describes all strategy profiles in $\Gamma_F$ that are incentive-compatible for the sender.

Lemma 1.

A strategy profile $\sigma$ for $\Gamma_F$ is incentive-compatible for the sender if and only if the following is satisfied:

(a) If $m \in M_0$, then $\sigma_R(0 \mid m) \ge \sigma_R(0 \mid m')$ for all messages $m'$.

(b) If $m \in M_1$, then $\sigma_R(1 \mid m) \ge \sigma_R(1 \mid m')$ for all messages $m'$.
Lemma 1 states that, for a strategy profile to be incentive-compatible for the sender, the receiver should play 0 with maximal probability on all messages that could be sent on signals on which the sender prefers 0, and the receiver should play 0 with minimal probability on all messages that could be sent on signals on which the sender prefers 1. In particular, we have the following Corollary.
Corollary 2.
A strategy profile $\sigma$ for $\Gamma_F$ is incentive-compatible for the sender if and only if there exist two real numbers $\beta_0, \beta_1$ with $\beta_0 \ge \beta_1$ such that:

(a) $\sigma_R(0 \mid m) = \beta_0$ for all $m \in M_0$.

(b) $\sigma_R(0 \mid m) = \beta_1$ for all $m \in M_1$.

(c) $\beta_1 \le \sigma_R(0 \mid m) \le \beta_0$ for all $m \in M_{=}$.
Proof of Lemma 1.
Clearly, if (a) and (b) are satisfied, then $\sigma$ is incentive-compatible for the sender. Conversely, suppose that $\sigma$ is incentive-compatible for the sender but doesn't satisfy (a). This means that there exist a signal $s \in S_0$ and a message $m$ that satisfies $\sigma_S(m \mid s) > 0$ and such that $\sigma_R(0 \mid m) < \sigma_R(0 \mid m')$ for some other message $m'$. Therefore, if the sender sends $m'$ instead of $m$ whenever it receives signal $s$, it increases its expected utility. This contradicts the fact that $\sigma$ is incentive-compatible for the sender. The proof of the case in which $\sigma$ doesn't satisfy (b) is analogous. ∎
Corollary 2 characterizes the necessary and sufficient conditions for a strategy profile in $\Gamma_F$ to be incentive-compatible for the sender. We show next that Proposition 3 follows from adding the receiver's incentive-compatibility constraints into the mix. Denote by $u_R(a \mid m)$ the receiver's expected utility when playing action $a$, conditional on having received message $m$ and on the sender playing $\sigma_S$. Then, we have the following cases:
Case 1: There exists $m \in M_0$ such that $u_R(1 \mid m) > u_R(0 \mid m)$. Since $\sigma$ is incentive-compatible for the receiver, it must be the case that $\sigma_R(0 \mid m) = 0$, and therefore, by Corollary 2, that $\sigma_R(0 \mid m') = 0$ for all messages $m'$; that is, the receiver always plays 1.

Case 2: There exists $m \in M_1$ such that $u_R(0 \mid m) > u_R(1 \mid m)$. This time it must be the case that $\sigma_R(0 \mid m) = 1$, and therefore, by Corollary 2, that $\sigma_R(0 \mid m') = 1$ for all messages $m'$; that is, the receiver always plays 0.

Case 3: $u_R(0 \mid m) \ge u_R(1 \mid m)$ for all $m \in M_0$ and $u_R(1 \mid m) \ge u_R(0 \mid m)$ for all $m \in M_1$. In this case, consider a strategy profile $\sigma'$ that is identical to $\sigma$ except that, whenever the receiver receives a message $m \in M_0$, it plays action 0 with probability 1 (as opposed to probability $\beta_0$), and whenever it receives a message $m \in M_1$, it plays action 1 with probability 1 (as opposed to $1 - \beta_1$). It is easy to check that, by construction, $\sigma'$ Pareto-dominates $\sigma$. Even so, $\sigma'$ can be further improved: consider a strategy profile $\sigma''$ such that the sender sends message $m_0$ on all signals in $S_0$ and on all signals $s \in S_{=}$ such that $u_R(s, 0) \ge u_R(s, 1)$, and sends message $m_1$ on all other signals. Additionally, the receiver plays action 0 when receiving message $m_0$ and plays action 1 when receiving message $m_1$. It is easy to check that $\sigma''$ satisfies conditions (a) and (b) of Lemma 1 and is incentive-compatible for the receiver. Therefore, $\sigma''$ is a Nash equilibrium of $\Gamma_F$. Moreover, $\sigma''$ Pareto-dominates $\sigma'$. To check this, note that both $\sigma'$ and $\sigma''$ play the best action for the sender at every possible signal. The only difference is that, on signals on which the sender is indifferent, the receiver plays the best action for herself in $\sigma''$, while it doesn't necessarily do so in $\sigma'$.
This analysis provides a refinement of Proposition 3. In fact, consider the following two strategy profiles:
• Strategy $\sigma^{bub}$: Regardless of the signal received, the sender sends an empty message. The receiver plays the action that gives her the most utility with no information beyond the prior.

• Strategy $\sigma^{del}$: After receiving the signal, the sender sends its preferred action. The receiver plays the action sent by the sender.
We have the following characterization of Pareto-optimal Nash equilibria:
Proposition 4 (Refinement of Proposition 3).
If $\sigma^{del}$ is incentive-compatible for the receiver, then $\sigma^{del}$ is a Pareto-optimal Nash equilibrium of $\Gamma_F$. Otherwise, $\sigma^{bub}$ is a Pareto-optimal Nash equilibrium of $\Gamma_F$.
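In code, the dichotomy of Proposition 4 amounts to a single incentive-compatibility check. A sketch under assumed inputs ($\mathrm{prior}[\theta]$, $\mathrm{cond}[s][\theta] = \Pr(F(\theta) = s)$, and utility tables indexed by action and state):

def pareto_optimal_profile(prior, cond, uS, uR):
    """Returns "delegation" if the receiver following the sender's
    preferred action is IC for the receiver on every signal, and
    "bubbling" otherwise (Proposition 4)."""
    states = list(prior)
    for s in cond:
        w = {t: prior[t] * cond[s][t] for t in states}
        z = sum(w.values())
        if z == 0:
            continue  # signal never realized
        q = {t: w[t] / z for t in states}
        # The sender's preferred action at this posterior...
        eu = {a: sum(q[t] * uS[a][t] for t in states) for a in (0, 1)}
        a_sender = max((0, 1), key=lambda a: eu[a])
        # ...must also be a best response for the receiver.
        er = {a: sum(q[t] * uR[a][t] for t in states) for a in (0, 1)}
        if er[a_sender] < max(er.values()):
            return "bubbling"  # delegation not IC for the receiver
    return "delegation"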
Proposition 4 shows that there is a simple algorithm that outputs a Pareto-optimal Nash equilibrium of $\Gamma_F$ once the filter $F$ has been fixed. This reduces the problem of finding the best Nash equilibrium of $\Gamma$ for the receiver to that of finding the filter $F$ such that the Pareto-optimal Nash equilibrium of $\Gamma_F$ gives the best possible utility for the receiver. In fact, the best Nash equilibrium $\sigma^*$ goes as follows:

1. The receiver chooses the filter $F^*$ such that $\sigma^{del}$ is incentive-compatible for the receiver in $\Gamma_{F^*}$ and gives the receiver the maximum utility. If there is no such filter, the receiver chooses an arbitrary filter.

2. Given the chosen filter $F^*$ and the signal $s$, the sender computes the Pareto-optimal Nash equilibrium $\sigma^{del}$ of $\Gamma_{F^*}$ and plays $\sigma^{del}_S(s)$ (i.e., it sends its preferred action given $s$).

3. If the receiver chose an arbitrary filter in step 1, it plays the best action given no information. Otherwise, if the receiver receives message 0, it plays action 0; if it receives message 1, it plays action 1; if it receives anything else, it plays the best action given no information.
The following proposition shows that $\sigma^*$ satisfies the desired properties.
Proposition 5.
The strategy profile $\sigma^*$ is the Nash equilibrium of $\Gamma$ that gives the most utility to the receiver.
Proof.
Once the receiver chooses a filter $F$, the receiver and the sender both maximize their utilities if they play the Pareto-optimal strategy profile described in Proposition 4. Therefore, it is optimal for the receiver to choose the filter $F^*$ described above. ∎
5.1 Computation of $F^*$
In this section we show how to compute the optimal filter $F^*$ for the receiver. Our aim is to find a filter $F$ such that $\sigma^{del}$ is incentive-compatible for the receiver in $\Gamma_F$ and such that the expected utility of the receiver under $\sigma^{del}$ is as large as possible. We begin by giving an algorithm that runs in $O(n \log n)$ time, where $n$ is the number of states (i.e., the size of $\Theta$), and then we show the proof of correctness for the algorithm provided.
5.1.1 An Algorithm
For our algorithm, we restrict our search to binary filters (i.e., filters that only send signals in $\{0, 1\}$); later, in Lemma 2, we show that this is done without loss of generality. With this restriction, we can describe each candidate filter by a function $x: \Theta \to [0, 1]$ that maps each state $\theta$ to the probability that the filter outputs signal 1 at $\theta$. For simplicity, we'll also say that $x$ is incentive-compatible for the sender (resp., for the receiver) if the forwarding strategy profile, in which the sender forwards the received signal to the receiver and the receiver plays whatever it gets from the sender, is incentive-compatible for the sender (resp., for the receiver) in $\Gamma_x$. We will also say that the sender (resp., the receiver) gets expected utility $v$ in $x$ if $v$ is the expected utility it gets when running the forwarding profile in $\Gamma_x$. Note that $\sigma^{del}$ is always incentive-compatible for the sender (for all filters), but forwarding under a binary filter might not always be, since we are forcing the sender to send the signal received instead of its true preference. However, if $\sigma^{del}$ is incentive-compatible for the sender (resp., for the receiver) in $\Gamma_x$ for some binary filter $x$, and the sender does not always have the same preference in $\Gamma_x$, then either $x$ or its complement (i.e., the filter that sends 1 whenever $x$ sends 0, and sends 0 whenever $x$ sends 1) makes forwarding incentive-compatible for the sender (resp., the receiver) and gives the same utility as $\sigma^{del}$ in $\Gamma_x$. This shows that, without loss of generality, we can simply search for the binary filter that (a) is incentive-compatible for both the sender and the receiver, and (b) gives the most utility to the receiver.
Before we start, let $\Theta_{00}$ (resp., $\Theta_{11}$) denote the set of states in which both the sender and the receiver prefer action 0 (resp., action 1), and let $\Theta_{10}$ (resp., $\Theta_{01}$) denote the set of states in which the sender strictly prefers 1 and the receiver strictly prefers 0 (resp., the sender strictly prefers 0 and the receiver strictly prefers 1). The following algorithm outputs the optimal filter for the receiver.
Step 1: Set $x(\theta) = 0$ for all $\theta \in \Theta_{00}$.

Step 2: Set $x(\theta) = 1$ for all $\theta \in \Theta_{11}$.

Step 3: Sort all states in $\Theta_{01} \cup \Theta_{10}$ in increasing order of the value $|\Delta_R(\theta)| / |\Delta_S(\theta)|$ (where $\Delta_S$ and $\Delta_R$, defined below, measure each player's preference intensity). Let $\theta_1, \ldots, \theta_r$ be the resulting list.

Step 4: Set $x(\theta) = 1$ for all $\theta \in \Theta_{01}$ and $x(\theta) = 0$ for all $\theta \in \Theta_{10}$. If the resulting filter is incentive-compatible for the sender, output $x$.

Step 5: For $t = 1, \ldots, r$, do the following. If $x(\theta_t)$ is set to 1 (resp., to 0), set it to 0 (resp., to 1). If the resulting filter is incentive-compatible for the sender, find the maximum (resp., the minimum) value $x^* \in [0, 1]$ such that setting $x(\theta_t)$ to $x^*$ is still incentive-compatible for the sender (we show below that $x^*$ can be computed by solving a linear equation with amortized constant cost). If the resulting filter is incentive-compatible for the receiver, output it; otherwise, output a constant filter.
It is important to note that the algorithm always terminates, since, if we get to $t = r$ in Step 5, the resulting filter always outputs the signal that the sender prefers. It remains to show how to compute $x^*$ in amortized linear time. For this purpose, let $\Delta_S(\theta)$ (resp., $\Delta_R(\theta)$) denote the difference in utility for the sender (resp., for the receiver) between playing 1 and playing 0 at state $\theta$. More precisely,
$$\Delta_S(\theta) = u_S(1, \theta) - u_S(0, \theta) \qquad\text{and}\qquad \Delta_R(\theta) = u_R(1, \theta) - u_R(0, \theta).$$
Then, given a binary filter $x$, $x$ is incentive-compatible for the sender in $\Gamma_x$ if and only if
$$\sum_{\theta \in \Theta} p(\theta)\,x(\theta)\,\Delta_S(\theta) \ge 0 \qquad\text{and}\qquad \sum_{\theta \in \Theta} p(\theta)\,(1 - x(\theta))\,\Delta_S(\theta) \le 0. \tag{2}$$
The first and second inequalities state that the sender prefers 1 on signal 1 and prefers 0 on signal 0, respectively. Similarly, $x$ is incentive-compatible for the receiver in $\Gamma_x$ if and only if
$$\sum_{\theta \in \Theta} p(\theta)\,x(\theta)\,\Delta_R(\theta) \ge 0 \qquad\text{and}\qquad \sum_{\theta \in \Theta} p(\theta)\,(1 - x(\theta))\,\Delta_R(\theta) \le 0. \tag{3}$$
It is straightforward to check that the left-hand sides of the two sender incentive-compatibility constraints monotonically increase and decrease, respectively, as we iterate in the fifth step of the algorithm. Therefore, we can simply pre-compute, for every $t$, the prefix sums
$$A_S(t) = \sum_{\ell \le t} p(\theta_\ell)\,\Delta_S(\theta_\ell) \qquad\text{and}\qquad B_S(t) = \sum_{\ell > t} p(\theta_\ell)\,\Delta_S(\theta_\ell),$$
together with the corresponding sums over $\Theta_{00}$ and $\Theta_{11}$, and their analogues for the receiver:
$$A_R(t) = \sum_{\ell \le t} p(\theta_\ell)\,\Delta_R(\theta_\ell) \qquad\text{and}\qquad B_R(t) = \sum_{\ell > t} p(\theta_\ell)\,\Delta_R(\theta_\ell).$$
Note that these sums can be computed in time linear in the total number of states. Once these sums are pre-computed, if we are in the $t$-th iteration of Step 5, to find $x^*$ we can check in constant time whether the solution of either of the two linear equations obtained by setting the constraints in (2) to equality (viewed as functions of $x(\theta_t)$, with all other values fixed) lies in $[0, 1]$. If such a solution $x^*$ exists, to check that the resulting filter is incentive-compatible for both players, we simply check whether the inequalities in (2) and (3) are satisfied at $x^*$. All of these computations can be performed in constant time given the pre-computed sums. This completes the description of the algorithm; note that it runs in $O(n \log n)$ time, since it takes $O(n \log n)$ operations to sort the elements of $\Theta_{01} \cup \Theta_{10}$, $O(n)$ operations to compute the sums, and $O(1)$ operations per iteration of Step 5. In the next section, we show that the algorithm's output is correct.
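Putting Steps 1-5 together, here is a compact sketch of the search (illustrative; incentive compatibility is re-checked from scratch rather than via the pre-computed sums, and the boundary value $x^*$ is found numerically rather than by the closed-form linear solve):

def optimal_binary_filter(states, eps=1e-9):
    """states[i] = (p, dS, dR) with p the prior probability of state i,
    dS = u_S(1, i) - u_S(0, i), dR = u_R(1, i) - u_R(0, i). Returns x
    with x[i] = P(signal 1 | state i), or None if no informative filter
    is IC for both players (a constant filter is then used)."""
    n = len(states)

    def ic(x, c):  # c = 1: sender's constraints (2); c = 2: receiver's (3)
        on1 = sum(st[0] * st[c] * x[i] for i, st in enumerate(states))
        on0 = sum(st[0] * st[c] * (1 - x[i]) for i, st in enumerate(states))
        return on1 >= -eps and on0 <= eps

    x, conflict = [0.0] * n, []
    for i, (p, dS, dR) in enumerate(states):
        if dS >= 0 and dR >= 0:
            x[i] = 1.0                       # Step 2: both prefer action 1
        elif dS <= 0 and dR <= 0:
            x[i] = 0.0                       # Step 1: both prefer action 0
        else:
            x[i] = 1.0 if dR > 0 else 0.0    # Step 4: receiver's preference
            conflict.append(i)
    # Step 3: sort conflict states by the ratio |dR| / |dS|.
    conflict.sort(key=lambda i: abs(states[i][2]) / abs(states[i][1]))
    # Step 5: flip conflict states toward the sender until sender-IC.
    for i in conflict:
        if ic(x, 1):
            break
        receiver_pref = x[i]
        x[i] = 1.0 - receiver_pref
        if ic(x, 1):
            lo, hi = x[i], receiver_pref     # lo is sender-IC, hi is not
            for _ in range(60):              # numeric boundary search
                mid = (lo + hi) / 2
                x[i] = mid
                lo, hi = (mid, hi) if ic(x, 1) else (lo, mid)
            x[i] = lo
            break
    return x if ic(x, 1) and ic(x, 2) else None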
5.1.2 Proof of Correctness
In this section, we show that the algorithm presented in Section 5.1.1 is correct. We begin by showing that we can indeed restrict our search to binary filters.
Lemma 2.
Let $F$ be a filter such that $\sigma^{del}$ is incentive-compatible for the receiver in $\Gamma_F$. Then, there exists a binary filter $F'$ such that:

(a) $\sigma^{del}$ is incentive-compatible for the receiver in $\Gamma_{F'}$.

(b) Under the strategy profile $\sigma^{del}$, the expected utility of the receiver in $\Gamma_F$ and in $\Gamma_{F'}$ is identical.
Proof.
Recall that $\sigma^{del}$ is the strategy profile in which, for each possible signal $s$, the sender sends its preferred action and then the receiver plays whatever is sent by the sender. This means that, if $\sigma^{del}$ is incentive-compatible for the receiver in $\Gamma_F$, then we can merge all signals on which the sender prefers 0, and also merge all signals on which the sender prefers 1. More precisely, given filter $F$, let $S_0$ and $S_1$ be the sets of signals on which the sender prefers 0 and 1, respectively. Consider a filter $F'$ that sends signal 0 whenever $F$ would send a signal in $S_0$, and sends signal 1 whenever $F$ would send a signal in $S_1$. In $\Gamma_{F'}$, by construction, both the sender and the receiver prefer action 0 on signal 0 and action 1 on signal 1. This means that $\sigma^{del}$ is a Nash equilibrium of $\Gamma_{F'}$. Moreover, again by construction, the expected utility of the receiver (and of the sender) when playing $\sigma^{del}$ in $\Gamma_{F'}$ is identical to the one they'd get in $\Gamma_F$. ∎
Recall that binary filters can be described by a function $x$ that maps each state $\theta$ to the probability that the filter assigns to signal 1 at $\theta$. Because of Lemma 2, our aim is to find which values in $[0, 1]$ we should assign to each element of $\Theta$. The following lemmas characterize these values.
Lemma 3.
There exists a filter $x$ that maximizes the utility of the receiver such that:

(a) $x(\theta) = 1$ for all $\theta \in \Theta_{11}$.

(b) $x(\theta) = 0$ for all $\theta \in \Theta_{00}$.

(c) At most one state $\theta$ satisfies that $x(\theta) \notin \{0, 1\}$.
Proof.
Given a binary filter $x$, if we increase $x(\theta)$ by $\varepsilon$, the expected utility of the receiver changes by $\varepsilon\,p(\theta)\,\Delta_R(\theta)$. This means that, if $\Delta_S(\theta) \ge 0$ and $\Delta_R(\theta) \ge 0$, setting $x(\theta)$ to 1 both (weakly) increases the receiver's utility and preserves the incentive-compatibility constraints (see Equations (2) and (3)). Analogously, the same happens by setting $x(\theta)$ to 0 when $\Delta_S(\theta) \le 0$ and $\Delta_R(\theta) \le 0$. This proves (a) and (b).
To prove (c), suppose that there exist two states $\theta$ and $\theta'$ such that $x(\theta), x(\theta') \in (0, 1)$. Then, because of (a) and (b), we can assume without loss of generality that $\theta, \theta' \in \Theta_{01} \cup \Theta_{10}$. Therefore, if we increase $x(\theta)$ by $\varepsilon\,p(\theta')\,\Delta_S(\theta')$ and decrease $x(\theta')$ by $\varepsilon\,p(\theta)\,\Delta_S(\theta)$, we'd have that the sender's utility and incentive-compatibility constraints remain unchanged, but the receiver's utility changes by $\varepsilon\,p(\theta)\,p(\theta')\,\big(\Delta_R(\theta)\,\Delta_S(\theta') - \Delta_R(\theta')\,\Delta_S(\theta)\big)$ (note that this value can be negative). This means that, if we choose an $\varepsilon$ that is small enough and of the same sign as this quantity, not only does the expected utility of the receiver weakly increase, but we can also push $x(\theta)$ or $x(\theta')$ all the way to 0 or 1. This contradicts the fact that the filter is optimal for the receiver. ∎
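Written out explicitly (a sketch of the bookkeeping, in the notation above), the perturbation sets $x'(\theta) = x(\theta) + \varepsilon\,p(\theta')\,\Delta_S(\theta')$ and $x'(\theta') = x(\theta') - \varepsilon\,p(\theta)\,\Delta_S(\theta)$, so the sender-side sums in Equation (2) are unchanged:
$$\sum_{\tilde\theta} p(\tilde\theta)\,\big(x'(\tilde\theta) - x(\tilde\theta)\big)\,\Delta_S(\tilde\theta) = \varepsilon\,p(\theta)\,p(\theta')\,\big(\Delta_S(\theta)\,\Delta_S(\theta') - \Delta_S(\theta')\,\Delta_S(\theta)\big) = 0,$$
while the receiver's expected utility changes by $\varepsilon\,p(\theta)\,p(\theta')\,\big(\Delta_R(\theta)\,\Delta_S(\theta') - \Delta_R(\theta')\,\Delta_S(\theta)\big)$, the quantity referred to in the proof.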
Lemma 3 shows that we can assign $x(\theta) = 1$ for all $\theta \in \Theta_{11}$ and $x(\theta) = 0$ for all $\theta \in \Theta_{00}$, and that we have to assign either probability 0 or probability 1 to all but at most one of the remaining states. Ideally, we would like to assign probability 1 to all states in $\Theta_{01}$ and probability 0 to all states in $\Theta_{10}$, since this guarantees that the receiver gets the maximum possible utility. However, this may not always be incentive-compatible for the sender, which means that we may have to assign to some of the states the probability that the sender prefers, as opposed to the probability that the receiver prefers. The following lemma characterizes these states.
Lemma 4.
Given a filtered information game $\Gamma$, let $\theta_1, \ldots, \theta_r$ be the states in $\Theta_{01} \cup \Theta_{10}$ sorted in increasing order of $|\Delta_R(\theta)| / |\Delta_S(\theta)|$. Then, there exist a binary filter $x$ that is optimal for the receiver and an index $t \in \{0, 1, \ldots, r\}$ such that:

(a) $x(\theta) = 1$ for all $\theta \in \Theta_{11}$.

(b) $x(\theta) = 0$ for all $\theta \in \Theta_{00}$.

(c) For $\ell \le t$, $x(\theta_\ell) = 1$ if $\theta_\ell \in \Theta_{10}$ and $x(\theta_\ell) = 0$ if $\theta_\ell \in \Theta_{01}$.

(d) For $\ell > t$, $x(\theta_\ell) = 0$ if $\theta_\ell \in \Theta_{10}$ and $x(\theta_\ell) = 1$ if $\theta_\ell \in \Theta_{01}$.
Lemma 4 states that, if we sort the states on which the sender and the receiver have different preferences by the ratio between how much the receiver prefers her action over the other one and how much the sender prefers hers (note that these ratios are always positive), then these states can be split into two contiguous blocks: in the first block, we assign each state the probability that is most convenient for the sender, and in the second block, the probability that is most convenient for the receiver. The proof follows the lines of that of Lemma 3, part (c).
Proof of Lemma 4.
Given two states $\theta, \theta' \in \Theta_{01} \cup \Theta_{10}$, we say that $\theta \preceq \theta'$ if $|\Delta_R(\theta)| / |\Delta_S(\theta)| \le |\Delta_R(\theta')| / |\Delta_S(\theta')|$. Using the same argument as in the proof of Lemma 3, we can assume w.l.o.g. that (a) and (b) are satisfied. Thus, it only remains to show (c) and (d), which follow from the fact that there exists an optimal binary filter $x$ for the receiver such that:

(s1) If $\theta, \theta' \in \Theta_{10}$, $\theta \preceq \theta'$, and $x(\theta') > 0$, then $x(\theta) = 1$.

(s2) If $\theta, \theta' \in \Theta_{01}$, $\theta \preceq \theta'$, and $x(\theta') < 1$, then $x(\theta) = 0$.

(s3) If $\theta \in \Theta_{10}$, $\theta' \in \Theta_{01}$, $\theta \preceq \theta'$, and $x(\theta') < 1$, then $x(\theta) = 1$.

(s4) If $\theta \in \Theta_{01}$, $\theta' \in \Theta_{10}$, $\theta \preceq \theta'$, and $x(\theta') > 0$, then $x(\theta) = 0$.
We will prove (s1) and (s3); the proofs of (s2) and (s4) are analogous.
Proof of (s1): Suppose that there exist $\theta, \theta' \in \Theta_{10}$ such that $\theta \preceq \theta'$ and $x(\theta') > 0$, but $x(\theta) < 1$. If this happens, we can set
$$x'(\theta) = x(\theta) + \varepsilon\,p(\theta')\,|\Delta_S(\theta')|, \qquad x'(\theta') = x(\theta') - \varepsilon\,p(\theta)\,|\Delta_S(\theta)|.$$
Since $x(\theta) < 1$ and $x(\theta') > 0$, there exists some $\varepsilon > 0$ that is small enough so that $x'(\theta)$ and $x'(\theta')$ stay between 0 and 1. Moreover, as in the proof of Lemma 3, part (c), if we perform this change the sender's utility and incentive-compatibility constraints remain unchanged, but the receiver's utility increases by
$$\varepsilon\,p(\theta)\,p(\theta')\,\big(\Delta_R(\theta)\,|\Delta_S(\theta')| - \Delta_R(\theta')\,|\Delta_S(\theta)|\big).$$
By assumption, we have that
$$\frac{|\Delta_R(\theta)|}{|\Delta_S(\theta)|} \le \frac{|\Delta_R(\theta')|}{|\Delta_S(\theta')|},$$
which, together with the fact that $\Delta_R(\theta) < 0$ and $\Delta_R(\theta') < 0$, implies that this change is non-negative whenever $\varepsilon > 0$.
Proof of (s3): The proof is almost identical to that of (s1). Suppose that there exist $\theta \in \Theta_{10}$ and $\theta' \in \Theta_{01}$ such that $\theta \preceq \theta'$, $x(\theta') < 1$, and $x(\theta) < 1$. In this case, we can again set
$$x'(\theta) = x(\theta) + \varepsilon\,p(\theta')\,|\Delta_S(\theta')|, \qquad x'(\theta') = x(\theta') + \varepsilon\,p(\theta)\,|\Delta_S(\theta)|.$$
The only difference with the proof of (s1) is that, in this case, $\Delta_S(\theta) > 0$ and $\Delta_S(\theta') < 0$, so both values move upward. Again, there exists a sufficiently small $\varepsilon > 0$ such that $x'(\theta)$ and $x'(\theta')$ remain between 0 and 1. Moreover, a straightforward computation shows that the sender's utility remains unchanged, but the receiver's utility changes by
$$\varepsilon\,p(\theta)\,p(\theta')\,\big(\Delta_R(\theta)\,|\Delta_S(\theta')| + \Delta_R(\theta')\,|\Delta_S(\theta)|\big).$$
Using that
$$\frac{|\Delta_R(\theta)|}{|\Delta_S(\theta)|} \le \frac{|\Delta_R(\theta')|}{|\Delta_S(\theta')|},$$
$\Delta_R(\theta) < 0$, and $\Delta_R(\theta') > 0$, we get that this change is non-negative whenever $\varepsilon > 0$. ∎
Lemma 4 shows that the algorithm provided in Section 5.1.1 outputs the correct solution as long as it finds the right value of $t$. The next lemma shows that this value is precisely the one that the algorithm finds, which completes the proof of correctness.
Lemma 5.
Let $x^{rec}$ be the filter that assigns $x(\theta) = 1$ for all $\theta \in \Theta_{11} \cup \Theta_{01}$ and $x(\theta) = 0$ for all $\theta \in \Theta_{00} \cup \Theta_{10}$. If $x^{rec}$ is incentive-compatible for the sender, then $x^{rec}$ is the optimal filter for the receiver that is incentive-compatible for both players. Otherwise, all filters that are incentive-compatible for both players and are optimal for the receiver satisfy at least one of the following equations:
$$\sum_{\theta \in \Theta} p(\theta)\,x(\theta)\,\Delta_S(\theta) = 0 \qquad\text{or}\qquad \sum_{\theta \in \Theta} p(\theta)\,(1 - x(\theta))\,\Delta_S(\theta) = 0.$$
Proof.
Filter f* gives the maximum utility to the receiver of all possible filters. Therefore, if it is incentive-compatible for the sender, it is the optimal filter for the receiver. Suppose instead that f* is not incentive-compatible for the sender but that an optimal filter f satisfies every incentive-compatibility constraint of the sender with strict inequality. (4)
Since f is incentive-compatible for the sender while f* is not, we have f ≠ f*, so there exists a state ω at which the receiver prefers action 0 but f(ω) > 0, or a state ω at which the receiver prefers action 1 but f(ω) < 1. In the first case, we can decrease f(ω) by a small enough value such that Equation 4 is still satisfied. By doing this, we increase the receiver's expected utility while obtaining a new filter that is still incentive-compatible for both players, contradicting the optimality of f. The latter case is analogous except that we increase f(ω) instead of decreasing it. ∎
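Putting Lemmas 4 and 5 together, the algorithm can be sketched as a greedy scan: start from the receiver-ideal filter f* and concede disagreement states to the sender in increasing order of the ratio r until the sender's incentive constraint is satisfied, making it bind at a single, possibly fractional, state. The sketch below abstracts the sender's incentive condition as a lower bound `ic_S` on the sender's expected utility, a simplifying assumption of ours; the utility arrays and names are likewise ours, not the paper's.

```python
# Greedy sketch of the Section 5.1 algorithm, under the simplifying
# assumption that sender IC is a lower bound ic_S on the sender's utility.
def best_filter(prior, u_S, u_R, ic_S):
    k = len(prior)
    # Receiver-ideal filter f* (Lemma 5): receiver's preferred action everywhere.
    f = [1.0 if u_R[w][1] > u_R[w][0] else 0.0 for w in range(k)]

    def sender_value(f):
        return sum(prior[w] * (f[w] * u_S[w][1] + (1 - f[w]) * u_S[w][0])
                   for w in range(k))

    disagree = [w for w in range(k)
                if (u_R[w][1] > u_R[w][0]) != (u_S[w][1] > u_S[w][0])]
    # Concede cheapest states first: smallest receiver loss per unit of sender gain.
    disagree.sort(key=lambda w: abs(u_R[w][1] - u_R[w][0]) / abs(u_S[w][1] - u_S[w][0]))

    for w in disagree:
        deficit = ic_S - sender_value(f)
        if deficit <= 0:
            break                                     # sender IC already satisfied
        gain = prior[w] * abs(u_S[w][1] - u_S[w][0])  # sender gain from full concession
        frac = min(1.0, deficit / gain)               # bind IC at one fractional state
        sender_pref = 1.0 if u_S[w][1] > u_S[w][0] else 0.0
        f[w] += frac * (sender_pref - f[w])
    return f
```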
6 Proof of Theorem 4
The proof of Theorem 4 follows from the following result of Arieli et al. [3], which characterizes the best Nash equilibrium for the receiver in information aggregation games with two senders, one receiver, binary actions, and a mediator.
Theorem 6 ([3]).
Let Γ be an information aggregation game (without filters) with two senders and binary actions in which the players can communicate with a third-party mediator. Given a, b, c ∈ {0, 1}, let E_{abc} be the set of states in which the first sender prefers action a, the second sender prefers action b, and the receiver prefers action c. Consider the following six maps from the set of states to [0, 1]:
- (a) g_a(ω) = 1 for all ω ∈ E_{111} and g_a(ω) = 0 otherwise.
- (b) g_b(ω) = 0 for all ω ∈ E_{000} and g_b(ω) = 1 otherwise.
- (c) g_c(ω) = 1 for all states ω at which the first sender prefers action 1 and g_c(ω) = 0 otherwise.
- (d) g_d(ω) = 1 for all states ω at which the second sender prefers action 1 and g_d(ω) = 0 otherwise.
- (e) g_e(ω) = 0 for all ω.
- (f) g_f(ω) = 1 for all ω.
Then, there exists an x ∈ {a, b, c, d, e, f} and a Nash equilibrium of Γ that maximizes the receiver's utility in which the function that maps each state to the probability that the receiver ends up playing action 1 is equal to g_x.
Intuitively, Theorem 6 states that, if there were no filters and the two senders had access to a third-party mediator, there would exist a Nash equilibrium that is optimal for the receiver in which either (a) the receiver plays 1 if and only if all three players prefer 1, (b) the receiver plays 0 if and only if all three players prefer 0, (c) the receiver always plays what the first sender prefers, (d) the receiver always plays what the second sender prefers, (e) the receiver always plays 0, or (f) the receiver always plays 1. In the setting described in Section 3.1, players have no access to a mediator. However, we can easily check that, whenever the outcomes described in (a), (b), (c), (d), (e) or (f) are incentive-compatible for the receiver (i.e., whenever the receiver gets more utility with the outcome than when playing with no information), there exists a strategy profile in Γ (without the mediator) that implements these outcomes.
For instance, suppose that we want to implement the outcome described in (a). Consider the strategy profile σ_a in which each sender sends a binary signal to the receiver that is equal to 1 if and only if the realized state is in E_{111}, and the receiver plays 1 only if both signals are 1. It is easy to check that this is incentive-compatible for the senders: if the realized state is indeed in E_{111}, it is a best response for both to send 1 since it guarantees that the receiver will play 1. Moreover, if the realized state is not in E_{111}, neither sender gets any additional utility by defecting from the main strategy since the other sender will always send signal 0 (which implies that the receiver will play 0). An analogous reasoning gives a strategy profile σ_b that implements (b) and is incentive-compatible for the senders. To implement (c), consider the strategy profile σ_c in which both senders send a binary signal that is equal to 1 if and only if the first sender prefers action 1 at the realized state, and the receiver plays 1 if and only if the signal from the first sender is 1. Again, it is straightforward to check that this is incentive-compatible for the senders: it is a best response for the first sender to send its preference, while it does not matter what the second sender sends since it will be ignored. Implementing (d) with a strategy profile σ_d that is incentive-compatible for the senders is analogous. To implement (e) (resp., (f)), consider the strategy profile σ_e (resp., σ_f) in which the senders send signal 0 regardless of the realized state and the receiver always plays 0 (resp., 1). It is easy to check that σ_e and σ_f are incentive-compatible for the senders. Since all outcomes that are implementable without a mediator are also implementable with a mediator, this implies the following proposition:
Proposition 6.
Let Γ be an information aggregation game with two senders and binary actions. Then, either σ_a, σ_b, σ_c, σ_d, σ_e, or σ_f is a Nash equilibrium that is optimal for the receiver.
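As an illustration of the first construction, the following toy check verifies, by brute force over the states, that no sender can profit from unilaterally flipping its signal in the strategy profile σ_a. All names (`in_E111`, the utility callables) are ours and hypothetical; this is a sanity-check sketch, not the paper's construction.

```python
# Toy sanity check (our names): under sigma_a, each sender reports 1 iff the
# state lies in E_111, and the receiver plays 1 only if both reports are 1.
# We verify that no unilateral deviation helps either sender.
def receiver_action(s1, s2):
    return 1 if (s1 == 1 and s2 == 1) else 0

def sigma_a_is_ic(states, in_E111, u1, u2):
    for w in states:
        honest = 1 if in_E111(w) else 0
        a_honest = receiver_action(honest, honest)
        # Sender 1 flips its report while sender 2 stays honest, and vice versa.
        if u1(w, receiver_action(1 - honest, honest)) > u1(w, a_honest):
            return False
        if u2(w, receiver_action(honest, 1 - honest)) > u2(w, a_honest):
            return False
    return True
```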
Proposition 6 implies that we can break the problem of finding the best Nash equilibrium for the receiver in a filtered information aggregation game into four sub-problems:
- (a) Finding a filter f_a such that σ_a gives the maximal utility for the receiver in the filtered game Γ_{f_a}.
- (b) Finding a filter f_b such that σ_b gives the maximal utility for the receiver in Γ_{f_b}.
- (c) Finding a filter f_c such that σ_c gives the maximal utility for the receiver in Γ_{f_c}.
- (d) Finding a filter f_d such that σ_d gives the maximal utility for the receiver in Γ_{f_d}.
Note that these filters might not always exist (for instance, f_a does not exist when the receiver always prefers action 1 and the senders always prefer action 0). Moreover, we are not including optimal filters for σ_e and σ_f since all filters give the same utility with these strategies. Finding f_c and f_d (whenever they exist) reduces to the case of one sender (by ignoring the second and the first sender, respectively), and thus can be done with the algorithm presented in Section 5.1. We next show how to compute f_a using a linear program with one variable per state and a linear number of constraints. Computing f_b is analogous.
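The resulting decomposition can be summarized by the following driver, a sketch under the assumption that each sub-problem is solved by a routine returning `None` when the corresponding filter does not exist; all helper names (`filter_for_sigma_a`, `receiver_utility`, and so on) are ours and hypothetical.

```python
# Sketch of the decomposition from Proposition 6: solve the four filter
# sub-problems, add the two no-information outcomes, and keep the best.
def best_receiver_value(game):
    candidates = []
    for solve in (filter_for_sigma_a, filter_for_sigma_b,   # LPs of this section
                  filter_for_sigma_c, filter_for_sigma_d):  # one-sender algorithm
        f = solve(game)              # returns None when no valid filter exists
        if f is not None:
            candidates.append(receiver_utility(game, f))
    candidates.append(no_info_utility(game, action=0))      # sigma_e
    candidates.append(no_info_utility(game, action=1))      # sigma_f
    return max(candidates)
```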
Suppose that f is a filter that maximizes the receiver's utility when the players follow strategy profile σ_a. Denote by S the set of signals of f conditioned on which both senders and the receiver prefer action 1 to action 0. Consider a filter f′ that samples a signal s according to f and does the following: if s ∈ S, it sends signal 1. Otherwise, it sends signal 0. It is straightforward to check that if σ_a is a Nash equilibrium in Γ_f, it is also a Nash equilibrium in Γ_{f′}. Moreover, players get the same utilities with f and f′, which means that f′ also maximizes the receiver's utility. This implies that we can restrict our search to binary filters in which all players prefer action 1 conditioned on signal 1 and the receiver prefers action 0 conditioned on signal 0.
Our aim, then, is to find a binary filter f such that both senders and the receiver prefer action 1 conditioned on signal 1, the receiver prefers action 0 conditioned on signal 0, and the receiver gets the maximal possible utility by playing 1 on signal 1 and 0 on signal 0. Let ω_1, …, ω_k be the elements of the state space and define a_i := p(ω_i)(u_1(ω_i, 1) − u_1(ω_i, 0)), b_i := p(ω_i)(u_2(ω_i, 1) − u_2(ω_i, 0)), and c_i := p(ω_i)(u_R(ω_i, 1) − u_R(ω_i, 0)), where u_1, u_2, and u_R denote the utilities of the first sender, the second sender, and the receiver. A binary filter over the state space can be described by a sequence x_1, …, x_k of real numbers between 0 and 1 such that x_i denotes the probability that the filter sends signal 1 on state ω_i. The condition that both senders prefer action 1 conditioned on signal 1 translates into

∑_{i=1}^{k} a_i x_i ≥ 0 and ∑_{i=1}^{k} b_i x_i ≥ 0.

Moreover, the utility of the receiver with filter f and strategy σ_a is given by ∑_{i=1}^{k} p(ω_i)(x_i u_R(ω_i, 1) + (1 − x_i) u_R(ω_i, 0)). This sum can be rearranged into ∑_{i=1}^{k} c_i x_i + ∑_{i=1}^{k} p(ω_i) u_R(ω_i, 0). Therefore, finding a filter f such that σ_a gives the maximal utility for the receiver in Γ_f reduces to solving the following linear programming instance:

maximize ∑_{i=1}^{k} c_i x_i
subject to ∑_{i=1}^{k} a_i x_i ≥ 0,
∑_{i=1}^{k} b_i x_i ≥ 0,
0 ≤ x_i ≤ 1 for all i ∈ {1, …, k}.
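A minimal sketch of this linear program, in our notation and under the simplifying assumption that the receiver's incentive conditions are verified after solving (as discussed next); the helper name `optimal_filter_sigma_a` and the array layout are ours.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_filter_sigma_a(prior, u1, u2, uR):
    """prior: shape (k,); u1, u2, uR: shape (k, 2) utilities for actions 0 and 1."""
    a = prior * (u1[:, 1] - u1[:, 0])  # sender 1's prior-weighted gains from action 1
    b = prior * (u2[:, 1] - u2[:, 0])  # sender 2's prior-weighted gains
    c = prior * (uR[:, 1] - uR[:, 0])  # receiver's prior-weighted gains

    # maximize c @ x  <=>  minimize -c @ x, subject to a @ x >= 0 and b @ x >= 0,
    # rewritten as -a @ x <= 0 and -b @ x <= 0, with 0 <= x_i <= 1.
    res = linprog(-c, A_ub=np.vstack([-a, -b]), b_ub=np.zeros(2),
                  bounds=[(0.0, 1.0)] * len(prior))
    if not res.success:
        return None
    x = res.x
    # Post-hoc receiver check (cf. the remark below): the receiver must prefer
    # action 1 given signal 1 and action 0 given signal 0.
    if c @ x < 0 or c @ (1 - x) > 0:
        return None
    return x
```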
Note that, even though we are maximizing the receiver's utility, it may be the case that σ_a is not incentive-compatible for the receiver in the resulting filtered game. For instance, the receiver might prefer playing 1 whenever the senders send 0. If such a thing happens (which can be tested in linear time), it means that there is no filter satisfying the desired conditions.
7 Extending Nash to sequential equilibria
For simplicity, we showed that the strategies provided in the proofs of Theorems 1, 2, 3 and 4 are Nash equilibria. In this section we show how to extend them to sequential equilibria. This means that, for each of these strategy profiles, we have to provide a compatible belief assessment and argue that the behavior of the players defined off the equilibrium path is a best-response according to their beliefs.
Note that, by construction, the only time that a player can be off the equilibrium path is when the receiver receives a message that the sender was not supposed to send under any circumstances (i.e., a message that had probability 0 of being sent). In the proofs of the theorems stated above, the strategy profiles that we constructed either had the receiver play the best action for her in the babbling equilibrium (i.e., the best action for the receiver without any information), or play the action that gives the least utility to the sender among all the relevant actions (i.e., the actions that are not dominated by combinations of other actions). We show next that there exist belief assessments that are compatible with the given strategy profiles in which playing these actions is justified. More generally, we show that, for every relevant action a, there exists a belief assessment that justifies playing a whenever the receiver receives an impossible message.
We prove the claim above for the case of one sender; the extension to Theorem 4 is straightforward. Denote by Ω the set of states and suppose that, with the proposed strategy σ, the sender sends messages following a certain function μ. Let a be an action that is optimal for the receiver at some posterior distribution q over Ω, and let m* be a message such that μ(ω) ≠ m* for all ω ∈ Ω. Consider a sequence (σ_n) of strategies for the sender in which, in each strategy σ_n, the sender computes the message that she would send to the receiver using function μ. However, before sending the message, the sender tosses a coin that lands tails with probability q(ω)/(n p(ω)), where ω is the realized state and p is the prior. If the coin lands heads, she sends the computed message. Otherwise, she sends message m*. Clearly, the sequence (σ_n) converges to σ. Moreover, the posterior distribution over states induced by each of these strategies conditioned on the event that the receiver received message m* is precisely q. This means that playing action a is a best response for the receiver upon receiving message m* under every σ_n. Therefore, in the belief assessment induced by (σ_n), a is also a best response to message m*.
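For concreteness, here is the posterior computation in our notation, assuming the coin in state ω lands tails with probability q(ω)/(n p(ω)) (well defined for all n large enough):

```latex
% Posterior induced by \sigma_n upon the off-path message m^*
% (p is the prior, q the target posterior; notation ours).
\[
  \Pr\nolimits_{\sigma_n}\!\left(\omega \mid m^*\right)
  \;=\;
  \frac{p(\omega)\cdot \frac{q(\omega)}{n\,p(\omega)}}
       {\sum_{\omega'\in\Omega} p(\omega')\cdot \frac{q(\omega')}{n\,p(\omega')}}
  \;=\;
  \frac{q(\omega)/n}{1/n}
  \;=\; q(\omega),
\]
% independently of n; hence the limit belief assessment assigns belief q to
% m^*, and the action a, being optimal at q, is sequentially rational there.
```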
8 Conclusion
This paper diverges from traditional sender-receiver models by emphasizing the receiver's perspective in cheap talk interactions. We explored two models: the first, with transparent motives (a state-independent sender utility), admits a geometric characterization of the optimal receiver equilibrium; the second introduces a pre-play stage that empowers the receiver to filter the information available to the sender. This approach aligns with user-centric platforms, where receivers control the information accessible to sellers, as on platforms like Amazon. Our findings yield nuanced insights into communication dynamics and enhance our understanding of strategic interactions. An intriguing avenue not explored in this paper is the integration of a third-party intermediary that controls the senders' information so as to maximize the intermediary's own utility.
References
- [1] Ricardo Alonso and Odilon Câmara. Persuading voters. American Economic Review, 106(11):3590–3605, 2016.
- [2] Itai Arieli and Yakov Babichenko. Private Bayesian persuasion. Journal of Economic Theory, 182:185–217, 2019.
- [3] Itai Arieli, Ivan Geffner, and Moshe Tennenholtz. Mediated cheap talk design. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5456–5463, 2023.
- [4] Itai Arieli, Ivan Geffner, and Moshe Tennenholtz. Resilient information aggregation. In Rineke Verbrugge, editor, Proceedings of the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2023), Oxford, United Kingdom, 28–30th June 2023, volume 379 of EPTCS, pages 31–45, 2023.
- [5] Robert J Aumann, Michael Maschler, and Richard E Stearns. Repeated Games with Incomplete Information. MIT Press, 1995.
- [6] Dirk Bergemann, Benjamin Brooks, and Stephen Morris. The limits of price discrimination. American Economic Review, 105(3):921–957, 2015.
- [7] Sourav Bhattacharya and Arijit Mukherjee. Strategic information revelation when experts compete to influence. The RAND Journal of Economics, 44(3):522–544, 2013.
- [8] David Blackwell. Equivalent comparisons of experiments. The Annals of Mathematical Statistics, pages 265–272, 1953.
- [9] Andreas Blume and Oliver Board. Language barriers. Econometrica, 81(2):781–812, 2013.
- [10] Vincent P Crawford and Joel Sobel. Strategic information transmission. Econometrica: Journal of the Econometric Society, pages 1431–1451, 1982.
- [11] Kris De Jaegher. A game-theoretic rationale for vagueness. Linguistics and Philosophy, 26(5):637–659, 2003.
- [12] Matthew Gentzkow and Emir Kamenica. Competition in persuasion. The Review of Economic Studies, 84(1):300–322, 2016.
- [13] Jeanne Hagenbach and Frédéric Koessler. Cheap talk with coarse understanding. Games and Economic Behavior, 124:105–121, 2020.
- [14] Shota Ichihashi. Limiting sender’s information in Bayesian persuasion. Games and Economic Behavior, 117:276–288, 2019.
- [15] Gerhard Jäger, Lars P Metzger, and Frank Riedel. Voronoi languages: Equilibria in cheap-talk games with high-dimensional types and few signals. Games and Economic Behavior, 73(2):517–537, 2011.
- [16] Emir Kamenica and Matthew Gentzkow. Bayesian persuasion. American Economic Review, 101(6):2590–2615, 2011.
- [17] Elliot Lipnowski and Doron Ravid. Cheap talk with transparent motives. Econometrica, 88(4):1631–1660, 2020.