Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning
Abstract
In today's economy, it has become important for Internet platforms to consider the sequential information design problem in order to align their long-term interests with the incentives of gig service providers (e.g., drivers, hosts). This paper proposes a novel model of sequential information design, namely the Markov persuasion process (MPP). Specifically, in an MPP, a sender, with an informational advantage, seeks to persuade a stream of myopic receivers to take actions that maximize the sender's cumulative utility in a finite-horizon Markovian environment with varying prior and utility functions. Planning in MPPs thus faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces optimal long-term cumulative utility for the sender. Nevertheless, at the population level, where the model is known, it turns out that we can efficiently determine the optimal (resp. $\epsilon$-optimal) policy with finite (resp. infinite) states and outcomes, through a modified formulation of the Bellman equation that additionally takes persuasiveness into consideration.
Our main technical contribution is to study the MPP under the online reinforcement learning (RL) setting, where the goal is to learn the optimal signaling policy by interacting with the underlying MPP, without knowledge of the sender's utility functions, the prior distributions, or the Markov transition kernels. For such a problem, we design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of the optimism and pessimism principles. In particular, we obtain optimistic estimates of the value functions to encourage exploration under the unknown environment. Meanwhile, we additionally robustify the signaling policy with respect to the uncertainty of the prior estimation to prevent the receivers' detrimental equilibrium behavior. Our algorithm enjoys sample efficiency by achieving a sublinear $\sqrt{T}$-regret upper bound. Furthermore, both our algorithm and theory can be applied to MPPs with large spaces of outcomes and states via function approximation, and we showcase such a success under the linear setting.
1 Introduction
Most sequential decision models assume that there is a sole agent who possesses and processes all relevant (online or offline) information and takes an action accordingly. However, the economic literature on information design [26, 8] highlights the importance of considering information asymmetry in decision making, where the decision maker and the information possessor may be two parties with different interests and goals. For example, a ride-sharing platform holds historical and real-time data on active riders and driver types in different locations, based on which it has developed centralized combinatorial optimization and reinforcement learning algorithms for vehicle repositioning, routing and order matching to optimize its operational efficiency and profit [33, 42, 34, 43]. But the de facto decision makers are the drivers. Moreover, as increasingly many drivers are freelancers rather than employees, the platform cannot expect to give them mandatory orders. On the other hand, if the platform shares no information on rider demand, most drivers will not be able to efficiently find profitable trips. Therefore, it is not only realistic but also necessary to consider an information design problem that aligns the interests of the two parties in sequential decision making processes of this kind.
Given the large data sets being collected by corporations and governments, with avowed goals that relate data analysis to social welfare, it is timely to pursue formal treatments of sequential information design, to understand how to strategically inform the (sequential) decision makers (e.g., users, clients or citizens) impacted by centralized data analysis. In particular, we wish to understand the resulting equilibrium outcomes for both parties. As a concrete example, consider an online shopping platform which may make use of learning tools such as reinforcement learning or online convex optimization to manage inventory and ensure profitability [20, 36]. The platform cannot single-handedly manage its inventory; instead, it requires information design (a.k.a. Bayesian persuasion) in its interactions with its suppliers and consumers. On the supply side, it could strategically reveal aspects of consumer sentiment (e.g., rough numbers of visits or searches) to the suppliers in order to guide their sales expectations and negotiate for lower unit prices. On the demand side, it could tactically control displayed product information (e.g., "last five remaining", "editor's choice") so as to influence consumers' perception of products and consequently their purchase decisions. Similar situations can be anticipated for a recommendation platform. On the one hand, it should recommend the most relevant items to its users to drive click-through and engagement. On the other hand, its recommendations are subject to misalignment with long-term objectives such as profits (e.g., from paid promotion), social impact (e.g., preventing misinformation and filter bubbles) or the development of a creator ecosystem [49, 53, 37].
1.1 Our Results and Contributions
To provide a formal foundation for the study of sequential information design, we introduce the Markov persuasion process (MPP), where a sender, with an informational advantage, seeks to persuade a stream of myopic receivers to take actions that maximize the sender's cumulative utility in a finite-horizon Markovian environment with varying prior and utility functions. We need to address a key challenge regarding the planning problem in MPPs; specifically, how to find signaling policies that are persuasive and also optimized for the sender's long-term objective. Moreover, in the face of uncertainty about both the environment and the receivers, there is a dilemma: the optimal policy based on an estimated prior is not necessarily persuasive and thus cannot induce the desired trajectory, whereas a full-information-revelation policy is always persuasive but usually leads to suboptimal cumulative utility. A reinforcement learning algorithm for MPPs therefore has to ensure optimality under the premise of robust persuasiveness. This makes the algorithm design non-trivial and the regret analysis highly challenging.
We show how to surmount these design and analysis challenges, and present a no-regret learning algorithm, which we refer to as the Optimism-Pessimism Principle for Persuasion Process (OP4), that provably achieves sublinear regret with high probability; the bound grows only as the square root of the number of episodes $K$, depends polynomially on the dimensions of the feature spaces and on the horizon length $H$ of each episode, and hides logarithmic factors as well as problem-dependent parameters. To establish this result, in Section 3.3 we start by constructing a modified formulation of the Bellman equation that can efficiently determine the optimal (resp. $\epsilon$-optimal) policy with finite (resp. infinite) states and outcomes. Section 4.2 then considers the learning problem, in particular the design of OP4, which adopts the optimism principle in utility estimation to incentivize exploration and the pessimism principle in prior estimation to prevent detrimental equilibrium behavior of the receivers. In Sections 4.3 and 4.4, we showcase OP4 in tabular MPPs and the contextual Bayesian persuasion problem, respectively, both of which are practical special cases of MPPs. In Section 5, we then generalize these positive results to MPPs with large outcome and state spaces via linear function approximation and generalized linear models.
In summary, our contributions are threefold. At the conceptual level, we identify the need for sequential information design in real-world problems and accordingly formulate a novel model, the MPP, to capture the misaligned incentives between the (sequential) decision makers and the information possessor. At the methodological level, our key insight is a new algorithmic principle: optimism to encourage exploration and pessimism to induce robust equilibrium behavior. Finally, at the technical level, we develop a novel regret decomposition tailored to this combination of optimism and pessimism in the design of online learning algorithms. The fact that the combined optimism-pessimism principle can still lead to sublinear regret in strategic setups was not clear before our new regret decomposition lemma. We expect this design principle and our proof techniques to be useful for other strategic learning problems.
1.2 Related Work
Our work builds on the foundations of information design and reinforcement learning. We refer the reader to Sections 2.1 and 2.2 for background and formal introductions. Here we focus on technical and modeling comparisons with related work on dynamic Bayesian persuasion and efficient reinforcement learning.
Dynamic Bayesian persuasion. Starting from the seminal work of Kamenica and Gentzkow [26], the study of Bayesian persuasion looks at the design problem of influencing an uninformed decision maker through strategic information revelation. Many variants of this model have been studied, with applications in security, advertising, finance, etc. [44, 54, 21, 6]. More recently, several dynamic Bayesian persuasion frameworks have been proposed to model the long-term interest of the sender. Many papers [16, 45, 17, 29] consider the setting where the sender observes the evolving states of a Markov chain and seeks to influence the receiver's belief about the state through signaling, thereby persuading him to take certain actions. In contrast to our setting, at each round the receiver's action has no influence on the evolution of the Markov process, and he thus simply maximizes his utility under his belief about the current state, given all the historical signals received from the sender. In Ely [16] and Farhadi and Teneketzis [17], the Markov chain has two states (one absorbing): the receiver is interested in detecting the jump to the absorbing state, whereas the sender seeks to prolong the time until such a jump is detected. Renault et al. [45] show that a greedy disclosure policy that ignores its influence on future utility can be optimal in Markov chains with special utility functions. Lehrer and Shaiderman [29] characterize optimal strategies under different discount factors as well as the optimal values the sender can achieve. Closest to our model is that of Gan et al. [19]: both works assume a Markovian environment whose state transitions are influenced by the receiver's action, as well as a separate persuasion state drawn from a prior independent of the receiver's action. However, Gan et al. [19] focus on the planning problem for the infinite-horizon MDP, solving for the sender's optimal signaling policy when the environment is known, in the cases of a myopic or a far-sighted receiver. In particular, they show that it is NP-hard to approximate an optimal policy against a far-sighted receiver, which also justifies our interest in myopic receivers. Another closely related work [61] studies the learning problem in a repeated persuasion setting (without Markov state transitions) between a stream of myopic receivers and a sender without initial knowledge of the prior. It introduces the notion of regret as well as the robustness principle for this learning problem, which we adopt and generalize to our model.
Bayesian Incentive-Compatible Bandit Exploration. Our work is also loosely related to a seminal result of Mansour et al. [35], who model the misaligned incentives between a system (i.e., sender) and a stream of myopic agents (i.e., receivers). Mansour et al. [35] show that, using information asymmetry, the system can create intrinsic incentives for agents to follow its recommendations. In their problem, the sender's objective is limited to the social welfare, i.e., the cumulative utility of all agents, whereas we make no assumption on the sender's utility function and thus on her long-term objective. Besides, our model is designed to capture more general situations where each receiver can have a different prior and utility function, and the environment may be Markovian with dynamics influenced by the receivers' actions.
Efficient Reinforcement Learning. Reinforcement learning has seen successful applications in various domains, such as robotics, finance and dialogue systems [27, 58, 30]. Along with this empirical success, there has been a growing quest to establish provably efficient RL methods. Classical sample efficiency results focus on tabular environments with small, finite state spaces [2, 39, 4, 12, 47, 24, 46]. Notably, through the design principle known as optimism in the face of uncertainty [28], an RL algorithm provably incurs a regret that is sublinear in the number of episodes and polynomial in the sizes of the state and action spaces under the tabular setting [24, 4]. More recently, there have been advances in RL with function approximation, especially the linear case. Jin et al. [25] proposed an efficient algorithm for a setting where the transition kernel and the utility function are both linear with respect to a given feature mapping. Similar assumptions have been studied in different settings and have led to sample efficiency results [55, 14, 38, 57, 22]. Moreover, other general function approximations have been studied in parallel, including generalized linear function approximation [52], linear mixture MDPs based on a ternary feature mapping [3, 60, 9, 59], kernel approximation [56], as well as models based on the low Bellman rank assumption [23, 11]. We make use of these function approximation techniques to model our conditional prior, and we show how to integrate the persuasion structure into these efficient reinforcement learning frameworks, thereby obtaining sample-efficient results for large-scale MPPs.
2 Preliminaries
This section provides some necessary background in information design and Markov decision processes, as preparation for our model of Markov persuasion processes presented in the next section.
2.1 Basics of Information Design
Classic information design [26] considers the persuasion problem between a single sender (she) and a single receiver (he). The receiver is the only actor and looks to take an action $a \in \mathcal{A}$, which results in receiver utility $u(\omega, a)$ and sender utility $v(\omega, a)$. Here $\omega \in \Omega$ is the realized outcome of certain environmental uncertainty, drawn from a prior distribution $\mu_0 \in \Delta(\Omega)$, and $\mathcal{A}$ is a finite set of available actions for the receiver. While $u$, $v$ and the prior distribution $\mu_0$ are all common knowledge, the sender possesses an informational advantage and can privately observe the realized outcome $\omega$. The persuasion problem studies how the sender can selectively reveal her private information about $\omega$ to influence the receiver's decisions and ultimately maximize her own expected utility.
To model the sender's strategic revelation of information, it is standard to use a signaling scheme, which essentially specifies the conditional distribution of a random variable (namely, the signal) given the outcome $\omega$. Before the realization of the outcome, the sender commits to such a signaling scheme. Given the realized outcome, the sender samples a signal from the conditional distribution specified by the signaling scheme and reveals it to the receiver. Upon receiving this signal, the receiver infers a posterior belief about the outcome via Bayes' theorem (based on the correlation between the signal and the outcome as promised by the signaling scheme) and then chooses an action that maximizes his expected utility.
A standard revelation-principle-style argument shows that it is without loss of generality to focus on direct and persuasive signaling schemes [26]. A scheme is direct if each signal corresponds to an action recommendation to the receiver, and is persuasive if the recommended action indeed maximizes the receiver's a posteriori expected utility. More formally, in a direct signaling scheme $\pi$, $\pi(a \mid \omega)$ denotes the probability of recommending action $a$ given realized outcome $\omega$. Upon receiving an action recommendation $a$, the receiver computes a posterior belief about $\omega$: $\mu(\omega \mid a) = \mu_0(\omega)\pi(a \mid \omega) / \sum_{\omega'} \mu_0(\omega')\pi(a \mid \omega')$. Thus, the action recommendation $a$ is persuasive if and only if $a$ maximizes the receiver's expected utility w.r.t. the posterior belief about $\omega$; i.e., $\mathbb{E}_{\omega \sim \mu(\cdot \mid a)}[u(\omega, a)] \ge \mathbb{E}_{\omega \sim \mu(\cdot \mid a)}[u(\omega, a')]$ for any $a' \in \mathcal{A}$. Equivalently, we say $\pi$ is persuasive if
$$\sum_{\omega \in \Omega} \mu_0(\omega)\, \pi(a \mid \omega)\, \big[ u(\omega, a) - u(\omega, a') \big] \;\ge\; 0 \qquad \text{for all } a, a' \in \mathcal{A}.$$
Let $\Pi$ denote the set of all signaling schemes. To emphasize that the definition of persuasiveness depends on the prior $\mu_0$, we denote the set of persuasive schemes under prior $\mu_0$ by $\Pi_{\mathrm{per}}(\mu_0) \subseteq \Pi$.
Given a persuasive signaling scheme $\pi \in \Pi_{\mathrm{per}}(\mu_0)$, it is in the receiver's best interest to take the recommended action, and thus the sender's expected utility becomes $\mathbb{E}_{\omega \sim \mu_0}\big[\sum_{a \in \mathcal{A}} \pi(a \mid \omega)\, v(\omega, a)\big]$.
Therefore, given full knowledge of the persuasion instance, the sender can solve for an optimal persuasive signaling scheme through the following linear program (LP), which searches for a persuasive signaling scheme that maximizes her expected utility (see, e.g., [15] for details):
$$\max_{\pi \ge 0}\; \sum_{\omega \in \Omega} \sum_{a \in \mathcal{A}} \mu_0(\omega)\, \pi(a \mid \omega)\, v(\omega, a) \quad \text{s.t.} \quad \sum_{\omega} \mu_0(\omega)\, \pi(a \mid \omega)\, \big[u(\omega, a) - u(\omega, a')\big] \ge 0 \;\; \forall a, a' \in \mathcal{A}, \qquad \sum_{a \in \mathcal{A}} \pi(a \mid \omega) = 1 \;\; \forall \omega \in \Omega.$$
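To make the LP concrete, the following minimal Python sketch solves it with cvxpy on a small made-up instance (the prior and utility matrices below are illustrative placeholders, not an example from the paper).

```python
# Sketch: optimal persuasive signaling scheme via the LP above (illustrative instance).
import cvxpy as cp
import numpy as np

mu0 = np.array([0.7, 0.3])            # prior over outcomes (hypothetical numbers)
u = np.array([[1.0, 0.0],             # receiver utility u[omega, a]
              [0.0, 1.0]])
v = np.array([[0.0, 1.0],             # sender utility v[omega, a]
              [0.0, 1.0]])            # here the sender always prefers action 1

n_omega, n_act = u.shape
pi = cp.Variable((n_omega, n_act), nonneg=True)   # pi[omega, a] = Pr[recommend a | omega]

constraints = [cp.sum(pi, axis=1) == 1]
# Persuasiveness: each recommended action must beat every deviation in posterior expectation.
for a in range(n_act):
    for a_dev in range(n_act):
        constraints.append(mu0 @ cp.multiply(pi[:, a], u[:, a] - u[:, a_dev]) >= 0)

sender_value = cp.sum(cp.multiply(pi, mu0[:, None] * v))
problem = cp.Problem(cp.Maximize(sender_value), constraints)
problem.solve()
print("optimal scheme pi(a|omega):\n", pi.value)
print("sender's expected utility:", problem.value)
```

The same LP structure reappears later as the per-state subproblem of the modified Bellman equation for MPPs.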
2.2 Basics of Reinforcement Learning and Markov Decision Processes
The Markov decision process (MDP) [41, 48] is a classic mathematical framework for sequential decision making. In this work, we focus on the model of episodic MDPs. Specifically, at the beginning of the episode, the environment has an initial state $s_1$ (possibly picked by an adversary). Then, at each step $h \in [H]$, the agent takes some action $a_h \in \mathcal{A}$ to interact with the environment at state $s_h \in \mathcal{S}$. The state obeys the Markov property and thus captures all relevant information in the history. Accordingly, the agent receives the utility $v_h(s_h, a_h)$ and the system evolves to the next state $s_{h+1} \sim P_h(\cdot \mid s_h, a_h)$. Such a process terminates after step $H$, where $H$ is also known as the horizon length. Here, $\mathcal{A}$ is a finite set of available actions for the agent, and $\mathcal{S}$ is the (possibly infinite) set of MDP states. The utility function $v_h$ and the transition kernel $P_h$ may vary at each step. A policy $\pi = \{\pi_h\}_{h \in [H]}$ of the agent characterizes her decision making at step $h$: after observing the state $s$, the agent takes action $a$ with probability $\pi_h(a \mid s)$.
In an episodic MDP with $H$ steps, under policy $\pi$, we define the value function $V_h^\pi : \mathcal{S} \to \mathbb{R}$ as the expected value of cumulative utilities starting from an arbitrary state,
$$V_h^\pi(s) = \mathbb{E}_\pi\Big[\textstyle\sum_{h'=h}^{H} v_{h'}(s_{h'}, a_{h'}) \,\Big|\, s_h = s\Big].$$
Here $\mathbb{E}_\pi$ means that the expectation is taken with respect to the trajectory generated by policy $\pi$ on the transition model $\{P_h\}_{h \in [H]}$. Similarly, we define the action-value function $Q_h^\pi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as the expected value of cumulative utilities starting from an arbitrary state-action pair,
$$Q_h^\pi(s, a) = \mathbb{E}_\pi\Big[\textstyle\sum_{h'=h}^{H} v_{h'}(s_{h'}, a_{h'}) \,\Big|\, s_h = s,\ a_h = a\Big].$$
The optimal policy $\pi^*$ is defined as the policy maximizing the (expected) cumulative utility, $\pi^* = \arg\max_\pi V_1^\pi(s_1)$. Since the agent's action affects both the immediate utility and the next state, which in turn influences future utility, maximizing the total utility demands careful planning. Notably, $\pi^*$ can be solved by dynamic programming based on the Bellman equation [7]. Specifically, starting with $V_{H+1}^*(\cdot) = 0$ and going backward for each $h$ from $H$ to $1$, iteratively update
$$Q_h^*(s, a) = v_h(s, a) + \mathbb{E}_{s' \sim P_h(\cdot \mid s, a)}\big[V_{h+1}^*(s')\big], \qquad V_h^*(s) = \max_{a \in \mathcal{A}} Q_h^*(s, a),$$
and determine the optimal policy as the greedy policy with respect to $\{Q_h^*\}_{h \in [H]}$.
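As a quick illustration of this backward recursion, here is a minimal tabular sketch in Python; the sizes and the randomly generated utilities and transitions are placeholders.

```python
# Sketch: finite-horizon tabular dynamic programming (Bellman optimality recursion).
import numpy as np

rng = np.random.default_rng(0)
H, S, A = 5, 4, 3                               # horizon, #states, #actions (hypothetical sizes)
v = rng.random((H, S, A))                       # utilities v_h(s, a)
P = rng.dirichlet(np.ones(S), size=(H, S, A))   # transitions P_h(s' | s, a)

V = np.zeros((H + 1, S))                        # V_{H+1} = 0
Q = np.zeros((H, S, A))
greedy = np.zeros((H, S), dtype=int)
for h in range(H - 1, -1, -1):                  # h = H-1, ..., 0 (0-indexed steps)
    Q[h] = v[h] + P[h] @ V[h + 1]               # Q_h(s,a) = v_h(s,a) + E_{s'}[V_{h+1}(s')]
    greedy[h] = Q[h].argmax(axis=1)             # optimal policy is greedy in Q_h
    V[h] = Q[h].max(axis=1)
print("optimal value at step 1:", V[0])
```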
In online reinforcement learning, the agent has no prior knowledge of the environment, namely $\{v_h, P_h\}_{h \in [H]}$, but aims to learn the optimal policy by interacting with the environment for $K$ episodes. For each $k \in [K]$, at the beginning of the $k$-th episode, after observing the initial state $s_1^k$, the agent chooses a policy $\pi^k$ based on the observations collected before the $k$-th episode. The discrepancy between $V_1^{\pi^k}(s_1^k)$ and $V_1^*(s_1^k)$ serves as the suboptimality of the agent at the $k$-th episode. The performance of the online learning algorithm is measured by the expected regret,
$$\mathrm{Reg}(K) = \sum_{k=1}^{K} \Big[ V_1^*(s_1^k) - V_1^{\pi^k}(s_1^k) \Big].$$
3 Markov Persuasion Processes
This section introduces the Markov Persuasion Process (MPP), a novel model for sequential information design in Markovian environments. It notably captures the motivating yet intricate real-world problems in Section 1. Furthermore, our MPP model is readily applicable to generalized settings with large state spaces by incorporating function approximation techniques.
3.1 A Model of Markov Persuasion Processes (MPPs)
We start by abstracting the sequential information design problem instances in Section 1 into MPPs. Taking as an example a recommendation platform for ad keywords, we view the platform as the sender and the advertisers as the receivers. The advertisers decide on actions such as whether to accept a recommended keyword. To better reflect reality, we model two types of information in MPPs: outcomes and states. We use the notion of an outcome to characterize the sender's private information in the face of each receiver, such as the features of searchers for some keyword. The outcome follows a prior distribution, such as the general demographics of Internet users on the platform. The platform can thus leverage such fine-grained knowledge of keyword features, matched with the specific ad features of each advertiser, to persuade the advertisers to accept a keyword recommendation. Meanwhile, we use the notion of a state to characterize the Markovian state of the environment, e.g., the availability of ad keyword slots. The state is affected by the receiver's action, as the availability changes after some keywords get bought. (Footnote 1: Similarly, we can view the online shopping platform as a sender who persuades a stream of receivers (suppliers, consumers) to take certain actions, e.g., whether to accept an offer or make a purchase. In this case, the sender can privately observe outcomes such as the consumer sentiment on some random products based on search and click logs, whereas the states are product reviews, sales or shipping times that are commonly known to the public and affected by the actions of both the supply and demand sides. In the case of ride-sharing, the outcome represents the fine-grained knowledge of currently active rider types that is privately known to the platform and generally stochastic in accordance with some user demographics, whereas the state captures the general driver supply or rider demand at each location, which is affected by the drivers' decisions.) Naturally, both the sender's and the receiver's utilities are determined by the receiver's action jointly with the state of the environment and the realized outcome. Meanwhile, as these applications could serve thousands or millions of receivers every day, to reduce the complexity of our model we assume each receiver appears only once and thus myopically maximizes his utility at that particular step, whereas the sender is a system planner who aims to maximize her long-term cumulative expected utility.
More specifically, an MPP is built on top of a standard episodic MDP with state space $\mathcal{S}$, action space $\mathcal{A}$, and transition kernels $\{P_h\}_{h \in [H]}$. In this paper, we restrict our attention to finite-horizon (i.e., episodic) MPPs with $H$ steps, and leave the study of infinite-horizon MPPs as an interesting future direction. At a high level, there are two major differences between MPPs and MDPs. First, in an MPP, the planner cannot directly take an action but instead can leverage its information advantage and "persuade" a receiver to take a desired action at each step $h$. Second, in an MPP, the state transition is affected not only by the current action $a_h$ and state $s_h$, but also by the realized outcome $\omega_h$ of Nature's probability distribution. Specifically, the state transition kernel at step $h$ is denoted as $P_h(\cdot \mid s_h, \omega_h, a_h)$. To capture the sender's persuasion of a receiver to take actions at step $h$, we assume that a fresh receiver arrives at step $h$ with a prior $\mu_h$ over the outcome $\omega_h$. The planner, who is the sender here, can observe the realized outcome and would like to strategically reveal information about it in order to persuade the receiver to take a certain action.
Differing from classical single-shot information design, the immediate utility functions $u_h$ and $v_h$ for the receiver and the sender vary not only at each step but also depend additionally on the commonly observed state of the environment. We assume the receiver has full knowledge of his utility $u_h$ and prior $\mu_h$ at each step $h$, and will take the recommended action if and only if it maximizes his expected utility under the posterior belief about the outcome. (Footnote 2: This assumption is not essential but is made for technical rigor. Even if receivers have limited knowledge or computational power to accurately determine the utility-maximizing actions, the sender has sufficient ethical or legal reasons to comply with the persuasiveness constraints in practice, and the receivers would only take the recommendations if the platform has a good reputation, i.e., is persuasive with high probability.)
Formally, an MPP with a horizon length proceeds as follows at each step :
1. A fresh receiver with prior distribution $\mu_h$ and utility $u_h$ arrives.
2. The sender commits to a persuasive signaling policy $\pi_h$, which is public knowledge.
3. After observing the realized state $s_h$ and outcome $\omega_h$, the sender accordingly recommends the receiver to take an action $a_h \sim \pi_h(\cdot \mid s_h, \omega_h)$.
4. Given the recommended action, the receiver takes an action $a_h$, receives utility $u_h(s_h, \omega_h, a_h)$ and then leaves the system. Meanwhile, the sender receives utility $v_h(s_h, \omega_h, a_h)$.
5. The next state $s_{h+1}$ is generated according to $P_h(\cdot \mid s_h, \omega_h, a_h)$, the state transition kernel at the $h$-th step.
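To make the protocol concrete, the sketch below simulates one episode of a small tabular MPP under the full-information signaling policy, which always reveals the outcome so that the recommendation coincides with the receiver's best response and is therefore trivially persuasive. All numerical quantities are randomly generated placeholders.

```python
# Sketch: one episode of a tabular MPP under the full-information signaling policy.
import numpy as np

rng = np.random.default_rng(1)
H, S, Omega, A = 4, 3, 3, 2                           # hypothetical sizes
u = rng.random((H, S, Omega, A))                      # receiver utilities u_h(s, omega, a)
v = rng.random((H, S, Omega, A))                      # sender utilities v_h(s, omega, a)
mu = rng.dirichlet(np.ones(Omega), size=H)            # outcome priors mu_h
P = rng.dirichlet(np.ones(S), size=(H, S, Omega, A))  # transitions P_h(s' | s, omega, a)

s = rng.integers(S)                                   # initial state
total_sender_utility = 0.0
for h in range(H):
    omega = rng.choice(Omega, p=mu[h])                # steps 1-3: receiver arrives, sender observes (s, omega)
    a = int(u[h, s, omega].argmax())                  # full information => recommend the receiver's best response
    total_sender_utility += v[h, s, omega, a]         # step 4: a persuasive recommendation is followed
    s = rng.choice(S, p=P[h, s, omega, a])            # step 5: next state from the transition kernel
print("sender's cumulative utility in this episode:", total_sender_utility)
```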
Here we coin the notion of a signaling policy $\pi = \{\pi_h\}_{h \in [H]}$, where $\pi_h$ maps a state to a signaling scheme at the $h$-th step. It captures a possibly multi-step procedure in which the sender commits to a signaling scheme after observing the realized state and then samples a signal after observing the realized outcome. For notational convenience, we denote by $\pi_h(a \mid s, \omega)$ the probability of recommending action $a$ given state $s$ and realized outcome $\omega$. We can also generalize the notion of persuasiveness from classic information design to MPPs. Specifically, we define the persuasive set $\Pi_{\mathrm{per}}(u_h, \mu_h)$ that contains all signaling policies that are persuasive to the receiver with utility $u_h$ and prior $\mu_h$ at every possible state $s$:
$$\Pi_{\mathrm{per}}(u_h, \mu_h) := \Big\{ \pi_h : \; \mathbb{E}_{\omega \sim \mu_h}\big[ \pi_h(a \mid s, \omega)\, \big( u_h(s, \omega, a) - u_h(s, \omega, a') \big) \big] \ge 0, \;\; \forall s \in \mathcal{S},\ a, a' \in \mathcal{A} \Big\}.$$
Recall that $\pi_h$ is a mapping from $\mathcal{S} \times \Omega$ to distributions over $\mathcal{A}$. As such, the sender's signaling policy is essentially a stochastic policy as defined in standard MDPs, in that $\pi_h$ maps a state to a stochastic action, except that here the probability of suggesting action $a$ under policy $\pi_h$ depends additionally on the realized outcome $\omega$ that is only known to the sender.
We say $\pi = \{\pi_h\}_{h \in [H]}$ is a feasible policy of the MPP if $\pi_h \in \Pi_{\mathrm{per}}(u_h, \mu_h)$ for every $h \in [H]$, because otherwise the receiver is not guaranteed to take the recommended action and the induced state transition trajectory would be infeasible. We denote the set of all feasible policies by $\Pi_{\mathrm{per}} := \Pi_{\mathrm{per}}(u_1, \mu_1) \times \cdots \times \Pi_{\mathrm{per}}(u_H, \mu_H)$.
3.2 MPPs: the Generalized Version with Contexts and Linear Parameterization
To provide a broadly useful modeling concept, we also study a generalized setting of the Markov Persuasion Process with contextual prior and a possibly large space of states, outcomes and contexts.
Contextual Prior.
At the beginning of each episode, a sequence of contexts $(c_1, \ldots, c_H)$ is realized by Nature and becomes public knowledge. We allow the prior to be influenced by the context $c_h$ at each step $h$, and thus denote it by $\mu_h(\cdot \mid c_h)$. Such contextual information can model uncertainty such as the varying demographics of the active user group as affected by events (e.g., scheduled concerts or sports games in ride-sharing) at different times of the day. (Footnote 3: In the case of the online shopping platform, the prior of consumer interests may be affected by different holidays or seasons at different times of the year.) Here we allow the sequence of contexts to be adversarially generated.
Linear Parameterization.
We also relax the state, context and outcome spaces to be continuous, and additionally assume that the transition kernels and utility functions are linear and that the conditional priors of the outcomes follow generalized linear models (GLMs) of the context at each step. More formally, for each step $h \in [H]$, our linearity condition assumes:
• The sender's utility is linear in a known feature map: $v_h(s, \omega, a) = \phi_h(s, \omega, a)^\top \theta_h$, where (1) $\phi_h(s, \omega, a) \in \mathbb{R}^d$ is a known feature vector; (2) $\theta_h \in \mathbb{R}^d$ is the unknown linear parameter at step $h$.
• The next state is drawn from the distribution $P_h(\cdot \mid s, \omega, a) = \langle \phi_h(s, \omega, a), \nu_h(\cdot) \rangle$, where $\nu_h = (\nu_h^{(1)}, \ldots, \nu_h^{(d)})$ is a vector of $d$ unknown measures over $\mathcal{S}$ at step $h$.
• The outcome is subject to a generalized linear model (GLM), which models a wider range of hypothesis function classes. (Footnote 4: We note that the GLM is a strict generalization of the linear model assumption that we have for the transition kernel $P_h$. While we could use similar techniques to extend the transition model to a GLM, following Wang et al. [52], we forgo such an extension for simplicity since it is not the primary focus of our work.) Given the context $c_h$, there exists a link function $f$ such that $\omega_h = f(\psi(c_h)^\top w_h) + z_h$, where $\psi(c_h)$ is a feature vector of the context and $w_h$ is an unknown parameter. The noises $z_h$ are independent zero-mean sub-Gaussian variables. We denote the prior of $\omega_h$ induced by parameter $w_h$ at context $c$ by $\mu_h^{w_h}(\cdot \mid c)$.
Without loss of generality, we assume that the feature vectors are bounded, $\|\phi_h(s, \omega, a)\| \le 1$ and $\|\psi(c)\| \le 1$ for all $s, \omega, a$ and $c$ (Footnote 5: For simplicity of notation, we omit the subscript of the norm whenever it is an $\ell_2$ norm in this paper.), and that the unknown parameters $\theta_h$, $\nu_h$ and $w_h$ are bounded in norm by an absolute constant $R$. Such a regularity condition is common in the RL literature.
3.3 Optimal Signaling Policy in MPPs
In order to maximize the sender's utility, we study the optimal policy in MPPs, in analogy to that of standard MDPs. We start by considering the value of any feasible policy $\pi \in \Pi_{\mathrm{per}}$. For each step $h \in [H]$, we define the value function $V_h^\pi : \mathcal{S} \to \mathbb{R}$ for the sender as the expected value of cumulative utilities under $\pi$ when starting from an arbitrary state at the $h$-th step. That is, for any $s \in \mathcal{S}$, we define
$$V_h^\pi(s) = \mathbb{E}\Big[\textstyle\sum_{h'=h}^{H} v_{h'}(s_{h'}, \omega_{h'}, a_{h'}) \,\Big|\, s_h = s\Big],$$
where the expectation is taken with respect to the randomness of the trajectory (i.e., the randomness of state transitions), the realized outcomes and the stochasticity of $\pi$. Accordingly, we define the $Q$-function (action-value function), which gives the expected value of cumulative utilities when starting from an arbitrary state-outcome-action triple at the $h$-th step and following the signaling policy $\pi$ afterwards, that is,
$$Q_h^\pi(s, \omega, a) = v_h(s, \omega, a) + \mathbb{E}\Big[\textstyle\sum_{h'=h+1}^{H} v_{h'}(s_{h'}, \omega_{h'}, a_{h'}) \,\Big|\, s_h = s,\ \omega_h = \omega,\ a_h = a\Big].$$
By definition, $V_h^\pi(s) = \mathbb{E}_{\omega \sim \mu_h}\big[\sum_{a \in \mathcal{A}} \pi_h(a \mid s, \omega)\, Q_h^\pi(s, \omega, a)\big]$, since the outcome is drawn from $\mu_h$ and the action is recommended by $\pi_h$. To simplify notation, for any $Q$-function $Q : \mathcal{S} \times \Omega \times \mathcal{A} \to \mathbb{R}$, any prior $\mu$ over $\Omega$ and any signaling scheme $\pi$ over $\mathcal{A}$, we additionally denote
$$\big\langle Q(s, \cdot, \cdot),\ \pi \big\rangle_{\mu} := \mathbb{E}_{\omega \sim \mu}\Big[\textstyle\sum_{a \in \mathcal{A}} \pi(a \mid s, \omega)\, Q(s, \omega, a)\Big].$$
Using this notation, the Bellman equation associated with signaling policy $\pi$ becomes
$$Q_h^\pi(s, \omega, a) = v_h(s, \omega, a) + \mathbb{E}_{s' \sim P_h(\cdot \mid s, \omega, a)}\big[V_{h+1}^\pi(s')\big], \qquad V_h^\pi(s) = \big\langle Q_h^\pi(s, \cdot, \cdot),\ \pi_h \big\rangle_{\mu_h}, \qquad V_{H+1}^\pi(s) = 0, \tag{3.1}$$
which holds for all $h \in [H]$, $s \in \mathcal{S}$, $\omega \in \Omega$ and $a \in \mathcal{A}$. Similarly, the Bellman optimality equation is
$$Q_h^*(s, \omega, a) = v_h(s, \omega, a) + \mathbb{E}_{s' \sim P_h(\cdot \mid s, \omega, a)}\big[V_{h+1}^*(s')\big], \qquad V_h^*(s) = \max_{\pi_h \in \Pi_{\mathrm{per}}(u_h, \mu_h)} \big\langle Q_h^*(s, \cdot, \cdot),\ \pi_h \big\rangle_{\mu_h}. \tag{3.2}$$
We remark that the above equations implicitly assume that the contexts (and thus the priors) are determined in advance. To emphasize the values' dependence on the context, which will be useful in the analysis of the learning algorithms later, we extend the notation to $V_h^\pi(s; c)$ and $Q_h^\pi(s, \omega, a; c)$ to specify that the value (resp. $Q$) function is evaluated under the priors conditioned on the context sequence $c$.
A Note on Computational Efficiency.
We note that the above Bellman optimality equation (3.2) also implies an efficient dynamic program to compute the optimal policy in the basic tabular MPP of Subsection 3.1, i.e., when $\mathcal{S}$, $\Omega$ and $\mathcal{A}$ are all discrete, because the maximization problem in equation (3.2) can be solved efficiently by a linear program. The generalized MPP of Subsection 3.2 imposes some computational challenges due to the infinitely many outcomes and states. Fortunately, it is already known that planning in an infinite-state MDP with linear function approximation can be done efficiently [25]. Following a similar analysis, we can evaluate the expected next-step value $\mathbb{E}_{s' \sim P_h(\cdot \mid s, \omega, a)}[V_{h+1}(s')]$ through a linear function of the observed feature $\phi_h(s, \omega, a)$. Hence, the dominating operation is to compute $\max_{\pi_h \in \Pi_{\mathrm{per}}(u_h, \mu_h)} \langle Q_h(s, \cdot, \cdot), \pi_h \rangle_{\mu_h}$ at each step. Treating $Q_h$ as the sender's utility function, such an optimization is exactly the problem of optimal information design with infinitely many outcomes but finitely many actions, which has been studied in previous work [15]. It turns out that there is an efficient algorithm that can signal on the fly for any given outcome and obtain an $\epsilon$-optimal persuasive signaling scheme in time polynomial in $1/\epsilon$. Therefore, in our later study of learning, we will take these algorithms as given and simply assume that we can compute the optimal signaling scheme efficiently at any given state. One caveat is that our regret guarantee will additionally lose an additive $O(\epsilon)$ factor at each step due to the availability of only an $\epsilon$-optimal algorithm, but this loss is negligible when we set $\epsilon$ to be sufficiently small (e.g., inversely polynomial in the number of episodes) while still using a polynomial-time algorithm.
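Putting the two pieces together, the following sketch performs the dynamic program for the basic tabular MPP: backward induction over steps, with the persuasion LP (now with the sender's $Q$-values as the objective) solved at every state. The instance data are random placeholders, and the implementation is only an illustrative sketch of the planning procedure described above.

```python
# Sketch: planning in a tabular MPP -- backward induction with a persuasion LP at each state.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
H, S, Omega, A = 3, 3, 3, 2                           # hypothetical sizes
u = rng.random((H, S, Omega, A))                      # receiver utilities u_h(s, omega, a)
v = rng.random((H, S, Omega, A))                      # sender utilities v_h(s, omega, a)
mu = rng.dirichlet(np.ones(Omega), size=H)            # outcome priors mu_h
P = rng.dirichlet(np.ones(S), size=(H, S, Omega, A))  # transitions P_h(s' | s, omega, a)

def persuasion_value(q, u_s, prior):
    """max over persuasive pi of E_{omega~prior} sum_a pi(a|omega) q(omega, a)."""
    pi = cp.Variable((Omega, A), nonneg=True)
    cons = [cp.sum(pi, axis=1) == 1]
    for a in range(A):
        for a_dev in range(A):
            cons.append(prior @ cp.multiply(pi[:, a], u_s[:, a] - u_s[:, a_dev]) >= 0)
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(pi, prior[:, None] * q))), cons)
    prob.solve()
    return prob.value

V = np.zeros((H + 1, S))                              # V_{H+1} = 0
for h in range(H - 1, -1, -1):
    Q = v[h] + P[h] @ V[h + 1]                        # Q_h(s, omega, a) via the Bellman backup
    for s in range(S):
        V[h, s] = persuasion_value(Q[s], u[h, s], mu[h])
print("sender's optimal value from each initial state:", V[0])
```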
4 Reinforcement Learning in MPPs and the Optimism-Pessimism Principle
In this section, we study online reinforcement learning (RL) for learning the optimal signaling policy in an MPP. Here the learner only knows the utility functions of the receivers (Footnote 6: The receivers' utility is known to the sender because the pricing rules are usually transparent, and some are even set by the platform. For example, a ride-sharing platform usually sets per-hour or per-mile payment rules for the drivers.) and has no prior knowledge about the prior distributions, the sender's utility function, or the transition kernels. While the computation of the optimal policy in MPPs in Section 3.3 may appear analogous to that of a standard MDP, as we will see, the corresponding RL problem turns out to be significantly different, partially due to the presence of the stream of receivers, whose decisions are self-interested and not under the learner's control. This makes learning challenging because, if the receivers' incentives are not carefully addressed, they may take actions that are extremely undesirable to the learner. This concern leads to the integration of the pessimism principle into our learning algorithm design. Specifically, our learner will be optimistic about the estimation of the $Q$-function, similarly to many other RL algorithms, in order to encourage exploration. More interestingly, it will be pessimistic about the uncertainty in the estimation of the prior distributions in order to guard against detrimental equilibrium behavior. These dual considerations lead to an interesting optimism-pessimism principle (OPP) for learning MPPs in the online setting. From a technical point of view, our main contribution is to prove that the mixture of the optimism and pessimism principles can still lead to no-regret algorithms, and this proof crucially hinges on a robustness property of the MPP model which we develop and carefully apply in the regret analysis. To the best of our knowledge, this is the first time that OPP has been employed to learn an optimal information design in an online fashion. We prove that it not only satisfies the incentive constraints but also guarantees efficiency in terms of both sample complexity and computational complexity.
In order to convey our key design ideas before diving into the intricate technicalities, this section singles out two representative special cases of the online sequential information design problem. In a nutshell, we present a learning algorithm, OP4, that combines the principles of optimism and pessimism so that the sender can learn to persuade without initially knowing her own utility function or the prior distribution of outcomes. In the tabular MPP, we illustrate the unique challenges of learning to persuade that arise from the environment state evolving according to a Markov process. Through contextual Bayesian persuasion, we showcase the techniques necessary for learning to persuade with infinitely many states (i.e., contexts) and outcomes. We omit most proofs in this section to focus on the high-level ideas, because the proofs for the general setting presented in Section 5 imply all results for the two special cases here.
4.1 Learning Optimal Policies in MPPs: Setups and Benchmarks
We consider the episodic reinforcement learning problem in finite-horizon MPPs. Different from the full-knowledge setting in Section 3.3, the transition kernels, the sender's utility functions and the outcome priors at each step of the episode are all unknown. The sender has to learn the optimal signaling policy by interacting with the environment as well as a stream of receivers over $K$ episodes. For each $k \in [K]$, at the beginning of the $k$-th episode, given the historical data up to episode $k-1$, the adversary picks the context sequence $c^k = (c_1^k, \ldots, c_H^k)$ as well as the initial state $s_1^k$, and the agent accordingly chooses a signaling policy $\pi^k$. Here $v_h(s_h^k, \omega_h^k, a_h^k)$ is the utility collected by the sender at step $h$ of episode $k$.
Regret
To evaluate the online learning performance, given the ground-truth outcome priors, we define the sender's total (expected) regret over all $K$ episodes as
$$\mathrm{Reg}(K) = \sum_{k=1}^{K} \Big[ V_1^{*}(s_1^k;\, c^k) - V_1^{\pi^k}(s_1^k;\, c^k) \Big]. \tag{4.1}$$
Note that $\pi^k$ is not always guaranteed to be feasible under the true priors, but is only persuasive with high probability, so the corresponding regret under $\pi^k$ should also be understood in the high-probability sense.
It turns out that in certain degenerate cases it is impossible to achieve sublinear regret. For example, if the set of possible posterior outcome distributions that induce some action $a$ as the optimal receiver action has zero measure, then a posterior within this zero-measure set can never be exactly induced by a signaling scheme without precise knowledge of the prior. Thus, the regret could be linear in $K$ if the receiver can never be persuaded to play such an action $a$. Therefore, to guarantee no regret, it is necessary to introduce a certain regularity assumption on the MPP instance. Towards that end, we shall assume that the receivers' utilities and priors at any step of the MPP instance always satisfy a minor assumption of $(p, \Delta)$-regularity as defined below.
Regularity Conditions
An instance satisfies $(p, \Delta)$-regularity if, for any feasible state $s$ and context $c$, we have
$$\mu_h\big(\Omega_{s, a, \Delta} \,\big|\, c\big) \;\ge\; p \qquad \text{for every action } a \in \mathcal{A} \text{ and step } h \in [H],$$
where $\mu_h(\cdot \mid c)$ is the ground-truth prior of outcomes and $\Omega_{s, a, \Delta} := \{\omega : u_h(s, \omega, a) \ge u_h(s, \omega, a') + \Delta \ \ \forall a' \ne a\}$ is the set of outcomes for which the action $a$ is optimal for the receiver by a margin of at least $\Delta$ at state $s$. In other words, an instance is $(p, \Delta)$-regular if every action has probability at least $p$, under the randomness of the outcome, of being strictly better than all other actions by at least $\Delta$. This regularity condition is analogous to a regularity condition of Zu et al. [61] but generalizes to the infinitely many outcomes we consider here.
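This condition is straightforward to verify numerically for a tabular instance; the sketch below checks it directly from the definition (the helper function and the random instance are illustrative placeholders, not part of OP4).

```python
# Sketch: checking (p, Delta)-regularity of a tabular instance at a fixed step.
import numpy as np

def is_regular(u_h, mu_h, p, delta):
    """u_h: receiver utilities of shape (S, Omega, A) at one step; mu_h: prior over Omega."""
    S, Omega, A = u_h.shape
    for s in range(S):
        for a in range(A):
            rivals = np.delete(u_h[s], a, axis=1)        # utilities of the deviating actions
            margin = u_h[s, :, a] - rivals.max(axis=1)   # per-outcome margin of action a
            if mu_h @ (margin >= delta) < p:             # prob. that a wins by at least delta
                return False
    return True

rng = np.random.default_rng(3)
u_h = rng.random((3, 4, 2))                              # hypothetical instance
mu_h = rng.dirichlet(np.ones(4))
print(is_regular(u_h, mu_h, p=0.05, delta=0.01))
```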
4.2 Algorithm: Optimism-Pessimism Principle for Persuasion Process (OP4)
The learning task in MPPs involves two intertwined challenges: (1) how to persuade the receiver to take desired actions under an unknown prior, and (2) which action to persuade the receiver to take in order to explore the underlying environment. For the first challenge, with only finite data it is impossible to perfectly recover the prior; we can only hope to construct an approximately accurate estimator of it. To guard against potentially detrimental equilibrium behavior of the receivers due to the prior estimation error, we propose to adopt the pessimism principle. Specifically, before each episode, we conduct uncertainty quantification for the estimator of the prior distributions, which enables us to construct a confidence region containing the true prior with high probability. We then find the signaling policy within a pessimistic candidate set: signaling policies that are simultaneously persuasive with respect to all prior distributions in the confidence region. When the confidence region is valid, such a pessimism principle ensures that the executed signaling policy is always persuasive with respect to the true prior. Furthermore, to address the second challenge, we adopt the celebrated principle of optimism in the face of uncertainty [28], which has played a key role in the online RL literature. The main idea of this principle is that the uncertainty of the $Q$-function estimates essentially reflects our uncertainty about the underlying model. By adding this uncertainty as a bonus term, we encourage actions with high uncertainty to be recommended, and thus taken by the receiver whenever persuasiveness is satisfied. We then fuse the two principles into the OP4 algorithm in Algorithm 1.
Pessimism to Induce Robust Equilibrium Behavior
From the data of past episodes, the sender can estimate the mean of the prior as well as obtain a confidence region through concentration inequalities. Given this partial knowledge of the prior distribution, the sender needs to design a signaling scheme that works in the face of any possible prior in the confidence region, in order to ensure that the receiver will take its recommended action with high probability. Specifically, we let $\mathcal{B}(\hat{w}, \epsilon)$ denote the closed ball of radius $\epsilon$ centered at the estimate $\hat{w}$ (in the tabular case, the ball is taken directly over the estimated prior distribution). For any set $\mathcal{C}$ of parameters, we let $\Pi_{\mathrm{per}}(\mathcal{C}) := \bigcap_{w \in \mathcal{C}} \Pi_{\mathrm{per}}(\mu^{w})$ denote the set of signaling policies that are simultaneously persuasive under all parameter vectors $w \in \mathcal{C}$, i.e., under all the priors they induce. For any non-empty set $\mathcal{C}$, the set $\Pi_{\mathrm{per}}(\mathcal{C})$ is convex since it is an intersection of convex sets, and it is non-empty since it always contains the full-information signaling scheme. Since $\Pi_{\mathrm{per}}(\mathcal{C})$ is a convex set, we can solve a linear optimization over it in polynomial time (see, e.g., [61]).
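For intuition, in the tabular case the robustly persuasive set over an $\ell_1$-ball of priors admits a simple conservative reformulation: the worst case of each persuasiveness constraint over the ball is lower-bounded via the dual ($\ell_\infty$) norm. The sketch below implements this sufficient condition with cvxpy; it is an illustrative construction rather than the exact formulation used in [61] or in Algorithm 1.

```python
# Sketch: a conservative robust-persuasion LP over an l1-ball of priors (sufficient condition).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
Omega, A = 4, 3                          # hypothetical sizes
u = rng.random((Omega, A))               # receiver utilities at the current state
q = rng.random((Omega, A))               # sender's (optimistic) Q-values at the current state
mu_hat = rng.dirichlet(np.ones(Omega))   # estimated prior
eps = 0.05                               # radius of the l1 confidence ball

pi = cp.Variable((Omega, A), nonneg=True)
cons = [cp.sum(pi, axis=1) == 1]
for a in range(A):
    for a_dev in range(A):
        x = cp.multiply(pi[:, a], u[:, a] - u[:, a_dev])
        # worst case of <mu, x> over ||mu - mu_hat||_1 <= eps is at least <mu_hat, x> - eps*||x||_inf
        cons.append(mu_hat @ x >= eps * cp.norm(x, "inf"))
obj = cp.Maximize(cp.sum(cp.multiply(pi, mu_hat[:, None] * q)))
prob = cp.Problem(obj, cons)
prob.solve()
print("robustly persuasive scheme:\n", np.round(pi.value, 3))
```

Under the regularity condition, the full-information scheme remains feasible for this conservative program once the radius is small enough, so the optimization is well posed in the regime the analysis cares about.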
Optimism to Encourage Exploration
In order to balance exploration and exploitation, we adopt the principle of optimism in the face of uncertainty within a value iteration algorithm based on the Bellman equation, following a line of work in online RL such as Q-learning with UCB exploration [24], UCBVI [4] and LSVI-UCB [25] (also see [50, 56, 51] and the references therein). The additional UCB bonus on the $Q$-value encourages exploration and has been shown to be a provably efficient online method to improve policies in MDPs. Moreover, this method not only works in the simple tabular setting, but also generalizes to settings with infinite state spaces by exploiting linearity of the $Q$-function and a regularized least-squares program to determine the optimal estimate of the $Q$-value. In fact, within our framework, we could obtain efficient learning results in the infinite-state-space setting through other optimism-based online RL methods and general function approximators, such as linear mixture MDPs [3, 60, 9, 59], kernel approximation [56] or bilinear classes [13].
To provide a concrete picture of the learning process, we instantiate the OP4 algorithm in two special cases and showcase our key ideas and techniques before delving into the more involved analysis of the generalized MPP setting. Nevertheless, we remark that whether the problem instance is tabular or takes the form of linear or generalized linear approximation is not essential and not the focus of our study. OP4 itself only relies on two ingredients, namely the uncertainty quantification for the $Q$-function and for the prior estimation. So even the model-free RL framework can be replaced by model-based RL, as we can simply construct confidence regions for the transition models instead.
4.3 Warm-up I: Reinforcement Learning in the Tabular MPP
We first consider MPPs in the tabular setting with finite states and outcomes, as described in Section 3.1. In this case, the prior over outcomes at each step degenerates to an unknown but fixed discrete distribution independent of the context. As linear parameterization is not required for a discrete probability distribution, the algorithm can simply update the empirical estimate of the prior by counting. Similarly, the transition kernel is estimated through the occurrences of observed samples, and we use this estimated transition to compute the $Q$-function from the observed utility and the estimated value function at the next step, according to the Bellman equation. To be specific, for each $(s, \omega, a)$, the prior and transition estimates at episode $k$ take the form
$$\hat{\mu}_h^k(\omega) = \frac{N_h^k(\omega)}{\sum_{\omega'} N_h^k(\omega') + \lambda}, \qquad \hat{P}_h^k(s' \mid s, \omega, a) = \frac{N_h^k(s, \omega, a, s')}{N_h^k(s, \omega, a) + \lambda},$$
where $N_h^k(\omega)$ and $N_h^k(s, \omega, a)$ (resp. $N_h^k(s, \omega, a, s')$) respectively count the number of samples in which the sender has observed the outcome $\omega$, or the combination $(s, \omega, a)$ (resp. $(s, \omega, a, s')$), at step $h$ before episode $k$, and $\lambda$ is a constant for regularization.
In our learning algorithm, we determine the radius of the confidence region for the estimated prior $\hat{\mu}_h^k$ according to a concentration bound that shrinks at rate $\tilde{O}(1/\sqrt{k})$. Moreover, we add a UCB bonus term of the form $\beta / \sqrt{N_h^k(s, \omega, a)}$ to the estimated $Q$-value to obtain the optimistic $Q$-function. The algorithm then selects a signaling scheme that is robustly persuasive over the confidence region and maximizes the optimistic estimate of the $Q$-function with respect to the current prior estimate $\hat{\mu}_h^k$. Finally, it makes an action recommendation using this signaling scheme, given the realized state and outcome.
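A minimal sketch of the counting-based updates just described is given below; the particular forms of the bonus ($\beta/\sqrt{N}$) and of the confidence radius (shrinking as $1/\sqrt{k}$) are the usual tabular choices and should be read as assumed illustrations rather than the exact constants of Algorithm 1.

```python
# Sketch: counting-based estimates and UCB bonus for the tabular warm-up (illustrative forms).
import numpy as np

class TabularEstimates:
    def __init__(self, S, Omega, A, lam=1.0, beta=1.0):
        self.Omega, self.lam, self.beta = Omega, lam, beta
        self.outcome_counts = np.zeros(Omega)        # N(omega)
        self.visit = np.zeros((S, Omega, A))         # N(s, omega, a)
        self.trans = np.zeros((S, Omega, A, S))      # N(s, omega, a, s')

    def update(self, s, omega, a, s_next):
        self.outcome_counts[omega] += 1
        self.visit[s, omega, a] += 1
        self.trans[s, omega, a, s_next] += 1

    def prior_hat(self):
        # empirical prior, with the regularization constant lam in the denominator
        return self.outcome_counts / (self.outcome_counts.sum() + self.lam)

    def prior_radius(self):
        k = self.outcome_counts.sum()
        return np.sqrt(self.Omega / max(k, 1.0))     # confidence radius shrinking as ~1/sqrt(k)

    def transition_hat(self, s, omega, a):
        return self.trans[s, omega, a] / (self.visit[s, omega, a] + self.lam)

    def bonus(self, s, omega, a):
        return self.beta / np.sqrt(max(self.visit[s, omega, a], 1.0))

est = TabularEstimates(S=3, Omega=4, A=2)
est.update(s=0, omega=1, a=0, s_next=2)
print(est.prior_hat(), est.bonus(0, 1, 0))
```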
Theorem 4.1.
Suppose the bonus multiplier, the regularization constant and the confidence radius are chosen appropriately (as specified in the appendix). Then under $(p, \Delta)$-regularity, with probability at least $1 - \delta$, OP4 has regret of order $\tilde{O}(\sqrt{K})$ in tabular MPPs, where the hidden factors are polynomial in $H$, the sizes of the state, outcome and action spaces, and the regularity parameters.
To obtain the regret of OP4, we have to consider the regret arising from its different procedures. The formal decomposition of the regret is described in Lemma 6.1. Separately, we upper bound the error incurred from estimating the $Q$-function (Lemma 6.2), the randomness of realizing the outcome, the action and the next state (Lemma A.5), as well as the error from estimating the prior of the outcome and choosing a signaling scheme that is robustly persuasive for a subset of priors (Lemmas 6.3 and 6.4). As the two warm-up models are special cases of the general MPP, the proofs of the above properties follow from those of the general MPP setting in Section 6, and are thus omitted here.
4.4 Warm-up II: Reinforcement Learning in Contextual Bayesian Persuasion
We now move to another special case with $H = 1$, in which the MPP problem reduces to a contextual-bandit-like problem where state transitions no longer exist. Given a context $c$ and a persuasive signaling policy $\pi$, the value function is simply the sender's expected utility,
$$V^{\pi}(c) = \mathbb{E}_{\omega \sim \mu(\cdot \mid c)}\Big[\textstyle\sum_{a \in \mathcal{A}} \pi(a \mid c, \omega)\, v(c, \omega, a)\Big].$$
The sender's optimal expected utility is defined as $V^{*}(c) = \max_{\pi \in \Pi_{\mathrm{per}}(u,\, \mu(\cdot \mid c))} V^{\pi}(c)$.
Meanwhile, we consider the general setting where the outcome is a continuous random variable subject to a generalized linear model. To be specific, the prior is conditioned on the context $c$ and has mean value $f(\psi(c)^\top w)$. For the prior and the link function $f$, we assume smoothness of the prior and bounded derivatives of the link function:
Assumption 4.2.
There exists a constant $L_\mu > 0$ such that, for any two parameters $w$ and $w'$, the induced priors satisfy $\big\| \mu^{w}(\cdot \mid c) - \mu^{w'}(\cdot \mid c) \big\|_1 \le L_\mu\, \big| f(\psi(c)^\top w) - f(\psi(c)^\top w') \big|$ for any given context $c$.
Assumption 4.3.
The link function $f$ is either monotonically increasing or decreasing. Moreover, there exist absolute constants $0 < \kappa_1 \le \kappa_2 < \infty$ such that $\kappa_1 \le |f'(z)| \le \kappa_2$ for all $z$ in its domain.
It is natural to assume a Lipschitz property of the distribution as in Assumption 4.2; for instance, Gaussian distributions and uniform distributions satisfy this property. Assumption 4.3 is standard in the literature [18, 52, 32]. Two example link functions are the identity map $f(z) = z$ and the logistic map $f(z) = 1/(1 + e^{-z})$ over a bounded domain. It is easy to verify that both maps satisfy this assumption.
Different from the tabular setting, we are now unable to use counting-based estimators to keep track of the distributions over the possibly infinite sets of states and outcomes. Instead, we resort to function approximation techniques and estimate the linear parameters $w$ and $\theta$. In each episode, OP4 respectively updates the estimates and confidence regions of $w$ and $\theta$, with which it determines the outcome prior pessimistically and the sender's utility optimistically. To be specific, the update of $w$ solves a constrained least-squares problem and the update of $\theta$ solves a regularized one:
$$\hat{w}^k \in \arg\min_{\|w\| \le R}\ \sum_{\tau < k} \big( \omega^\tau - f(\psi(c^\tau)^\top w) \big)^2, \qquad \hat{\theta}^k = \arg\min_{\theta}\ \sum_{\tau < k} \big( v^\tau - \phi(c^\tau, \omega^\tau, a^\tau)^\top \theta \big)^2 + \lambda \|\theta\|^2.$$
We then estimate the prior by setting it to the distribution induced by $\hat{w}^k$, namely $\mu^{\hat{w}^k}(\cdot \mid c)$, and estimate the sender's utility by setting $\hat{v}^k(c, \omega, a) = \phi(c, \omega, a)^\top \hat{\theta}^k$. On one hand, to encourage exploration, OP4 adds a UCB bonus term of the form $\beta\, \|\phi(c, \omega, a)\|_{\Lambda^{-1}}$ to the $Q$-function, where $\Lambda$ is the Gram matrix of the regularized least-squares problem and $\beta$ is a scalar. This is a common technique for linear bandits. On the other hand, OP4 determines a confidence region for $w$ centered at $\hat{w}^k$, and ensures that the signaling scheme is robustly persuasive for any possible (worst-case) prior induced by the linear parameters in this region. Combining optimism and pessimism, OP4 picks, among the robustly persuasive set, the signaling scheme that maximizes the sender's optimistic utility.
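The parameter updates and the elliptical bonus can be sketched as follows. The ridge update and the bonus $\beta\|\phi\|_{\Lambda^{-1}}$ are standard linear-bandit machinery; for simplicity the GLM fit for $w$ is shown with an identity link, which reduces it to the same ridge form and is only an illustrative simplification.

```python
# Sketch: ridge/least-squares updates and the elliptical UCB bonus (identity link for simplicity).
import numpy as np

class LinearEstimator:
    """Maintains Lambda = lam*I + sum phi phi^T and the ridge solution."""
    def __init__(self, dim, lam=1.0, beta=1.0):
        self.Lam = lam * np.eye(dim)
        self.b = np.zeros(dim)
        self.beta = beta

    def update(self, phi, target):
        self.Lam += np.outer(phi, phi)
        self.b += target * phi

    def estimate(self):
        return np.linalg.solve(self.Lam, self.b)

    def bonus(self, phi):
        # beta * ||phi||_{Lambda^{-1}}, the usual elliptical confidence width
        return self.beta * np.sqrt(phi @ np.linalg.solve(self.Lam, phi))

# Usage: theta (sender utility) regressed on features phi(c, omega, a);
# w (prior parameter) regressed on context features psi(c) against observed outcomes.
d = 5
theta_est, w_est = LinearEstimator(d), LinearEstimator(d)
rng = np.random.default_rng(5)
for _ in range(100):
    phi, psi = rng.random(d), rng.random(d)
    theta_est.update(phi, target=rng.random())      # observed sender utility (placeholder)
    w_est.update(psi, target=rng.random())          # observed outcome (placeholder, identity link)
print("theta_hat:", np.round(theta_est.estimate(), 2))
print("bonus at a new feature:", theta_est.bonus(rng.random(d)))
```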
Theorem 4.4.
Since we estimate the prior by computing an estimator $\hat{w}$, we evaluate the persuasiveness of OP4 through the probability that the true parameter $w$ lies in the confidence region centered at $\hat{w}$ with the prescribed radius in the weighted norm. Due to the smoothness of the prior and the assumption on the link function, the error of the estimated prior is bounded by the product of the confidence radius and the weighted norm of the feature vector $\psi(c)$, which yields the same conclusion as in the tabular MPP case. Also, compared to Li et al. [31], we do not require any regularity condition on the features, since we add a constant matrix to the Gram matrix, which ensures that its minimum eigenvalue is always bounded below by a constant. The proofs of persuasiveness and of the sublinear regret in the contextual bandit case can be viewed as a direct reduction of the MPP case with total horizon $H = 1$. We decompose the regret in the same way as in Lemma 6.1 for MPPs and then upper bound each term to measure the regret.
5 No-Regret Learning in the General Markov Persuasion Process
In this section, we present the full version of the OP4 algorithm for MPPs and show that it is persuasive with high probability while achieving sublinear regret.
In the general MPP setting with linear utility and transition, a crucial property is that the $Q$-function under any signaling policy is always linear in the feature map $\phi_h$ (akin to linear MDPs [25]). Therefore, when designing learning algorithms, it suffices to focus on linear $Q$-functions. In our OP4 algorithm, we iteratively fit the optimal $Q$-function, parameterized as $Q_h(s, \omega, a) = \phi_h(s, \omega, a)^\top \vartheta_h$ at each step $h$. OP4 learns the $Q$-functions of the MPP and the priors of the outcomes simultaneously. It operates similarly to the tabular MPP and contextual Bayesian persuasion cases. At the $k$-th episode, given the historical data from the first $k-1$ episodes, we estimate the unknown vectors by solving the following constrained or regularized least-squares problems:
$$\hat{w}_h^k \in \arg\min_{\|w\| \le R}\ \sum_{\tau < k} \big( \omega_h^\tau - f(\psi(c_h^\tau)^\top w) \big)^2, \qquad \hat{\vartheta}_h^k = \arg\min_{\vartheta}\ \sum_{\tau < k} \Big( v_h^\tau + \hat{V}_{h+1}^k(s_{h+1}^\tau) - \phi_h(s_h^\tau, \omega_h^\tau, a_h^\tau)^\top \vartheta \Big)^2 + \lambda \|\vartheta\|^2.$$
Here $\hat{V}_{h+1}^k$ is the estimated value function at step $h+1$ under the observed context of episode $k$, which we describe formally later. This regression target replaces the unknown transition and outcome distribution in equation (3.2). With these estimates we update the estimated outcome prior and $Q$-function respectively, and OP4 adds a UCB bonus to the $Q$-estimate to encourage exploration. The formal description is given in Algorithm 2.
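A sketch of the resulting LSVI-style update at a single step $h$: the $Q$-parameter is fit by ridge regression to the targets $v_h^\tau + \hat{V}_{h+1}(s_{h+1}^\tau)$ from past episodes, and an elliptical bonus (with an assumed illustrative constant) is added and clipped at the remaining horizon.

```python
# Sketch: one LSVI-style update of the Q-parameter at step h from past trajectories.
import numpy as np

def lsvi_update(Phi, rewards, next_values, lam=1.0, beta=1.0):
    """Phi: (n, d) features phi_h(s, omega, a) of past visits; targets are reward + next value."""
    n, d = Phi.shape
    Lam = lam * np.eye(d) + Phi.T @ Phi
    targets = rewards + next_values
    vartheta = np.linalg.solve(Lam, Phi.T @ targets)         # ridge regression solution

    def q_optimistic(phi, remaining_horizon=1.0):
        bonus = beta * np.sqrt(phi @ np.linalg.solve(Lam, phi))
        return min(phi @ vartheta + bonus, remaining_horizon)  # optimistic and clipped
    return vartheta, q_optimistic

rng = np.random.default_rng(6)
n, d = 50, 4
Phi = rng.random((n, d))
vartheta, q_fn = lsvi_update(Phi, rewards=rng.random(n), next_values=rng.random(n))
print("fitted Q-parameter:", np.round(vartheta, 2))
print("optimistic Q at a new feature:", q_fn(rng.random(d), remaining_horizon=3.0))
```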
Likewise, the general MPP setting inherits the regularity condition and Assumptions 4.2 and 4.3 from the last section. Combining the insights from both the tabular MPPs and contextual Bayesian persuasion, we can show that OP4 is persuasive and guarantees sublinear regret with high probability for general MPPs.
Theorem 5.1.
Recall that the novelty of OP4 is that it adopts pessimism and optimism to induce robust equilibrium behavior and to encourage exploration simultaneously. Specifically, pessimism tackles the uncertainty in the prior estimation by selecting a signaling policy that is persuasive w.r.t. all priors in the confidence region, while optimism in the $Q$-function estimation encourages exploration. To evaluate the regret of OP4, we provide a novel regret decomposition tailored to this combination of pessimism and optimism. Each term represents a different source of regret, incurred either by estimation error or by randomness.
6 Proof Sketch and Technical Highlights
In this section, we present the proof sketch for Theorem 5.1. We first decompose the regret into several terms tailored to MPPs and briefly describe how to bound each term. We then highlight our technical contribution concerning the regularity condition used when bounding the loss in the sender's utility from choosing a signaling scheme that is persuasive for a whole subset of priors close to each other.
6.1 Proof of Theorem 5.1
In order to prove the sublinear regret of OP4, we construct a novel regret decomposition tailored to MPPs. Our proof starts by decomposing the regret into several terms, each of which reflects regret incurred either by estimation or by the randomness of the trajectories. Next, we bound each term and add the bounds together to conclude the upper bound on the regret of OP4. For simplicity of presentation, denote by $\mathbb{E}_{\mu_h, \pi_h^k}[\cdot]$ the expectation with respect to the ground-truth prior $\mu_h$ and the signaling scheme $\pi_h^k$ at the $h$-th step. Then we can define the temporal-difference (TD) error as
$$\delta_h^k(s, \omega, a) := v_h(s, \omega, a) + \mathbb{E}_{s' \sim P_h(\cdot \mid s, \omega, a)}\big[V_{h+1}^k(s')\big] - Q_h^k(s, \omega, a). \tag{6.1}$$
Here $\delta_h^k$ is a function on $\mathcal{S} \times \Omega \times \mathcal{A}$ for all $h \in [H]$ and $k \in [K]$. Intuitively, $\delta_h^k$ quantifies how far the estimated $Q$-functions are from satisfying the Bellman optimality equation in equation (3.2). Moreover, define $\zeta_h^k$ and $\xi_h^k$ for the trajectory generated by Algorithm 2 at the $k$-th episode as follows:
$$\zeta_h^k := \big\langle Q_h^k(s_h^k, \cdot, \cdot),\ \pi_h^k \big\rangle_{\mu_h} - Q_h^k(s_h^k, \omega_h^k, a_h^k), \qquad \xi_h^k := \mathbb{E}_{s' \sim P_h(\cdot \mid s_h^k, \omega_h^k, a_h^k)}\big[V_{h+1}^k(s')\big] - V_{h+1}^k(s_{h+1}^k). \tag{6.2}$$
By definition, $\zeta_h^k$ captures the randomness of realizing the outcome $\omega_h^k$ and signaling the action $a_h^k$, while $\xi_h^k$ captures the randomness of drawing the next state $s_{h+1}^k$ from $P_h$. With the notation above, we can decompose the regret into six parts to facilitate the establishment of its upper bound.
Lemma 6.1 (Regret Decomposition).
In this novel regret decomposition, one group of terms reflects the optimism in OP4: it is provably non-positive due to the optimistic $Q$-value estimation, which simplifies the bound. Another term corresponds to the pessimism in OP4 for inducing robust equilibria: it evaluates the regret incurred by choosing a robustly persuasive signaling policy. Since the signaling policy has to be persuasive to ensure that receivers always take the recommended actions, we cannot simply choose a greedy policy for a fixed prior estimate. Instead, we first apply optimism to construct the optimistic $Q$-value estimate and then apply pessimism to select a signaling policy that is robustly persuasive for all priors in the confidence region. We therefore design the regret decomposition, and this term in particular, to reflect the optimism-pessimism principle in OP4. Notice that the decomposition does not depend on the specific form of function approximation used in the algorithm, since neither the estimates of the prior and the $Q$-function nor the chosen signaling policy affects the formula. Therefore, it applies generally to algorithms for MPPs.
Unlike the regret decomposition in [56], Lemma 6.1 also captures the randomness of realizing the outcome. Since in MPPs we have to estimate the prior of the outcome and choose a robustly persuasive policy, we add two extra terms to account for the corresponding additional regret.
The rigorous arguments are technical, and thus we defer the proofs of most lemmas to the appendix while presenting the key ideas and conclusions here. For the optimism term in equation (6.3), although we do not observe trajectories under the true prior and the optimal signaling policy, we can bound the TD error from both sides. The following lemma states this result.
Lemma 6.2 (Optimism).
There exists an absolute constant $C$ such that, for any fixed $\delta \in (0, 1)$, if we set the bonus multiplier $\beta$ in Algorithm 2 to scale with $C$ (and with the appropriate problem parameters), then with probability at least $1 - \delta$ we have $\delta_h^k(s, \omega, a) \le 0$, with $-\delta_h^k(s, \omega, a)$ bounded by twice the UCB bonus, for all $(s, \omega, a)$, $h \in [H]$ and $k \in [K]$.
The martingale terms in equation (6.3) can be bounded by Lemma 5.3 of [56], using martingale techniques and the Azuma-Hoeffding inequality [5]; we state the resulting upper bound in Lemma A.5. Moreover, the pessimism term in equation (6.3) evaluates the regret loss caused by estimating the prior and choosing a robustly persuasive signaling policy. Here, we apply the robustness gap defined later to bound this term.
It remains to bound the last term in equation (6.3). This bound can be derived from Hölder's inequality and the smoothness property of the prior.
Now we are ready to prove our main result, Theorem 5.1. By the decomposition in Lemma 6.1 and all the previous lemmas, with the bonus and confidence parameters chosen as above, summing the individual bounds yields the claimed upper bound on the regret, which is sublinear in $K$.
Together with the probability of the event guaranteed by Lemma 6.8 and the appropriately chosen failure probabilities in the previous lemmas, the above bound holds with probability at least $1 - \delta$.
6.2 Inducing Robust Equilibria via Pessimism
One necessary prerequisite is that the signaling policy produced by OP4 has to be persuasive so that receivers take the recommended actions. However, the optimal signaling policy that is persuasive for the estimated prior is in general not persuasive for the true prior, even if the estimate is quite close to it. To ensure persuasiveness under the prior estimation error, we adopt the pessimism principle and select a signaling policy that is robustly persuasive for all priors in the confidence region, and we must quantify the extra utility loss incurred by this pessimism. In this subsection, we start by showing that there exists a robust signaling scheme that suffers only a small utility loss, vanishing with the radius of the confidence region, compared to the optimal expected utility of a persuasion scheme designed with precise knowledge of the prior. Formally, in the basic MPP, given any fixed $Q$-function $Q$, we define the robustness gap at a state $s$ for any prior $\mu$ and radius $\epsilon \ge 0$ as
$$\mathrm{Gap}(s; Q, \mu, \epsilon) := \max_{\pi \in \Pi_{\mathrm{per}}(\mu)} \big\langle Q(s, \cdot, \cdot),\ \pi \big\rangle_{\mu} \;-\; \max_{\pi \in \Pi_{\mathrm{per}}(\mathcal{B}(\mu, \epsilon))} \big\langle Q(s, \cdot, \cdot),\ \pi \big\rangle_{\mu}. \tag{6.4}$$
We let $\mathcal{B}(\mu, \epsilon)$ be the $\ell_1$-norm ball of radius $\epsilon$ centered at the prior distribution $\mu$.
Lemma 6.5 (Pessimism).
Under $(p, \Delta)$-regularity, for any radius $\epsilon \ge 0$, given a bounded $Q$-function, for any state $s$ the robustness gap $\mathrm{Gap}(s; Q, \mu, \epsilon)$ admits an upper bound that is linear in $\epsilon$.
The proof is given in Appendix A.2. This result extends Proposition 1 of [61]. Notice that the resulting upper bound does not depend on the value of the $Q$-function beyond its range, which is important for our analysis. Once a signaling algorithm is fixed, at each episode $k$ and step $h$ we obtain an estimated $Q$-function with an explicit form; it plays the role of the "known" $Q$-function in equation (6.4). Using it, we can bound the expected loss in the sender's utility from choosing a signaling scheme that is persuasive for all priors in a subset. Moreover, if we let the priors depend on the context and add the linear (GLM) assumption on the priors to the preceding lemma, we can bound the gap by the difference of the linear parameters.
Corollary 6.6.
In MPPs, we have to estimate the prior of the outcome since we cannot observe the ground-truth prior. However, the estimate may not satisfy the regularity condition, which conflicts with the requirement on the prior in the proof of Lemma 6.5. To address this problem, we give another upper bound on the robustness gap for the estimated prior in Lemma A.1. In addition, to handle the regret incurred by estimating the prior, we bound the difference in $Q$-values when choosing schemes that are persuasive for different priors in Lemma A.2.
We now prove that the above pessimism design guarantees persuasiveness w.r.t. the true prior with high probability. It suffices to show that the estimate $\hat{w}_h^k$ is close enough to the true parameter $w_h$ that the confidence region centered at $\hat{w}_h^k$ given in Algorithm 2 contains $w_h$. If so, the signaling scheme chosen to be persuasive for the whole set of priors induced by the parameters in the confidence region is also persuasive for the true prior $\mu_h^{w_h}$.
Lemma 6.7.
There exists a constant such that, for , the OP4 algorithm is persuasive with probability at least , i.e.,
Proof.
We first analyze the probability of being non-persuasive. For any , by the union bound, we have
The following lemma gives the belief of confidence region for the linear parameter . The proof can be directly derived from Lemma 6 in Wang et al. [52].
Lemma 6.8 (Belief of Confidence Region).
For any and , there exists a constant , such that for , given , with probability at least , we have
By Lemma 6.8, taking , we have . Summing the failure probabilities over , we obtain . ∎
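As a simplified illustration of this membership check, the sketch below uses ridge regression as a stand-in for the (generalized) linear estimation of Lemma 6.8; the radius `beta` and all names are placeholders rather than the paper's constants.

```python
# Illustrative sketch of an ellipsoidal confidence-region check in the spirit
# of Lemma 6.8 (ridge regression stands in for the GLM estimator of [52]).
import numpy as np


def fit_and_check(features, responses, theta_candidate, lam=1.0, beta=1.0):
    """features: (n, d) regressors; responses: (n,) targets."""
    n, d = features.shape
    Lambda = lam * np.eye(d) + features.T @ features          # Gram matrix
    theta_hat = np.linalg.solve(Lambda, features.T @ responses)

    # The candidate lies in the confidence region iff its Lambda-weighted
    # distance to the estimate is at most beta.
    diff = theta_hat - theta_candidate
    dist = np.sqrt(diff @ Lambda @ diff)
    return theta_hat, dist <= beta
```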
7 Conclusion
We have presented a novel model, the MPP, which for the first time captures both the misaligned incentives of uninformed decision makers and the long-term objective of an information possessor. We then provided a reinforcement learning algorithm, OP4, that is provably efficient in terms of both computational and sample complexity under mild assumptions. We remark that while we showcase this algorithm in particular problem instances with linear function approximation or GLMs, the framework of OP4 does not rely on the specific form of function approximation, as long as we can quantify the uncertainty of the prior estimate and the -function (or transition model). In addition, we expect this optimism-pessimism design principle and the corresponding proof techniques to be broadly useful for other strategic learning problems that involve misaligned incentives.
Besides extending our techniques to other design problems, we point out several open problems that arise from our work. First, while it is natural in many cases for the sender to know the receiver's utility functions (see Footnote 6), we hope to also study the problem in which the receiver's utility is initially unknown. A similar problem has been studied in Stackelberg games [40, 10], yet without measuring performance in terms of the sender's (leader's) cumulative utility. Second, another interesting direction is to study Markov Bayesian persuasion with one sender and one receiver, both aiming to maximize their own long-term cumulative utilities in an environment with Markovian transitions.
References
- Agarwal et al. [2019] Agarwal, A., Jiang, N., Kakade, S. M. and Sun, W. (2019). Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep.
- Auer et al. [2008] Auer, P., Jaksch, T. and Ortner, R. (2008). Near-optimal regret bounds for reinforcement learning. Advances in neural information processing systems, 21.
- Ayoub et al. [2020] Ayoub, A., Jia, Z., Szepesvari, C., Wang, M. and Yang, L. (2020). Model-based reinforcement learning with value-targeted regression. In International Conference on Machine Learning. PMLR.
- Azar et al. [2017] Azar, M. G., Osband, I. and Munos, R. (2017). Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning. PMLR.
- Azuma [1967] Azuma, K. (1967). Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, Second Series, 19 357–367.
- Badanidiyuru et al. [2018] Badanidiyuru, A., Bhawalkar, K. and Xu, H. (2018). Targeting and signaling in ad auctions. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM.
- Bellman [1957] Bellman, R. (1957). A markovian decision process. Indiana Univ. Math. J., 6 679–684.
- Bergemann and Morris [2019] Bergemann, D. and Morris, S. (2019). Information design: A unified perspective. Journal of Economic Literature, 57 44–95.
- Cai et al. [2020] Cai, Q., Yang, Z., Jin, C. and Wang, Z. (2020). Provably efficient exploration in policy optimization. In International Conference on Machine Learning. PMLR.
- Conitzer and Sandholm [2006] Conitzer, V. and Sandholm, T. (2006). Computing the optimal strategy to commit to. In Proceedings of the 7th ACM conference on Electronic commerce.
- Dann et al. [2018] Dann, C., Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J. and Schapire, R. E. (2018). On oracle-efficient pac rl with rich observations. Advances in neural information processing systems, 31.
- Dann et al. [2017] Dann, C., Lattimore, T. and Brunskill, E. (2017). Unifying pac and regret: Uniform pac bounds for episodic reinforcement learning. Advances in Neural Information Processing Systems, 30.
- Du et al. [2021] Du, S., Kakade, S., Lee, J., Lovett, S., Mahajan, G., Sun, W. and Wang, R. (2021). Bilinear classes: A structural framework for provable generalization in rl. In International Conference on Machine Learning. PMLR.
- Du et al. [2019] Du, S. S., Kakade, S. M., Wang, R. and Yang, L. F. (2019). Is a good representation sufficient for sample efficient reinforcement learning? arXiv preprint arXiv:1910.03016.
- Dughmi and Xu [2019] Dughmi, S. and Xu, H. (2019). Algorithmic bayesian persuasion. SIAM Journal on Computing STOC16–68.
- Ely [2017] Ely, J. C. (2017). Beeps. American Economic Review, 107 31–53.
- Farhadi and Teneketzis [2021] Farhadi, F. and Teneketzis, D. (2021). Dynamic information design: a simple problem on optimal sequential information disclosure. Dynamic Games and Applications 1–42.
- Filippi et al. [2010] Filippi, S., Cappe, O., Garivier, A. and Szepesvári, C. (2010). Parametric bandits: The generalized linear case. Advances in Neural Information Processing Systems, 23.
- Gan et al. [2021] Gan, J., Majumdar, R., Radanovic, G. and Singla, A. (2021). Bayesian persuasion in sequential decision-making. arXiv preprint arXiv:2106.05137.
- Giannoccaro and Pontrandolfo [2002] Giannoccaro, I. and Pontrandolfo, P. (2002). Inventory management in supply chains: a reinforcement learning approach. International Journal of Production Economics, 78 153–161.
- Goldstein and Leitner [2018] Goldstein, I. and Leitner, Y. (2018). Stress tests and information disclosure. Journal of Economic Theory, 177 34–69.
- He et al. [2021] He, J., Zhou, D. and Gu, Q. (2021). Logarithmic regret for reinforcement learning with linear function approximation. In International Conference on Machine Learning. PMLR.
- Jiang et al. [2017] Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J. and Schapire, R. E. (2017). Contextual decision processes with low bellman rank are pac-learnable. In International Conference on Machine Learning. PMLR.
- Jin et al. [2018] Jin, C., Allen-Zhu, Z., Bubeck, S. and Jordan, M. I. (2018). Is q-learning provably efficient? Advances in neural information processing systems, 31.
- Jin et al. [2020] Jin, C., Yang, Z., Wang, Z. and Jordan, M. I. (2020). Provably efficient reinforcement learning with linear function approximation. In Conference on Learning Theory. PMLR.
- Kamenica and Gentzkow [2011] Kamenica, E. and Gentzkow, M. (2011). Bayesian persuasion. American Economic Review, 101 2590–2615.
- Kober et al. [2013] Kober, J., Bagnell, J. A. and Peters, J. (2013). Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32 1238–1274.
- Lattimore and Szepesvári [2020] Lattimore, T. and Szepesvári, C. (2020). Bandit algorithms. Cambridge University Press.
- Lehrer and Shaiderman [2021] Lehrer, E. and Shaiderman, D. (2021). Markovian persuasion. arXiv preprint arXiv:2111.14365.
- Li et al. [2016] Li, J., Monroe, W., Ritter, A., Galley, M., Gao, J. and Jurafsky, D. (2016). Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.
- Li et al. [2017a] Li, L., Lu, Y. and Zhou, D. (2017a). Provably optimal algorithms for generalized linear contextual bandits. CoRR, abs/1703.00048. http://arxiv.org/abs/1703.00048
- Li et al. [2017b] Li, L., Lu, Y. and Zhou, D. (2017b). Provably optimal algorithms for generalized linear contextual bandits. In International Conference on Machine Learning. PMLR.
- Li et al. [2019] Li, M., Qin, Z., Jiao, Y., Yang, Y., Wang, J., Wang, C., Wu, G. and Ye, J. (2019). Efficient ridesharing order dispatching with mean field multi-agent reinforcement learning. In The world wide web conference.
- Liang et al. [2021] Liang, E., Wen, K., Lam, W. H., Sumalee, A. and Zhong, R. (2021). An integrated reinforcement learning and centralized programming approach for online taxi dispatching. IEEE Transactions on Neural Networks and Learning Systems.
- Mansour et al. [2021] Mansour, Y., Slivkins, A., Syrgkanis, V. and Wu, Z. S. (2021). Bayesian exploration: Incentivizing exploration in bayesian games. Operations Research.
- Meisheri et al. [2020] Meisheri, H., Baniwal, V., Sultana, N. N., Khadilkar, H. and Ravindran, B. (2020). Using reinforcement learning for a large variable-dimensional inventory management problem. In Adaptive Learning Agents Workshop at AAMAS.
- Milano et al. [2020] Milano, S., Taddeo, M. and Floridi, L. (2020). Recommender systems and their ethical challenges. Ai & Society, 35 957–967.
- Neu and Pike-Burke [2020] Neu, G. and Pike-Burke, C. (2020). A unifying view of optimism in episodic reinforcement learning. Advances in Neural Information Processing Systems, 33 1392–1403.
- Osband et al. [2016] Osband, I., Van Roy, B. and Wen, Z. (2016). Generalization and exploration via randomized value functions. In International Conference on Machine Learning. PMLR.
- Peng et al. [2019] Peng, B., Shen, W., Tang, P. and Zuo, S. (2019). Learning optimal strategies to commit to. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33.
- Puterman [2014] Puterman, M. L. (2014). Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.
- Qin et al. [2020] Qin, Z., Tang, X., Jiao, Y., Zhang, F., Xu, Z., Zhu, H. and Ye, J. (2020). Ride-hailing order dispatching at didi via reinforcement learning. INFORMS Journal on Applied Analytics, 50 272–286.
- Qin et al. [2021] Qin, Z. T., Zhu, H. and Ye, J. (2021). Reinforcement learning for ridesharing: A survey. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). IEEE.
- Rabinovich et al. [2015] Rabinovich, Z., Jiang, A. X., Jain, M. and Xu, H. (2015). Information disclosure as a means to security. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems. Citeseer.
- Renault et al. [2017] Renault, J., Solan, E. and Vieille, N. (2017). Optimal dynamic information provision. Games and Economic Behavior, 104 329–349.
- Russo [2019] Russo, D. (2019). Worst-case regret bounds for exploration via randomized value functions. Advances in Neural Information Processing Systems, 32.
- Strehl et al. [2006] Strehl, A. L., Li, L., Wiewiora, E., Langford, J. and Littman, M. L. (2006). Pac model-free reinforcement learning. In Proceedings of the 23rd international conference on Machine learning.
- Sutton and Barto [2018] Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.
- Tang and Winoto [2016] Tang, T. Y. and Winoto, P. (2016). I should not recommend it to you even if you will like it: the ethics of recommender systems. New Review of Hypermedia and Multimedia, 22 111–138.
- Wang et al. [2020] Wang, R., Salakhutdinov, R. and Yang, L. F. (2020). Provably efficient reinforcement learning with general value function approximation.
- Wang et al. [2021] Wang, T., Zhou, D. and Gu, Q. (2021). Provably efficient reinforcement learning with linear function approximation under adaptivity constraints. Advances in Neural Information Processing Systems, 34.
- Wang et al. [2019] Wang, Y., Wang, R., Du, S. S. and Krishnamurthy, A. (2019). Optimism in reinforcement learning with generalized linear function approximation. arXiv preprint arXiv:1912.04136.
- Xiao et al. [2019] Xiao, W., Zhao, H., Pan, H., Song, Y., Zheng, V. W. and Yang, Q. (2019). Beyond personalization: Social content recommendation for creator equality and consumer satisfaction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
- Xu et al. [2015] Xu, H., Rabinovich, Z., Dughmi, S. and Tambe, M. (2015). Exploring information asymmetry in two-stage security games. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
- Yang and Wang [2019] Yang, L. and Wang, M. (2019). Sample-optimal parametric q-learning using linearly additive features. In International Conference on Machine Learning. PMLR.
- Yang et al. [2020] Yang, Z., Jin, C., Wang, Z., Wang, M. and Jordan, M. I. (2020). Bridging exploration and general function approximation in reinforcement learning: Provably efficient kernel and neural value iterations. CoRR, abs/2011.04622. https://arxiv.org/abs/2011.04622
- Zanette et al. [2020] Zanette, A., Lazaric, A., Kochenderfer, M. and Brunskill, E. (2020). Learning near optimal policies with low inherent bellman error. In International Conference on Machine Learning. PMLR.
- Zheng et al. [2020] Zheng, S., Trott, A., Srinivasa, S., Naik, N., Gruesbeck, M., Parkes, D. C. and Socher, R. (2020). The ai economist: Improving equality and productivity with ai-driven tax policies. arXiv preprint arXiv:2004.13332.
- Zhou et al. [2021a] Zhou, D., Gu, Q. and Szepesvari, C. (2021a). Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. In Conference on Learning Theory. PMLR.
- Zhou et al. [2021b] Zhou, D., He, J. and Gu, Q. (2021b). Provably efficient reinforcement learning for discounted mdps with feature mapping. In International Conference on Machine Learning. PMLR.
- Zu et al. [2021] Zu, Y., Iyer, K. and Xu, H. (2021). Learning to persuade on the fly: Robustness against ignorance. In EC ’21. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3465456.3467593
Appendix A Omitted Proofs and Descriptions
A.1 Formal Description of the OP4
The formal description of the OP4 for MPPs is stated as follows:
A.2 Proof of Lemma 6.5
Proof.
We prove the lemma with an explicit construction of a signaling scheme that is robustly persuasive for any prior in and achieves an expected utility of at least . To simplify the notation, we omit the in , and .
Without loss of generality, let be a direct scheme [26]. For each , let denote the posterior of the outcome (i.e., the kernel) given that action is recommended by , so that the prior can be decomposed as . (In this proof, we work directly with the posterior without normalization, i.e., the kernel, to simplify our notations and derivations, because . We use to denote the Hadamard product.) Since is persuasive, we know
Let be the fully revealing signaling scheme that always recommends (signals) the action maximizing the receivers’ utility at the realized outcome. For each , let denote the posterior of the outcome given that action is recommended by , so that the prior can be decomposed as . By the regularity condition, we have
We now show that the signaling scheme is persuasive for any prior with . One simple way to interpret this “compound” signaling scheme is to follow with probability and with probability . Hence, given a recommended action , the receiver computes the posterior as . Let be the outcome posterior of recommending action under the true prior (resp. the perturbed prior ), so that and . By the definition of persuasiveness, we need to show that for any recommended action (signal from ) , the action maximizes the receiver’s utility under . This follows from the decomposition below,
The first inequality is by the fact that for any and thus . The second inequality is from . The third inequality is by construction of and induced by signaling scheme and . The last inequality is by the fact that , since
It remains to show that the expected utility under the signaling scheme is at least . This follows from the inequalities below,
The first and second equalities use the definition and linearity. The third and last inequalities use the fact that and remove the positive term. ∎
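The following self-contained numerical illustration (not taken from the paper) replays this construction on the classic two-outcome example of Kamenica and Gentzkow [26]: a scheme that is exactly obedient for the estimated prior breaks under a nearby prior, while mixing it with the fully revealing scheme restores obedience. The mixing weight `c` and all numbers are illustrative stand-ins for the quantities used in the proof.

```python
# Numerical illustration of the robustification via mixing with the fully
# revealing scheme. Outcomes: w=0 ("guilty"), w=1 ("innocent");
# actions: a=0 ("convict"), a=1 ("acquit"). All values are illustrative.
import numpy as np

recv_u = np.array([[1.0, 0.0],      # receiver gets 1 for convicting the guilty
                   [0.0, 1.0]])     # ... and 1 for acquitting the innocent
mu_hat = np.array([0.3, 0.7])       # estimated prior over outcomes

# Optimal persuasive scheme for mu_hat when the sender always prefers
# "convict": recommend convict whenever guilty, and with probability 3/7 when
# innocent, which makes the obedience constraint for "convict" exactly tight.
pi_hat = np.array([[1.0, 0.0],
                   [3.0 / 7.0, 4.0 / 7.0]])

# Fully revealing scheme: recommend the receiver-optimal action at each
# outcome (here it coincides with the identity matrix).
pi_reveal = np.eye(2)

c = 0.3                              # mixing weight (illustrative)
pi_mix = (1.0 - c) * pi_hat + c * pi_reveal


def obedient(pi, prior, u, tol=1e-9):
    """Obedience: every recommended action is a best response to its posterior."""
    joint = prior[:, None] * pi      # joint[w, a] = prior[w] * pi(a | w)
    return all(joint[:, a] @ (u[:, a] - u[:, a_dev]) >= -tol
               for a in range(u.shape[1]) for a_dev in range(u.shape[1]))


true_prior = np.array([0.28, 0.72])  # within l1-distance 0.04 of mu_hat
print(obedient(pi_hat, mu_hat, recv_u))      # True: obedient for the estimate
print(obedient(pi_hat, true_prior, recv_u))  # False: the tight constraint breaks
print(obedient(pi_mix, true_prior, recv_u))  # True: mixing restores slack
```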
A.3 Properties for the Robustness Gap
We presented the robustness gap for the ground-truth prior in Lemma 6.5. For the prior estimate given in Algorithm 2, which may not satisfy the regularity condition, we establish a corresponding robustness gap as well.
Lemma A.1.
For any and , on the event of , we have
Proof.
For any fixed action , on the given event, we have
where is the indicator function. The last inequality uses the regularity condition for the true prior . For , we have . Then, by Lemma 6.5, we arrive at
For , the bound holds trivially since . ∎
The robustness gap defined in equation (6.4) measures the loss in value functions incurred by being robustly persuasive for a subset of priors. In the following lemma, we show that can also be used to bound the difference in expected optimal -functions between different priors.
Lemma A.2.
Denote for any fixed state and . Then given a known -function , we have
Proof.
Fix . We respectively choose the optimal signaling schemes
Then, among all signaling schemes that are persuasive for every , let maximize . Since is persuasive for , we know by definition. Therefore, we have
The last equality uses the definition of and Lemma A.3. ∎
Lemma A.3.
Given a -function , for any fixed state , and any signaling scheme , we have
Proof.
Fix . For any , we have
where the last inequality follows from Hölder’s inequality. Since the -function takes values in , we can set and attain the tightest bound.
∎
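For concreteness, the Hölder step reads as follows in generic notation, writing B for the range bound of the -function, 1 for the all-ones vector, and q, q' for the two outcome distributions being compared (the paper's exact symbols may differ):

```latex
% Generic form of the Hoelder step (notation is illustrative).
% For probability vectors q, q' we have <q - q', 1> = 0, so for any constant c,
\[
  \langle q - q',\, Q \rangle
  \;=\; \langle q - q',\, Q - c\,\mathbf{1} \rangle
  \;\le\; \lVert q - q' \rVert_1 \, \lVert Q - c\,\mathbf{1} \rVert_\infty
  \;\le\; \tfrac{B}{2}\, \lVert q - q' \rVert_1 ,
\]
% where the last step takes c = B/2, which is optimal when Q takes values in [0, B].
```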
A.4 Proof of Lemma 6.1
Proof.
We begin by defining two operators and :
(A.1)
for any and any function under the context . Moreover, for any and any state , we define
(A.2)
After introducing these notations, we decompose the instantaneous regret at the -th episode into two terms,
(A.3)
Then we consider these two terms separately. By the definition of value functions in (3.1) and the operator in (A.1), we have . By the construction of Algorithm 2, we have similarly. Thus, for the first term defined in equation (A.3), using defined in (A.2), for any , we have
Next, by the definition of the temporal-difference error in (6.1) and the Bellman optimality equation in equation (3.2), we have
Hence we get
Then, by recursively applying the above formula, we have
By the definition of in equation (A.2) and in equation (6.2), we get
Notice that . Therefore, for any episode , we have
We now bound the second term in equation (A.3). By the definition of the temporal-difference error in (6.1), for any , we note that
where the last equality follows the Bellman equation (3.1). Furthermore, using and defined in (6.2), we have
Applying the above equation recursively, we get that
Again by Bellman equation (3.1), we have,
Then we use to simplify the decomposition to the following form:
Therefore, combining and , we conclude the proof of the lemma. ∎
A.5 Proof of Lemma 6.2
Proof.
In the following lemma, we first bound the difference between the -function maintained in Algorithm 2 (without the bonus) and the true -function of any policy by their expected difference at the next step, plus an error term. This error term can be upper bounded by our bonus with high probability. The lemma can be derived from Lemma B.4 in [25] with slight modifications.
Lemma A.4.
Set . There exists an absolute constant such that for where , and for any fixed policy , with probability at least , we have for all , , , , ,
for some that satisfies .
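To make the role of the bonus concrete, the sketch below is in the spirit of the least-squares value iteration of Jin et al. [25]; the constant `beta`, the regression targets, and the clipping range `H` are placeholders rather than the exact quantities of Algorithm 2.

```python
# Hedged sketch of a ridge-regression value estimate with an elliptical bonus,
# illustrating the optimism principle in the linear setting (cf. [25]).
import numpy as np


def optimistic_q(features, targets, phi_query, lam=1.0, beta=1.0, H=1.0):
    """features: (n, d) regressors phi(s, a); targets: (n,) regression targets
    (e.g., observed reward plus an estimated next-step value);
    phi_query: (d,) feature vector at which to evaluate the Q-estimate."""
    d = features.shape[1]
    Lambda = lam * np.eye(d) + features.T @ features
    w_hat = np.linalg.solve(Lambda, features.T @ targets)

    # Elliptical bonus beta * ||phi||_{Lambda^{-1}} encourages exploration.
    bonus = beta * np.sqrt(phi_query @ np.linalg.solve(Lambda, phi_query))
    # Optimism: add the bonus, then clip to the valid range of the Q-function.
    return float(np.clip(phi_query @ w_hat + bonus, 0.0, H))
```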
A.6 Proof of Lemma 6.3
Proof.
Denote the optimal signaling schemes corresponding to the true prior and the estimated prior , respectively, as
where the -function is given by Algorithm 2. Notably, is different from the truly optimal policy , since is computed based on the approximate -function . By definition, we can decompose the difference as follows:
(A.4)
(A.5)
(A.6)
By definition, the term in equation (A.4) is always non-positive. Applying Lemma A.2 to equation (A.5), we get
By Corollary 6.6, we can bound the above quantity in terms of the norm of the feature vector and the radius of the confidence region for .
A.7 Proof of Lemma 6.4
A.8 Proof of Corollary 6.6
A.9 Auxiliary Lemmas
This section presents several auxiliary lemmas and their proofs.
Lemma A.5 (Martingale Bound; [9]).
For and defined in (6.2) and for any fixed , with probability at least , we have
Proof.
See [9] for a detailed proof. ∎
Lemma A.6.
Suppose that and for any , there exists a constant such that . Let for some . Then,
Proof.
Lemma A.7 (Sum of Potential Function; [1]).
For any sequence of , let for some . Then we have
Proof.
See [1] for a detailed proof. ∎
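For reference, the standard form of this bound in generic notation (writing C for the feature-norm bound, lambda for the regularization parameter, and d for the dimension; the paper's constants may differ) is:

```latex
% Standard elliptical potential bound (illustrative notation):
% x_1, ..., x_T are d-dimensional with \|x_t\|_2 \le C, and
% \Lambda_t = \lambda I + \sum_{s=1}^{t} x_s x_s^\top.
\[
  \sum_{t=1}^{T} \min\Bigl\{ 1,\ \lVert x_t \rVert_{\Lambda_{t-1}^{-1}}^{2} \Bigr\}
  \;\le\; 2 \log \frac{\det \Lambda_T}{\det(\lambda I)}
  \;\le\; 2 d \log\Bigl( 1 + \frac{T C^2}{d \lambda} \Bigr).
\]
```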
Lemma A.8 (Determinant-Trace Inequality).
Suppose that and for any , there exists a constant such that . Let for some . Then,
Proof.
Let be the eigenvalues of . Since is positive definite, its eigenvalues are positive. Also, note that and . By the inequality of arithmetic and geometric means,
It remains to upper bound the trace:
and the lemma follows. ∎
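For reference, the two displayed steps of this standard argument read as follows in generic notation (again writing C, lambda, and d as above; the paper's symbols may differ):

```latex
% AM-GM step over the eigenvalues \nu_1, ..., \nu_d of \Lambda_t, followed by
% the trace bound for \Lambda_t = \lambda I + \sum_{s=1}^{t} x_s x_s^\top.
\[
  \det \Lambda_t
  \;=\; \prod_{i=1}^{d} \nu_i
  \;\le\; \Bigl( \frac{1}{d} \sum_{i=1}^{d} \nu_i \Bigr)^{d}
  \;=\; \Bigl( \frac{\operatorname{tr}(\Lambda_t)}{d} \Bigr)^{d},
  \qquad
  \operatorname{tr}(\Lambda_t)
  \;=\; d\lambda + \sum_{s=1}^{t} \lVert x_s \rVert_2^{2}
  \;\le\; d\lambda + t C^2 .
\]
```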