Delegated Stochastic Probing

Curtis Bechtel
Department of Computer Science
University of Southern California
[email protected]
supported by NSF Grants CCF-1350900 and CCF-2009060.
   Shaddin Dughmi
Department of Computer Science
University of Southern California
[email protected]
supported by NSF CAREER Award CCF-1350900 and NSF Grant CCF-2009060.
(December, 2020)
Abstract

Delegation covers a broad class of problems in which a principal doesn’t have the resources or expertise necessary to complete a task by themselves, so they delegate the task to an agent whose interests may not be aligned with their own. Stochastic probing describes problems in which we are tasked with maximizing expected utility by “probing” known distributions for acceptable solutions subject to certain constraints. In this work, we combine the concepts of delegation and stochastic probing into a single mechanism design framework which we term delegated stochastic probing. We study how much a principal loses by delegating a stochastic probing problem, compared to their utility in the non-delegated solution. Our model and results are heavily inspired by the work of Kleinberg and Kleinberg in “Delegated Search Approximates Efficient Search.” Building on their work, we show that there exists a connection between delegated stochastic probing and generalized prophet inequalities, which provides us with constant-factor deterministic mechanisms for a large class of delegated stochastic probing problems. We also explore randomized mechanisms in a simple delegated probing setting, and show that they outperform deterministic mechanisms in some instances but not in the worst case.

1 Introduction

The division of labor and responsibility, based on expertise, is a defining characteristic of efficient organizations and productive economies. In the context of economic decision-making, such division often manifests through delegation scenarios of the following form: A decision maker (the principal), facing a multivariate decision beset by constraints and uncertainties, tasks an expert (the agent) with collecting data, exploring the space of feasible decisions, and proposing a solution.

As a running example, consider the leadership of a firm delegating some or all of its hiring decisions to an outside recruitment agency. When the principal and the agent have misaligned utilities — such as when the agency must balance the firm’s preferences with its own preferences over, or obligations towards, potential hires — the principal faces a mechanism design problem termed optimal delegation (see e.g. [14, 3]). When the underlying optimization problem involves multiple inter-dependent decisions, such as when hiring a team which must collectively cover a particular set of skills, and when data collection is constrained by logistical or budget considerations, the problem being delegated fits in the framework of stochastic probing, broadly construed (see e.g. [26]).

The present paper is concerned with the above-described marriage of optimal delegation and stochastic probing. We restrict attention to protocols without payments, drawing our inspiration from the recent work of Kleinberg and Kleinberg [16]. The underlying (non-delegated) problem faced by the principal in their "distributional model" is the following: facing $n$ i.i.d. rewards, select the ex-post best draw. As for their "binary model", there are $n$ random rewards with binary support, and a cost associated with sampling each; the goal is to adaptively sample the rewards and select one, maximizing the ex-post selected reward less the sampling costs. For both models, they show that delegating the problem results in a loss of at most half the principal's utility. Their analysis in both cases is through a reduction to the (classical) single-choice prophet inequality problem, and in particular to the threshold stopping rule of Samuel-Cahn [25].

Both the distributional and binary models of [16] can be viewed as stochastic probing problems, the former being trivial in the absence of delegation, and the latter corresponding to a special case of the well-studied box problem of Weitzman [27]. A number of stochastic probing problems have been known to reduce to contention resolution schemes (e.g. [10, 11, 7, 1, 9]), which in turn reduce to generalizations of the prophet inequality [21]. This suggests that the results of [16] might apply more broadly.

It is this suggestive thread which we pull on in this paper, unraveling what is indeed a broader phenomenon. We study optimal delegation for a fairly general class of stochastic probing problems with combinatorial constraints, and obtain delegation mechanisms which approximate, up to a constant, the principal’s non-delegated utility. Building on recent progress in the literature on stochastic optimization, our results reduce delegated stochastic probing to generalized prophet inequalities of a particular “greedy” form, as well as to the notion of adaptivity gap (e.g. [4, 5]).

1.1 Our Model

Our model features a collection of elements, each of which is associated with a (random) utility for each of the principal and the agent. We assume that different elements are independently distributed, though the principal’s and the agent’s utilities for the same element may be correlated. We allow constraining both the sampled and the selected set of elements via outer and inner constraints, respectively. Each constraint is a downwards-closed set system on the ground set of elements. A probing algorithm for an instance of our model adaptively probes some set of elements subject to the outer constraint, learning their associated utilities in the process. The algorithm then selects as its solution a subset of the probed elements satisfying the inner constraint. We assume that, for both the principal and the agent, utility for a solution is the sum of its per-element utilities.

To situate the non-game-theoretic component of our model within the literature on stochastic probing problems, note that we allow an arbitrary utility distribution for each element, rather than a binary-support distribution characterizing “feasibility”. Moreover, unlike “probe and commit” models, we also allow our algorithm to select its solution after all probes are complete. In both these respects, our model is akin to the stochastic multi-value probing model of [5]. As for our game-theoretic modeling, we assume that the utility distributions, as well as the inner and outer constraints, are common knowledge. The realized utilities, however, are only observed by the agent upon probing.

In the traditional (non-delegation) setting, the principal implements the probing algorithm optimizing her own utility, in expectation. In the delegation setting, the principal and agent engage in the following Stackelberg game. The principal moves first by committing to a policy, or mechanism. Such a policy is a (possibly randomized) map from a set of signals to solutions satisfying the inner constraint, with each element in the solution labeled with its (presumptive) utility for both the principal and the agent. Moving second, the agent probes some set of elements subject to the outer constraint, and maps the observed utilities to a signal. The outcome of the game is then the solution which results from applying the principal’s policy to the agent’s signal. We assume that the principal and agent utilities are additive across elements in the solution, so long as it is labeled with the true per-element utilities. Otherwise, we assume that the principal can detect this discrepancy and effectively “quit” the game, imposing a utility of zero for both parties. We adopt the perspective of the principal, who seeks a policy maximizing her expected utility. The agent, naturally, responds with a strategy maximizing his own expected utility given the policy.

By an argument analogous to that in [16], which we prove in our general setting for completeness’ sake, we can restrict attention to single-proposal mechanisms. In a deterministic single-proposal mechanism, the set of signals is a “menu” of acceptable (labeled) solutions satisfying the inner constraint, as well as a “null” signal which in our setting we can take to be the empty set. The agent, facing such a mechanism, without loss simply implements a probing algorithm to compute a “proposed” solution, tagging each element in the solution with its observed utilities, and ensuring that the solution is acceptable to the principal. We also consider randomized single-proposal mechanisms, where the menu consists of acceptable lotteries (i.e., distributions) over (labeled) solutions, and an agent’s probing algorithm proposes a lottery on the menu.

1.2 Our Results

We study delegation mechanisms which approximate the principal’s non-delegated utility. We refer to the best multiplicative approximation factor as the delegation gap of the associated instance.

Our main set of results concerns the design of deterministic single-proposal mechanisms which prove constant delegation gaps for natural classes of inner and outer constraints. Our approach is modular, and reduces a (constructive) $\alpha\beta$ bound on the delegation gap to a (constructive) $\alpha$ generalized prophet inequality of a particular form on the inner constraint, and a (constructive) bound of $\beta$ on the adaptivity gap associated with the outer constraint and the rank function of the inner constraint. Drawing on recent work in [9], which derives prophet inequalities of our required form, and in [4, 5], which bounds the adaptivity gap, we obtain constant bounds on the delegation gap for instances of our model with a variety of inner and outer constraints such as matroids and their intersections, among others.

We also begin an exploration of randomized single-proposal mechanisms, where the principal's menu consists of acceptable lotteries over solutions. We show that, even in the simple setting of no outer constraint and a $1$-uniform inner constraint, there are instances for which randomized mechanisms significantly outperform optimal deterministic ones. Nevertheless, there exist worst-case instances where both deterministic and randomized mechanisms suffer a delegation gap of $\frac{1}{2}$. We leave open whether randomized mechanisms can lead to better bounds on the worst-case delegation gap for more intricate classes of inner and outer constraints.

1.3 Additional Discussion of Related Work

Since the economic literature on delegation is extensive, we only describe a select sample here. The groundwork for the formal study of optimal delegation in economics was initially laid by Holmström [14, 13]. Subsequent work in economics has considered a variety of optimization problems as the task being delegated (e.g. [2, 23, 3]). We mention the work of Kovac and Mylovanov [18] as being related to our results in Section 5: To our knowledge, they were the first to examine the power of randomized mechanisms for delegation.

Most relevant to the present paper is the aforementioned work of Kleinberg and Kleinberg [16], who examine approximations for optimal delegation. Their distributional model is closely related to the model of Armstrong and Vickers [3], and the optimization problem being delegated in their binary model is a special case of Weitzman’s box problem [27]. Both optimization problems fit nicely in the general literature on stochastic probing (see e.g. [26]), motivating our examination of delegated stochastic probing more broadly.

Also related is the recent work of Khodabakhsh et al. [15], who consider a very general model of delegation with discrete actions and states of the world, and an agent who fully observes the state (no outer constraints or sampling costs). They show optimal delegation to be NP-hard and examine limited "bias" assumptions under which simple threshold mechanisms are approximately optimal. Notably, they don't impose sampling constraints on the agent, and their approximations are with respect to the optimal delegation policy rather than the optimal non-delegated policy. For these reasons, our results are not directly comparable.

The optimization problems being delegated in our model fit in the broad class of stochastic probing problems. We do not attempt a thorough accounting of this literature, and instead refer the reader to related work discussions in [26, 5]. To our knowledge, the term “stochastic probing” was originally coined by Gupta and Nagarajan [10], though their binary probe-and-commit model is quite different from ours. More closely related to us are the models of [5, 4], which capture stochastic probing problems with multi-valued reward distributions, no commitment, and combinatorial inner and outer constraints.

As previously mentioned, our work draws on the literature on prophet inequalities. The foundational result in this setting is the (single-choice) prophet inequality of Krengel, Sucheston, and Garling [19, 20]. Generalized prophet inequalities, with respect to various combinatorial constraints, adversaries, and arrival models, have received much attention in the last decade (e.g. [12, 17, 8, 9]); the associated body of work is large, and we recommend the survey [22]. Closely related to generalized prophet inequalities are contention resolution schemes (see e.g. [6, 9, 1]), with reductions going in both directions [21]. Key to our results are the "greedy" generalized prophet inequalities, derived through "greedy" contention resolution, by Feldman et al. [9].

Finally, we briefly elaborate on the relationship between our model and the two models of Kleinberg and Kleinberg [16]. The natural variant of their binary model which replaces sampling costs with combinatorial constraints on the set of samples (outer constraints, in our nomenclature) fits squarely in our model. Their distributional model, which allows $n$ i.i.d. samples from a distribution over utility pairs, initially appears to be a special case of ours. However, our principal is afforded additional power through their ability to distinguish elements by name alone. Nevertheless, we recover their main result as a special case of ours by observing that our mechanism treats elements symmetrically.

2 Preliminaries

Sections 2.1, 2.2, and 2.3 include brief introductions to some of the key ideas and notations used in this paper. Notably, Section 2.2 defines the key notion of “greedy” prophet inequalities.

2.1 Set Systems

A set system is a pair $(E,\mathcal{I})$ where $E$ is a finite set of elements and $\mathcal{I}\subseteq 2^{E}$ is a family of feasible sets. We focus on downwards-closed set systems, satisfying the following two conditions: (1) $\emptyset\in\mathcal{I}$, i.e. the empty set is feasible, and (2) if $T\in\mathcal{I}$ then $S\in\mathcal{I}$ for all $S\subseteq T$, i.e. any subset of a feasible set is feasible. Matroids, matching constraints, and knapsack constraints are all examples of downwards-closed set systems.

For a set system $\mathcal{M}=(E,\mathcal{I})$ and $F\subseteq E$, we use $\mathcal{M}|F=(F,\mathcal{I}\cap 2^{F})$ to denote the restriction of $\mathcal{M}$ to $F$.
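As a concrete illustration (our own sketch, not part of the formal development), a downwards-closed set system can be represented by its feasibility predicate; the minimal Python example below shows a $k$-uniform matroid and the restriction operation $\mathcal{M}|F$.

```python
# A minimal sketch, assuming we represent a downwards-closed set system
# by a feasibility predicate over frozensets of elements.

def k_uniform(k):
    """Feasible sets are those with at most k elements (a k-uniform matroid)."""
    return lambda S: len(S) <= k

def restrict(is_feasible, F):
    """The restriction M|F: feasible subsets of F only."""
    F = frozenset(F)
    return lambda S: S <= F and is_feasible(S)

is_feasible = k_uniform(2)
assert is_feasible(frozenset({"a", "b"}))             # downwards-closed: all subsets also pass
assert not is_feasible(frozenset({"a", "b", "c"}))    # too large
assert not restrict(is_feasible, {"a"})(frozenset({"a", "b"}))  # "b" lies outside F
```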

2.2 Prophet Inequalities

A generalized prophet inequality problem is given by a set system $\mathcal{M}=(E,\mathcal{I})$, and for each element $e\in E$ an independent random variable $X_{e}$ supported on the nonnegative real numbers. Here we adopt the perspective of a gambler, who is given $\mathcal{M}$ and the distributions of the random variables $\{X_{e}\}_{e\in E}$ in advance, then encounters the elements of $E$ in an order chosen by an adversary. On encountering $e$, the gambler observes the realization $x_{e}$ of the random variable $X_{e}$, and must immediately decide whether to accept $e$, subject to the accepted set $S$ of elements remaining feasible in $\mathcal{M}$. The gambler seeks to maximize his utility $x(S)=\sum_{e\in S}x_{e}$, and in particular to compete with a prophet who knows the realizations of all random variables in advance. If the gambler can guarantee an $\alpha$ fraction of the prophet's utility in expectation, we say that we obtain a generalized prophet inequality with a factor of $\alpha$.

For each possible realization $x_{e}$ of $X_{e}$, we refer to the pair $(e,x_{e})\in E\times\mathbb{R}_{+}$ as an outcome. When the gambler accepts $e\in E$ given a realization $x_{e}$ of $X_{e}$, we also say the gambler accepts the outcome $(e,x_{e})$.

Although it is most common to consider an adversary who fixes an order of the elements upfront, some recent work has investigated much more powerful adversaries [17, 9]. In this paper, we are interested in the almighty adversary, who knows in advance the realizations of all random variables as well as any random coin flips used by the gambler’s strategy. The almighty adversary can perfectly predict the future and choose a truly worst-case ordering.

Key to our results is the notion of a “greedy” strategy for the gambler. We take inspiration from [9], who defined greedy online contention resolution schemes, and extend their definition to prophet inequality problems.

Definition 2.1.

Fix any instance of a generalized prophet inequality problem. A greedy strategy for the gambler is described by a downwards-closed family $\mathcal{A}\subseteq 2^{E\times\mathbb{R}_{+}}$ of sets of outcomes. A gambler following greedy strategy $\mathcal{A}$ accepts an outcome $(e,x_{e})$ if and only if the set of all accepted outcomes remains in $\mathcal{A}$.

We note that Samuel-Cahn's [25] threshold rule for the single-choice prophet inequality is greedy, and its competitive factor of $\frac{1}{2}$ holds against the almighty adversary [24]. More generally, Feldman et al. [9] show that there exist constant-factor greedy prophet inequalities against the almighty adversary for many classes of constraints.
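For intuition, the following Python sketch (our own illustration, with a simplified sample-based threshold estimate) implements Samuel-Cahn's rule for the single-choice prophet inequality: fix a threshold $t$ equal to the median of $\max_{e} X_{e}$ and accept the first realization that reaches $t$. In the language of Definition 2.1, this is the greedy strategy whose acceptable sets are the empty set and the singletons $\{(e,x)\}$ with $x\geq t$. The particular distributions used are hypothetical.

```python
import random

def estimate_median_of_max(samplers, trials=10_000):
    """Monte Carlo estimate of the median of max_e X_e (we assume only sample access)."""
    maxima = sorted(max(s() for s in samplers) for _ in range(trials))
    return maxima[trials // 2]

def threshold_rule(realizations, order, threshold):
    """Accept the first element, in the adversary's order, whose value meets the threshold."""
    for e in order:
        if realizations[e] >= threshold:
            return e, realizations[e]
    return None, 0.0  # nothing accepted

# Illustrative run with two hypothetical reward distributions.
samplers = [lambda: random.uniform(0, 1), lambda: random.choice([0.0, 2.0])]
t = estimate_median_of_max(samplers)
print(threshold_rule([s() for s in samplers], order=[1, 0], threshold=t))
```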

2.3 Adaptivity Gap

Another key notion we will use is the adaptivity gap for stochastic set function optimization problems. For a detailed introduction, see [4].

We consider maximizing a stochastic set function $f:2^{E}\to\mathbb{R}_{+}$ constrained by a downwards-closed set system $\mathcal{M}=(E,\mathcal{I})$. We assume $f$ is determined by a collection $\{X_{e}\}_{e\in E}$ of independent random variables, with the stipulation that $f(S)$ does not depend on any random variable $X_{e}$ for which $e\notin S$; in other words, one can evaluate $f(S)$ given access to the realizations of the random variables $\{X_{e}\}_{e\in S}$. We are tasked with "probing" some $S\subseteq E$, feasible for $\mathcal{M}$, with the goal of maximizing $f(S)$. An adaptive algorithm for this problem probes elements one at a time, where probing $e$ results in learning the realization of $X_{e}$. Such an algorithm can use the realizations of probed variables to decide on the next element to probe. A non-adaptive algorithm chooses the set $S$ all at once, independently of the random variables $\{X_{e}\}_{e\in E}$. The adaptivity gap is the minimum (worst-case) ratio of the expected value of the optimal non-adaptive algorithm to the expected value of the optimal adaptive algorithm.
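The following toy sketch (an illustrative instance of our own, not from the paper) makes the distinction concrete for $f(S) = \max_{e \in S}$ of the probed values, i.e. the weighted rank function of a 1-uniform inner matroid, under a "probe at most 2 of 3 elements" outer constraint.

```python
from itertools import product

supports = {                       # element -> list of (value, probability)
    1: [(0.0, 0.5), (6.0, 0.5)],
    2: [(5.0, 1.0)],
    3: [(0.0, 0.9), (10.0, 0.1)],
}

def expected_max(elements):
    """Expected max over a fixed (non-adaptive) probe set, by exhaustive enumeration."""
    total = 0.0
    for combo in product(*(supports[e] for e in elements)):
        prob = 1.0
        for _, p in combo:
            prob *= p
        total += prob * max(v for v, _ in combo)
    return total

best_nonadaptive = max(expected_max(S) for S in [(1, 2), (1, 3), (2, 3)])  # = 5.5

# An adaptive policy: probe element 1 first, then pick the second probe based on
# the observed value (the safe element 2 after a low draw, the risky element 3
# after a high one).  Its expected value, 0.5*5 + 0.5*(0.9*6 + 0.1*10) = 5.7,
# exceeds every non-adaptive choice, so the adaptivity gap of this instance is
# at most 5.5/5.7.
print(best_nonadaptive)
```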

In [4], Asadpour and Nazerzadeh showed that the adaptivity gap for instances with monotone submodular functions and matroid constraints is $1-\frac{1}{e}$. Furthermore, they provided an efficient non-adaptive algorithm that achieves this bound. Finally, in [5], Bradac et al. showed that the adaptivity gap is constant for instances with "prefix-closed" constraints (which include all downwards-closed constraints) and functions that are the weighted rank function of the intersection of a constant number of matroids.

3 Model

3.1 Formal Definition

Definition 3.1.

An instance $I$ of the delegated stochastic probing problem consists of: two players, which we will call the principal and the agent; a ground set of elements $E$; mutually independent distributions $\mu_{e}$ with support in $\mathbb{R}_{+}\times\mathbb{R}_{+}$ for each element $e\in E$; an outer constraint $\mathcal{M}_{\text{out}}=(E,\mathcal{I}_{\text{out}})$ with feasible sets $\mathcal{I}_{\text{out}}$; and an inner constraint $\mathcal{M}_{\text{in}}=(E,\mathcal{I}_{\text{in}})$ with feasible sets $\mathcal{I}_{\text{in}}$.

Given such an instance, we will additionally define: $(X_{e},Y_{e})\sim\mu_{e}$ as random variables denoting the utilities for the principal and agent of element $e$; $\Omega$ as the set of outcomes $(e,x,y)$ for all $e\in E$ and all $(x,y)\in\operatorname{supp}(\mu_{e})$; and $\Omega_{\text{in}}\subseteq 2^{\Omega}$ as the family of all sets of outcomes whose elements are distinct and feasible in the inner constraint. For convenience, we will also overload notation by considering $x$ and $y$ to be utility functions for the principal and agent. Given any subset of outcomes $T\subseteq\Omega$, let $x(T)=\sum_{(e,x,y)\in T}x$ and $y(T)=\sum_{(e,x,y)\in T}y$ be the total utility of outcomes in $T$. Similarly, for any subset of elements $F\subseteq E$, let $x(F)=\sum_{e\in F}X_{e}$ and $y(F)=\sum_{e\in F}Y_{e}$ be random variables representing the randomized total utility of elements in $F$.

A natural mechanism that the principal might choose to implement is called a single-proposal mechanism. Here, the principal describes the space of solutions she is willing to accept, and then the agent uses this information to search the solution space and propose a single feasible solution.

In the deterministic single-proposal setting, the principal first commits to a family of sets of outcomes $\mathcal{R}\subseteq\Omega_{\text{in}}$ and announces $\mathcal{R}$ to the agent. The sets in $\mathcal{R}$ are called acceptable, and the principal's choice of $\mathcal{R}$ is called their policy (or mechanism). After learning $\mathcal{R}$, the agent will select elements to probe, so long as each element is probed at most once and the set of probed elements is feasible in $\mathcal{M}_{\text{out}}$. We allow the agent to probe adaptively, deciding what to do next based on previously probed elements. Let's say that they probe elements $F\subseteq E$ and obtain outcomes $S\subseteq\Omega$. The agent will then choose some set of outcomes $T\subseteq\Omega$ and propose it to the principal. If $T$ is acceptable and also a subset of $S$, then the principal and agent receive $x(T)$ and $y(T)$ utility, respectively. Otherwise, they both receive 0 utility.
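To make the interaction concrete, here is a minimal Python sketch (our own schematic illustration) of one round of a deterministic single-proposal mechanism; the policy, the probed outcomes, and all names are hypothetical rather than anything specified in the paper.

```python
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def run_single_proposal(acceptable, probed_outcomes):
    """The agent proposes his favorite acceptable subset of the probed labeled
    outcomes (e, x, y); anything unacceptable is worth 0 to both parties."""
    best, best_y = frozenset(), float("-inf")
    for T in map(frozenset, powerset(probed_outcomes)):
        if acceptable(T):
            y_val = sum(y for (_, _, y) in T)
            if y_val > best_y:
                best, best_y = T, y_val
    principal_utility = sum(x for (_, x, _) in best)
    return best, principal_utility

# Hypothetical policy: accept any single outcome whose principal-utility is at least 1.
policy = lambda T: len(T) <= 1 and all(x >= 1.0 for (_, x, _) in T)
print(run_single_proposal(policy, {("e1", 0.5, 3.0), ("e2", 2.0, 1.0)}))
```

In this toy run the agent would prefer to propose the first outcome (agent utility 3.0), but it is not on the menu, so he proposes the second one instead and the principal collects 2.0.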

In the above-described mechanism design setting, we assume that both the principal and agent act to maximize their expected utility. We also assume that all parameters of the problem, except for the realizations of the random variables, are common knowledge.

We note that, similar to the setup in [16], our model assumes that the agent cannot benefit from lying, say by labeling an element $e$ with utilities other than $X_{e}$ and $Y_{e}$, or by proposing an element he has not probed. We argue that this is a natural assumption to make: in many applications we foresee (e.g., a firm hiring an employee, or exploring some mergers), a proposal will be accompanied by an easy-to-verify proof of the claimed utilities (e.g., in the form of a CV for the applicant, or a detailed analysis of the merger).

As in [16], we compare delegation mechanisms against the optimal non-delegated strategy. By non-delegated strategy, we mean the strategy of the principal when they act as both the principal and agent (i.e. they have power to probe and propose as well as accept outcomes).

Given any $F\subseteq E$, let $u(F)$ be the optimal utility of the non-delegating principal when they probe the elements in $F$ and accept their own favorite set of outcomes, and let $v_{\mathcal{R}}(F)$ be the utility of the delegating principal with policy $\mathcal{R}$ when the agent probes the elements in $F$ and proposes their favorite acceptable set of outcomes. We can write $u$ and $v_{\mathcal{R}}$ as

$$u(F) = \max_{G\subseteq F,\, G\in\mathcal{I}_{\text{in}}} x(G)$$
$$v_{\mathcal{R}}(F) = x\left(\operatorname*{argmax}_{G\subseteq F,\, \Omega_{G}\in\mathcal{R}} y(G)\right),$$

where $\Omega_{G}\subseteq\Omega$ is the set of outcomes from the probed set of elements $G$. In the case of ties in the definition of $v_{\mathcal{R}}$, our results hold for arbitrary (even adversarial) tie-breaking.

Definition 3.2.

Fix any instance of delegated stochastic probing. Let $F^{*}$ be a random variable containing the elements probed by an optimal adaptive non-delegating principal, and let $F^{*}_{\mathcal{R}}$ be a random variable containing the elements probed by an optimal adaptive agent under policy $\mathcal{R}$. Then for any policy $\mathcal{R}$ and $\alpha\in[0,1]$, we say that $\mathcal{R}$ is an $\alpha$-policy for this instance if

$$\mathbb{E}\, v_{\mathcal{R}}(F^{*}_{\mathcal{R}}) \geq \alpha\, \mathbb{E}\, u(F^{*}).$$
Definition 3.3.

The delegation gap of a family of instances of delegated stochastic probing is the minimum, over all instances in the family, of the maximum $\alpha$ such that there exists an $\alpha$-policy for that instance. This gap measures the fraction of the principal's non-delegated utility they can achieve when delegating.

3.2 Signaling Mechanisms

Having formally defined the model, we will now describe a broad generalization of single-proposal mechanisms, called signaling mechanisms, and show that these mechanisms don’t provide the principal with any additional power. Note that this discussion is inspired by Section 2.2 from [16], and we simply extend their work to our model.

A signaling mechanism allows the principal to ask the agent for more (or different) information than just a proposed solution. The principal will then take this information and transform it into a solution, which they will accept. One might suspect that expanding the space of mechanisms in this way would give the principal more power. However, as we will show, this isn’t the case even for a broad class of delegation models, which we will now define formally.

Definition 3.4.

An instance of the generalized delegation problem consists of two players called the principal and the agent, a state space $S$, a solution space $\Psi$, a set $\mathcal{P}$ of probing strategies for the agent, a signaling function $\sigma$ which maps $\mathcal{P}\times S$ to strings, a utility function $x:S\times\mathcal{P}\times\Psi\to\mathbb{R}_{+}$ for the principal, and a utility function $y:S\times\mathcal{P}\times\Psi\to\mathbb{R}_{+}$ for the agent. We require that there is a null solution $\bot\in\Psi$ such that $x_{s,p}(\bot)=y_{s,p}(\bot)=0$ for all $s\in S$ and $p\in\mathcal{P}$.

We assume the state of the world is some $s\in S$, a priori unknown to the principal and the agent, though they may have prior information. The agent obtains information about $s$ by applying a probing strategy $p\in\mathcal{P}$ to obtain a signal $\sigma_{p}(s)$. For a state $s\in S$, a probing strategy $p\in\mathcal{P}$ chosen by the agent, and a solution $\psi\in\Psi$, we associate a utility of $x_{s,p}(\psi)$ and $y_{s,p}(\psi)$ for the principal and the agent, respectively.

We note that the above definition generalizes the delegation problems of Definition 3.1. In particular: the state space $S$ represents all possible realizations of per-element utilities of the principal and the agent; the solution space $\Psi$ is the family of feasible subsets of outcomes $\Omega_{\text{in}}$, where $\bot$ is the empty set of outcomes; $\mathcal{P}$ corresponds to probing algorithms which respect the outer constraint; $\sigma_{p}(s)$ is the set of outcomes obtained by invoking algorithm $p$ in state $s$; and both utility functions depend on the state $s\in S$ and the probing algorithm $p\in\mathcal{P}$, evaluating to 0 for solutions $\psi$ that are inconsistent with the state $s$, or for which the probing algorithm $p$ applied to $s$ does not probe the elements in $\psi$.

Given a generalized delegation problem, we define signaling mechanisms as follows.

Definition 3.5.

Fix some instance of the generalized delegation problem. A signaling mechanism proceeds in the following manner. The principal starts by choosing some signal space $\Sigma$ of strings and a solution function $\psi:\Sigma\to\Psi$, and the agent responds by choosing a probing strategy $p\in\mathcal{P}$ and a reporting function $\tau$ from strings to $\Sigma$. Once these choices have been made, the agent probes the underlying state $s$ to obtain a signal $\sigma=\sigma_{p}(s)$, then transforms this into a new signal $\tau(\sigma)$ which he reports to the principal. The principal maps the reported signal to a solution $\psi(\tau(\sigma))$, which they will accept.

Notice that this model can be made to capture the design of randomized delegation mechanisms by extending $\Psi$ to the space $\Delta(\Psi)$ of distributions (henceforth lotteries) over solutions, and extending both utility functions to lotteries by taking expectations.

We contrast this broad definition of signaling mechanisms with the comparatively simple single-proposal mechanisms.

Definition 3.6.

Fix an instance of the generalized delegation problem. A single-proposal mechanism is a special case of a signaling mechanism in which the principal chooses some set $\mathcal{R}\subseteq\Psi$ of acceptable solutions, then sets $\Sigma=\Psi$ and $\psi(R)=R$ if $R\in\mathcal{R}$ and $\psi(R)=\bot$ otherwise.

Intuitively, in a single-proposal mechanism the principal declares a menu of acceptable solutions. The agent then proposes a solution, which is accepted if it is on the menu, and replaced with the null solution otherwise. Now we will show that single-proposal mechanisms are just as powerful as signaling mechanisms. In particular, for every signaling mechanism there is a single-proposal mechanism which selects the same solution and the same probing strategy for each state of nature, at equilibrium. This lemma is a simple extension of [16, Lemma 1] to our generalized delegation model.

Lemma 3.7.

Fix an instance of the generalized delegation problem, as well as the agent's prior distribution $\mu$ on states $S$. For any signaling mechanism $M=(\Sigma,\psi)$ and a corresponding best-response strategy $(p,\tau)$ for the agent, there exists a single-proposal mechanism $M^{\prime}=(\Sigma^{\prime},\psi^{\prime})$ and a corresponding best response $(p,\tau^{\prime})$ such that $(\psi\circ\tau\circ\sigma_{p})(s)=(\psi^{\prime}\circ\tau^{\prime}\circ\sigma_{p})(s)$ for all states $s\in S$.

Proof.

Take any signaling mechanism $M=(\Sigma,\psi)$ with best response $(p,\tau)$ by the agent. Let $\mathcal{R}=\psi(\Sigma)$ be the set of all possible outputs of this mechanism, and let $M^{\prime}=(\Sigma^{\prime},\psi^{\prime})$ be the single-proposal mechanism defined by $\mathcal{R}$, i.e. $\Sigma^{\prime}=\Psi$ and $\psi^{\prime}$ is such that $\psi^{\prime}(R)=R$ if $R\in\mathcal{R}$ and $\psi^{\prime}(R)=\bot$ otherwise. Finally, let $\tau^{\prime}=\psi\circ\tau$.

Notice that the range of $\tau^{\prime}$ is contained in $\psi(\Sigma)=\mathcal{R}$, so by definition of $\psi^{\prime}$ and $\tau^{\prime}$ it follows that $\psi\circ\tau=\psi^{\prime}\circ\tau^{\prime}$. Therefore, it is also the case that $(\psi\circ\tau\circ\sigma_{p})(s)=(\psi^{\prime}\circ\tau^{\prime}\circ\sigma_{p})(s)$ for all $s\in S$. Now we must show that $(p,\tau^{\prime})$ is a best-response strategy to mechanism $M^{\prime}$. Consider any valid alternative strategy $(p^{*},\tau^{*})$. We aim to show that

$$\mathbb{E}_{s}\, y_{s,p^{*}}\big((\psi^{\prime}\circ\tau^{*}\circ\sigma_{p^{*}})(s)\big) \leq \mathbb{E}_{s}\, y_{s,p}\big((\psi^{\prime}\circ\tau^{\prime}\circ\sigma_{p})(s)\big). \qquad (1)$$

First, we can assume without loss of generality that $\tau^{*}$ always outputs a solution in $\mathcal{R}$, because $\psi^{\prime}$ produces $\bot$ (and a utility of 0) for all proposals in $\Psi\setminus\mathcal{R}$. Then $\psi^{\prime}\circ\tau^{*}=\tau^{*}$ and, by definition of $\mathcal{R}$, we can write $\tau^{*}=\psi\circ\hat{\tau}$ for some function $\hat{\tau}$ from strings to $\Sigma$. Then the left-hand side of (1) becomes the expected utility of response $(p^{*},\hat{\tau})$ against mechanism $M=(\Sigma,\psi)$:

$$\mathbb{E}\, y_{s,p^{*}}\big((\psi^{\prime}\circ\tau^{*}\circ\sigma_{p^{*}})(s)\big) = \mathbb{E}\, y_{s,p^{*}}\big((\psi\circ\hat{\tau}\circ\sigma_{p^{*}})(s)\big),$$

whereas the right-hand side of (1) is the expected utility of response $(p,\tau)$ against $M$:

$$\mathbb{E}\, y_{s,p}\big((\psi^{\prime}\circ\tau^{\prime}\circ\sigma_{p})(s)\big) = \mathbb{E}\, y_{s,p}\big((\psi\circ\tau\circ\sigma_{p})(s)\big).$$

Since $(p,\tau)$ is a best response for this mechanism, the desired inequality (1) follows. ∎

4 Deterministic Mechanisms

In this section, we will consider deterministic single-proposal mechanisms for delegated stochastic probing problems, as defined in Section 3.1. This is in contrast to randomized mechanisms which we will define later in Section 5. We will show that large classes of these problems have constant-factor policies, and therefore constant-factor delegation gaps.

The focus of this section is on Theorem 4.1 and Theorem 4.5, which together give us a general method of constructing competitive delegation policies from certain prophet inequalities and adaptivity gaps. In particular, Corollary 4.4 gives us constant-factor policies for delegated stochastic probing with no outer constraint and an inner constraint which is the intersection of a constant number of matroid, knapsack, and matching constraints. Similarly, Corollary 4.8 gives us constant-factor policies for delegated stochastic probing with any downwards-closed outer constraint and an inner constraint which is the intersection of a constant number of matroids.

4.1 Inner Constraint Delegation

We will now consider instances of delegated stochastic probing for which there is no outer constraint. We will then combine the results from this section with Theorem 4.5 to get solutions to delegation problems with both inner and outer constraints.

To simulate the lack of an outer constraint, we will consider instances of delegation for which the outer constraint is the trivial set system in which all subsets of the elements are feasible. For any ground set $E$ of elements, we will write this trivial set system as $\mathcal{M}^{*}_{E}$, omitting the subscript when the set of elements $E$ is clear from context.

Theorem 4.1.

Given an instance $I=(E,\mathcal{M}^{*},\mathcal{M}_{\text{in}})$ of delegated stochastic probing without outer constraints, let $J$ be an instance of the prophet inequality problem with random variables $X_{e}$ for all $e\in E$ and constraint $\mathcal{M}_{\text{in}}$. If there exists an $\alpha$-factor greedy strategy for $J$ against the almighty adversary, then there exists a deterministic $\alpha$-policy for $I$. Furthermore, the proof is constructive when given the strategy for $J$.

Proof.

First, we have by our choice of $J$ that the expected utility of the prophet in $J$ is equal to the expected utility of the non-delegating principal in $I$. Notice that the principal has no outer constraint, so we can assume without loss of generality that they probe all elements. Then the prophet and the non-delegating principal both get exactly

$$\mathbb{E}\max_{T\in\mathcal{M}_{\text{in}}} x(T).$$

Now consider the gambler's $\alpha$-factor greedy strategy, which consists of some collection $\mathcal{A}\subseteq 2^{E\times\mathbb{R}_{+}}$ of "acceptable" sets of outcomes. We will define the delegating principal's policy as follows:

$$\mathcal{R}=\Big\{\big\{(e,x,y) : (e,x)\in A,\ y\in\mathbb{R}_{+}\big\} : A\in\mathcal{A}\Big\}.$$

Notice that policy $\mathcal{R}$ is exactly the same as strategy $\mathcal{A}$, just translated into the language of delegation.

Now we will show that the utility of the delegating principal with policy $\mathcal{R}$ is at least the utility of the gambler with greedy strategy $\mathcal{A}$. In the prophet inequality problem, the almighty adversary can order the random variables such that the gambler always gets their least favorite among all maximal acceptable sets (the accepted set is always maximal because the gambler's strategy is greedy). Compare this with delegation, where the agent knows the result of all probed elements as well as the principal's acceptable sets $\mathcal{R}$. Since the agent has non-negative utility for all outcomes, we can assume without loss of generality that they will always propose a maximal acceptable set. For every corresponding set of realizations in the two problems, the gambler will receive the maximal set in $\mathcal{A}$ of minimum value and the principal will receive some maximal set in $\mathcal{R}$. Since we defined $\mathcal{R}$ to correspond directly with $\mathcal{A}$, the principal's value must be at least as large as the gambler's. This is true of all possible realizations, so $\mathcal{R}$ must be an $\alpha$-policy for $I$. ∎
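Operationally, the policy construction in the proof can be sketched as follows (a hedged illustration with hypothetical names): the principal accepts a labeled set of outcomes exactly when its projection onto $(e,x)$ pairs is acceptable to the gambler, ignoring the agent's labels $y$.

```python
# A sketch of the Theorem 4.1 construction: `gambler_accepts` is a predicate
# over sets of outcome pairs (e, x); the induced delegation policy checks the
# projection of a labeled set (e, x, y) and disregards the y labels entirely.

def delegation_policy(gambler_accepts):
    return lambda T: gambler_accepts(frozenset((e, x) for (e, x, _) in T))

# Example: the greedy single-choice strategy "accept one outcome with value >= t".
threshold = 1.0
gambler = lambda A: len(A) <= 1 and all(x >= threshold for (_, x) in A)
policy = delegation_policy(gambler)
assert policy(frozenset({("e1", 2.0, 0.3)}))        # acceptable regardless of y
assert not policy(frozenset({("e1", 0.5, 7.0)}))    # rejected regardless of y
```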

We note that, by construction of the principal's policy $\mathcal{R}$, this theorem holds even when the principal is unaware of the agent's utility values $y$. This is comparable to the reduction in [16], which similarly works regardless of the principal's knowledge of the agent's utilities.

Unfortunately, applications of this theorem rely on the existence of competitive strategies against the almighty adversary, which is a very strong condition. It is natural to ask whether it's really necessary in the reduction for the adversary to be almighty. We provide some evidence that this is indeed necessary by considering the special case of a 1-uniform inner matroid. In this case, it's easy to construct instances for which the utilities of the principal and agent sum to a constant value for all outcomes, i.e. $X_{e}+Y_{e}=c$ for all $e$ and some constant $c$. In such an instance, the agent's goals are directly opposed to the principal's, so the agent will always propose the principal's least favorite acceptable outcome. In the corresponding instance of the prophet inequality problem, the almighty adversary can guarantee that the gambler chooses their least favorite acceptable outcome, while weaker adversaries (that don't know the realizations of the variables) cannot enforce the same guarantee.

Using some known greedy prophet inequalities against the almighty adversary, we get the following corollaries.

Corollary 4.2.

There exist deterministic $\frac{1}{2}$-policies for delegated stochastic probing problems with no outer constraint and a 1-uniform inner constraint.

Proof.

This follows from the existence of $\frac{1}{2}$ threshold rules (such as Samuel-Cahn's median rule [25]) for the 1-uniform prophet inequality against the almighty adversary. ∎

Corollary 4.3.

There exist constant-factor deterministic policies for delegated stochastic probing problems with no outer constraint and three classes of inner constraints. These factors are: $\frac{1}{4}$ for matroid constraints, $\frac{1}{2e}\approx 0.1839$ for matching constraints, and $\frac{3}{2}-\sqrt{2}\approx 0.0857$ for knapsack constraints.

Proof.

This corollary is largely based on results from [9]. By combining [9, Theorem 1.8] with [9, Observation 1.6] and optimizing the parameters, we get randomized greedy online contention resolution schemes (OCRSs) for the three aforementioned constraint systems with the factors listed above. Then, applying [9, Theorem 1.12], each randomized greedy OCRS corresponds to a randomized greedy prophet inequality against the almighty adversary with the same approximation factor. Since the adversary is almighty, they can predict any randomness in our strategy. Therefore, the randomized strategy is no better than the best deterministic strategy, and there must exist some deterministic strategy achieving the same factor. Finally, we apply our Theorem 4.1 to turn the prophet inequality strategy into a delegation policy with the same factor. ∎

Corollary 4.4.

There exist constant-factor deterministic policies for delegated stochastic probing problems with no outer constraint and an inner constraint that is the intersection of a constant number of matroid, knapsack, and matching constraints.

Proof.

We use [9, Corollary 1.13] along with the same reasoning as Corollary 4.3. ∎

We note that it is open whether there exists a $\frac{1}{2}$-OCRS for matroids against the almighty adversary [21]. The existence of such an OCRS, if greedy, would imply the existence of a $\frac{1}{2}$-policy for delegated stochastic probing with a matroid inner constraint and no outer constraint.

Although Corollary 4.2 applies to a model very similar to the distributional delegation model from [16], our principal has the additional power of being able to distinguish between otherwise identical elements by their names alone. However, by observing that Theorem 4.1 turns greedy prophet inequalities that don't distinguish between identical elements into delegation policies that also don't distinguish between identical elements, we can derive delegation policies that recover the $\frac{1}{2}$-factor guarantee from [16] for their distributional model. We leave the details for Section A.1.

4.2 Outer Constraint Delegation

Using the adaptivity gap from Section 2.3, we will now show that there are large classes of delegated stochastic probing problems with nontrivial outer constraints for which the principal can achieve, in expectation, a constant-factor of their non-delegated optimal utility.

Theorem 4.5.

Let $I=(E,\mathcal{M}_{\text{out}},\mathcal{M}_{\text{in}})$ be an instance of delegated stochastic probing. Suppose that, for all $F\in\mathcal{I}_{\text{out}}$, there exists a deterministic $\alpha$-policy for the restriction $I_{F}=(F,\mathcal{M}^{*}_{F},\mathcal{M}_{\text{in}}|F)$ of instance $I$ to $F$. Suppose also that the adaptivity gap for weighted rank functions of $\mathcal{M}_{\text{in}}$ on set system $\mathcal{M}_{\text{out}}$ is at least $\beta$. Then there exists a deterministic $\alpha\beta$-policy for instance $I$.

Proof.

Given any set of elements $F\subseteq E$, we can write the utility of the non-delegating principal who probes $F$ as

$$u(F)=\max_{G\subseteq F,\, G\in\mathcal{I}_{\text{in}}} x(G)$$

and the utility of the delegating principal with policy $\mathcal{R}$ who probes $F$ as

$$v_{\mathcal{R}}(F)=x\left(\operatorname*{argmax}_{G\subseteq F,\, \Omega_{G}\in\mathcal{R}} y(G)\right),$$

where $\Omega_{G}\subseteq\Omega$ is the set of outcomes from the probed elements $G$.

Notice that, for any fixed set of realizations of all random variables, $u$ is just the weighted rank function of the set system $\mathcal{M}_{\text{in}}$. Therefore, by the adaptivity gap for such a function over set system $\mathcal{M}_{\text{out}}$, there exists a fixed set $F\in\mathcal{I}_{\text{out}}$ such that

$$\mathbb{E}\, u(F) \geq \beta\, \mathbb{E}\, u(E^{*}), \qquad (2)$$

where $E^{*}\in\mathcal{I}_{\text{out}}$ is a random variable representing the optimal set of elements selected by an adaptive non-delegating principal. Notice that the expectation is also over the randomness of $E^{*}$.

Now we will consider the same delegation instance with access to only the elements in $F$, i.e. instance $(F,\mathcal{M}_{\text{out}}|F,\mathcal{M}_{\text{in}}|F)$. Since $F\in\mathcal{I}_{\text{out}}$, the outer constraint doesn't restrict probing at all and this instance is equivalent to $I_{F}=(F,\mathcal{M}_{F}^{*},\mathcal{M}_{\text{in}}|F)$. By our assumption, this problem has some $\alpha$-approximate delegation policy. Let $\mathcal{R}$ be one such policy. Then we have

$$\mathbb{E}\, v_{\mathcal{R}}(F) \geq \alpha\, \mathbb{E}\, u(F). \qquad (3)$$

Since $\mathcal{R}$ contains outcomes only from elements in $F$, an agent restricted to $\mathcal{R}$ in the original instance $I$ has no incentive to probe elements outside of $F$. Because $F\in\mathcal{I}_{\text{out}}$, the agent can probe all of $F$. Therefore, we can assume without loss of generality that an optimal adaptive strategy will choose to probe exactly the elements in $F$. Then

$$\mathbb{E}\, v_{\mathcal{R}}(E^{*}_{\mathcal{R}}) = \mathbb{E}\, v_{\mathcal{R}}(F), \qquad (4)$$

where $E^{*}_{\mathcal{R}}\subseteq E$ is a random variable containing exactly the elements probed by an optimal adaptive agent when restricted to acceptable set $\mathcal{R}$ in the original instance $I$.

Combining (2), (3), and (4), we get the desired inequality:

$$\mathbb{E}\, v_{\mathcal{R}}(E^{*}_{\mathcal{R}}) = \mathbb{E}\, v_{\mathcal{R}}(F) \geq \alpha\, \mathbb{E}\, u(F) \geq \alpha\beta\, \mathbb{E}\, u(E^{*}). \qquad ∎$$

Corollary 4.6.

There exist deterministic $\frac{1}{2}\left(1-\frac{1}{e}\right)\approx 0.3160$-policies for delegated stochastic probing problems with matroid outer constraints and a 1-uniform inner constraint.

Proof.

By Corollary 4.2, there is a $\frac{1}{2}$-policy for any instance of delegated stochastic probing with a 1-uniform inner constraint and no outer constraint. Every restriction of our present instance $I$ to some independent set $F$ of the outer matroid is of this form.

From [4], we have a $1-\frac{1}{e}$ adaptivity gap for stochastic monotone submodular maximization over matroid constraints. Since the weighted rank function of any matroid is monotone submodular, the adaptivity gap of weighted rank functions of the inner 1-uniform matroid constraint on the outer matroid constraint is also $1-\frac{1}{e}$.

Therefore, the conditions of Theorem 4.5 hold with $\alpha=\frac{1}{2}$ and $\beta=1-\frac{1}{e}$, and we get the desired factor. ∎

Corollary 4.7.

There exist deterministic $\frac{1}{4}\left(1-\frac{1}{e}\right)\approx 0.1580$-policies for delegated stochastic probing problems with matroid outer and inner constraints.

Proof.

Similar to Corollary 4.6, we use the $1-\frac{1}{e}$ adaptivity gap for submodular functions over matroid constraints along with Corollary 4.3. ∎

Corollary 4.8.

There exist constant-factor deterministic policies for delegated stochastic probing with any downward-closed outer constraint and an inner constraint which is the intersection of a constant number of matroids.

Proof.

By [5, Theorem 1.2], we have constant-factor adaptivity gaps for weighted rank functions of the intersection of a constant number of matroids over “prefix-closed” constraints, which include all downward-closed constraints. By Corollary 4.4, we have constant-factor policies for delegated stochastic probing with no outer constraint and an inner constraint which is the intersection of a constant number of matroids. Combining these results with Theorem 4.5, we get the desired constant factors. ∎

5 Lottery Mechanisms

One natural generalization of the delegated stochastic probing model defined in Section 3.1 is to allow the principal to use randomized mechanisms. For example, one may consider the generalization of single-proposal mechanisms which attaches a probability $p_{R}$ to each set of outcomes $R\in\Omega_{\text{in}}$, and accepts a proposed set $R$ with precisely that probability (and accepts the empty set otherwise). More general lotteries (i.e. with non-binary support) are also possible. It's then natural to ask whether there exist instances for which some randomized policy does better than all deterministic ones. Even further, we can ask whether there exists a randomized policy that strictly outperforms deterministic ones in the worst case. In other words, can randomization give us a strictly better delegation gap?

In this section, we will broadly discuss randomized mechanisms and then consider the special case of delegation with 1-uniform inner constraints and no outer constraints. In this special case, there exist instances for which randomization significantly helps the principal, and there are worst-case instances in which the delegation gap of $\frac{1}{2}$ is tight for randomized mechanisms as well as deterministic ones. Before getting to these results, we will discuss methods of randomization and then formalize what we mean by a randomized mechanism.

There are two obvious ways that the single-proposal mechanism can be randomized. The first is for the principal to sample a deterministic policy $\mathcal{R}$ from some distribution and then run the single-proposal mechanism defined by $\mathcal{R}$. However, noticing that our model of delegation is a Stackelberg game, we can conclude that there always exists a pure optimal strategy for the principal, so this type of randomization doesn't give the principal any more power.

The second type of randomness is for the policy itself to be a set of acceptable distributions over sets of outcomes (i.e. a menu of lotteries), from which the agent may propose one. The principal then samples a set of outcomes from the proposed lottery. This expands the space of mechanisms, conceivably affording the principal more power in influencing the agent’s behavior. We will focus on these randomized mechanisms for the rest of this section.

Definition 5.1.

A lottery mechanism is a randomized mechanism for delegated stochastic probing consisting of a set $\mathcal{R}$ of distributions, or lotteries, each with support in $\Omega_{\text{in}}$. After the set of acceptable lotteries $\mathcal{R}$ is selected and announced to the agent, the delegated stochastic probing mechanism proceeds largely the same. The agent probes outcomes $S$ and proposes one of the lotteries $L\in\mathcal{R}$. The principal then samples a set of outcomes $T\sim L$ from that lottery. If $T\subseteq S$, then the principal and agent receive $x(T)$ and $y(T)$ utility, respectively. Otherwise, they both receive 0 utility.

We note that this sort of mechanism is a generalized single-proposal mechanism in the sense of Section 3.2: Each lottery represents a solution and an agent’s expected utility for a lottery represents their utility for that solution. Therefore, Lemma 3.7 applies to lottery mechanisms as well.

5.1 Power of Lotteries

The increased power of lottery mechanisms means that for some instances of delegated stochastic probing there exist lottery policies that provide the principal with a better expected utility than the best deterministic policies. In fact, we will show that there are instances for which some lottery policies nearly achieve the principal’s non-delegated expected utility, while the best deterministic policies achieve only about half of this value.

First, we will make the observation that it never benefits the principal to declare two lotteries in $\mathcal{R}$ with identical support but different distributions. This is because the principal knows the utility function of the agent and can predict which lottery the agent will prefer. Therefore, we can assume that for any given support, the principal will declare at most one lottery.

Proposition 5.1.

For all $0<\epsilon<1$, there exists an instance of delegated stochastic probing for which the best lottery mechanisms achieve $\frac{2-3\epsilon+2\epsilon^{2}}{2-\epsilon}$ of the principal's non-delegated expected utility, while the best deterministic mechanisms achieve $\frac{1}{2-\epsilon}$ of the principal's non-delegated expected utility. As $\epsilon$ approaches 0, the former approaches 1 while the latter approaches $\frac{1}{2}$.

Proof.

Consider an instance with elements $E=\{1,2\}$, a 1-uniform matroid inner constraint, no outer constraint, and distributions for elements 1 and 2 as detailed in Table 1.

Outcome | Element $e$ | Utility $x$ | Utility $y$ | Probability $\mathbb{P}_{e}[(x,y)]$
$\omega_{0}$ | $1$ | $0$ | $0$ | $1-\epsilon$
$\omega_{1}$ | $1$ | $1/\epsilon$ | $1-\epsilon$ | $\epsilon$
$\omega_{2}$ | $2$ | $1$ | $1$ | $1$

Table 1: Each row represents a single outcome and contains its name, element $e$, utilities $x$ and $y$, and the probability that it is probed from element $e$.

Since there are no outer constraints, we assume that both elements are probed. The non-delegating principal can accept $\omega_{1}$ when it appears and accept $\omega_{2}$ otherwise. This gives them an expected utility of $\epsilon\cdot(1/\epsilon)+(1-\epsilon)\cdot 1=2-\epsilon$. By enumerating all deterministic policies, we can confirm that the best such policy gives the delegating principal an expected utility of 1. Therefore, the best deterministic policy achieves $\frac{1}{2-\epsilon}$ of the principal's non-delegated utility.

Now consider a lottery policy with lotteries $A$ and $B$ such that $\mathbb{P}_{A}[\omega_{1}]=1$, $\mathbb{P}_{B}[\omega_{2}]=1-2\epsilon$, and $\mathbb{P}_{B}[\omega_{0}]=2\epsilon$. We can quickly verify that this gives the delegating principal an expected utility of $2-3\epsilon+2\epsilon^{2}$. Therefore, at least one lottery policy achieves $\frac{2-3\epsilon+2\epsilon^{2}}{2-\epsilon}$ of the principal's non-delegated utility. ∎
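As a sanity check (our own verification sketch, not part of the paper), the claimed utilities can be reproduced by enumerating the agent's best response for a fixed $\epsilon$:

```python
# Verification of the Proposition 5.1 instance (Table 1) for one fixed epsilon.
eps = 0.1

# Lottery A proposes omega_1 with certainty; lottery B mixes omega_2 and omega_0
# with probabilities 1 - 2*eps and 2*eps.  The agent probes both elements and
# proposes the lottery maximizing his expected y-utility; a sampled outcome that
# was not probed yields 0 for both parties.
#
# Case 1 (probability eps): element 1 realizes omega_1 = (1/eps, 1 - eps).
#   agent's value for A: 1 - eps;  for B: (1 - 2*eps) * 1   -> A wins
# Case 2 (probability 1 - eps): element 1 realizes omega_0 = (0, 0).
#   agent's value for A: 0 (omega_1 not probed);  for B: 1 - 2*eps -> B wins
principal_lottery = eps * (1 / eps) + (1 - eps) * ((1 - 2 * eps) * 1 + 2 * eps * 0)
nondelegated = eps * (1 / eps) + (1 - eps) * 1   # accept omega_1 if present, else omega_2

assert abs(principal_lottery - (2 - 3 * eps + 2 * eps**2)) < 1e-9
assert abs(nondelegated - (2 - eps)) < 1e-9
print(principal_lottery / nondelegated)          # (2 - 3*eps + 2*eps^2) / (2 - eps)
```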

Unfortunately, there is good reason not to be too optimistic about the increased power of lottery mechanisms. As we will now show, there also exist instances for which the best lottery policies and the best deterministic policies all achieve approximately half of the principal's non-delegated expected utility. Since Corollary 4.2 gives us a deterministic $\frac{1}{2}$-policy, this tells us that, in the worst case, the factor $\frac{1}{2}$ is tight even for lottery policies in the special case of no outer constraint and a 1-uniform inner constraint.

Proposition 5.2.

For all $0<\epsilon<1$, there exists an instance of delegated stochastic probing with a 1-uniform inner constraint and no outer constraint for which the best lottery policies and the best deterministic policies all achieve $\frac{1}{2-\epsilon}$ of the principal's non-delegated expected utility. As $\epsilon$ approaches 0, this approaches $\frac{1}{2}$.

Proof.

Consider an instance with elements $E=\{1,2\}$, a 1-uniform matroid inner constraint, no outer constraint, and distributions for elements 1 and 2 as detailed in Table 2.

Outcome | Element $e$ | Utility $x$ | Utility $y$ | Probability $\mathbb{P}_{e}[(x,y)]$
$\omega_{0}$ | $1$ | $0$ | $0$ | $1-\epsilon$
$\omega_{1}$ | $1$ | $1/\epsilon$ | $0$ | $\epsilon$
$\omega_{2}$ | $2$ | $1$ | $1$ | $1$

Table 2: Each row represents a single outcome and contains its name, element $e$, utilities $x$ and $y$, and the probability that it is probed from element $e$. Notice that this instance is identical to the one from Table 1 except for the agent's utility for outcome $\omega_{1}$.

In the case of ties, we assume that the agent prefers to break ties first in the principal’s favor and then arbitrarily among any remaining ties. This assumption serves only to simplify the proof, and can be avoided with careful modifications to the utility table.

The non-delegating principal can accept $\omega_{1}=(1,1/\epsilon,0)$ when it appears and accept $\omega_{2}=(2,1,1)$ otherwise. This gives them an expected utility of $\epsilon\cdot(1/\epsilon)+(1-\epsilon)\cdot 1=2-\epsilon$. By enumerating all deterministic policies, we can confirm that the best such policy gives the delegating principal an expected utility of 1. Therefore, the best deterministic policy achieves $\frac{1}{2-\epsilon}$ of the principal's non-delegated utility.

Finding the best menu of lotteries takes slightly more work. Since the inner constraint is 1-uniform, each lottery is supported on singletons as well as the empty set. Recall also that we can restrict attention to menus where no two lotteries have the same support. We claim that we can further restrict attention to menus with exactly two lotteries $A$ and $B$, with $A$ supported on $\{\omega_{0},\omega_{2}\}$ and $B$ supported on $\{\omega_{1},\omega_{2}\}$. To see this, observe that:

  1. Shifting all probability mass from the empty set to $\omega_{0}$ or $\omega_{1}$ in any lottery does not affect the agent's utility and can only increase the principal's utility. In the case of tie-breaking, the principal's favorite remaining lottery is no worse than before.

  2. If there is a lottery with both $\omega_{0}$ and $\omega_{1}$ in its support, shifting all probability mass from one of these outcomes to the other does not affect the agent's utility, and in at least one direction this shift of probability mass will make the policy no worse for the principal. Again, in the case of tie-breaking, the principal's favorite remaining lottery is no worse than before.

  3. A menu without lottery $A$ is no better than the same menu with lottery $A$ for which all probability mass of $A$ is assigned to $\omega_{0}$. Similarly, a menu without lottery $B$ is no better than the same menu with lottery $B$ for which all probability mass of $B$ is assigned to $\omega_{1}$.

Parametrizing the probability of each outcome, we get $\mathbb{P}_{A}[\omega_{2}]=a$, $\mathbb{P}_{A}[\omega_{0}]=1-a$, $\mathbb{P}_{B}[\omega_{2}]=b$, and $\mathbb{P}_{B}[\omega_{1}]=1-b$ for some $a,b\in[0,1]$. No matter what the agent probes ($\{\omega_{0},\omega_{2}\}$ or $\{\omega_{1},\omega_{2}\}$), their favorite lottery is $B$ if $b\geq a$ and $A$ otherwise. If we choose $b\geq a$, the delegating principal gets expected utility $\epsilon(b+(1-b)/\epsilon)+(1-\epsilon)\cdot b=1$. Otherwise, the principal gets $\epsilon\cdot a+(1-\epsilon)\cdot a=a$, which can be made as large as $1$ by taking $a=1$. Therefore, the best lottery policy achieves $\frac{1}{2-\epsilon}$ of the principal’s non-delegated utility. ∎
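
As a further sanity check (again our own sketch, with $\epsilon=0.1$), the following code sweeps the $(a,b)$ parametrization above and confirms that no menu of this form gives the delegating principal expected utility above $1$:

```python
eps = 0.1
grid = [i / 100 for i in range(101)]

def delegated_value(a, b):
    """Principal's expected utility for the menu {A, B} with
    P_A[w2] = a, P_A[w0] = 1 - a, P_B[w2] = b, P_B[w1] = 1 - b."""
    # Only w2 carries agent utility, so the agent's expected utility is a
    # from A and b from B; ties (b == a) are assumed broken toward B.
    if b >= a:
        # With probability eps the probed set is {w1, w2}, and B pays
        # b * 1 + (1 - b) / eps; otherwise w1 is unprobed, so drawing it
        # selects nothing and only the w2 draw pays off.
        return eps * (b + (1.0 - b) / eps) + (1.0 - eps) * b
    # Lottery A: w2 pays 1, and w0 pays 0 whether or not it was probed.
    return a

best = max(delegated_value(a, b) for a in grid for b in grid)
print(best)  # ~1.0 (up to floating point), a 1/(2 - eps) fraction of 2 - eps
```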

6 Open Questions

Because this paper combines delegation with stochastic probing in a novel way, we believe it ultimately opens up many more questions than it answers. In this section, we make some of these questions explicit.

  • While we focused on the existence of constant-factor delegation policies regardless of their computational complexity, applying these solutions to practical problems requires some guarantee that they can be easily computed and represented. Are there delegated stochastic probing problems for which constant-factor policies are NP-hard to compute in general? Are there special cases for which constant-factor policies can always be computed in polynomial time?

  • In Section 5, we showed that the constant given in Corollary 4.2 is tight. Are the other factors given in Section 4.1 tight? We note that this is related to an open question of [21] about $\frac{1}{2}$ prophet inequalities on matroids against the almighty adversary.

  • Are the constant factors given in Section 4.2 tight? Due to the broad applicability of adaptivity gaps, our method is unlikely to take advantage of special structure that may be present in delegated stochastic probing problems. It therefore seems probable that better constants exist, but we do not venture a specific conjecture.

  • Our model assumes that probing is always zero-cost, so it doesn’t generalize the binary model from [16] or the box problem of [27]. It’s natural to ask whether we can get constant-factor delegation gaps with probing costs in addition to (or as a replacement for) outer constraints.

  • Our model doesn’t allow the principal to incentivize the agent with transfers (such as payments), so it’s natural to ask how such an extension to the model could improve the principal’s worst-case guarantees.

  • If the principal is delegating to multiple agents simultaneously, can they get better worst-case guarantees than delegating to a single agent? We note that there are many ways to define this formally. For example, a stronger principal may be able to define different acceptable sets for each agent whereas a weaker principal may be forced to declare one acceptable set for all agents.

  • It’s not hard to imagine practical applications of stochastic probing for which elements are not independently distributed. Can we get competitive guarantees even in the absence of the independence assumption?

References

  • Adamczyk and Włodarczyk [2018] M. Adamczyk and M. Włodarczyk. Random order contention resolution schemes. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 790–801. IEEE, 2018.
  • Alonso and Matouschek [2008] R. Alonso and N. Matouschek. Optimal delegation. The Review of Economic Studies, 75(1):259–293, 2008.
  • Armstrong and Vickers [2010] M. Armstrong and J. Vickers. A model of delegated project choice. Econometrica, 78(1):213–244, 2010.
  • Asadpour and Nazerzadeh [2016] A. Asadpour and H. Nazerzadeh. Maximizing stochastic monotone submodular functions. Management Science, 62(8):2374–2391, 2016.
  • Bradac et al. [2019] D. Bradac, S. Singla, and G. Zuzic. (near) optimal adaptivity gaps for stochastic multi-value probing. arXiv preprint arXiv:1902.01461, 2019.
  • Chekuri et al. [2014] C. Chekuri, J. Vondrák, and R. Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing, 43(6):1831–1879, 2014.
  • Chen et al. [2009] N. Chen, N. Immorlica, A. R. Karlin, M. Mahdian, and A. Rudra. Approximating matches made in heaven. In International Colloquium on Automata, Languages, and Programming, pages 266–278. Springer, 2009.
  • Dütting et al. [2017] P. Dütting, M. Feldman, T. Kesselheim, and B. Lucier. Prophet inequalities made easy: Stochastic optimization by pricing non-stochastic inputs. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 540–551. IEEE, 2017.
  • Feldman et al. [2016] M. Feldman, O. Svensson, and R. Zenklusen. Online contention resolution schemes. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1014–1033. Society for Industrial and Applied Mathematics, 2016.
  • Gupta and Nagarajan [2013] A. Gupta and V. Nagarajan. A stochastic probing problem with applications. In International Conference on Integer Programming and Combinatorial Optimization, pages 205–216. Springer, 2013.
  • Gupta et al. [2016] A. Gupta, V. Nagarajan, and S. Singla. Algorithms and adaptivity gaps for stochastic probing. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1731–1747. SIAM, 2016.
  • Hajiaghayi et al. [2007] M. T. Hajiaghayi, R. Kleinberg, and T. Sandholm. Automated online mechanism design and prophet inequalities. In AAAI, volume 7, pages 58–65, 2007.
  • Holmstrom [1980] B. Holmstrom. On the theory of delegation. Technical report, Discussion Paper, 1980.
  • Holmstrom [1978] B. R. Holmstrom. On Incentives and Control in Organizations. PhD thesis, Stanford University, 1978.
  • Khodabakhsh et al. [2020] A. Khodabakhsh, Y. Liu, E. Pountourakis, S. Taggart, and Y. Zhang. Threshold policies for delegation. working paper, 2020.
  • Kleinberg and Kleinberg [2018] J. Kleinberg and R. Kleinberg. Delegated search approximates efficient search. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 287–302, 2018.
  • Kleinberg and Weinberg [2012] R. Kleinberg and S. M. Weinberg. Matroid prophet inequalities. In Proceedings of the forty-fourth annual ACM Symposium on Theory of Computing, pages 123–136. ACM, 2012.
  • Kováč and Mylovanov [2009] E. Kováč and T. Mylovanov. Stochastic mechanisms in settings without monetary transfers: The regular case. Journal of Economic Theory, 144(4):1373–1395, 2009.
  • Krengel and Sucheston [1977] U. Krengel and L. Sucheston. Semiamarts and finite values. Bulletin of the American Mathematical Society, 83(4):745–747, 1977.
  • Krengel and Sucheston [1978] U. Krengel and L. Sucheston. On semiamarts, amarts, and processes with finite value. Probability on Banach spaces, 4:197–266, 1978.
  • Lee and Singla [2018] E. Lee and S. Singla. Optimal online contention resolution schemes via ex-ante prophet inequalities. In 26th Annual European Symposium on Algorithms (ESA 2018). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2018.
  • Lucier [2017] B. Lucier. An economic view of prophet inequalities. ACM SIGecom Exchanges, 16(1):24–47, 2017.
  • Melumad and Shibano [1991] N. D. Melumad and T. Shibano. Communication in settings with no transfers. The RAND Journal of Economics, pages 173–198, 1991.
  • Roughgarden [2016] T. Roughgarden. Twenty lectures on algorithmic game theory. Cambridge University Press, 2016.
  • Samuel-Cahn [1984] E. Samuel-Cahn. Comparison of threshold stop rules and maximum for independent nonnegative random variables. The Annals of Probability, 12(4):1213–1216, 1984.
  • Singla [2018] S. Singla. Combinatorial Optimization Under Uncertainty: Probing and Stopping-Time Algorithms. PhD thesis, Carnegie Mellon University, 2018.
  • Weitzman [1979] M. L. Weitzman. Optimal search for the best alternative. Econometrica: Journal of the Econometric Society, pages 641–654, 1979.

Appendix A

A.1 Symmetric Delegation Policies

While our model is not a direct generalization of the distributional model used by Kleinberg and Kleinberg, we can obtain a generalization by considering delegated stochastic probing with a restricted class of policies, which we call symmetric policies. Given this variant, we can recover the $\frac{1}{2}$ factor that they obtained. First, we need to define some notation and terminology.

Given any object $X$ (such as a set, tuple, or recursive combination of the two) containing atomic elements $E$, we can consider the operation of taking two elements $e_{1},e_{2}\in E$ and swapping all instances of $e_{1}$ and $e_{2}$ in $X$. More generally, for any permutation $\pi$ of elements in $E$, we can consider rewriting all elements $e$ to $\pi(e)$ simultaneously. We will denote the object obtained from this operation as $X[E\to\pi(E)]$.

Definition A.1.

Fix an instance of delegated stochastic probing with elements $E$, outer constraint $\mathcal{M}_{\text{out}}$, and inner constraint $\mathcal{M}_{\text{in}}$. We say that a subset of elements $F\subseteq E$ is symmetric if $\mu_{e}=\mu_{f}$ for all $e,f\in F$ and, for all permutations $\pi$ on $F$, we have $\mathcal{M}_{\text{in}}[F\to\pi(F)]=\mathcal{M}_{\text{in}}$ and $\mathcal{M}_{\text{out}}[F\to\pi(F)]=\mathcal{M}_{\text{out}}$.

Definition A.2.

Fix an instance of delegated stochastic probing with elements $E$, outer constraint $\mathcal{M}_{\text{out}}$, and inner constraint $\mathcal{M}_{\text{in}}$. We say that a policy $\mathcal{R}$ is symmetric if $\mathcal{R}[F\to\pi(F)]=\mathcal{R}$ for all symmetric sets of elements $F\subseteq E$ and all permutations $\pi$ on $F$.

Intuitively, symmetric elements are ones which are identical in everything except name. Then symmetric policies are ones that don’t distinguish between symmetric elements. Using this intuition, we will now consider the problem of delegated stochastic probing with $k$ identically distributed elements $E$, a 1-uniform inner constraint, and no outer constraint. Given any such instance, it’s easy to see that all elements of $E$ are symmetric. Notice the similarity between such an instance and the distributional model. The only difference is that our principal has the power to distinguish between outcomes sampled from different elements. However, if the principal is restricted to symmetric policies, then their policy cannot distinguish between different elements, so it must characterize acceptable outcomes based only on their $(x,y)$ utility. This is equivalent to the distributional model.
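
To make these definitions concrete, here is a minimal sketch (our own illustration, using an ad hoc representation rather than notation from the paper) that encodes a deterministic policy for a 1-uniform inner constraint as a set of acceptable (element, x, y) outcomes and checks the relabeling invariance of Definition A.2. Only the policy that depends on the $(x,y)$ utilities alone, ignoring element identity, passes the check:

```python
from itertools import permutations

def relabel(policy, mapping):
    """Apply an element relabeling e -> mapping[e] to every acceptable outcome."""
    return {(mapping[e], x, y) for (e, x, y) in policy}

def is_symmetric(policy, elements):
    """Check invariance under every permutation of the (symmetric) elements."""
    return all(relabel(policy, dict(zip(elements, perm))) == policy
               for perm in permutations(elements))

elements = (1, 2, 3)
by_element = {(1, 5.0, 1.0)}                      # accepts (5, 1) only from element 1
by_utility = {(e, 5.0, 1.0) for e in elements}    # accepts (5, 1) from any element

print(is_symmetric(by_element, elements))   # False
print(is_symmetric(by_utility, elements))   # True
```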

There are also natural definitions of symmetric elements and strategies in the prophet inequality problem.

Definition A.3.

Fix an instance of the prophet inequality problem with elements $E$ and feasibility constraint $\mathcal{M}$. We say that a subset of elements $F\subseteq E$ is symmetric if $X_{e}$ and $X_{f}$ are identically distributed for all $e,f\in F$ and, for all permutations $\pi$ on $F$, we have $\mathcal{M}[F\to\pi(F)]=\mathcal{M}$.

Definition A.4.

Fix an instance of the prophet inequality problem with elements $E$ and feasibility constraint $\mathcal{M}$. We say that a strategy $\mathcal{A}$ is symmetric if $\mathcal{A}[F\to\pi(F)]=\mathcal{A}$ for all symmetric sets of elements $F\subseteq E$ and all permutations $\pi$ on $F$.

Given these definitions, we will show that Theorem 4.1 actually transforms symmetric greedy prophet inequalities against the almighty adversary into symmetric delegation policies. This is stated formally in Proposition A.1.

Proposition A.1.

Given an instance $I=(E,\mathcal{M}^{*},\mathcal{M}_{\text{in}})$ of delegated stochastic probing without outer constraints, let $J$ be an instance of the prophet inequality problem with random variables $X_{e}$ for all $e\in E$ and constraint $\mathcal{M}_{\text{in}}$. If there exists a symmetric $\alpha$-factor greedy strategy for $J$ against the almighty adversary, then there exists a symmetric deterministic $\alpha$-policy for $I$. Furthermore, the proof is constructive when given the strategy for $J$.

Proof.

The proof is identical to the proof of Theorem 4.1, but we observe that the greedy strategy $\mathcal{A}$ for prophet inequality problem $J$ is symmetric, so the policy $\mathcal{R}$ derived from $\mathcal{A}$ must also be symmetric by construction. ∎

Since the $\frac{1}{2}$ prophet inequality used in Corollary 4.2 is a threshold policy, it must be symmetric. Therefore, we have a symmetric $\frac{1}{2}$-policy for delegated stochastic probing problems with no outer constraint and a 1-uniform inner constraint. This recovers a $\frac{1}{2}$-factor for the distributional model of [16], as well as for the slight generalization of this model with multiple distributions and a separate cardinality constraint for each one.