
Improved Approximation Ratios of Fixed-Price Mechanisms in Bilateral Trades

Zhengyang Liu (Beijing Institute of Technology), Zeyu Ren (Renmin University of China), Zihe Wang (Renmin University of China)
Abstract

We continue the study of the performance of fixed-price mechanisms in the bilateral trade problem, and improve the approximation ratios of welfare-optimal mechanisms in several settings. Specifically, in the case where only the buyer distribution is known, we prove that there exists a distribution over different fixed-price mechanisms whose approximation ratio lies within the interval [0.71, 0.7381]. Furthermore, we show that the same approximation ratio holds for the optimal fixed-price mechanism when both the buyer and the seller distributions are known. As a result, the previously best-known (1-1/e+0.0001)-approximation is improved to 0.71. Additionally, we examine randomized fixed-price mechanisms when we receive just one single sample from the seller distribution, for both symmetric and asymmetric settings. Our findings reveal that posting the single sample as the price remains optimal among all randomized fixed-price mechanisms.

1 Introduction

In this paper, we study the bilateral trade problem in its basic model: one buyer and one seller want to trade a single indivisible good. Both of them have private valuations for the good, drawn from public distributions. The mechanism designer can access both value distributions, but not the realized values. Depending on whether the value distributions of the two agents are identical, we have the symmetric setting and the asymmetric (or general) setting. The designer aims to maximize the social welfare, i.e., the value of the agent holding the good after the trade (if any).

Formally, we denote by B the value of the buyer and by S the value of the seller. The ideal solution is the so-called “first-best” mechanism, in which the trade happens whenever B > S and the resulting social welfare is \max\{B, S\}. The celebrated result of [MS83] states that no individually rational (IR) and Bayesian incentive-compatible (BIC) mechanism with the budget-balanced (BB) constraint can achieve the first-best optimum. In the same paper, Myerson and Satterthwaite also present a “second-best” mechanism that is optimal among all IR, BIC and BB mechanisms. However, a drawback of the second-best mechanism is that its characterization is complicated and thus not suitable for practical implementation. Hence we pursue simple and almost-optimal mechanisms.

Fixed-price mechanisms are the canonical choice in the literature [HR87, McA08, CBGK+17, BCWZ17]. In a fixed-price mechanism, the mechanism designer picks a price p (possibly depending on the distributions of the two agents) and offers it to both the buyer and the seller. In the randomized case, the price can be drawn from some distribution. If the price lies between the values of the two agents (i.e., S \leq p \leq B), the trade happens. It is easy to see that fixed-price mechanisms are dominant-strategy incentive-compatible (DSIC). A recent line of research proves near-optimal guarantees for such mechanisms. Our paper improves the approximation ratios for this problem in various settings.

1.1 Related Work

Forty years ago, Myerson and Satterthwaite [MS83] initiated the study of optimal incentive-compatible mechanisms in bilateral trades. In [HR87], Hagerty and Rogerson showed that fixed-price mechanisms are essentially the unique strongly budget-balanced and dominant-strategy truthful mechanisms in bilateral trades. By relaxing the budget-balance condition to a no-deficit condition, fixed-price mechanisms can be optimal due to [DK15, SZ16].

Recently, a line of research has pursued near-optimal guarantees of fixed-price mechanisms in bilateral trades w.r.t. the welfare. Blumrosen and Dobzinski [BD14] considered the Median Mechanism, which sets the price equal to the median of the seller distribution, and showed that its approximation ratio is 0.5. Blumrosen and Dobzinski [BD21] also showed that given the distribution of either the seller or the buyer, there is a price distribution such that the randomized fixed-price mechanism in expectation achieves at least 1-1/e \approx 0.63 of the optimal efficiency. It implies that for every pair of seller and buyer distributions, there exists a deterministic fixed-price mechanism that achieves at least 1-1/e \approx 0.63 of the optimal efficiency. Recently, Kang, Pernice and Vondrák [KPV22] showed that in the general setting, the approximation ratio can be greater than 1-1/e+0.0001, beating the previous work. Their method chooses the better of two mechanisms depending on whether the asymmetry between the two agents’ distributions is severe or not. In the same paper, they also provided the tight ratio (2+\sqrt{2})/4 for the symmetric setting. On the other side, [CBKLT16] gave an example of seller and buyer distributions where no fixed-price mechanism can achieve 0.7485 of the optimal efficiency. This impossibility result was improved to 0.7385 by [KV18].

Another setting captures the case where we can only obtain the minimum-possible amount of prior information about the distributions. For the general setting, [DFL+21] proved a 1/2-approximation by posting a single sample from the seller distribution as the price. For the symmetric setting, Kang, Pernice and Vondrák [KPV22] showed that a 3/4-approximation to the welfare can be obtained.

Another notion for characterizing the performance of mechanisms in bilateral trades is the gain from trade (GFT), i.e., the expected increase in social welfare attained from trading. Very recently, [DMSW22] and [Fei22] proved that a combination of simple mechanisms (RandOff) achieves a constant approximation of the first-best, and the state-of-the-art ratio is 1/3.15. Obtaining the exact ratio is a well-known open problem. On the hardness side, [BDK21] and [CGMZ21] independently showed that there exist instances on which RandOff cannot achieve half of the optimal GFT. Besides, [LLR89] and [BM16] proved that no BIC, IR mechanism can obtain a better approximation than 2/e to the optimal GFT.

Independent Work.

Very recently, Cai and Wu [CW23] found that an infinite-dimensional quadratically constrained quadratic program (QCQP) characterizes the worst-case instance, and the approximation ratio is determined by such an instance. They used a finite program to obtain a lower bound of 0.72 and an upper bound of 0.7381 numerically. It is worth noting that solving the general case of QCQP is an NP-hard problem. Compared with their method, our algorithm is an FPTAS. Thus, to approach the tight bound numerically, our method may be more efficient.

1.2 Our Results

In this paper, we reduce the gaps mentioned in [KPV22] as shown in Table 1:

  1. For any distribution of the buyer, there is a distribution over prices such that the randomized fixed-price mechanism in expectation achieves a 0.71-approximation of the optimal efficiency. The main tools we use are interesting characterizations of this price distribution (see Section 3.1). Moreover, we characterize the extreme buyer distribution where the tight bound is achieved. We also provide an example where no randomized fixed-price mechanism can achieve a 0.7381-approximation. We show that the same approximation ratio holds for the asymmetric setting. Thus, a direct corollary is that for every pair of buyer and seller distributions, there is a fixed-price mechanism that achieves a 0.71-approximation of the optimal efficiency, improving the previous (1-1/e+0.0001)-approximation.

  2. For general bilateral trade using a single sample from the seller distribution as the sole information, we show that even randomized fixed-price mechanisms cannot achieve better than a 1/2-approximation. Together with the previous work [DFL+21], the approximation ratio 1/2 is tight in this setting. In addition, we show that any randomized fixed-price mechanism using a sample as information cannot attain an approximation better than 3/4 in the symmetric setting. Together with the previous work [KPV22], the ratio 3/4 is also tight.

Variant | Approximation
Asymmetric, full knowledge | [0.71, 0.7381]
Asymmetric, buyer’s knowledge | [0.71, 0.7381]
Symmetric, full knowledge | (2+\sqrt{2})/4^* [KPV22]
Asymmetric, 1 prior sample from seller | 1/2^* (also see [DFL+21])
Symmetric, 1 prior sample | 3/4^* (also see [KPV22])
Table 1: A survey of fixed-price mechanism approximations in bilateral trades w.r.t. welfare maximization. Following [KPV22], the two [0.71, 0.7381] bounds and the tight single-sample ratios for randomized mechanisms are new results of this paper; a * marks tight results. For the cases with 1 prior sample, we obtain tight results by studying randomized mechanisms.

2 Preliminaries

In the bilateral trade, a buyer and a seller trade an indivisible good. They have independent values for the good, b for the buyer and s for the seller. The specific values are unknown to the mechanism designer. We assume that their values are drawn from public distributions F_B and F_S, i.e., b \sim F_B and s \sim F_S. The expected gain from trade achieved by using a fixed price p can be written as

\mathbb{E}_{B,S}[(b-s)\cdot\mathbf{1}_{b\geq p\geq s}]
= \mathbb{E}_{B,S}[(b-p+p-s)\cdot\mathbf{1}_{b\geq p\geq s}]
= \Pr(s\leq p)\,\mathbb{E}_{B}[(b-p)\cdot\mathbf{1}_{b\geq p}] + \Pr(b\geq p)\,\mathbb{E}_{S}[(p-s)\cdot\mathbf{1}_{p\geq s}]
\geq F_S(p)\,\mathbb{E}_{B}[(b-p)\cdot\mathbf{1}_{b\geq p}] + (1-F_B(p))\,\mathbb{E}_{S}[(p-s)\cdot\mathbf{1}_{p\geq s}]
= F_S(p)\int_{p}^{\infty}(1-F_B(b))\,\mathrm{d}b + (1-F_B(p))\,\mathbb{E}_{S}[(p-s)\cdot\mathbf{1}_{p\geq s}]
= \mathbb{E}_{S}\left[\left(\int_{p}^{\infty}(1-F_B(b))\,\mathrm{d}b + (1-F_B(p))(p-s)\right)\cdot\mathbf{1}_{p\geq s}\right].

The inequality holds because we discard the contribution of the event that the buyer's value equals the price exactly; when the buyer distribution is continuous, it holds with equality. The welfare can be decomposed into the seller's value of the good plus the gain from trade. Thus, we define the expected welfare \mathrm{W}(p;F_S,F_B) := \mathbb{E}_S[s] + \mathbb{E}_{B,S}[(b-s)\cdot\mathbf{1}_{b\geq p\geq s}]. We also let \mathrm{W}_1(p;F_S,F_B) denote the expression

\mathbb{E}_{S}\left[s + \left(\int_{p}^{\infty}(1-F_B(b))\,\mathrm{d}b + (1-F_B(p))(p-s)\right)\cdot\mathbf{1}_{p\geq s}\right].

We denote by \mathrm{OPT\text{-}W}(F_S,F_B) the first-best optimal welfare, that is, \mathbb{E}[\max\{b,s\}]. We can also rewrite it as

\mathrm{OPT\text{-}W}(F_S,F_B) = \mathbb{E}_S[s] + \mathbb{E}_{B,S}[(b-s)\cdot\mathbf{1}_{b\geq s}]
= \mathbb{E}_S[s] + \mathbb{E}_S\left[\mathbb{E}_B[(b-s)\cdot\mathbf{1}_{b\geq s}]\right]
= \mathbb{E}_S\left[s + \int_{s}^{\infty}(1-F_B(b))\,\mathrm{d}b\right].

If the distributions F_S and F_B are identical, we write F for both and abbreviate \mathrm{W}(p;F_S,F_B) and \mathrm{OPT\text{-}W}(F_S,F_B) as \mathrm{W}(p;F) and \mathrm{OPT\text{-}W}(F), respectively.

In this paper, we consider the welfare approximation ratio of the optimal fixed-price mechanism,

\inf_{F_B,F_S\in\Delta_{L^1}(\mathbb{R}_+)}\ \sup_{p\in\mathbb{R}_+}\ \frac{\mathrm{W}(p;F_S,F_B)}{\mathrm{OPT\text{-}W}(F_S,F_B)},

where \Delta_{L^1}(\mathbb{R}_+) denotes the set of all probability distributions over the non-negative reals. Both \mathbb{E}_B[b] and \mathbb{E}_S[s] are assumed to be positive. We are interested in the case where \mathrm{OPT\text{-}W}(F_S,F_B) is finite, which requires both \mathbb{E}_B[b] and \mathbb{E}_S[s] to be finite. For any F_B, we can always construct a continuous function \tilde{F}_B such that \mathrm{OPT\text{-}W}(F_S,\tilde{F}_B) is arbitrarily close to \mathrm{OPT\text{-}W}(F_S,F_B) and

\sup_{p\in\mathbb{R}_+}\mathrm{W}(p;F_S,F_B) = \sup_{p\in\mathbb{R}_+}\mathrm{W}(p;F_S,\tilde{F}_B) = \sup_{p\in\mathbb{R}_+}\mathrm{W}_1(p;F_S,\tilde{F}_B).

Therefore, we have

\inf_{F_B,F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{p\in\mathbb{R}_+}\frac{\mathrm{W}(p;F_S,F_B)}{\mathrm{OPT\text{-}W}(F_S,F_B)} = \inf_{F_B,F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{p\in\mathbb{R}_+}\frac{\mathrm{W}_1(p;F_S,F_B)}{\mathrm{OPT\text{-}W}(F_S,F_B)}.
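To make these definitions concrete, the following minimal sketch (our own illustration, assuming NumPy; the uniform distributions and the price p = 0.5 are arbitrary choices, not worst-case instances) estimates \mathrm{W}(p;F_S,F_B) and \mathrm{OPT\text{-}W}(F_S,F_B) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
b = rng.uniform(0.0, 1.0, n)   # buyer values, b ~ U[0, 1]
s = rng.uniform(0.0, 1.0, n)   # seller values, s ~ U[0, 1]
p = 0.5                        # a fixed price

trade = (b >= p) & (p >= s)                     # trade happens iff s <= p <= b
welfare = s.mean() + ((b - s) * trade).mean()   # W(p; F_S, F_B)
opt = np.maximum(b, s).mean()                   # OPT-W(F_S, F_B) = E[max{b, s}]
print(welfare / opt)                            # about 0.9375 for this instance
```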

3 Welfare Approximation in the General Case

In this section, we consider the general case. We first introduce an optimization problem that facilitates determining the approximation ratios. Then we show some useful properties of this problem in Section 3.1. Using these properties, we characterize the extreme case where the tight bound is achieved in Section 3.2. Finally, we propose an algorithm to approximate the optimal solution and report numerical bounds in Section 3.3.

We note that

\sup_{p\in\mathbb{R}_+}\frac{\mathrm{W}_1(p;F_S,F_B)}{\mathrm{OPT\text{-}W}(F_S,F_B)} = \sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\frac{\mathbb{E}_{p}[\mathrm{W}_1(p;F_S,F_B)]}{\mathrm{OPT\text{-}W}(F_S,F_B)}.

Furthermore, given a number \beta, proving that \beta is a lower bound of the approximation ratio is equivalent to proving that the following expression is always non-negative:

\inf_{F_B,F_S\in\Delta_{L^1}(\mathbb{R}_+)}\ \sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\left\{\mathbb{E}_{p}[\mathrm{W}_1(p;F_S,F_B)] - \beta\cdot\mathrm{OPT\text{-}W}(F_S,F_B)\right\}.

Using the specific form in Section 2, we have

\mathbb{E}_{p}[\mathrm{W}_1(p;F_S,F_B)] - \beta\cdot\mathrm{OPT\text{-}W}(F_S,F_B)
= \mathbb{E}_{p}\left[\mathbb{E}_{s}\left[s + \left(\int_p^{\infty}(1-F_B(b))\,\mathrm{d}b + (1-F_B(p))(p-s)\right)\cdot\mathbf{1}_{p\geq s}\right]\right] - \beta\cdot\mathbb{E}_{s}\left[s + \int_s^{\infty}(1-F_B(b))\,\mathrm{d}b\right]
= \mathbb{E}_{p,s}\left[(1-\beta)s + \left(\int_p^{\infty}(1-F_B(b))\,\mathrm{d}b + (1-F_B(p))(p-s)\right)\cdot\mathbf{1}_{p\geq s} - \beta\int_s^{\infty}(1-F_B(b))\,\mathrm{d}b\right].

For convenience, we set H_B(b) := 1-F_B(b) and G_B(b) := \int_b^{\infty}H_B(p)\,\mathrm{d}p. We have

\mathbb{E}_{p}[\mathrm{W}_1(p;F_S,F_B)] - \beta\cdot\mathrm{OPT\text{-}W}(F_S,F_B)
= \mathbb{E}_{p,s}\left[(1-\beta)s + \left(G_B(p) + H_B(p)(p-s)\right)\cdot\mathbf{1}_{p\geq s} - \beta G_B(s)\right].

We also use \Gamma(\beta,F_B,F_S,F_P) to denote

\mathbb{E}_{p,s}\left[(1-\beta)s + \left(G_B(p) + H_B(p)(p-s)\right)\cdot\mathbf{1}_{p\geq s} - \beta G_B(s)\right].

Fixing F_B and switching the order of \inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)} and \sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}, we have

\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\beta,F_B,F_S,F_P) \geq \sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\beta,F_B,F_S,F_P). (1)

The LHS and the RHS of the above inequality correspond to the following two problems:

  • Given distributions F_S and F_B, what fraction of the optimal welfare can the best fixed-price mechanism guarantee?

  • Given only the distribution F_B, what fraction of the optimal welfare can a randomized fixed-price mechanism that uses solely the buyer distribution guarantee against every seller distribution?

Based on Inequality (1), if \beta is a lower bound for the second problem, it is also a lower bound for the first one. Additionally, we prove in Section 3.2 that the approximation ratios of these two problems are in fact equal. From now on, we focus on the second problem and work with the RHS of Inequality (1). Proving that \beta is a lower bound is equivalent to showing that, for any F_B\in\Delta_{L^1}(\mathbb{R}_+), there exists a price distribution F_P such that the following inequality holds for every s\in\mathbb{R}_+:

(1-\beta)s + \mathbb{E}_{p\sim F_P}\left[\left(G_B(p) + H_B(p)(p-s)\right)\cdot\mathbf{1}_{p\geq s}\right] - \beta G_B(s) \geq 0. (2)

For a weakly increasing function Q_P:\mathbb{R}_+\rightarrow\mathbb{R}_+ with Q_P(0)\geq 0, we introduce \phi(s,H_B,Q_P) to denote the expression

(1-\beta)s + \int_s^{\infty}\left(G_B(p) + H_B(p)(p-s)\right)\mathrm{d}Q_P(p) - \beta G_B(s).

Given the buyer distribution F_B, we consider the following optimization problem:

\inf\quad \lim_{p\to\infty}Q_P(p)
\mathrm{s.t.}\quad \phi(s,H_B,Q_P)\geq 0,\ \forall s\in\mathbb{R}_+, (3)
\qquad\quad Q_P(p_1)\leq Q_P(p_2),\ \forall p_1,p_2\in\mathbb{R}_+,\ p_1<p_2.

We prove in Theorem 3.1 that an optimal solution actually exists. Let Q_P^* denote an optimal solution, i.e., one attaining the optimal objective \inf\lim_{p\to\infty}Q_P(p). To prove that \beta is a lower bound of the approximation ratio, it is equivalent to show that \lim_{p\to\infty}Q_P^*(p)\leq 1. Here is the idea. If \lim_{p\to\infty}Q_P^*(p)\leq 1, we set F_P(0) = Q_P^*(0) - \lim_{p\to\infty}Q_P^*(p) + 1 and F_P(p) = Q_P^*(p) - Q_P^*(0) + F_P(0) for every p\in\mathbb{R}_+, so that \lim_{p\to\infty}F_P(p) = 1. The function F_P is then a cumulative distribution function and satisfies Inequality (2). If \lim_{p\to\infty}Q_P^*(p) > 1, there is no way to find a feasible cumulative distribution function F_P.

3.1 The Properties of an Optimal Solution

We present some characterizations of an optimal solution Q_P^*. Recall that we want to design Q_P so as to minimize the objective while satisfying all constraints of Problem (3). Consider the coefficient of \mathrm{d}Q_P(p): letting \psi(p,s) = G_B(p) + H_B(p)(p-s), we have

\frac{\partial\psi(p,s)}{\partial p} = H_B'(p)(p-s) \leq 0.

That is, the function \psi(p,s) weakly decreases in p. We also know that \psi(p,s)\geq 0 for all p\in[s,\infty). Hence, for any p_1 < p_2 and any s, \mathrm{d}Q_P(p_1) carries a weakly larger positive weight than \mathrm{d}Q_P(p_2). Intuitively, we treat \mathrm{d}Q_P(p) as a resource, and it is better to place this resource at locations with larger weight.

We define s_2 as the unique solution of the equation (1-\beta)s_2 = \beta G_B(s_2). We now provide the form of an optimal solution directly.

Theorem 3.1.

Given F_B, an optimal solution Q_P^* satisfies:

  • if s \leq s_2, its derivative q_P^*(s) has the closed form

    \beta\left(\frac{H_B(s)}{G_B(s)} - \frac{\int_s^{s_2}H_B(t)^2\,\mathrm{d}t}{G_B(s)^2}\right) + (1-\beta)\cdot\frac{G_B(s_2)}{G_B(s)^2};

  • if s > s_2, the equation Q_P^*(s) = Q_P^*(s_2) holds.

Note that Q_P^* is continuous and differentiable on [0,s_2]. We first show that Q_P^* satisfies the constraints of Problem (3).

Proposition 3.2.

The function Q_P^* in Theorem 3.1 satisfies:

  • For any s \leq s_2, it holds that \phi(s,H_B,Q_P^*) = 0.

  • For any s > s_2, it holds that \phi(s,H_B,Q_P^*) > 0.

  • For any p_1,p_2 \in \mathbb{R}_+ with p_1 < p_2, it holds that Q_P^*(p_1) \leq Q_P^*(p_2).

Proof.

Given any s \leq s_2, we rewrite \int_s^{s_2}H_B(p)q_P^*(p)\,\mathrm{d}p as:

\int_s^{s_2} H_B(p)\left[\beta\left(\frac{H_B(p)}{G_B(p)} - \frac{\int_p^{s_2}H_B(t)^2\,\mathrm{d}t}{G_B(p)^2}\right) + (1-\beta)\cdot\frac{G_B(s_2)}{G_B(p)^2}\right]\mathrm{d}p
= \frac{1}{G_B(s)}\cdot\left[\beta\int_s^{s_2}H_B(t)^2\,\mathrm{d}t + (1-\beta)\int_s^{s_2}H_B(t)\,\mathrm{d}t\right].

Therefore, we get

G_B(s)\int_s^{s_2}H_B(p)q_P^*(p)\,\mathrm{d}p = \beta\int_s^{s_2}H_B(t)^2\,\mathrm{d}t + (1-\beta)\int_s^{s_2}H_B(t)\,\mathrm{d}t.

Next, we differentiate the above identity with respect to s and get

H_B(s)\int_s^{s_2}H_B(p)q_P^*(p)\,\mathrm{d}p + G_B(s)H_B(s)q_P^*(s) = \beta H_B(s)^2 + (1-\beta)H_B(s).

We know that G_B(s_2) = \frac{(1-\beta)s_2}{\beta} > 0, which implies H_B(s) > 0 for any s \leq s_2. Thus, we divide every term by H_B(s) and get

(1-\beta) - \int_s^{s_2}H_B(p)q_P^*(p)\,\mathrm{d}p - G_B(s)q_P^*(s) + \beta H_B(s) = 0.

Notice that Q_P^*(s) = Q_P^*(s_2) for s > s_2, which means \int_{s_2}^{\infty}H_B(p)\,\mathrm{d}Q_P^*(p) = 0. Therefore, we may plug \int_{s_2}^{\infty}H_B(p)\,\mathrm{d}Q_P^*(p) into the above equation and get

(1-\beta) - \int_s^{\infty}H_B(p)\,\mathrm{d}Q_P^*(p) - G_B(s)q_P^*(s) + \beta H_B(s) = 0.

Observe that the left-hand side above is exactly \frac{\partial}{\partial s}\phi(s,H_B,Q_P^*). Integrating over s and noting that (1-\beta)s_2 = \beta G_B(s_2) implies \phi(s_2,H_B,Q_P^*) = 0, we have

(1-\beta)s + \int_s^{\infty}\left[G_B(p) + H_B(p)(p-s)\right]\mathrm{d}Q_P^*(p) - \beta G_B(s) = 0.

This finishes the proof of the first statement, that \phi(s,H_B,Q_P^*) = 0 for any s \leq s_2.

For the second statement, we have \phi(s,H_B,Q_P^*) = (1-\beta)s - \beta G_B(s) for any s > s_2, since Q_P^*(s) = Q_P^*(s_2). Taking the derivative with respect to s, we have \phi'(s,H_B,Q_P^*) = 1-\beta+\beta H_B(s) > 0. We know that \phi(s_2,H_B,Q_P^*) = 0. Therefore, \phi(s,H_B,Q_P^*) > 0 for any s > s_2.

For the third statement, if s \leq s_2, the derivative q_P^*(s) > 0; and if s > s_2, we have Q_P^*(s) = Q_P^*(s_2). Hence the function Q_P^* is non-decreasing, and Q_P^*(p_1) \leq Q_P^*(p_2) for any p_1,p_2\in\mathbb{R}_+ with p_1 < p_2.

To summarize, the function Q_P^* satisfies all constraints of Problem (3). ∎
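As a quick numerical sanity check of Proposition 3.2 (our own illustration, assuming SciPy and an exponential buyer tail H_B(s) = e^{-s} with \beta = 0.71, which is not a worst-case instance), the sketch below evaluates \phi(s,H_B,Q_P^*) and confirms that it is numerically zero below s_2 and strictly positive above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

beta = 0.71
H = lambda s: np.exp(-s)      # illustrative buyer tail H_B(s) = e^{-s}
G = lambda s: np.exp(-s)      # then G_B(s) = integral of H_B from s to infinity = e^{-s}

# s_2 solves (1 - beta) * s_2 = beta * G_B(s_2)
s2 = brentq(lambda s: (1 - beta) * s - beta * G(s), 1e-6, 10.0)

def q(p):                     # q_P^*(p) from Theorem 3.1, defined for p <= s_2
    h2 = (np.exp(-2 * p) - np.exp(-2 * s2)) / 2.0     # integral of H_B^2 from p to s_2
    return beta * (H(p) / G(p) - h2 / G(p) ** 2) + (1 - beta) * G(s2) / G(p) ** 2

def phi(s):                   # phi(s, H_B, Q_P^*); dQ_P^* vanishes beyond s_2
    integral = quad(lambda p: (G(p) + H(p) * (p - s)) * q(p), s, s2)[0] if s < s2 else 0.0
    return (1 - beta) * s + integral - beta * G(s)

for s in [0.0, 0.3, 0.6, s2, 1.5 * s2, 2.0 * s2]:
    print(round(s, 3), round(phi(s), 6))   # ~0 for s <= s_2, positive afterwards
```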

Next, we show that for any \hat{Q}_P satisfying the constraints of Problem (3), it holds that \lim_{p\to\infty}\hat{Q}_P(p) \geq \lim_{p\to\infty}Q_P^*(p). Therefore, Q_P^* is an optimal solution.

Proof of Theorem 3.1.

From Proposition 3.2, we know that \phi(s,H_B,Q_P^*) = 0 for any s \leq s_2. Therefore, for any \hat{Q}_P satisfying the constraints of Problem (3), we have

\phi(s,H_B,Q_P^*) \leq \phi(s,H_B,\hat{Q}_P),\quad \forall s\leq s_2. (4)

Let \theta(s) := G_B(0)\cdot\left[-H_B(s)\int_0^s\frac{1}{G_B(t)^2}\,\mathrm{d}t + \frac{1}{G_B(s)}\right]. Note that \theta'(s) = -G_B(0)H_B'(s)\int_0^s\frac{1}{G_B(t)^2}\,\mathrm{d}t \geq 0. We define the function

\xi(H_B,Q_P) := \int_0^{\infty}\psi(p,0)\,\mathrm{d}Q_P(p) + \int_0^{s_2}\int_s^{\infty}\psi(p,s)\,\mathrm{d}Q_P(p)\,\mathrm{d}\theta(s).

By Inequality (4), we know that \xi(H_B,Q_P^*) \leq \xi(H_B,\hat{Q}_P). Next, we rewrite \xi(H_B,Q_P) as:

\int_0^{s_2}\theta(p)G_B(p)\,\mathrm{d}Q_P(p) + \int_0^{s_2}H_B(p)\int_0^p\theta(s)\,\mathrm{d}s\,\mathrm{d}Q_P(p)
+ \theta(s_2)\int_{s_2}^{\infty}\psi(p,s_2)\,\mathrm{d}Q_P(p) + \int_{s_2}^{\infty}H_B(p)\int_0^{s_2}\theta(s)\,\mathrm{d}s\,\mathrm{d}Q_P(p).

The function \xi(H_B,Q_P) can thus be represented as \int_0^{s_2}\tau_1(p)\,\mathrm{d}Q_P(p) + \int_{s_2}^{\infty}\tau_2(p)\,\mathrm{d}Q_P(p). Note that \int_0^p\theta(s)\,\mathrm{d}s = G_B(0)G_B(p)\int_0^p\frac{1}{G_B(t)^2}\,\mathrm{d}t, hence we have

\tau_1(p) = \theta(p)G_B(p) + H_B(p)\int_0^p\theta(s)\,\mathrm{d}s = G_B(0).

As for \tau_2(p), we have

\tau_2(p) = \theta(s_2)\psi(p,s_2) + H_B(p)\int_0^{s_2}\theta(s)\,\mathrm{d}s.

The derivative \tau_2'(p) = H_B'(p)\left[\theta(s_2)(p-s_2) + \int_0^{s_2}\theta(s)\,\mathrm{d}s\right] \leq 0 for any p \geq s_2. Thus, we have \tau_2(p) \leq \tau_2(s_2) = G_B(0).

Consequently, we have

\int_0^{\infty}G_B(0)\,\mathrm{d}\hat{Q}_P(p)
\geq \int_0^{s_2}G_B(0)\,\mathrm{d}\hat{Q}_P(p) + \int_{s_2}^{\infty}\tau_2(p)\,\mathrm{d}\hat{Q}_P(p)
\geq \int_0^{s_2}G_B(0)\,\mathrm{d}Q_P^*(p) + \int_{s_2}^{\infty}\tau_2(p)\,\mathrm{d}Q_P^*(p)
= \int_0^{\infty}G_B(0)\,\mathrm{d}Q_P^*(p).

The second step holds because \xi(H_B,Q_P^*) \leq \xi(H_B,\hat{Q}_P). The last step holds because Q_P^*(s) = Q_P^*(s_2) for any s > s_2. Finally, we conclude that \lim_{p\to\infty}\hat{Q}_P(p) \geq \lim_{p\to\infty}Q_P^*(p). ∎

3.2 The Bound of the Approximation Ratio

Recall that \beta is a lower bound if and only if \lim_{p\to\infty}Q_P^*(p) \leq 1. Using the properties of Q_P^* from Theorem 3.1, we obtain the following criterion for lower-bounding the exact approximation ratio.

Theorem 3.3.

Given \beta > 0, \beta is a lower bound of the approximation ratio if and only if for any F_B \in \Delta_{L^1}(\mathbb{R}_+), it holds that

\int_0^{s_2}\left[\beta\left(\frac{H_B(s)}{G_B(s)} - \frac{\int_s^{s_2}H_B(t)^2\,\mathrm{d}t}{G_B(s)^2}\right) + (1-\beta)\frac{G_B(s_2)}{G_B(s)^2}\right]\mathrm{d}s \leq 1.

Both G_B(s) and \int_s^{s_2}H_B(t)^2\,\mathrm{d}t can be expressed through the function H_B, so the objective above is determined by H_B alone. It is easy to see that the objective is scale free in the variable s, that is, \hat{H}_B(s) = H_B(\sigma s) for any \sigma > 0 achieves the same objective. WLOG we assume s_2 = 1 and search over all weakly decreasing functions H_B \in \Delta_{L^1}[0,1] to maximize the objective \mathrm{obj}(H_B):

\int_0^1\left[\beta\left(\frac{H_B(s)}{G_B(s)} - \frac{\int_s^1 H_B(t)^2\,\mathrm{d}t}{G_B(s)^2}\right) + (1-\beta)\frac{G_B(1)}{G_B(s)^2}\right]\mathrm{d}s,

where (1-\beta)\cdot 1 = \beta\cdot G_B(1), that is, G_B(1) = \frac{1-\beta}{\beta}.

We know that 1 \geq H_B(0) \geq H_B(1) > 0, where H_B(1) > 0 follows from G_B(1) > 0. However, we allow H_B(1) to equal zero by considering the extreme case in which the buyer distribution puts a very small probability on a very large value. Assuming that H_B^* is the optimal solution and setting z_1 := \sup\{z \mid H_B^*(z) = 1\} and z_2 := \inf\{z \mid H_B^*(z) = 0\}, we present the characterization of H_B^*.

Theorem 3.4.

The optimal function H_B^* achieving the maximum of the objective \mathrm{obj}(H_B) satisfies that, for any z \in [z_1,z_2], we have

{H_B^*}'(z) = \frac{\int_z^{z_2}H_B^*(t)^2\,\mathrm{d}t - H_B^*(z)G_B^*(z) - G_B(1)^2}{G_B^*(z)^3\int_0^z\frac{1}{G_B^*(t)^2}\,\mathrm{d}t}.
Proof.

To begin with, we relax the constraint that H_B^* is weakly decreasing on the interval [z_1,z_2] and treat the optimization problem as a variational problem. We write H_B(z) = H_B^*(z) + \epsilon\eta(z) for an arbitrary perturbed function on [z_1,z_2]. Note that the choice of H_B on [z_1,z_2] does not affect the integral \int_{z_2}^1 q_P^*(z)\,\mathrm{d}z. Thus, we define

I(H_B) = \int_0^{z_2}\left[\beta\left(\frac{H_B(z)}{G_B(z)} - \frac{\int_z^1 H_B(t)^2\,\mathrm{d}t}{G_B(z)^2}\right) + (1-\beta)\frac{G_B(1)}{G_B(z)^2}\right]\mathrm{d}z.

Considering \frac{\partial I}{\partial\epsilon}\big|_{\epsilon=0}, we have

\int_{z_1}^{z_2}\bigg[\frac{\beta}{G_B^*(z)} - \int_0^z\frac{\beta H_B^*(s)}{G_B^*(s)^2}\,\mathrm{d}s - \int_0^z\frac{2\beta H_B^*(z)}{G_B^*(s)^2}\,\mathrm{d}s
- \int_0^z\left(-\beta\int_s^{z_2}H_B^*(t)^2\,\mathrm{d}t + (1-\beta)G_B(1)\right)\cdot\frac{2}{G_B^*(s)^3}\,\mathrm{d}s\bigg]\eta(z)\,\mathrm{d}z.

By the first-order condition, \frac{\partial I}{\partial\epsilon}\big|_{\epsilon=0} = 0. Since \eta(z) can be arbitrary, the coefficient of \eta(z) must be identically zero, so we have

\frac{1}{G_B^*(z)} = \int_0^z\left[\frac{H_B^*(s)}{G_B^*(s)^2} + \frac{2H_B^*(z)}{G_B^*(s)^2} + \frac{2G_B(1)^2 - 2\int_s^{z_2}H_B^*(t)^2\,\mathrm{d}t}{G_B^*(s)^3}\right]\mathrm{d}s.

We calculate the derivative and have

{H_B^*}'(z) = \frac{\int_z^{z_2}H_B^*(t)^2\,\mathrm{d}t - H_B^*(z)G_B^*(z) - G_B(1)^2}{G_B^*(z)^3\int_0^z\frac{1}{G_B^*(t)^2}\,\mathrm{d}t}.

Next, we demonstrate that {H_B^*}'(z) < 0, i.e., the function H_B^* satisfies the decreasing constraint automatically. This implies that H_B^* is indeed the maximizer of the objective \mathrm{obj}(H_B). We prove {H_B^*}'(z) < 0 by contradiction. Suppose that z_3 = \sup\{z \mid {H_B^*}'(z) \geq 0\}; then {H_B^*}'(z) < 0 for any z \in (z_3,z_2]. For the numerator, given a sufficiently small \delta > 0, we consider

\int_{z_3}^{z_2}H_B^*(t)^2\,\mathrm{d}t - H_B^*(z_3)G_B^*(z_3) - G_B(1)^2,
\int_{z_3+\delta}^{z_2}H_B^*(t)^2\,\mathrm{d}t - H_B^*(z_3+\delta)G_B^*(z_3+\delta) - G_B(1)^2.

We know that H_B^*(z_3) = H_B^*(z_3+\delta) - \delta{H_B^*}'(z_3+\delta). Taking the difference of the two formulas above, we have

\delta{H_B^*}'(z_3+\delta)G_B^*(z_3) - \int_{z_3}^{z_3+\delta}H_B^*(t)\left(H_B^*(t) - H_B^*(z_3+\delta)\right)\mathrm{d}t
< \delta{H_B^*}'(z_3+\delta)G_B^*(z_3) - \delta^2 H_B^*(z_3){H_B^*}'(z_3+\delta)
= \delta{H_B^*}'(z_3+\delta)\left(G_B^*(z_3) - \delta H_B^*(z_3)\right).

Note that {H_B^*}'(z_3+\delta) < 0. Thus, the difference is less than 0, and we derive that {H_B^*}'(z_3) < 0, which contradicts the assumption that {H_B^*}'(z_3) \geq 0. ∎

Remark.

Although we obtain the structure of H_B^*, it is hard to derive an explicit expression for H_B^*. Therefore, we resort to a numerical method to obtain the lower bound later.

In addition, suppose that \hat{\beta} is the approximation ratio when only the buyer distribution is known. We prove that \hat{\beta} is also the approximation ratio with full knowledge of both the buyer and the seller distributions. We have the following theorem:

Theorem 3.5.

The randomized fixed-price mechanism that only uses the knowledge of F_B has the same approximation ratio as the fixed-price mechanism that uses both F_B and F_S. Additionally, for the worst pair (\hat{F}_B,\hat{F}_S), we have

\hat{F}_S(p) = G_B(1)\left[-\hat{H}_B(p)\int_0^p\frac{1}{\hat{G}_B(s)^2}\,\mathrm{d}s + \frac{1}{\hat{G}_B(p)}\right].

It means that given \hat{F}_B, the following equality holds:

\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P) = \sup_{F_P\in\Delta_{L^1}(\mathbb{R}_+)}\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P).

The proof idea is to find a seller distribution such that \mathrm{W}_1(p;F_S,\hat{F}_B) - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(F_S,\hat{F}_B) is identical for every price p \in [0,1]; this seller distribution is exactly \hat{F}_S.

Proof.

First, we show that the gain from trade \mathrm{GFT}(p;\hat{F}_S,\hat{F}_B) is identical for all p \in [0,1]. We have

\hat{F}_S(p) = \hat{G}_B(1)\left[-\hat{H}_B(p)\int_0^p\frac{1}{\hat{G}_B(s)^2}\,\mathrm{d}s + \frac{1}{\hat{G}_B(p)}\right]
= \left(\hat{G}_B(p)\int_0^p\frac{\hat{G}_B(1)}{\hat{G}_B(s)^2}\,\mathrm{d}s\right)'.

Thus, we have

\frac{\int_0^p\hat{F}_S(s)\,\mathrm{d}s}{\hat{G}_B(p)} = \int_0^p\frac{\hat{G}_B(1)}{\hat{G}_B(s)^2}\,\mathrm{d}s.

We calculate the derivative and get

\hat{F}_S(p)\hat{G}_B(p) + \hat{H}_B(p)\int_0^p\hat{F}_S(s)\,\mathrm{d}s = \hat{G}_B(1).

We rewrite \mathrm{GFT}(p;\hat{F}_S,\hat{F}_B) as:

\mathbb{E}_{B,S}[(b-s)\cdot\mathbf{1}_{b\geq p\geq s}]
= \Pr(s\leq p)\,\mathbb{E}_B[(b-p)\cdot\mathbf{1}_{b\geq p}] + \Pr(b\geq p)\,\mathbb{E}_S[(p-s)\cdot\mathbf{1}_{p\geq s}]
= \hat{F}_S(p)\int_p^{\infty}\left(1-\hat{F}_B(b)\right)\mathrm{d}b + (1-\hat{F}_B(p))\int_0^p\hat{F}_S(s)\,\mathrm{d}s
= \hat{F}_S(p)\hat{G}_B(p) + \hat{H}_B(p)\int_0^p\hat{F}_S(s)\,\mathrm{d}s.

Therefore, for the worst pair (\hat{F}_S,\hat{F}_B), the gain from trade is a constant for any p \in [0,1]. Furthermore, the following equality holds:

\mathrm{W}_1(p;\hat{F}_S,\hat{F}_B) - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(\hat{F}_S,\hat{F}_B) = c. (5)

Suppose that \hat{F}_P is the optimal price mechanism. We have

\mathbb{E}_{p\sim\hat{F}_P}[\mathrm{W}_1(p;\hat{F}_S,\hat{F}_B)] - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(\hat{F}_S,\hat{F}_B) = 0.

It indicates that c = 0. We also have

\max_{p\in[0,1]}\left\{\mathrm{W}_1(p;\hat{F}_S,\hat{F}_B) - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(\hat{F}_S,\hat{F}_B)\right\}
\geq \inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\max_{p\in[0,1]}\left\{\mathrm{W}_1(p;F_S,\hat{F}_B) - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(F_S,\hat{F}_B)\right\}
= \inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{F_P\in\Delta_{L^1}[0,1]}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P)
\geq \sup_{F_P\in\Delta_{L^1}[0,1]}\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P)
= \inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\left\{\mathbb{E}_{p\sim\hat{F}_P}[\mathrm{W}_1(p;F_S,\hat{F}_B)] - \hat{\beta}\cdot\mathrm{OPT\text{-}W}(F_S,\hat{F}_B)\right\}.

By Equation (5), the first expression equals zero. Since \hat{F}_P is the optimal price mechanism, the last expression is also zero. Consequently, we have

\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\sup_{F_P\in\Delta_{L^1}[0,1]}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P) = \sup_{F_P\in\Delta_{L^1}[0,1]}\inf_{F_S\in\Delta_{L^1}(\mathbb{R}_+)}\Gamma(\hat{\beta},\hat{F}_B,F_S,F_P). ∎

3.3 Numerical Results

Now we approach the lower bound via discretization and dynamic programming. We show that there exists a step function \hat{H}_B that approximates the objective of H_B^*, where \hat{H}_B weakly decreases over the grid points \left\{0,\frac{1}{n},\ldots,\frac{n}{n}\right\}. Then, we can search for the optimal step function \hat{H}_B with \epsilon granularity at each s \in \left\{0,\frac{1}{n},\ldots,\frac{n}{n}\right\} to maximize \mathrm{obj}(\hat{H}_B). Given \beta, we define M := \max\mathrm{obj}(\hat{H}_B). If M plus the discretization error is less than 1, the corresponding \beta is a valid lower bound. As for the search over the discrete grid, we use dynamic programming (see Algorithm 1). We first present the framework of the algorithm and discuss the concrete discretization error later. We denote by \mathrm{Sol}(x,y,z,s_k) the objective value of the following problem:

\max\quad \int_0^{s_k}\left[\beta\left(\frac{\hat{H}_B(s)}{\hat{G}_B(s)} - \frac{\int_s^1\hat{H}_B(t)^2\,\mathrm{d}t}{\hat{G}_B(s)^2}\right) + (1-\beta)\frac{G_B(1)}{\hat{G}_B(s)^2}\right]\mathrm{d}s
\mathrm{s.t.}\quad \int_{s_k}^1\hat{H}_B(t)^2\,\mathrm{d}t = x,
\qquad\quad \int_{s_k}^1\hat{H}_B(t)\,\mathrm{d}t = y,
\qquad\quad \hat{H}_B(s_k) = z.

Note that s_k ranges over the set \left\{0,\frac{1}{n},\ldots,1\right\}. The framework of the algorithm is as follows.

Input: \beta, n, \epsilon
Output: M
set s_0 = 0 and k = 0;
Initialize \mathrm{Sol}(x,y,z,s_k) as an \frac{n}{\epsilon^2}\times\frac{n}{\epsilon}\times\frac{1}{\epsilon} tensor of zeros;
while s_k < 1 do
       s_{k+1} \leftarrow s_k + \frac{1}{n};
       Initialize \mathrm{Sol}(x,y,z,s_{k+1}) as an \frac{n}{\epsilon^2}\times\frac{n}{\epsilon}\times\frac{1}{\epsilon} tensor of zeros;
       foreach cell \in \mathrm{Sol}(x,y,z,s_k) do
             for z' = 0 : z do
                   x' \leftarrow x - z'\cdot z'/n;
                   y' \leftarrow y - z'/n;
                   if x' \geq 0 and y' \geq 0 and \mathrm{Sol}(x,y,z,s_k) + \int_{s_k}^{s_{k+1}}q_P(s)\,\mathrm{d}s > \mathrm{Sol}(x',y',z',s_{k+1}) then
                         \mathrm{Sol}(x',y',z',s_{k+1}) \leftarrow \mathrm{Sol}(x,y,z,s_k) + \int_{s_k}^{s_{k+1}}q_P(s)\,\mathrm{d}s;
       k \leftarrow k + 1;
M \leftarrow \max_{cell\in \mathrm{Sol}(x,y,z,1)} cell;
Algorithm 1: Framework of dynamic programming

Intuitively, the discretization error depends on both \frac{1}{n} and \epsilon. The complexity is O\left(\frac{n^3}{\epsilon^5}\right).

To reduce the time complexity, we first use the following two inequalities to eliminate some special cases directly:

(1-s)\int_s^1 H_B(t)^2\,\mathrm{d}t \geq \left(\int_s^1 H_B(t)\,\mathrm{d}t\right)^2,
H_B(s)\int_s^1 H_B(t)\,\mathrm{d}t \geq \int_s^1 H_B(t)^2\,\mathrm{d}t.

Next, we construct a new function K(\cdot) with \frac{\epsilon}{n} granularity to approximate \int_{s_k}^1\hat{H}_B(t)^2\,\mathrm{d}t. This reduces the time complexity by a factor of \frac{1}{\epsilon}. We define the functions on grid points as follows:

\hat{H}_B\left(\frac{i}{n}\right) = \left\lfloor H_B^*\left(\frac{i}{n}\right)\cdot\frac{1}{\epsilon}\right\rfloor\cdot\epsilon;
K\left(\frac{i}{n}\right) = \sum_{j=i+1}^{n}\left\lfloor H_B^*\left(\frac{j}{n}\right)^2\cdot\frac{1}{\epsilon}\right\rfloor\cdot\frac{\epsilon}{n};
\hat{G}_B\left(\frac{i}{n}\right) = \sum_{j=i+1}^{n}\hat{H}_B\left(\frac{j}{n}\right)\cdot\frac{1}{n} + G_B(1).

For s \in \left(\frac{i}{n},\frac{i+1}{n}\right), we use the following continuous extensions:

\hat{H}_B(s) = \hat{H}_B\left(\frac{i+1}{n}\right),
K(s) = K\left(\frac{i+1}{n}\right) + \left(\frac{i+1}{n}-s\right)\cdot\hat{H}_B\left(\frac{i+1}{n}\right)^2,
\hat{G}_B(s) = \hat{G}_B\left(\frac{i}{n}\right) + \left(\frac{i+1}{n}-s\right)\cdot\hat{H}_B\left(\frac{i+1}{n}\right).

Then, we define \mathrm{obj}(\hat{H}_B,K) as:

\mathrm{obj}(\hat{H}_B,K) = \int_0^1\left[\beta\left(\frac{\hat{H}_B(s)}{\hat{G}_B(s)} - \frac{K(s)}{\hat{G}_B(s)^2}\right) + (1-\beta)\frac{G_B(1)}{\hat{G}_B(s)^2}\right]\mathrm{d}s.

We use \mathrm{obj}(\hat{H}_B,K) to approximate \mathrm{obj}(H_B^*). Next, we analyze the discretization error.

Theorem 3.6.

For the discretization error, we have \mathrm{obj}(H_B^*) - \mathrm{obj}(\hat{H}_B,K) \leq \beta\ln\frac{1-\beta}{1-\beta\left(1+\epsilon+\frac{1}{n}\right)}.

Proof.

First, we have

\mathrm{obj}(H_B^*) - \mathrm{obj}(\hat{H}_B,K)
= \underbrace{\beta\int_0^1\left(\frac{H_B^*(s)}{G_B^*(s)} - \frac{\hat{H}_B(s)}{\hat{G}_B(s)}\right)\mathrm{d}s}_{\gamma_1} + \underbrace{\beta\int_0^1\left(\frac{K(s)}{\hat{G}_B(s)^2} - \frac{\int_s^1 H_B^*(t)^2\,\mathrm{d}t}{G_B^*(s)^2}\right)\mathrm{d}s}_{\gamma_2}
+ \underbrace{(1-\beta)G_B(1)\int_0^1\left(\frac{1}{G_B^*(s)^2} - \frac{1}{\hat{G}_B(s)^2}\right)\mathrm{d}s}_{\gamma_3}.

We now upper bound the three parts \gamma_1, \gamma_2 and \gamma_3 in the above equation, respectively.

Note that H_B^* and \hat{H}_B are both decreasing functions. Therefore, we have

G_B^*(s) - \hat{G}_B(s)
\leq H_B^*(s)\cdot\left(\frac{i+1}{n}-s\right) + \sum_{j=i+1}^{n-1}H_B^*\left(\frac{j}{n}\right)\cdot\frac{1}{n} - \hat{H}_B\left(\frac{i+1}{n}\right)\cdot\left(\frac{i+1}{n}-s\right) - \sum_{j=i+2}^{n}\hat{H}_B\left(\frac{j}{n}\right)\cdot\frac{1}{n}
\leq \left(H_B^*(s) - \left(H_B^*\left(\frac{i+1}{n}\right)-\epsilon\right)\right)\cdot\left(\frac{i+1}{n}-s\right) + \sum_{j=i+1}^{n-1}\left(H_B^*\left(\frac{j}{n}\right) - \left(H_B^*\left(\frac{j}{n}\right)-\epsilon\right)\right)\cdot\frac{1}{n}
\leq \frac{1}{n} + \epsilon.

For any s \in \left(\frac{i}{n},\frac{i+1}{n}\right), we have \hat{H}_B(s) \leq H_B^*(s). That is, we always have \hat{G}_B(s) \leq G_B^*(s) and K(s) \leq \int_s^1 H_B^*(t)^2\,\mathrm{d}t. Therefore, we get \gamma_3 \leq 0 directly.

We get that \gamma_2 is upper bounded by

\beta\int_0^1\left(\int_s^1 H_B^*(t)^2\,\mathrm{d}t\right)\cdot\left(\hat{G}_B(s)^{-2} - G_B^*(s)^{-2}\right)\mathrm{d}s
\leq \beta\int_0^1\left(H_B^*(s)\int_s^1 H_B^*(t)\,\mathrm{d}t\right)\cdot\left(\left(\int_s^1 H_B^*(t)\,\mathrm{d}t + G_B(1)-\epsilon-\frac{1}{n}\right)^{-2} - \left(\int_s^1 H_B^*(t)\,\mathrm{d}t + G_B(1)\right)^{-2}\right)\mathrm{d}s.

Let x := \int_s^1 H_B^*(t)\,\mathrm{d}t and a := G_B(1)-\epsilon-\frac{1}{n}. We get that \gamma_2 is upper bounded by

\beta\int_0^{G_B^*(0)-G_B(1)}\left(\frac{x}{(a+x)^2} - \frac{x}{(G_B(1)+x)^2}\right)\mathrm{d}x
= \beta\left(\ln(a+x) + \frac{a}{a+x} - \ln\left(G_B(1)+x\right) - \frac{G_B(1)}{G_B(1)+x}\right)\bigg|_0^{G_B^*(0)-G_B(1)}.

For \gamma_1, we have

\gamma_1 = \beta\left(\ln\hat{G}_B(s)\Big|_0^1 - \ln G_B^*(s)\Big|_0^1\right)
= \beta\left(\ln G_B^*(0) - \ln\hat{G}_B(0)\right)
\leq \beta\left(\ln G_B^*(0) - \ln\left(G_B^*(0)-\frac{1}{n}-\epsilon\right)\right).

Hence, the sum \gamma_1+\gamma_2 is upper bounded by

\beta\left(\ln G_B(1) - \ln a + \frac{a}{a+G_B^*(0)-G_B(1)} - \frac{G_B(1)}{G_B^*(0)}\right) \leq \beta\ln\frac{G_B(1)}{a}.

To summarize, the discretization error is at most \beta\ln\frac{1-\beta}{1-\beta\left(1+\epsilon+\frac{1}{n}\right)}. ∎

Consequently, we update \tilde{\mathrm{Sol}}(x,y,z,s_k) as follows:

\max\quad \int_0^{s_k}\left[\beta\left(\frac{\hat{H}_B(s)}{\hat{G}_B(s)} - \frac{K(s)}{\hat{G}_B(s)^2}\right) + (1-\beta)\frac{G_B(1)}{\hat{G}_B(s)^2}\right]\mathrm{d}s
\mathrm{s.t.}\quad K(s_k) = x,
\qquad\quad \int_{s_k}^1\hat{H}_B(t)\,\mathrm{d}t = y,
\qquad\quad \hat{H}_B(s_k) = z.

Therefore, we can run our algorithm with O\left(\frac{n^3}{\epsilon^4}\right) complexity. Note that x' \leftarrow x - \left\lfloor\frac{z'\cdot z'}{\epsilon}\cdot\frac{\epsilon}{n}\right\rfloor in every step. We present the calculation of \int_{s_k}^{s_{k+1}}q_P(s)\,\mathrm{d}s:

\int_{s_k}^{s_{k+1}}\frac{\beta\left(\hat{H}_B(s_{k+1})\hat{G}_B(s_{k+1}) - K(s_{k+1})\right) + (1-\beta)G_B(1)}{\left(\hat{G}_B(s_{k+1}) + (s_{k+1}-s)\hat{H}_B(s_{k+1})\right)^2}\,\mathrm{d}s
= \frac{\beta\left(\hat{H}_B(s_{k+1})\hat{G}_B(s_{k+1}) - K(s_{k+1})\right) + (1-\beta)G_B(1)}{\hat{G}_B(s_{k+1})\left(\hat{G}_B(s_{k+1}) + (s_{k+1}-s_k)\hat{H}_B(s_{k+1})\right)}\cdot(s_{k+1}-s_k).
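This closed form per grid cell translates directly into code; the following short sketch (our own illustration, with hypothetical parameter names) computes the increment added in each update of Algorithm 1.

```python
def cell_increment(beta, GB1, H_next, G_next, K_next, delta):
    # Closed-form value of the integral of q_P over one grid cell [s_k, s_{k+1}].
    # H_next, G_next, K_next: step-function values of H_B, G_B and K at s_{k+1};
    # GB1 = G_B(1) = (1 - beta) / beta; delta = s_{k+1} - s_k.
    numerator = beta * (H_next * G_next - K_next) + (1.0 - beta) * GB1
    return numerator * delta / (G_next * (G_next + delta * H_next))
```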

Finally, we are fully prepared to solve for the lower bound \beta.

Theorem 3.7.

Given the distribution F_B, there is a randomized fixed-price mechanism that uses only the knowledge of the buyer distribution and achieves at least 0.71 of the optimal welfare for any seller distribution F_S. Given both distributions F_S and F_B, the designer can always select a price that achieves at least 0.71 of the optimal welfare.

Proof.

We implement the dynamic programming algorithm in Python 3.9, with parameters \beta = 0.71, n = 75 and \epsilon = \frac{1}{75}. We run our algorithm on a cloud computing node with an Intel Gold 5218 CPU and 128GB of memory. Our program takes 552794s (about 6 days). Finally, we get M = 0.9440. Moreover, the discretization error is at most 0.0479 by Theorem 3.6. Thus, we have \mathrm{obj}(H_B^*) \leq 0.9919 < 1. That is, \beta = 0.71 is a valid lower bound. All code can be found in our GitHub repository. ∎

We also summarize lower bounds for different discretization factors in Table 2.

\beta | n | \epsilon | M | error | \mathrm{obj}(H_B^*) | running time
0.69 | 35 | 1/35 | 0.9056 | 0.0939 | 0.9995 | 4166s (\approx 1h)
0.7 | 50 | 1/50 | 0.9245 | 0.0686 | 0.9931 | 42708s (\approx 12h)
0.71 | 75 | 1/75 | 0.9440 | 0.0479 | 0.9919 | 552794s (\approx 6d)
Table 2: A summary of different discretization factors
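The error column in Table 2 follows directly from Theorem 3.6; the short check below (our own sketch) reproduces it.

```python
import math

def discretization_error(beta, n, eps):
    # bound from Theorem 3.6: beta * ln((1 - beta) / (1 - beta * (1 + eps + 1/n)))
    return beta * math.log((1 - beta) / (1 - beta * (1 + eps + 1.0 / n)))

for beta, n in [(0.69, 35), (0.7, 50), (0.71, 75)]:
    print(beta, round(discretization_error(beta, n, 1.0 / n), 4))
# prints 0.0939, 0.0686 and 0.0479, matching the error column of Table 2
```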

In the end, we choose \beta = 0.7381 and construct an instance function H_B. We show that \int_0^1 q_P^*(s)\,\mathrm{d}s > 1. Consequently, we get an upper bound.

Theorem 3.8.

Given only the distribution F_B, every randomized fixed-price mechanism that uses solely the knowledge of the buyer distribution has approximation ratio at most 0.7381. Given distributions F_S and F_B, no fixed-price mechanism has an approximation ratio better than 0.7381.

Proof.

Let \beta := 0.7381 and consider the following function on [0,1]:

H_B(s) = \begin{cases}1, & s\leq 0.25\\ 1-\lambda\left(1-e^{1.5-6s}\right), & 0.25<s<0.6\\ 0, & 0.6\leq s\leq 1\end{cases}

where \lambda = 1/(1-e^{-2.1}). Besides, the function H_B satisfies G_B(1) = \frac{1-\beta}{\beta}. Then, we calculate the integral \int_0^1 q_P^*(s)\,\mathrm{d}s, that is,

\int_0^1\left[\beta\left(\frac{H_B(s)}{G_B(s)} - \frac{\int_s^1 H_B(t)^2\,\mathrm{d}t}{G_B(s)^2}\right) + (1-\beta)\frac{G_B(1)}{G_B(s)^2}\right]\mathrm{d}s.

We divide the integral into three parts: \int_{0.6}^1 q_P^*(s)\,\mathrm{d}s, \int_{0.25}^{0.6} q_P^*(s)\,\mathrm{d}s, and \int_0^{0.25} q_P^*(s)\,\mathrm{d}s.

For the first part, we have

\int_{0.6}^1 q_P^*(s)\,\mathrm{d}s = \int_{0.6}^1(1-\beta)\frac{1}{G_B(1)}\,\mathrm{d}s = 0.4\beta = 0.29524.

For the second part, a closed form is difficult to obtain, so we approximate it numerically and get \int_{0.25}^{0.6} q_P^*(s)\,\mathrm{d}s \approx 0.41766.

For the third part, the integral \int_0^{0.25} q_P^*(s)\,\mathrm{d}s is

\int_0^{0.25}\frac{\beta\left(G_B(0.25) - \int_{0.25}^1 H_B(t)^2\,\mathrm{d}t\right) + (1-\beta)G_B(1)}{\left(0.25-s+G_B(0.25)\right)^2}\,\mathrm{d}s
= \frac{A}{0.25-s+G_B(0.25)}\bigg|_0^{0.25}
= \frac{0.25A}{0.25\,G_B(0.25) + G_B(0.25)^2},

where A = \beta\left(G_B(0.25) - \int_{0.25}^1 H_B(t)^2\,\mathrm{d}t\right) + (1-\beta)G_B(1). Computing each term precisely, we have \int_0^{0.25} q_P^*(s)\,\mathrm{d}s \approx 0.28722.

Finally, we get \int_0^1 q_P^*(s)\,\mathrm{d}s \approx 1.00012 > 1. That is, given the function H_B above, no matter how we design the price mechanism, there always exists a seller distribution F_S such that the welfare ratio is less than 0.7381. The numerical results are available in our GitHub repository. ∎
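As an independent sanity check, the short numerical sketch below (our own, assuming SciPy) evaluates \int_0^1 q_P^*(s)\,\mathrm{d}s for this H_B directly; it should return a value just above 1.

```python
import numpy as np
from scipy.integrate import quad

beta = 0.7381
lam = 1.0 / (1.0 - np.exp(-2.1))
GB1 = (1.0 - beta) / beta                      # G_B(1) = (1 - beta) / beta

def H(s):                                      # the hard instance H_B from Theorem 3.8
    if s <= 0.25:
        return 1.0
    if s < 0.6:
        return 1.0 - lam * (1.0 - np.exp(1.5 - 6.0 * s))
    return 0.0

def G(s):                                      # G_B(s) = integral of H_B from s to 1, plus G_B(1)
    return quad(H, s, 1.0)[0] + GB1

def q(s):                                      # q_P^*(s) from Theorem 3.1 (with s_2 = 1)
    g = G(s)
    h2 = quad(lambda t: H(t) ** 2, s, 1.0)[0]  # integral of H_B^2 from s to 1
    return beta * (H(s) / g - h2 / g ** 2) + (1.0 - beta) * GB1 / g ** 2

total = quad(q, 0.0, 1.0, limit=200)[0]
print(total)                                   # approximately 1.0001 > 1
```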

4 Single-sample Approximation

In this section, we consider fixed-price mechanisms that receive a single sample from the seller distribution as their sole prior information. Previous works [DFL+21, KPV22] study deterministic mechanisms; we extend their bounds to randomized mechanisms. For the asymmetric case, [DFL+21] show that every deterministic IC, IR and BB mechanism has approximation ratio at most 1/2. Here we show that randomization does not help: even when the designer may randomize over different fixed-price mechanisms given the single sample from the seller distribution, the approximation ratio is at most 1/2. [DFL+21] also prove that there exists a deterministic fixed-price mechanism whose approximation ratio is at least 1/2. Therefore, for randomized fixed-price mechanisms, the approximation ratio 1/2 is also tight.

Theorem 4.1.

Every randomized fixed-price mechanism that receives a single sample from the seller's distribution as information has approximation ratio at most 1/2.

The idea is to fix the buyer's value to be very high and, at the same time, construct a seller distribution supported on many small discrete values. Under such a construction, the approximation ratio essentially equals the probability that the trade occurs. That is, for any randomized mechanism, we show that there exists an instance on which the seller rejects the trade with probability approaching 0.5.

Proof.

For a randomized fixed-price mechanism, we denote it as a function FP:ΔL1()F_{P}:\mathbb{R}\rightarrow\Delta_{L^{1}}(\mathbb{R}) that maps a value to a distribution. Then, we construct a set {s1,s2,,sn}\{s_{1},s_{2},\ldots,s_{n}\} that any two adjacent values satisfy Pr[FP(sk)<sk+1]=1ϵ\Pr\left[F_{P}(s_{k})<s_{k+1}\right]=1-\epsilon and sksk+1s_{k}\leq s_{k+1} for k[n1]k\in[n-1]. Consider a seller whose valuation is drawn uniformly from the set and a buyer whose valuation equals sn2ns_{n}\cdot 2^{n}. Let ss^{\prime} denote the seller’s sample and ss denote the seller’s valuation. Then the probability that the seller rejects the trade is:

\Pr_{s^{\prime},s}\left[F_{P}(s^{\prime})<s\right]=\sum_{i=1}^{n}\Pr\left[F_{P}(s^{\prime})<s_{i}\right]\cdot\Pr\left[s=s_{i}\right]
\geq\sum_{i=2}^{n}\frac{i-1}{n}\cdot(1-\epsilon)\cdot\frac{1}{n}
=\frac{n-1}{2n}\cdot(1-\epsilon).

As $n$ grows to infinity and $\epsilon$ decreases to $0$, the probability that the seller rejects the trade approaches $1/2$. Since the buyer’s value $s_{n}\cdot 2^{n}$ dominates every seller value, the approximation ratio approaches the probability that the trade occurs. Therefore, the approximation ratio of any randomized mechanism is at most $1/2$. ∎
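To make the construction concrete, the sketch below instantiates it against one hypothetical randomized mechanism (posting the sample scaled by a uniform random factor; this particular mechanism and all function names are illustrative, not from the paper). It builds the sequence $s_{1},\ldots,s_{n}$ from empirical $(1-\epsilon)$-quantiles of the price distribution and estimates the rejection probability by Monte Carlo; the estimate should come out close to $\frac{n-1}{2n}(1-\epsilon)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def posted_prices(samples, rng):
    # A hypothetical randomized mechanism used only for illustration:
    # it posts the seller's sample scaled by a uniform random factor in [1, 2].
    return samples * rng.uniform(1.0, 2.0, size=samples.shape)

def build_hard_sequence(n, eps, rng, trials=20_000):
    # Pick s_{k+1} at the empirical (1 - eps)-quantile of the price
    # distribution F_P(s_k), keeping the sequence non-decreasing.
    s = [1.0]
    for _ in range(n - 1):
        prices = posted_prices(np.full(trials, s[-1]), rng)
        s.append(max(float(np.quantile(prices, 1.0 - eps)), s[-1]))
    return np.array(s)

def rejection_probability(s, rng, trials=200_000):
    n = len(s)
    sample = s[rng.integers(0, n, size=trials)]  # seller's sample s'
    value = s[rng.integers(0, n, size=trials)]   # seller's value s, drawn independently and uniformly
    prices = posted_prices(sample, rng)
    return float(np.mean(prices < value))        # the seller rejects when the price is below her value

s = build_hard_sequence(n=50, eps=0.01, rng=rng)
print(rejection_probability(s, rng))  # roughly (n - 1) / (2n) * (1 - eps), i.e. about 0.49
```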

As for the symmetric case, [KPV22] recently show that the approximation ratio $3/4$ is tight for deterministic mechanisms. We extend their bound to randomized mechanisms.

Theorem 4.2.

Every randomized fixed-price mechanism that receives a single sample from the distribution as information in the symmetric bilateral trade has approximation ratio at most $3/4$.

Proof.

Similar to the proof of Theorem 4.1, for any randomized mechanism we show that there exists a distribution on which the mechanism achieves at most $3/4$ of the optimal welfare. Suppose that the randomized mechanism $F_{P}(\cdot)$ maps the single sample value to a price (note that the price is also a random variable). Given $F_{P}(\cdot)$ and any $\epsilon>0$, we construct a sequence of positive numbers $\hat{s}=\{s_{1},s_{2},\ldots,s_{n}\}$ such that $\Pr\left[F_{P}(s_{i})<s_{k+1}\right]\geq 1-\epsilon$ for every $k\in[n-1]$ and every $i=1,2,\ldots,k$. Besides, we introduce a value $s_{a}$ which is high enough (say $s_{a}\geq s_{n}^{2}$). The support of the value distribution is exactly $\{s_{1},s_{2},\ldots,s_{n},s_{a}\}$, where the probability of each value in $\{s_{1},s_{2},\ldots,s_{n}\}$ is identical, i.e., $\Pr\left[s=s_{i}\right]=\Pr\left[s=s_{j}\right]$ for $i,j\in[n]$. We denote by $q$ the total probability that the value lies in the sequence $\hat{s}$, that is, $\Pr\left[s\in\hat{s}\right]=q$ and $\Pr\left[s=s_{a}\right]=1-q$. For convenience, we denote this distribution by $F$.

For the optimal welfare, we consider two situations. The first is that at least one of the seller and the buyer has value $s_{a}$; in this case the optimal welfare is at most $s_{a}$. The other is that neither the seller nor the buyer has value $s_{a}$; in this case both values lie in $\hat{s}$, so the optimal welfare is at most $s_{n}$ (raising any value below $s_{n}$ to $s_{n}$ can only increase the welfare). Thus, we have

\textrm{OPT-W}(F)\leq s_{a}\cdot(1-q^{2})+s_{n}\cdot q^{2}.

Given the distribution $F$ and the randomized mechanism $F_{P}(\cdot)$, we denote by $\mathrm{LOSS}(F,F_{P})$ the difference between the optimal welfare and the expected welfare achieved by the randomized mechanism. To lower bound $\mathrm{LOSS}(F,F_{P})$, we only consider the case in which the buyer’s value is $s_{a}$ and the seller’s value exceeds the posted price. Letting $\bar{s}$ denote the sample drawn from the distribution $F$, we have that $\mathrm{LOSS}(F,F_{P})$ is at least

\sum_{i=1}^{n}(s_{a}-s_{i})\cdot\Pr\left[F_{P}(\bar{s})<s_{i}\right]\cdot\Pr\left(s=s_{i}\right)\cdot\Pr\left(b=s_{a}\right)
\geq(s_{a}-s_{n})\cdot(1-q)\cdot(1-\epsilon)\cdot\frac{q}{n}\cdot\sum_{i=2}^{n}\frac{(i-1)q}{n}
=(s_{a}-s_{n})\cdot(1-q)\cdot(1-\epsilon)\cdot\frac{(n-1)q^{2}}{2n}.

The inequality holds because we only count the events $\bar{s}\in\{s_{1},s_{2},\ldots,s_{i-1}\}$ when bounding $\Pr\left[F_{P}(\bar{s})<s_{i}\right]$, and we lower bound each $s_{a}-s_{i}$ by $s_{a}-s_{n}$. Now we compute the ratio between the welfare loss and the optimal welfare. Since $s_{a}$ is very high, we have

\frac{\mathrm{LOSS}(F,F_{P})}{\textrm{OPT-W}(F)}\geq\frac{s_{a}\cdot(1-q)\cdot(1-\epsilon)\cdot\frac{(n-1)q^{2}}{2n}}{s_{a}\cdot(1-q^{2})}
=\frac{n-1}{2n}\cdot(1-\epsilon)\cdot\frac{q^{2}}{1+q}.

The term $\frac{q^{2}}{1+q}$ is increasing in $q$ on $[0,1]$ and approaches $1/2$ as $q\to 1$, while $\frac{n-1}{2n}$ approaches $1/2$ as $n\to\infty$. Consequently, we have $\mathrm{LOSS}(F,F_{P})\geq\frac{1}{4}\cdot\textrm{OPT-W}(F)$ in the limit $\epsilon\to 0$, $n\to\infty$, and $q\to 1$. That is, the approximation ratio of any randomized mechanism is at most $3/4$. ∎
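For a quick numeric illustration of the limit used at the end of the proof above, one can evaluate the lower bound $\frac{n-1}{2n}\cdot(1-\epsilon)\cdot\frac{q^{2}}{1+q}$ for increasingly extreme parameters; the following small Python snippet (parameter values chosen purely for illustration) shows the bound approaching $1/4$.

```python
# Evaluate the welfare-loss lower bound (n-1)/(2n) * (1 - eps) * q^2 / (1 + q).
for n, eps, q in [(10, 0.1, 0.9), (100, 0.01, 0.99), (10_000, 1e-4, 0.9999)]:
    bound = (n - 1) / (2 * n) * (1 - eps) * q**2 / (1 + q)
    print(f"n={n}, eps={eps}, q={q}: bound = {bound:.4f}")
# As eps -> 0, n -> infinity and q -> 1, the bound tends to 1/4,
# so the approximation ratio is at most 3/4.
```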

5 Conclusion and Discussion

We improved the approximation ratios of fixed-price mechanisms in several settings of the bilateral trade problem with one buyer and one seller, with respect to the optimal welfare. Our method differs from previous work: we establish the lower bounds by solving an optimization problem, and then search for a nearly-optimal solution using discretization and dynamic programming. In principle, the error caused by discretization can be bounded within any desired precision; however, due to hardware limitations, a gap in the approximation ratios remains.

Several interesting questions arise. Note that we only use the knowledge of the buyer distribution to obtain Theorem 3.3. In fact, we could start from the seller distribution instead and obtain a similar, but not identical, optimization problem to the one in Theorem 3.3. If we only use the knowledge of the seller distribution, would the approximation bound be worse? Another challenging question is to come up with a simple mechanism that approximates the optimal gain-from-trade. We also believe that the ratio of RandOff [DMSW22, Fei22] could be improved significantly, but this likely requires new insights.

References

  • [BCWZ17] Johannes Brustle, Yang Cai, Fa Wu, and Mingfei Zhao. Approximating gains from trade in two-sided markets via simple mechanisms. In Proc. 18th ACM Conference on Economics and Computation (EC), pages 589–590, 2017.
  • [BD14] Liad Blumrosen and Shahar Dobzinski. Reallocation mechanisms. In Proc. 15th ACM Conference on Economics and Computation (EC), pages 617–617, 2014.
  • [BD21] Liad Blumrosen and Shahar Dobzinski. (Almost) efficient mechanisms for bilateral trading. Games and Economic Behavior, 130:369–383, Nov 2021.
  • [BDK21] Moshe Babaioff, Shahar Dobzinski, and Ron Kupfer. A note on the gains from trade of the random-offerer mechanism. arXiv preprint arXiv:2111.07790, 2021.
  • [BM16] Liad Blumrosen and Yehonatan Mizrahi. Approximating gains-from-trade in bilateral trading. In Proc. 12th International Conference on Web and Internet Economics (WINE), pages 400–413, 2016.
  • [CBGK+17] Riccardo Colini-Baldeschi, Paul Goldberg, Bart de Keijzer, Stefano Leonardi, and Stefano Turchetta. Fixed price approximability of the optimal gain from trade. In Proc. 13th International Conference on Web and Internet Economics (WINE), pages 146–160, 2017.
  • [CBKLT16] Riccardo Colini-Baldeschi, Bart de Keijzer, Stefano Leonardi, and Stefano Turchetta. Approximately efficient double auctions with strong budget balance. In Proc. 27th ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1424–1443, 2016.
  • [CGMZ21] Yang Cai, Kira Goldner, Steven Ma, and Mingfei Zhao. On multi-dimensional gains from trade maximization. In Proc. 32nd ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1079–1098, 2021.
  • [CW23] Yang Cai and Jinzhao Wu. On the optimal fixed-price mechanism in bilateral trade. arXiv preprint arXiv:2301.05167, 2023.
  • [DFL+21] Paul Dütting, Federico Fusco, Philip Lazos, Stefano Leonardi, and Rebecca Reiffenhäuser. Efficient two-sided markets with limited information. In Proc. 53rd ACM Symposium on Theory of Computing (STOC), pages 1452–1465, 2021.
  • [DK15] Moritz Drexl and Andreas Kleiner. Optimal private good allocation: The case for a balanced budget. Games and Economic Behavior, 94:169–181, Nov 2015.
  • [DMSW22] Yuan Deng, Jieming Mao, Balasubramanian Sivan, and Kangning Wang. Approximately efficient bilateral trade. In Proc. 54th ACM Symposium on Theory of Computing (STOC), pages 718–721, 2022.
  • [Fei22] Yumou Fei. Improved approximation to first-best gains-from-trade. In Proc. 18th International Conference on Web and Internet Economics (WINE), pages 204–218, 2022.
  • [HR87] Kathleen M. Hagerty and William P. Rogerson. Robust trading mechanisms. Journal of Economic Theory, 42(1):94–107, Jun 1987.
  • [KPV22] Zi Yang Kang, Francisco Pernice, and Jan Vondrák. Fixed-price approximations in bilateral trade. In Proc. 33rd ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2964–2985, 2022.
  • [KV18] Zi Yang Kang and Jan Vondrák. Strategy-proof approximations of optimal efficiency in bilateral trade. Online, 2018.
  • [LLR89] Wolfgang Leininger, Peter B. Linhart, and Roy Radner. Equilibria of the sealed-bid mechanism for bargaining with incomplete information. Journal of Economic Theory, 48(1):63–106, Jun 1989.
  • [McA08] R. Preston McAfee. The gains from trade under fixed price mechanisms. Applied Economics Research Bulletin, 1(1):1–10, 2008.
  • [MS83] Roger B. Myerson and Mark A. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 29(2):265–281, Apr 1983.
  • [SZ16] Ran Shao and Lin Zhou. Optimal allocation of an indivisible good. Games and Economic Behavior, 100:95–112, Nov 2016.