
Pricing Personalized Preferences for Privacy Protection in Constant Function Market Makers

Mohak Goyal, Stanford University, Stanford, CA, USA 94305, [email protected], and Geoffrey Ramseyer, Stanford University, Stanford, CA, USA 94305, [email protected]
Abstract.

Constant function market makers (CFMMs) are a popular decentralized exchange mechanism and have recently been the subject of much research, but major CFMMs give traders no privacy. Prior work proposes randomly splitting and shuffling trades to give some privacy to all users (Chitra et al., 2022), or adding noise to the market state after each trade and charging a fixed ‘privacy fee’ to all traders (Frongillo and Waggoner, 2018). In contrast, we propose a noisy CFMM mechanism where users specify personal privacy requirements and pay personalized fees. We show that the noise added for privacy protection creates additional arbitrage opportunities. We call a mechanism priceable if there exists a privacy fee that always matches the additional arbitrage loss in expectation. We show that a mechanism is priceable if and only if the noise added is zero-mean in the asset amount. We also show that priceability and setting the right fee are necessary for a mechanism to be truthful, and that this fee is inversely proportional to the CFMM’s liquidity.

Local differential privacy; decentralized finance; automated market makers; mechanism design.
The authors thank Ashish Goel, Sahasrajit Sarmasarkar, and Karan Chadha for helpful discussions. This work was funded by the Future of Digital Currency Initiative (FDCI), Stanford University, and the Stanford IOG Research Hub.
CCS Concepts: Security and privacy → Economics of security and privacy; Applied computing → Secure online transactions.

1. Introduction

Market makers are market participants who facilitate the quick exchange of assets by offering to both buy and sell an asset at any given time. In doing so, they take the risk of holding an asset whose price may change, and in exchange, they charge a fee on each transaction or make a ‘bid-ask spread’. While market making has been a long-studied topic in finance (Amihud and Mendelson, 1980; Kyle, 1985; Glosten and Milgrom, 1985), a simple market-making strategy called Constant Function Market Makers (CFMMs) (Angeris and Chitra, 2020; Goyal et al., 2023; Mohan, 2022) has risen to prominence in decentralized finance (DeFi), facilitating billions of USD in daily trade volume. CFMMs are equivalent to the cost-function-based market makers (as illustrated in, e.g., (Frongillo et al., 2023)) of prediction markets (Hanson, 2007; Chen and Pennock, 2012).

A CFMM maintains non-negative reserves (x, y) of two assets X and Y (henceforth called its “state”), provided by a liquidity provider (LP). A CFMM trades according to an eponymous trading function f(x, y) of its state. It accepts a trade (Δx, Δy) moving the reserves from (x, y) to (x + Δx, y − Δy) if and only if f(x + Δx, y − Δy) = f(x, y). Assumption 1 gives technical conditions on f.

For simplicity, we refer to X as a volatile asset, Y as a stable numeraire, and p as the price of X in units of Y. The price of an infinitesimal trade on the CFMM is called its “spot price” (Definition 2.1).

One drawback of CFMMs is that they do not preserve traders’ privacy. We refer to a trader’s information as the amount of X they buy or sell. A CFMM publicly quotes prices as a function of trade size. As a result, observing these prices before and after a trade reveals the trade amount precisely, due to the curvature of the CFMM trading function (Angeris et al., 2021; Chitra et al., 2022). This implies that hiding the CFMM state via cryptographic tools cannot provide privacy on individual trades (WhiteHat, 2020). Angeris et al. (2021) formalize this idea and show that the black-box use of trusted hardware or cryptographic primitives with CFMMs is unlikely to be fruitful for privacy protection. We adopt their model of the adversary and its attack capabilities (§3.1).

Chitra et al. (2022) argue that differential privacy (DP) is a natural notion for studying CFMM privacy. DP is a framework to collect users’ data, protecting individual privacy while approximating aggregate statistics. In a CFMM, this idea translates to aggregating individual trades with privacy protection while approximating prices. If the differentially private CFMM mechanism’s price deviates significantly from the ‘true’ price, arbitrageurs are incentivized to correct the CFMM’s price and realign it with the true price.

1.1. Our Contributions

We adopt the personalized local differential privacy (PLDP) framework (Definition 2.2) of Chen et al. (2016) for CFMMs. A user i with a trade of selling Δ_i units of X (Δ_i < 0 implies buying X) specifies their desired privacy level ε_i ≥ 0 (lower ε_i implies better privacy) and a masking interval τ_i = [l_i, u_i] to mask their trade within, such that l_i, u_i ∈ ℝ and Δ_i ∈ τ_i. While we adopt the privacy notion of Chen et al. (2016), our motivation is different: they use the model for spatial data collection and study the error of aggregate statistics, whereas we use it to mask trades in CFMMs and study the economic implications. We study mechanisms that add noise to the market state with each user’s data separately.

Privacy is not free. In the context of data collection, it causes error in the aggregate statistics; in the context of CFMMs, it creates additional arbitrage opportunities. Consider a simple example. At a point in time, a CFMM’s spot price is p. The price on an external market with infinite liquidity (say, an off-chain exchange) is p̂ > p. Then an arbitrageur can buy some X from the CFMM and sell it to the external market, raising the CFMM’s spot price to p̂ under the ‘optimal’ arbitrage (Angeris and Chitra, 2020). If the CFMM uses a noise-adding privacy protection mechanism and has non-zero curvature, then its spot price is distorted away from p̂ for the privacy of the trade. The arbitrageur then has an additional arbitrage opportunity: profit off the CFMM by moving the spot price to p̂ again. To be sustainable, the CFMM must charge an additional fee for privacy preservation to account for this additional arbitrage. However, finding this fee amount without knowing p̂ is not trivial.
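As a minimal numeric sketch of this ‘optimal’ arbitrage (assuming the constant-product trading function f(x, y) = xy, zero trading fee, and illustrative reserves chosen by us):

```python
import math

def optimal_arbitrage_profit(x, y, p_hat):
    """Profit of the 'optimal' arbitrage against a constant-product CFMM
    f(x, y) = x*y (spot price y/x): trade until the spot price equals the
    external price p_hat, then unwind on the infinitely liquid market."""
    k = x * y
    x_new = math.sqrt(k / p_hat)   # reserves at which the spot price is p_hat
    dx = x - x_new                 # X the arbitrageur receives (negative = pays)
    dy = k / x_new - y             # Y the arbitrageur pays (negative = receives)
    return dx * p_hat - dy         # value the X externally at p_hat

# Illustrative pool (100, 100): spot price 1. If the true price is 1.21,
# the arbitrageur buys X until the pool's spot price reaches 1.21.
profit = optimal_arbitrage_profit(100.0, 100.0, 1.21)
assert math.isclose(profit, 1.0)
```

When the CFMM already sits at the true price, the same computation returns zero, which is the baseline the noisy mechanism perturbs.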

We call a private CFMM mechanism priceable (Definition 4.2) if there exists a fee, independent of the true price, that pays for the additional arbitrage. We show that a noisy CFMM mechanism is priceable if and only if the noise is zero-mean in the asset amount. The proof is technical, but intuitively, a zero-mean noise does not create additional liquidity in expectation. Pricing the additional arbitrage at the post-trade spot price ensures that an arbitrageur does not gain in expectation, regardless of the true price.

We also define truthfulness of a noise-adding CFMM mechanism, requiring that a trader without innate privacy requirements has the incentive to leave the CFMM spot price at the external market price p̂ in one trade (Definition 4.1). We show that a zero-mean noise distribution is necessary for truthfulness. With the right privacy fee, a zero-mean noise distribution leads to a truthful mechanism. This ensures that the noisy CFMM’s spot price will closely follow the price on an external exchange as long as arbitrageurs are present.

For priceable mechanisms, we formulate the privacy fee as a function of the CFMM state, the trading function, and the trader’s privacy requirements. We show that a more liquid (Definition 4.9) CFMM requires a smaller privacy fee. In fact, the privacy fee of a trade is inversely proportional to the liquidity (Proposition 4.10). This aligns with the fee-liquidity relationship of Frongillo and Waggoner (2018) in an information aggregation context.

1.2. Related Work

1.2.1. Prediction Markets

Closest to our work is that of Frongillo and Waggoner (2018), which gives a bounded-loss differentially-private cost-function-based prediction market via a similar mechanism—adding noise to the market after each trade. Our work differs in the following ways:

  • They mandate the same privacy specification (τ, ε) and privacy fee for all traders regardless of trade size. Further, they restrict the maximum size of a trade. We give a framework for trader i to specify personalized privacy requirements ε_i and τ_i, and find a personalized privacy fee, which decreases with ε_i and increases as the masking interval widens. Thus, we do not restrict the trade size, except when it would cause the CFMM to run out of an asset.

  • They use the continual observation technique of Dwork et al. (2010) and Chan et al. (2011), wherein old noise drawings are cleverly re-used. The benefit of this approach is that the total noise over T trades is limited to O(log T) (informally, many noise terms cancel each other out). However, the drawback is that the noise added after each trade is also Ω(log T). Such a mechanism is not directly usable for CFMMs in DeFi since (1) the total number of trades, T, is often not fixed, (2) storing O(T) noise drawings in a hidden manner for future re-use may not be possible, and (3) the privacy fee which accounts for the additional arbitrage would be Ω(log T) larger, since it depends on the magnitude of noise added after individual trades.

    We do not aim to reduce the total noise variance since, per our assumptions, arbitrageurs keep the CFMM price close to accurate. In exchange, we use smaller noise variance on individual trades to facilitate a smaller personalized privacy fee.

Earlier, Cummings et al. (2016) showed that without a fee, noise-adding bounded-loss cost-function-based market makers must have a ‘fast’-growing ε as more trades are made with the market (i.e., quickly diminishing privacy guarantees).

1.2.2. Decentralized Finance

Chitra et al. (2022) give an algorithm for splitting and shuffling trades in a block to ensure privacy. They study the worst-case price discrepancy between their algorithm and a non-private CFMM, and its trade-off with DP guarantees. Our paper differs in the following:

  • They consider blockchains with “consensus rules for executing trades in a particular order.” In comparison, our mechanism does not require this capability, which most mainstream blockchains lack. We do, however, require the blockchain to have verifiable randomness, as their mechanism does.

  • Their mechanism provides the same privacy guarantees to all traders, whereas we provide a personalized mechanism.

  • Their shuffling mechanism needs many trades in a block to be able to hide any individual trade, and their privacy guarantees depend on the number of trades in a block. While blocks often do contain many trades, a mechanism invariant to the number of trades in a block, such as ours, is useful in some contexts.

Outside of CFMMs, Penumbra (2023) uses batch auctions and homomorphic encryption to enable private swaps in DeFi. Davidow and Manevich (2023) design a verifiable LDP system for payments. See Dai (2021); Zhu et al. (2022); Zhou et al. (2022) for more perspectives on privacy in DeFi.

1.2.3. Differential Privacy

We adopt the personalized local differential privacy (PLDP) framework of Chen et al. (2016). The original framework was motivated by a model of users sharing spatial data and requiring the ability to mask their location within a specified region. For example, user A may not mind disclosing that they are in New York City but does not want to share their location within New York City. Another user, B, may be willing to disclose that they are in Ghana, but nothing more precise. This framework is better suited for masking trade sizes than simple LDP: the masking interval desired by a trader buying 1 unit of X would generally differ from that desired by a trader selling 100 X.

See Dwork et al. (2010) for an overview of DP. See Yang et al. (2020) for a survey of the applications of LDP, which include Google Chrome Browser (Erlingsson et al., 2014) and Apple OS (mac, 2016).

2. Preliminaries

2.1. Constant Function Market Makers (CFMMs)

A CFMM is an automated market maker parameterized by a trading function f. We focus on the scenario where a CFMM trades between a volatile asset X and a stable numeraire asset Y, which is the context in which CFMMs are most often employed. A liquidity provider endows the CFMM with non-negative reserves (x, y) ∈ ℝ²₊ of the two assets. A CFMM with reserves (x, y) and trading function f accepts a trade (Δ_x, Δ_y) that moves the reserves to (x + Δ_x, y − Δ_y) if and only if f(x + Δ_x, y − Δ_y) = f(x, y). The following characterization of trading functions is standard in the literature and is required to ensure that the CFMM works as intended.

Assumption 1.

CFMM trading functions are quasi-concave, continuous, and non-decreasing (in both coordinates) on ℝ²₊.

This characterization implies that, on any given level curve of the trading function f, the amount x of asset X in the reserves uniquely determines the amount of Y, which we denote 𝒴(x) when the level curve of f is clear from the context.

Commonly studied CFMMs include the constant product f(x, y) = xy, used by, for example, the exchange Uniswap (Adams et al., 2020), and f(x, y) = 2 − e⁻ˣ − e⁻ʸ, which implements the Logarithmic Market Scoring Rule (LMSR) (Hanson, 2007).
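The trade-acceptance rule can be checked directly for these two trading functions; the following is a small numeric sketch with illustrative reserves and a floating-point tolerance of our choosing:

```python
import math

def accepts(f, x, y, dx, dy, tol=1e-9):
    """A CFMM with trading function f and reserves (x, y) accepts the trade
    (dx, dy) iff the trading function value is unchanged (up to tolerance)."""
    return abs(f(x + dx, y - dy) - f(x, y)) <= tol

constant_product = lambda x, y: x * y                   # Uniswap-style
lmsr = lambda x, y: 2 - math.exp(-x) - math.exp(-y)     # LMSR

# Selling 10 X to a constant-product pool with reserves (100, 100):
# the invariant x*y = 10000 forces the payout dy = 100 - 10000/110.
assert accepts(constant_product, 100, 100, 10, 100 - 10000 / 110)
assert not accepts(constant_product, 100, 100, 10, 10)  # payout too large

# On the LMSR curve through (1, 1), selling 0.5 X pays out
# dy = 1 + ln(2 - f(1, 1) - e^{-1.5}).
dy = 1 + math.log(2 - lmsr(1, 1) - math.exp(-1.5))
assert accepts(lmsr, 1, 1, 0.5, dy)
```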

In practice, CFMMs charge a “trading fee” on every trade, a percentage of the trade size. For simplicity, we assume that any fee is not added to the CFMM reserves; it is taken by the LP.

An important quantity in the analysis of a CFMM is the price it offers for an infinitesimal trade, referred to as its spot price.

Definition 2.1 (Spot Price).

At state (x̂, ŷ), the spot price of a CFMM with trading function f is (∂f/∂x)/(∂f/∂y) evaluated at (x̂, ŷ).

When f is not differentiable, the spot price is any subgradient of f.
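For the two example trading functions above, the spot price has a closed form; a small sketch:

```python
import math

def spot_price_constant_product(x, y):
    # f(x, y) = x*y: df/dx = y and df/dy = x, so the spot price is y / x.
    return y / x

def spot_price_lmsr(x, y):
    # f(x, y) = 2 - e^{-x} - e^{-y}: df/dx = e^{-x} and df/dy = e^{-y},
    # so the spot price is e^{y - x}.
    return math.exp(y - x)

assert spot_price_constant_product(100.0, 121.0) == 1.21
assert math.isclose(spot_price_lmsr(2.0, 2.0), 1.0)
```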

Assumption 1 implies that the price offered to a trader is never better than the spot price. This has an important implication: when the price on the external market deviates, arbitrageurs align the CFMM’s spot price with it. In this sense, a CFMM is a “truthful” mechanism: an arbitrageur cannot do better by “reporting” a false price to the CFMM. We make an additional assumption.

Assumption 2.

The CFMM never runs out of an asset.

2.2. Differential Privacy

The traditional notion of DP is for a centralized model (“global” DP), where a trusted data curator holds all users’ accurate data and answers queries with aggregate statistics, under the constraint that they cannot be inverted to infer any specific user’s data. We consider here a different notion of privacy, called personalized local differential privacy (Chen et al., 2016), wherein each user’s data is privatized (usually by adding noise) before being stored with the mechanism. The privacy guarantee is given by a parameter ε; higher ε implies worse privacy protection.

Definition 2.2 (Personalized Local Differential Privacy (PLDP)).

Given the personalized privacy specification (τ_i, ε_i) of user i, a randomized algorithm 𝒜 satisfies (τ_i, ε_i)-PLDP for i if for any pair of values l, l′ ∈ τ_i and for any measurable subset O ⊆ Range(𝒜),

Pr[𝒜(l) ∈ O] ≤ exp(ε_i) · Pr[𝒜(l′) ∈ O],

where the probability space is over the randomness in 𝒜.

Observe that traditional local differential privacy (LDP) (Kasiviswanathan et al., 2011) is a special case of PLDP where all users have the same τ and ε.

A larger masking interval τ implies stronger privacy, and within that interval, a smaller ε corresponds to a better privacy guarantee. For example, ε = 0 corresponds to the case where all points in the masking interval τ produce the same distribution of the algorithm’s output.
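As an illustration of the definition (not of our mechanism, whose noise trades have bounded support), additive Laplace noise with scale calibrated to the masking interval satisfies the PLDP inequality, which can be checked numerically:

```python
import math

def laplace_pdf(z, mu, b):
    # Density of the Laplace distribution with mean mu and scale b.
    return math.exp(-abs(z - mu) / b) / (2 * b)

# Masking interval tau = [l, u] and privacy level eps. Additive Laplace
# noise with scale b = (u - l)/eps satisfies the PLDP inequality: for any
# v, v' in tau, pdf(z; v)/pdf(z; v') <= exp(|v - v'|/b) <= exp(eps).
l, u, eps = -5.0, 5.0, 1.0
b = (u - l) / eps
for z in (-20.0, -3.0, 0.0, 7.0):      # sample outputs
    for v in (l, -1.0, 2.5, u):        # masked values in tau
        for v2 in (l, 0.0, u):
            ratio = laplace_pdf(z, v, b) / laplace_pdf(z, v2, b)
            assert ratio <= math.exp(eps) + 1e-12
```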

3. Model

We describe here the model of the adversary, our noisy CFMM mechanism, the external market price, and the associated arbitrage.

3.1. Adversary Attack Model

We adopt the following attack model, which was first given by Angeris et al. (2021), and also used by Chitra et al. (2022).

  1. Eavesdropper Eve knows the trading function of the CFMM.

  2. Eve knows when trader Tod makes a transaction with the CFMM.

  3. Eve can query the spot price of the CFMM.

  4. Eve can query whether a given non-zero trade is valid.

  5. Eve can do (3) and (4) before and after Tod’s trade.

Angeris et al. (2021) showed that this much information is sufficient for Eve to infer exactly the amount traded by Tod in a traditional CFMM.

3.2. System Model

We borrow here the model of some private smart contract systems (such as Hawk (Kosba et al., 2016)), which delegate private computation to a “manager,” which can be implemented via trusted execution environments (TEEs) or multiparty computation (MPC) protocols, as in, e.g., Banerjee et al. (2021) and Baum et al. (2022). We specifically require a way for the CFMM to maintain private state and to execute code on that private state. This private information includes not only the CFMM reserves but also a private account ℋ, used in the mechanism discussed below. We also require a source of unpredictable (pseudo)randomness, such as the output of a VRF (Micali et al., 1999).

We also assume the existence of arbitrageurs, who may trade on the CFMM and on external, highly liquid exchanges, and that arbitrageurs are aware of the market price on external exchanges.

3.3. Privacy Protection Mechanism for CFMMs

The noisy CFMM randomly distorts its state after every trade.

Noisy CFMM Mechanism

  1. The CFMM maintains a “hidden” private account ℋ of reserves. This account is used to make “noise trades” as follows.

  2. At some point in time, the CFMM state is (x, 𝒴(x)).

  3. Trader i makes a trade selling Δ_i units of X to the CFMM and specifies their privacy requirements (τ_i, ε_i) such that Δ_i ∈ τ_i.

  4. Trader i gets 𝒴(x) − 𝒴(x + Δ_i) units of Y in exchange for Δ_i units of X, and pays a fee, which is the sum of a “trading fee” and a “privacy fee” γ_i. The fee is not added to the CFMM state.

  5. The CFMM immediately makes a random trade of η_i ∼ 𝒟_i units of X with the hidden account ℋ.

  6. The new CFMM state is x′ = x + Δ_i + η_i, y′ = 𝒴(x′). This state can be queried by the next trader (and Eve, via the attack of §3.1).

We assume that an external operator supports the CFMM to ensure that ℋ never runs dry, but if ℋ cannot support the maximum possible noise for a trade request, the CFMM rejects the request.

We use noise trade distributions of bounded support; we will give examples in §4.1. We consider noise trade distributions which, for a given CFMM, depend only on the trade amount Δ_i and privacy specification (τ_i, ε_i). Importantly, the distribution is independent of the CFMM state. This setting is standard in the DP literature. The privacy fee γ_i depends on the CFMM state and the noise trade distribution 𝒟_i.

Since our mechanism must also support non-private trades, as traditional CFMMs do, the noise trade η_i and the privacy fee γ_i are zero when ε_i = ∞ or τ_i = {Δ_i} for user i.
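One pass through the steps above can be sketched as follows, assuming a constant-product level curve and an illustrative three-point zero-mean noise distribution of our choosing (fees omitted):

```python
import random

def noisy_cfmm_step(x, delta, eta, Y):
    """One pass through steps (2)-(6) of the mechanism (sketch).
    Y maps X-reserves to Y-reserves on the current level curve; eta is
    the noise trade drawn from the trader's distribution D_i."""
    payout = Y(x) - Y(x + delta)   # step (4): Y paid out to trader i
    x_new = x + delta + eta        # step (5): noise trade against H
    return x_new, payout           # step (6): new, publicly queryable state

Y = lambda x: 10_000 / x                 # constant-product curve x*y = 10000
eta = random.choice([-1.0, 0.0, 1.0])    # zero-mean, bounded noise trade
x_new, payout = noisy_cfmm_step(100.0, 10.0, eta, Y)
# The trader's payout is independent of the noise draw; only the public
# post-trade state (x_new, Y(x_new)) is distorted for privacy.
```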

3.4. Arbitrage and Additional Arbitrage

In this paper, we study the case where an external market with infinite liquidity exists, and arbitrageurs can trade both with the CFMM and the external market. We consider the external market price as the “true” price. This setting is standard in the DeFi literature on CFMMs, for example, in Goyal et al. (2023); Evans et al. (2021); Milionis et al. (2022). Our results also hold when arbitrageurs trade on private knowledge of future or true prices, which is important when a bigger external market does not exist. Whenever the CFMM spot price deviates from the true price, an arbitrageur makes a risk-less profit by aligning the CFMM’s spot price with the true price.

We now illustrate the additional arbitrage problem with the noisy CFMM mechanism. Suppose for the moment that the privacy fee were zero and that Tod has exclusive access to the CFMM. Tod can exploit the noisy CFMM mechanism in the following manner.

3.4.1. Arbitrage on Noisy CFMM

  1. Tod observes the CFMM spot price p and the true price p̂.

  2. If p ≠ p̂, they make a trade with the CFMM such that the post-trade CFMM spot price p̃ satisfies p̃ = p̂. They specify some non-trivial privacy requirements. Tod makes a risk-free profit by taking the reverse trade on the external market.

  3. The CFMM executes its noise trade and ends up with a spot price p′, which differs from p̂ with nonzero probability.

  4. Steps 2 and 3 repeat until a zero noise trade is drawn.

Observe that the arbitrage profit of Step 2 is not due to the privacy feature of the CFMM. In this paper, we do not account for this component of the CFMM’s loss. In a traditional CFMM, arbitrage stops at Step 2. In a noisy CFMM, additional arbitrage profit is created in Step 3, which is the subject of study of this paper.
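The additional arbitrage of Step 3 shows up in a small simulation (assuming a constant-product pool, zero fees, and an illustrative ±1 noise trade): every noise trade moves the pool off the true price, and the correcting arbitrage of Step 2 pockets a strictly positive profit each round:

```python
import math, random

def arb_to_price(x, p_hat, k):
    """Move a constant-product pool (invariant k) to spot price p_hat;
    return the arbitrage profit (valuing X at p_hat) and the new reserves."""
    x_new = math.sqrt(k / p_hat)
    dx = x - x_new              # X the arbitrageur receives
    dy = k / x_new - k / x      # Y the arbitrageur pays
    return dx * p_hat - dy, x_new

random.seed(0)
k, p_hat = 10_000.0, 1.0
x = math.sqrt(k / p_hat)        # the pool starts at the true price
extra = 0.0
for _ in range(1_000):
    x += random.choice([-1.0, 1.0])        # step 3: noise trade distorts x
    profit, x = arb_to_price(x, p_hat, k)  # step 2: arbitrage corrects it
    extra += profit
assert extra > 0   # every noise trade hands the arbitrageur a free profit
```

With a privacy fee of zero, this accumulated `extra` is exactly the loss the mechanism must recover through the privacy fee.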

Definition 3.1 (Additional Arbitrage).

A CFMM C and a noisy CFMM C̃ have identical states and trading functions. For any true price p̂, the additional arbitrage is the difference between the maximum possible expected profit from trading with C̃ and that from trading with C.

4. Truthfulness and Priceability of Noisy CFMM Mechanism

The strategy above is not the only one trader Tod might follow. For example, they could conceivably leave the CFMM at a spot price p̃ different from the true price p̂ and make a sequence of multiple trades with the noisy CFMM for a higher overall profit. This motivates the following property.

Definition 4.1 (Truthfulness).

A noisy CFMM mechanism is truthful if a trader with no privacy requirements, strictly increasing utility in Y, no utility for X, exclusive access to the CFMM, and access to the external market always maximizes their utility by trading with the CFMM only once, such that the post-trade spot price equals the true price of X, and with privacy level ε = ∞.

This definition does not apply to traders who trade for intrinsic reasons without regard to price movements (so-called “uninformed traders”). It also does not apply to traders with privacy requirements, since we do not quantify the value that traders attach to privacy protection. It applies to strategic and informed traders who perform arbitrage without privacy requirements. Although defined narrowly, our notion of truthfulness is important for two reasons: (1) it determines whether the noisy CFMM will closely follow the true price and therefore be attractive to uninformed traders, and (2) it paves the way for us to compute correct privacy fees and give a minimal characterization of acceptable noise trade distributions.

We define a property to study the economic feasibility of the privacy feature on CFMMs.

Definition 4.2 (Priceability).

A noisy CFMM mechanism is priceable if there always exists a privacy fee which depends only on the CFMM state, its trading function, and the noise trade distribution such that the mechanism has the following property.

For any true price and CFMM state, the additional arbitrage is zero.

The crucial feature of Definition 4.2 is that the privacy fee cannot depend on the true price p̂. Observe that for a privacy fee of zero, the noisy CFMM creates strictly greater arbitrage profit than a traditional CFMM when the CFMM has non-zero curvature (§3.4.1).

Truthfulness and priceability have different motivations. Truthfulness ensures that the CFMM spot price follows the true price and is attractive to uninformed traders. Priceability ensures that the CFMM does not lose money to additional arbitrage created by the privacy feature. However, these two properties are closely related technically. The following observations follow from the definitions of these properties.

Observation 1.

Truthfulness implies priceability.

Observation 2.

There exists a privacy fee function under which a priceable noisy CFMM mechanism is truthful.

Before discussing our results characterizing the noise trade distribution and privacy fee, we first give an alternate formulation of a CFMM. A level curve of a trading function can be seen as a map from the amount of X in the reserves to the CFMM spot price. We denote this map by P(x) when the trading function and the level curve are clear from the context. When the trading function is not differentiable, P(x) is the largest subgradient at that point. Assumptions 1 and 2 imply that P(x) has the following properties.

Observation 3.

The spot price P(x) is monotonically decreasing, P(x) → ∞ as x → 0, and P(x) → 0 as x → ∞.

We similarly define P⁻¹(p) as the largest x for which the spot price is p, when the trading function and its level curve are clear from the context (since we do not re-invest fees into the reserves, our CFMM is path-independent and stays on its initial level curve). We make the following assumption for technical ease.
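For the constant product f(x, y) = xy on a level curve xy = k, both maps have closed forms; a small sketch with an illustrative k:

```python
import math

k = 10_000.0                    # level curve x * y = k

def P(x):
    # Constant-product spot price map: P(x) = k / x^2. It is monotonically
    # decreasing, -> infinity as x -> 0, and -> 0 as x -> infinity.
    return k / x**2

def P_inv(p):
    # Largest x at which the spot price is p.
    return math.sqrt(k / p)

assert math.isclose(P_inv(P(50.0)), 50.0)
assert P(10.0) > P(20.0)        # monotonically decreasing
```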

Assumption 3.

The trading fee is zero.

This assumption is for expository clarity and is not required. When a c percent trading fee is charged, the truthfulness property becomes approximate, in that an arbitrageur corrects the CFMM spot price only if it is more than c percent away from the true price. In practice, c is generally below 1%. Turning to our results, we first describe what does not work for truthfulness.

Theorem 4.3.

If the noise trade distribution 𝒟(Δ, ε, τ) has non-zero mean for some (Δ, ε, τ), then the noisy CFMM mechanism is not truthful for any finite privacy fee.

Proof.

Denote by (Δ̃, ε̃, τ̃) the trade size, privacy level, and masking interval for which 𝒟 has non-zero mean, and denote the mean by μ. We consider two cases, where μ is either positive or negative.

Case 1: μ > 0. Here, the situation in which truthfulness is violated has the true price p̂ exceeding the initial spot price p.

The arbitrageur’s profit under the truthful strategy is:

(1)   ∫_{P⁻¹(p)}^{P⁻¹(p̂)} (P(a) − p̂) da.

Consider the following strategy for the arbitrageur:

  1. Make a trade of (Δ̃, ε̃, τ̃), and pay the privacy fee γ.

  2. Make a non-private trade such that the post-trade CFMM spot price is p̂.

When the noise trade drawn is η, the profit with this strategy is:

−γ + ∫_{P⁻¹(p)}^{P⁻¹(p)+Δ̃} (P(a) − p̂) da + ∫_{P⁻¹(p)+Δ̃+η}^{P⁻¹(p̂)} (P(a) − p̂) da

The excess gain from deviating from the truthful strategy is

−γ + ∫_{P⁻¹(p)+Δ̃+η}^{P⁻¹(p)+Δ̃} (P(a) − p̂) da
= −γ + ∫_{P⁻¹(p)+Δ̃+η}^{P⁻¹(p)+Δ̃} P(a) da + η p̂

Recall that 𝔼(η) = μ > 0 in this case. The first two terms are bounded and independent of p̂. For large enough p̂, the positive third term dominates, and the result follows for this case.

Case 2: μ < 0. Here we construct a scenario where the true price p̂ is less than the initial spot price p. The arbitrageur’s profit under the truthful strategy is the same as (1).

Consider the following three-step strategy for the arbitrageur:

  1. Make a non-private trade with the CFMM to buy X such that its spot price becomes p′ (where p′ is a large value that will be set precisely later).

  2. Make a trade of (Δ̃, ε̃, τ̃), and pay the privacy fee γ.

  3. Make a non-private trade such that the post-trade CFMM spot price is p̂.

When the noise trade drawn is η, the profit with this strategy is:

−γ + ∫_{P⁻¹(p)}^{P⁻¹(p′)} (P(a) − p̂) da + ∫_{P⁻¹(p′)}^{P⁻¹(p′)+Δ̃} (P(a) − p̂) da + ∫_{P⁻¹(p′)+Δ̃+η}^{P⁻¹(p̂)} (P(a) − p̂) da

The excess gain from deviating from the truthful strategy is:

−γ + ∫_{P⁻¹(p)}^{P⁻¹(p′)} (P(a) − p̂) da + ∫_{P⁻¹(p′)}^{P⁻¹(p′)+Δ̃} (P(a) − p̂) da + ∫_{P⁻¹(p′)+Δ̃+η}^{P⁻¹(p̂)} (P(a) − p̂) da − ∫_{P⁻¹(p)}^{P⁻¹(p̂)} (P(a) − p̂) da
= −γ + ∫_{P⁻¹(p′)+Δ̃+η}^{P⁻¹(p′)+Δ̃} (P(a) − p̂) da
= −γ + η p̂ + ∫_{P⁻¹(p′)+Δ̃+η}^{P⁻¹(p′)+Δ̃} P(a) da.
Since P(·) is monotonically decreasing and positive, this is
≥ −γ + η p̂ − η P(P⁻¹(p′) + Δ̃).

Recall that 𝔼(η) = μ < 0. We construct the case where the true price p̂ is smaller than 1/|μ|. We set p′ in the arbitrageur’s strategy such that −μ P(P⁻¹(p′) + Δ̃) is larger than γ − μ p̂. Such a p′ always exists under Assumptions 1 and 2. Since our noise trade distributions have bounded support, a large enough level curve of the CFMM exists where the noise trade is supported at spot price p′. This implies that, in expectation, the arbitrageur can obtain a larger profit with our three-step strategy than under the truthful strategy. This completes the proof.

Discussion: We need two different arbitrage strategies for the μ > 0 and μ < 0 cases due to an asymmetry between X and Y. The noise trades are in units of X, and when the true price of X is small, the loss from these noise trades is small when they are made in a state close to the true price. To show that the arbitrageur can nonetheless make a large profit, we need an extra step in their strategy. ∎

This result shows that a zero-mean noise trade distribution is necessary for truthfulness. For non-zero-mean noise trade distributions, the proof shows that there exists a true price p̂ under which the arbitrageur is better off paying the privacy fee in exchange for the additional arbitrage that the noise trade creates. Also note the following corollary.
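A small numeric sketch of Case 1 of the proof (constant-product curve and an illustrative positively biased two-point noise trade of our choosing): the arbitrageur’s expected excess gain grows linearly in p̂, so no finite, price-independent fee can cover it.

```python
k = 10_000.0
Y = lambda x: k / x            # level-curve map for f(x, y) = x*y
A = 110.0                      # X reserves after the arbitrageur's trade

def expected_excess_gain(noise, p_hat):
    """Expected excess over the truthful strategy (Case 1 of the proof,
    privacy fee omitted): E[ integral_{A+eta}^{A} P(a) da + eta * p_hat ],
    where integral_{A+eta}^{A} P(a) da = Y(A + eta) - Y(A)."""
    return sum(pr * (Y(A + eta) - Y(A) + eta * p_hat) for eta, pr in noise)

biased = [(+1.0, 0.75), (-1.0, 0.25)]   # noise trade with mean mu = 0.5 > 0
gains = [expected_excess_gain(biased, p) for p in (1.0, 10.0, 100.0)]
# The excess gain grows like mu * p_hat: no fixed, price-independent
# privacy fee can absorb it, so the mechanism is not truthful.
assert gains[0] < gains[1] < gains[2]
```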

Corollary 4.4.

If the noise trade distribution $\mathcal{D}(\Delta,\varepsilon,\tau)$ is not zero-mean for some $(\Delta,\varepsilon,\tau)$, the noisy CFMM is not priceable.

This result establishes that the noise trade distribution being zero-mean is necessary for priceability as well. We later show that it is also sufficient (Corollary 4.7). Before presenting that result, we define a quantity termed the ‘noise fee’ as a function of the CFMM state, trading function, and noise trade distribution. It captures the expected arbitrage gain from making the trade that “reverses the noise” when the CFMM spot price equals the true price.

Definition 4.5 (Noise fee).

For CFMM state $(\tilde{x},\mathcal{Y}(\tilde{x}))$, trade $\tilde{\Delta}$, privacy specification $(\tilde{\tau},\tilde{\varepsilon})$, and noise trade distribution $\mathcal{D}$, the noise fee $\Gamma$ is
\[
\Gamma=\int_{-\infty}^{\infty}\mathcal{D}(\eta)\int_{\tilde{x}+\tilde{\Delta}+\eta}^{\tilde{x}+\tilde{\Delta}}\left(P(a)-P(\tilde{x}+\tilde{\Delta})\right)\,da\,d\eta.
\]

For zero-mean noise trade distributions, the noise fee simplifies to
\[
\Gamma=\int_{-\infty}^{\infty}\mathcal{D}(\eta)\int_{\tilde{x}+\tilde{\Delta}+\eta}^{\tilde{x}+\tilde{\Delta}}P(a)\,da\,d\eta.
\]

Note that the noise fee is defined to account for the additional arbitrage when the post-trade spot price equals the true price. Importantly, it is oblivious to the actual true price of X. However, as we show in the next result (Theorem 4.6), for zero-mean $\mathcal{D}$, the noise fee is the “right” choice for the privacy fee: it makes the noisy CFMM mechanism truthful.
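To make the definition concrete, the following sketch (our own Python illustration; the function names are ours, not the paper's) evaluates the noise fee of Definition 4.5 for a discrete noise trade distribution on a constant product curve $P(x)=K/x^{2}$, and checks the zero-mean simplification.

```python
def spot_price(a, K):
    # Constant product CFMM: spot price P(x) = K / x^2
    return K / a**2

def integral_price(lo, hi, K):
    # Closed-form integral of P(a) = K / a^2 from lo to hi
    return K / lo - K / hi

def noise_fee(x, delta, noise, K):
    """Noise fee (Definition 4.5) for reserves x, trade delta, and a
    discrete noise trade distribution given as (eta, probability) pairs."""
    A = x + delta  # post-trade reserves of X
    return sum(
        p * (integral_price(A + eta, A, K) + eta * spot_price(A, K))
        for eta, p in noise
    )

def noise_fee_zero_mean(x, delta, noise, K):
    """Simplified form, valid when the noise trade distribution is zero-mean."""
    A = x + delta
    return sum(p * integral_price(A + eta, A, K) for eta, p in noise)
```

For a zero-mean two-point distribution both forms agree, and the fee is positive: reversing the noise is profitable in expectation.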

Theorem 4.6.

If the noise trade distribution is zero-mean, a privacy fee exists under which the noisy CFMM mechanism is truthful.

The minimum such privacy fee is the noise fee $\Gamma$ (Definition 4.5).

Appendix A gives a proof, which shows that for any finitely-bounded sequence of trades made by an arbitrageur, even one adaptive to the realized noise, the expected gain in excess of that of the truthful strategy, before privacy fees, equals the total noise fee; charging the noise fee therefore cancels this excess. The proof invokes Doob's optional stopping theorem for martingales (Grimmett and Stirzaker, 2020, Chapter 12.5). Thus, risk-neutral arbitrageurs should treat a noisy CFMM the same as a traditional CFMM. Privacy therefore does not hamper the basic functionality of a CFMM of offering close-to-true prices in the presence of arbitrageurs.

The following corollary follows from Observation 1.

Corollary 4.7.

A noisy CFMM mechanism is priceable if the noise trade distribution $\mathcal{D}$ is always zero-mean.

Therefore, for zero-mean noise trade distributions, the mechanism can protect the CFMM from additional arbitrage without observing the true price. The privacy feature does not harm the returns of risk-neutral LPs when the privacy fee is set correctly.

4.1. Choice of Noise Trade Distributions

We first discuss a noise distribution from the LDP literature, which is well-suited for our application.

Definition 4.8 (Binary Mechanism of Duchi et al. (2018)).

For privacy specification $(\tau=[l,u],\varepsilon)$ and data (trade) $\Delta$, add noise

  1. $\left(\frac{u+l}{2}-\Delta-\frac{u-l}{2}\cdot\frac{e^{\varepsilon}+1}{e^{\varepsilon}-1}\right)$ with probability $\frac{1}{2}\left[1-\Delta^{\prime}\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\right]$;

  2. $\left(\frac{u+l}{2}-\Delta+\frac{u-l}{2}\cdot\frac{e^{\varepsilon}+1}{e^{\varepsilon}-1}\right)$ with probability $\frac{1}{2}\left[1+\Delta^{\prime}\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\right]$.

Here $\Delta^{\prime}=\frac{2\Delta-(u+l)}{u-l}$.

Importantly, this noise is zero-mean and of bounded magnitude. Kairouz et al. (2014) showed that in the high-privacy regime, i.e., for all $\varepsilon$ less than some $\varepsilon^{*}$, the binary mechanism is optimal for minimizing the ‘information loss’ due to privacy.
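A minimal Python sketch of the binary mechanism (our own illustration; helper names are ours). It returns the two possible noise trades with their probabilities; one can check that the mean is zero and that the two possible *reported* trades $\Delta+\eta$ do not depend on $\Delta$, which is what yields $\varepsilon$-LDP.

```python
import math

def binary_mechanism(delta, l, u, eps):
    """Noise values and probabilities of the binary mechanism of
    Duchi et al. (2018) for a trade delta in the masking interval [l, u].
    Returns [(noise_low, prob_low), (noise_high, prob_high)]."""
    c = (math.exp(eps) + 1) / (math.exp(eps) - 1)
    d_norm = (2 * delta - (u + l)) / (u - l)      # normalized trade, Delta'
    n_low = (u + l) / 2 - delta - (u - l) / 2 * c
    n_high = (u + l) / 2 - delta + (u - l) / 2 * c
    # note: 1/c = (e^eps - 1)/(e^eps + 1)
    p_low = 0.5 * (1 - d_norm / c)
    p_high = 0.5 * (1 + d_norm / c)
    return [(n_low, p_low), (n_high, p_high)]

def sample_noise(delta, l, u, eps, rng):
    """Draw one noise trade (rng is, e.g., an instance of random.Random)."""
    (n_low, p_low), (n_high, _) = binary_mechanism(delta, l, u, eps)
    return n_low if rng.random() < p_low else n_high
```

The masked output is always one of two fixed points of $[l,u]$'s affine image, and the likelihood ratio between any two trades in $[l,u]$ is at most $e^{\varepsilon}$.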

We are interested in designing noise distributions that minimize the privacy fee. Observe that the privacy fee is a convex (in fact, linear) function of $\mathcal{D}(\eta)$ for every $\eta$, for all states $\tilde{x}$ and trades $\tilde{\Delta}$. The privacy requirements can be expressed as convex constraints on $\mathcal{D}(\eta)$, as in Kairouz et al. (2014), and the zero-mean requirement is a linear constraint. Given a state and trading function, we can therefore write a convex program to find the “cheapest” noise trade distribution that preserves $(\tau,\varepsilon)$-PLDP. However, since noise distributions must be independent of the CFMM state, finding a distribution that works reasonably well for all states is not obvious and is the subject of future work.

4.2. Relation Between Privacy Fee and Liquidity

Notice from Definition 4.5 that the privacy fee is zero in a region where the spot price is constant. This matches the intuition that the constant-sum CFMM provides privacy for free due to its lack of curvature: it has “infinite” liquidity at a single price point. We define liquidity at a price as follows.

Definition 4.9 (Liquidity).

When the spot price $P(\cdot)$ is differentiable and $\frac{dP(x)}{dx}\big|_{x=P^{-1}(p)}\neq 0$, the liquidity $L(p)$ is $\left(1\Big/\frac{dP(x)}{dx}\right)\Big|_{x=P^{-1}(p)}$.

We set $L(p)=0$ when $P(\cdot)$ is not differentiable, and

$L(p)$ is undefined when $\frac{dP(x)}{dx}\big|_{x=P^{-1}(p)}=0$.

For a given trade, the privacy fee captures the additional arbitrage profit available from reversing the associated noise trade. Intuitively, this quantity is larger if the noise trade has a larger “impact” on the CFMM spot price, and this impact is larger when the liquidity is smaller. We illustrate this intuition with an example of the widely adopted constant product “Uniswap” CFMM and the binary mechanism in Appendix B. The intuition also holds more generally and formally, as the following result shows.

Proposition 4.10.

When the noise trade distribution is zero-mean, the privacy fee for a trade $\tilde{\Delta}$ at spot price $p$ is approximately inversely proportional to the CFMM liquidity at $p$, provided $\tilde{\Delta}$ and the maximum possible size of the noise trade are $o(P^{-1}(p))$.

Proof.

The privacy fee equals the noise fee by Theorem 4.6. We take a first-order approximation of $P$ around $P^{-1}(p)+\tilde{\Delta}$. In Definition 4.5 of the noise fee, the price difference $P(a)-P(P^{-1}(p)+\tilde{\Delta})$ over the interval $a\in[P^{-1}(p)+\tilde{\Delta}+\eta,\,P^{-1}(p)+\tilde{\Delta}]$ is approximately $\big(a-(P^{-1}(p)+\tilde{\Delta})\big)\big/L(p)$, i.e., inversely proportional to the liquidity at price $p$, when the trade $\tilde{\Delta}$ and noise trade $\eta$ are $o(P^{-1}(p))$. Integrating over $a$ and $\eta$ preserves this proportionality. ∎
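As a sanity check (our own Python sketch, not from the paper), the inverse proportionality can be verified numerically on the constant product curve $P(x)=K/x^{2}$ at spot price $1$, where the liquidity magnitude is $\sqrt{K}/2$: quadrupling $K$ doubles the liquidity and roughly halves the fee.

```python
import math

def privacy_fee_cp(K, delta, noise):
    """Noise fee on the constant product curve P(x) = K/x^2 at spot price 1
    (so reserves x = sqrt(K)), for a zero-mean discrete noise trade
    distribution given as (eta, probability) pairs."""
    x = math.sqrt(K)
    A = x + delta  # post-trade reserves
    # for zero-mean noise, the fee is E[ integral of P from A+eta to A ]
    return sum(p * (K / (A + eta) - K / A) for eta, p in noise)
```

The ratio between successive fees approaches $2$ as the reserves grow, matching Proposition 4.10's asymptotic claim.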

This result shows that more liquid CFMMs are better suited for the needs of privacy-seeking traders. However, these CFMMs are prone to larger divergence loss, and also require larger capital investment from LPs. A holistic analysis of the design trade-offs of a noisy CFMM mechanism, possibly with custom-made trading functions, is the subject of future work.

5. Conclusions

In this work, we develop a noisy CFMM mechanism for privacy protection. Users can specify their desired privacy level and a masking interval for their trade, and pay a personalized privacy fee. We find the minimum fee required to ensure that arbitrageurs cannot exploit the CFMM's privacy feature, and show that the noise being zero-mean in trade size is a necessary and sufficient condition for such a minimal fee to exist (we call this condition the priceability of the mechanism). We also show that our noisy CFMM is truthful under this fee, i.e., arbitrageurs are incentivized to align the CFMM's spot price with the external market price. For future work, it would be useful to design noise trade distributions that reduce the cost implied by the privacy fee. Efficient instantiations of our model, particularly the private, secure account required for conducting the noise trades, are also avenues for future research.

References

  • mac (2016) 2016. Apple Differential Privacy Technical Overview. https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf. Accessed 6/3/2023.
  • Adams et al. (2020) Hayden Adams, Noah Zinsmeister, and Dan Robinson. 2020. Uniswap v2 core. (2020).
  • Amihud and Mendelson (1980) Yakov Amihud and Haim Mendelson. 1980. Dealership market: Market-making with inventory. Journal of financial economics 8, 1 (1980), 31–53.
  • Angeris and Chitra (2020) Guillermo Angeris and Tarun Chitra. 2020. Improved price oracles: Constant function market makers. In Proceedings of the 2nd ACM Conference on Advances in Financial Technologies. 80–91.
  • Angeris et al. (2021) Guillermo Angeris, Alex Evans, and Tarun Chitra. 2021. A note on privacy in constant function market makers. arXiv preprint arXiv:2103.01193 (2021).
  • Banerjee et al. (2021) Aritra Banerjee, Michael Clear, and Hitesh Tewari. 2021. zkhawk: Practical private smart contracts from mpc-based hawk. In 2021 3rd Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS). IEEE, 245–248.
  • Baum et al. (2022) Carsten Baum, James Hsin-yu Chiang, Bernardo David, and Tore Kasper Frederiksen. 2022. Eagle: Efficient privacy preserving smart contracts. Cryptology ePrint Archive (2022).
  • Chan et al. (2011) T-H Hubert Chan, Elaine Shi, and Dawn Song. 2011. Private and continual release of statistics. ACM Transactions on Information and System Security (TISSEC) 14, 3 (2011), 1–24.
  • Chen et al. (2016) Rui Chen, Haoran Li, A Kai Qin, Shiva Prasad Kasiviswanathan, and Hongxia Jin. 2016. Private spatial data aggregation in the local setting. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 289–300.
  • Chen and Pennock (2012) Yiling Chen and David M Pennock. 2012. A utility framework for bounded-loss market makers. arXiv preprint arXiv:1206.5252 (2012).
  • Chitra et al. (2022) Tarun Chitra, Guillermo Angeris, and Alex Evans. 2022. Differential Privacy in Constant Function Market Makers. In Financial Cryptography and Data Security: 26th International Conference, FC 2022, Grenada, May 2–6, 2022, Revised Selected Papers. Springer, 149–178.
  • Cummings et al. (2016) Rachel Cummings, David M Pennock, and Jennifer Wortman Vaughan. 2016. The possibilities and limitations of private prediction markets. In Proceedings of the 2016 ACM Conference on Economics and Computation. 143–160.
  • Dai (2021) Wei Dai. 2021. Flexible anonymous transactions (Flax): Towards privacy-preserving and composable decentralized finance. Cryptology ePrint Archive (2021).
  • Davidow and Manevich (2023) Danielle Movsowitz Davidow and Yacov Manevich. 2023. Privacy-Preserving Payment System With Verifiable Local Differential Privacy. Cryptology ePrint Archive (2023).
  • Duchi et al. (2018) John C Duchi, Michael I Jordan, and Martin J Wainwright. 2018. Minimax optimal procedures for locally private estimation. J. Amer. Statist. Assoc. 113, 521 (2018), 182–201.
  • Dwork et al. (2010) Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N Rothblum. 2010. Differential privacy under continual observation. In Proceedings of the forty-second ACM symposium on Theory of computing. 715–724.
  • Erlingsson et al. (2014) Úlfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. 2014. Rappor: Randomized aggregatable privacy-preserving ordinal response. In Proceedings of the 2014 ACM SIGSAC conference on computer and communications security. 1054–1067.
  • Evans et al. (2021) Alex Evans, Guillermo Angeris, and Tarun Chitra. 2021. Optimal fees for geometric mean market makers. In International Conference on Financial Cryptography and Data Security. Springer, 65–79.
  • Frongillo et al. (2023) Rafael Frongillo, Maneesha Papireddygari, and Bo Waggoner. 2023. An Axiomatic Characterization of CFMMs and Equivalence to Prediction Markets. arXiv preprint arXiv:2302.00196 (2023).
  • Frongillo and Waggoner (2018) Rafael Frongillo and Bo Waggoner. 2018. Bounded-loss private prediction markets. Advances in Neural Information Processing Systems 31 (2018).
  • Glosten and Milgrom (1985) Lawrence R Glosten and Paul R Milgrom. 1985. Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. Journal of financial economics 14, 1 (1985), 71–100.
  • Goyal et al. (2023) Mohak Goyal, Geoffrey Ramseyer, Ashish Goel, and David Mazières. 2023. Finding the right curve: Optimal design of constant function market makers. In Proceedings of the 24th ACM Conference on Economics and Computation. 783–812.
  • Grimmett and Stirzaker (2020) Geoffrey Grimmett and David Stirzaker. 2020. Probability and random processes. Oxford university press.
  • Hanson (2007) Robin Hanson. 2007. Logarithmic markets scoring rules for modular combinatorial information aggregation. The Journal of Prediction Markets 1, 1 (2007), 3–15.
  • Kairouz et al. (2014) Peter Kairouz, Sewoong Oh, and Pramod Viswanath. 2014. Extremal mechanisms for local differential privacy. Advances in neural information processing systems 27 (2014).
  • Kasiviswanathan et al. (2011) Shiva Prasad Kasiviswanathan, Homin K Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. 2011. What can we learn privately? SIAM J. Comput. 40, 3 (2011), 793–826.
  • Kosba et al. (2016) Ahmed Kosba, Andrew Miller, Elaine Shi, Zikai Wen, and Charalampos Papamanthou. 2016. Hawk: The blockchain model of cryptography and privacy-preserving smart contracts. In 2016 IEEE symposium on security and privacy (SP). IEEE, 839–858.
  • Kyle (1985) Albert S Kyle. 1985. Continuous auctions and insider trading. Econometrica: Journal of the Econometric Society (1985), 1315–1335.
  • Micali et al. (1999) Silvio Micali, Michael Rabin, and Salil Vadhan. 1999. Verifiable random functions. In 40th annual symposium on foundations of computer science (cat. No. 99CB37039). IEEE, 120–130.
  • Milionis et al. (2022) Jason Milionis, Ciamac C Moallemi, Tim Roughgarden, and Anthony Lee Zhang. 2022. Automated Market Making and Loss-Versus-Rebalancing. arXiv preprint arXiv:2208.06046 (2022).
  • Mohan (2022) Vijay Mohan. 2022. Automated market makers and decentralized exchanges: A DeFi primer. Financial Innovation 8, 1 (2022), 20.
  • Penumbra (2023) Penumbra. 2023. ZSwap by Penumbra. https://protocol.penumbra.zone/main/zswap.html. Accessed 6/3/2023.
  • WhiteHat (2020) Barry WhiteHat. 2020. Why you can’t build a private uniswap with ZKPs. https://ethresear.ch/t/why-you-cant-build-a-private-uniswap-with-zkps/7754 Accessed 07/22/2023.
  • Yang et al. (2020) Mengmeng Yang, Lingjuan Lyu, Jun Zhao, Tianqing Zhu, and Kwok-Yan Lam. 2020. Local differential privacy and its applications: A comprehensive survey. arXiv preprint arXiv:2008.03686 (2020).
  • Zhou et al. (2022) Liyi Zhou, Xihan Xiong, Jens Ernstberger, Stefanos Chaliasos, Zhipeng Wang, Ye Wang, Kaihua Qin, Roger Wattenhofer, Dawn Song, and Arthur Gervais. 2022. Sok: Decentralized finance (defi) attacks. Cryptology ePrint Archive (2022).
  • Zhu et al. (2022) Jiawei Zhu, Zhuangtong Huang, Yixin Xu, Jerome Yen, and Ye Wang. 2022. Data Privacy Protection in DeFi Protocols. arXiv preprint arXiv:2211.16082 (2022).

Appendix A Proof of Theorem 4.6

Proof.

The arbitrageur may deploy a strategy of making a sequence of trades with the CFMM, and their strategy may be adaptive to the noise draws on previous trades.

Say the arbitrageur makes a bounded sequence $S$ of trades with the noisy CFMM. Denote the post-trade amount of X in the CFMM reserves by $s_{i}$ for trade $i\in S$, and let $\eta_{i}$ denote the noise trade after trade $i$. The choice of $s_{i}$ may depend on $\eta_{j}$ and $s_{j}$ for all trades $j$ made before trade $i$.

Let the true price be $\hat{p}$, and the initial spot price be $p$. The profit of the truthful strategy is:

\[
\int_{P^{-1}(p)}^{P^{-1}(\hat{p})}(P(a)-\hat{p})\,da.
\]

Denote $P^{-1}(p)$ by $s_{0}$ and set $\eta_{0}=0$ for notational convenience. The profit of the alternate strategy is:

\[
-\sum_{i\in S}\gamma_{i}+\sum_{i\in S}\int_{s_{i-1}+\eta_{i-1}}^{s_{i}}(P(a)-\hat{p})\,da,
\]

where $\gamma_{i}$ is the privacy fee corresponding to trade $i$.

The excess profit over the truthful strategy is:

\[
-\sum_{i\in S}\gamma_{i}+\sum_{i\in S}\int_{s_{i-1}+\eta_{i-1}}^{s_{i}}(P(a)-\hat{p})\,da-\int_{P^{-1}(p)}^{P^{-1}(\hat{p})}(P(a)-\hat{p})\,da.
\]

This profit is no larger than in the case where the arbitrageur leaves the CFMM at spot price $\hat{p}$: if not, an additional trade moving the spot price to $\hat{p}$ would make strictly positive profit. Further, since the integrands are all identical, the path integrals telescope against the truthful profit, and only the noise-reversal terms remain:
\begin{align*}
&\leq-\sum_{i\in S}\gamma_{i}+\sum_{i\in S}\int_{s_{i}+\eta_{i}}^{s_{i}}(P(a)-\hat{p})\,da\\
&=-\sum_{i\in S}\gamma_{i}+\sum_{i\in S}\int_{s_{i}+\eta_{i}}^{s_{i}}P(a)\,da-\sum_{i\in S}\int_{s_{i}+\eta_{i}}^{s_{i}}\hat{p}\,da.
\end{align*}

For each trade $i$, $\Gamma_{i}=\mathbb{E}\left(\int_{s_{i}+\eta_{i}}^{s_{i}}P(a)\,da-\int_{s_{i}+\eta_{i}}^{s_{i}}\hat{p}\,da\right)$. This follows from the definition of $\Gamma_{i}$ and the fact that $\mathbb{E}\left(\int_{s_{i}+\eta_{i}}^{s_{i}}\hat{p}\,da\right)=-\hat{p}\,\mathbb{E}(\eta_{i})=0$. Therefore, the arbitrageur's cumulative profit process forms a martingale whose increments have expected value $0$. For any bounded trade sequence, Doob's optional stopping theorem (Grimmett and Stirzaker, 2020, Chapter 12.5) states that the expected value of the stopped martingale equals its initial value.

Therefore, setting $\gamma_{i}=\Gamma_{i}$ ensures the truthfulness of the mechanism. This fee is also minimal: if $\gamma_{i}<\Gamma_{i}$, the arbitrageur can, in expectation, do better than the truthful strategy in only one step. ∎
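The martingale step can be checked concretely (our own Python sketch; names are ours): at the true-price state, the expected arbitrage gain from reversing the noise trade exactly equals the noise fee $\Gamma$, so after paying $\gamma=\Gamma$ the expected increment of the arbitrageur's profit is zero.

```python
K = 10_000.0                  # constant product level curve x * y = K

def P(a):                     # spot price
    return K / a**2

def int_P(lo, hi):            # integral of P from lo to hi
    return K / lo - K / hi

def noise_fee(s, noise):
    """Gamma (Definition 4.5) at post-trade reserves s, for a discrete
    noise trade distribution given as (eta, probability) pairs."""
    return sum(p * (int_P(s + eta, s) + eta * P(s)) for eta, p in noise)

def expected_reversal_gain(s, true_price, noise):
    """Expected profit, valued at the true price, of trading from s + eta
    back to s -- the increment each noise trade hands the arbitrageur."""
    return sum(p * (int_P(s + eta, s) + eta * true_price) for eta, p in noise)
```

At $s=P^{-1}(\hat{p})$ we have $P(s)=\hat{p}$, so the two quantities coincide and the per-round expected net profit under fee $\Gamma$ is zero; with a fee below $\Gamma$, the increment would be strictly positive.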

Appendix B Privacy Fee on the constant product CFMM

For $f(x,y)=xy$: the spot price is $P(x)=\frac{y}{x}=\frac{K}{x^{2}}$, where $K$ specifies the level curve of $f$. For initial state $\tilde{x}$ and trade $\tilde{\Delta}$, the privacy fee is given by:

\[
\Gamma=\int_{-\infty}^{\infty}\mathcal{D}(\eta)\int_{\tilde{x}+\tilde{\Delta}+\eta}^{\tilde{x}+\tilde{\Delta}}\frac{K}{a^{2}}\,da\,d\eta=-\int_{-\infty}^{\infty}\frac{\mathcal{D}(\eta)\,\eta K}{(\tilde{x}+\tilde{\Delta})(\tilde{x}+\tilde{\Delta}+\eta)}\,d\eta.
\]

Say, for the given privacy specification, the noise $\eta$ takes value $\eta_{1}$ with probability $p_{1}$ and $\eta_{2}$ with probability $p_{2}$. By the zero-mean condition, we have $p_{1}\eta_{1}+p_{2}\eta_{2}=0$. The privacy fee is then:

\begin{align*}
\Gamma&=\frac{-K}{\tilde{x}+\tilde{\Delta}}\left(\frac{p_{1}\eta_{1}}{\tilde{x}+\tilde{\Delta}+\eta_{1}}+\frac{p_{2}\eta_{2}}{\tilde{x}+\tilde{\Delta}+\eta_{2}}\right)\\
&=\frac{-K}{\tilde{x}+\tilde{\Delta}}\cdot\frac{p_{1}\eta_{1}(\tilde{x}+\tilde{\Delta}+\eta_{2})+p_{2}\eta_{2}(\tilde{x}+\tilde{\Delta}+\eta_{1})}{(\tilde{x}+\tilde{\Delta}+\eta_{1})(\tilde{x}+\tilde{\Delta}+\eta_{2})}.
\end{align*}
By the zero-mean condition and since $p_{1}+p_{2}=1$, we have
\[
\Gamma=\frac{-K\eta_{1}\eta_{2}}{(\tilde{x}+\tilde{\Delta})(\tilde{x}+\tilde{\Delta}+\eta_{1})(\tilde{x}+\tilde{\Delta}+\eta_{2})}.
\]

Observe that doubling the liquidity at all prices quadruples $K$ and doubles $\tilde{x}$, where $\tilde{x}=P^{-1}(p)$ for spot price $p$. For a trade $\tilde{\Delta}$ of order $o(\tilde{x})$ and maximum noise magnitude $\max(|\eta_{1}|,|\eta_{2}|)$ of order $o(\tilde{x})$, the fee $\Gamma$ approximately halves upon doubling the liquidity. The approximation is closer for larger $\tilde{x}$.

Consider the following numerical example of the privacy fee for a typical trade size under the binary mechanism. Let the trade be $\tilde{\Delta}=1$ with privacy requirements $\tau=[0,2]$ and $\varepsilon=2$, made when the CFMM spot price is $1$ and the pool holds $100$ units of X. The privacy fee for this trade of 1 unit of X is $1.67\times 10^{-2}$ units of Y. If the pool liquidity were twice as large, i.e., if it held $200$ units of X at spot price $1$, the privacy fee for the same trade would be $0.849\times 10^{-2}$ units of Y. Of course, the binary mechanism is not optimized for minimizing the privacy fee, and other noise distributions can likely achieve much smaller fees.
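These numbers can be reproduced with a short script (our own check in Python; helper names are ours), combining the binary mechanism of Definition 4.8 with the closed-form fee derived above:

```python
import math

def binary_noise(delta, l, u, eps):
    # binary mechanism of Duchi et al. (2018): two-point, zero-mean noise
    c = (math.exp(eps) + 1) / (math.exp(eps) - 1)
    d = (2 * delta - (u + l)) / (u - l)
    n1 = (u + l) / 2 - delta - (u - l) / 2 * c
    n2 = (u + l) / 2 - delta + (u - l) / 2 * c
    return [(n1, 0.5 * (1 - d / c)), (n2, 0.5 * (1 + d / c))]

def fee_closed_form(x, delta, noise):
    """Gamma = -K eta1 eta2 / ((x+D)(x+D+eta1)(x+D+eta2)) for a two-point
    zero-mean noise distribution; spot price 1 at reserves x gives K = x^2."""
    (n1, _), (n2, _) = noise
    K, A = x * x, x + delta
    return -K * n1 * n2 / (A * (A + n1) * (A + n2))
```

Evaluating at reserves of 100 and 200 units of X recovers the fees of $1.67\times 10^{-2}$ and $0.849\times 10^{-2}$ units of Y quoted above.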