

It’s Not All Black and White: Degree of Truthfulness for Risk-Avoiding Agents

Submission 1843
Abstract.

The classic notion of truthfulness requires that no agent has a profitable manipulation – an untruthful report that, for some combination of reports of the other agents, increases her utility. This strong notion implicitly assumes that the manipulating agent either knows what all other agents are going to report, or is willing to take the risk and act as if she knows their reports.

Without knowledge of the others' reports, most manipulations are risky – they might decrease the manipulator's utility for some other combinations of reports by the other agents. Accordingly, a recent paper (Bu, Song and Tao, "On the existence of truthful fair cake cutting mechanisms", Artificial Intelligence 319 (2023), 103904) suggests a relaxed notion, which we refer to as risk-avoiding truthfulness (RAT); it requires only that no agent can gain from a safe manipulation – one that is sometimes beneficial and never harmful.

Truthfulness and RAT are two extremes: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some – but not all – of the other agents. This paper introduces the RAT-degree of a mechanism, defined as the smallest number of agents whose reports, if known, may allow another agent to safely manipulate, or $n$ if there is no such number. This notion interpolates between classic truthfulness (degree $n$) and RAT (degree at least $1$): a mechanism with a higher RAT-degree is harder to manipulate safely.

To illustrate the generality and applicability of this concept, we analyze the RAT-degree of prominent mechanisms across various social choice settings, including auctions, indivisible goods allocations, cake-cutting, voting, and stable matchings.


1. Introduction

The Holy Grail of mechanism design is the truthful mechanism – a mechanism in which the (weakly) dominant strategy of each agent is truthfully reporting her type. But in most settings, there is provably no truthful mechanism that satisfies other desirable properties such as budget-balance, efficiency or fairness. Practical mechanisms are thus manipulable, in the sense that some agent $a_i$ has a profitable manipulation – for some combination of reports by the other agents, agent $a_i$ can induce the mechanism to yield an outcome (strictly) better for her by reporting non-truthfully.

This notion of manipulability implicitly assumes that the manipulating agent either knows the reports made by all other agents, or is willing to take the risk and act as if she knows their reports. Without knowledge of the others' reports, most manipulations are risky – they might decrease the manipulator's utility for some other combinations of reports by the other agents. In practice, many agents are risk-avoiding and will not manipulate in such cases. This highlights a gap between the standard definition and the nature of such agents.

To illustrate, consider a simple example in the context of voting. Under the Plurality rule, agents vote for their favorite candidate, and the candidate with the most votes wins. If an agent knows that her preferred candidate has no chance of winning, she may find it beneficial to vote for her second-choice candidate, to prevent an even less preferred candidate from winning. However, if the agent lacks precise knowledge of the other votes and decides to vote for her second choice, it may backfire – she might inadvertently cause an outcome worse than if she had voted truthfully, for instance if the other agents vote in a way that makes her the tie-breaker.

Indeed, the literature on cake-cutting (e.g., (brams2006better; BU2023Rat)), voting (e.g., (slinko2008nondictatorial; slinko2014ever; hazon2010complexity)) and, recently, stable matching (e.g., (regret2018Fernandez; chen2024regret)) studies truthfulness among such agents. In particular, BU2023Rat introduced a weaker notion of truthfulness, suitable for risk-avoiding agents, for cake-cutting. Their definition can be adapted to any problem as follows.

Let us first define a safe manipulation as a non-truthful report that can never harm the agent's utility. Based on that, a mechanism is safely manipulable if some agent $a_i$ has a manipulation that is both profitable and safe; otherwise, the mechanism is risk-avoiding truthful (RAT).

Standard truthfulness and RAT can be seen as two extremes with respect to safe-and-profitable manipulations: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some — but not all — of the other agents.

This paper introduces the RAT-degree – a new measure that quantifies how robust a mechanism is to such safe-and-profitable manipulations. The RAT-degree of a mechanism is an integer $d\in\{0,\ldots,n\}$, which represents – roughly – the smallest number of agents whose reports, if known, may allow another agent to safely manipulate; or $n$ if there is no such number (see Section 3 for the formal definition).

This measure allows us to position mechanisms along a spectrum. A higher degree implies that an agent has to work harder in order to collect the information required for a successful manipulation; therefore, it is less likely that the mechanism will be manipulated. On one end of the spectrum are truthful mechanisms, where no agent can safely manipulate even with complete knowledge of all other agents; the RAT-degree of such mechanisms is $n$. On the other end are mechanisms that are safely manipulable, where no knowledge about other agents is required to safely manipulate; the RAT-degree of such mechanisms is $0$.

Importantly, the RAT-degree is determined by the worst-case scenario for the mechanism designer, which corresponds to the best-case scenario for the manipulating agents. The way we measure the amount of knowledge is based on a general objective applicable to all social choice settings.

Contributions.

Our main contribution is the definition of the RAT-degree.

To illustrate the generality and usefulness of this concept, we selected several different social choice domains and analyzed the RAT-degree of some prominent mechanisms in each domain. As our goal is mainly to illustrate the new concepts, we did not attempt to analyze all mechanisms and all special cases of each mechanism, but rather focused on cases that allow for a simpler analysis. To prove an upper bound on the RAT-degree, we need to show a profitable manipulation. However, in contrast to usual proofs of manipulability, we also have to analyze more carefully how much knowledge about other agents suffices to guarantee the safety of the manipulation.

Organization.

Section 2 introduces the model and required definitions. Section 3 presents the definition of the RAT-degree. Section 4 explores auctions for a single good. Section 5 examines indivisible goods allocations. Section 6 focuses on cake-cutting. Section 7 addresses single-winner ranked voting. Section 8 considers stable matchings. Section 9 concludes with some future work directions.

Due to space constraints, most proofs are delegated to appendices, but we have attempted to provide intuitive proof sketches in the paper body.

1.1. Related Work

There is a vast body of work on truthfulness relaxations and alternative measurements of manipulability. Due to space constraints, we provide only a brief overview here; a more in-depth discussion of these works and their relation to RAT-degree can be found in Appendix A.

Truthfulness Relaxations.

Various truthfulness relaxations focus on a certain subset of all possible manipulations, which are considered more "likely", and require that no manipulation from this subset is profitable. Different relaxations consider different subsets of "likely" manipulations.

BU2023Rat introduce the definition upon which this paper builds: risk-avoiding truthfulness (RAT),¹ which assumes that agents manipulate only when it is sometimes beneficial but never harmful. We extend their work by generalizing RAT from cake-cutting to any social choice problem, and by suggesting a quantitative measure of the robustness of a mechanism to such manipulations.

¹ The original term of BU2023Rat is risk-averse truthfulness. However, since the definition assumes that agents completely avoid any element of risk, we adopt this new name, aiming to more accurately reflect this assumption.

brams2006better propose maximin strategy-proofness, where an agent manipulates only if it is always beneficial, making it a weaker condition than RAT. troyan2020obvious introduce not-obvious manipulability (NOM), which assumes agents consider only extreme best or worst cases. RAT and NOM are independent notions. regret2018Fernandez define regret-free truth-telling (RFTT), where agents never regret truth-telling after observing the outcome. RAT and RFTT do not imply each other. Additionally, slinko2008nondictatorial; slinko2014ever; hazon2010complexity study "safe manipulations" in voting, but they consider coalitions of voters and a different type of risk – that too many or too few participants will perform the exact same safe manipulation.

Alternative Measurements.

There are many approaches to quantifying manipulability from different perspectives. One approach considers the computational complexity of finding a profitable manipulation – e.g., (bartholdi1989computational; bartholdi1991single) (see (faliszewski2010ai; veselova2016computational) for surveys). Another measure is the number of bits an agent needs to know in order to have a safe manipulation, similar in spirit to the concepts of communication complexity – e.g., (nisan2002communication; grigorieva2006communication; Communication2019Branzei; Babichenko2019communication) – and compilation complexity – e.g., (chevaleyre2009compiling; xia2010compilation; karia2021compilation). A third approach evaluates the probability that a profitable manipulation exists – e.g., (barrot2017manipulation; lackner2018approval; lackner2023free). The incentive ratio, which measures how much an agent can improve her utility by manipulating, is also widely studied – e.g., (chen2011profitable; chen2022incentive; li2024bounding; cheng2022tight; cheng2019improved). Other metrics include the average and maximum gain per manipulation (aleskerov1999degree) and the number of agents who benefit from manipulating (andersson2014budget; andersson2014least).

2. Preliminaries

We consider a generic social choice setting, with a set of $n$ agents $N=\{a_1,\ldots,a_n\}$ and a set of potential outcomes $X$. Each agent $a_i\in N$ has preferences over the set of outcomes $X$, which can be described in one of two ways: (1) a linear ordering of the outcomes, or (2) a utility function from $X$ to $\mathbb{R}$. The set of all possible preferences for agent $a_i$ is denoted by $\mathcal{D}_i$ and is referred to as the agent's domain. We denote the agent's true preferences by $T_i\in\mathcal{D}_i$. Unless otherwise stated, when agent $a_i$ weakly prefers the outcome $x_1$ over $x_2$, we write $x_1\succeq_i x_2$; when she strictly prefers $x_1$ over $x_2$, we write $x_1\succ_i x_2$.

A mechanism or rule is a function $f:\mathcal{D}_1\times\cdots\times\mathcal{D}_n\to X$, which takes as input a list of reported preferences $P_1,\ldots,P_n$ (which may differ from the true preferences) and returns the chosen outcome. In this paper, we focus on deterministic, single-valued mechanisms.

For any agent $a_i\in N$, we denote by $(P_i,\mathbf{P}_{-i})$ the preference profile in which agent $a_i$ reports $P_i$ and the other agents report $\mathbf{P}_{-i}$.

Truthfulness

A manipulation for a mechanism $f$ and agent $a_i\in N$ is an untruthful report $P_i\in\mathcal{D}_i\setminus\{T_i\}$. A manipulation is profitable if there exists a combination of preferences of the other agents for which it increases the manipulator's utility:

(1) $\exists\mathbf{P}_{-i}\in\mathcal{D}_{-i}:\quad f(P_i,\mathbf{P}_{-i})\succ_i f(T_i,\mathbf{P}_{-i})$

A mechanism $f$ is called manipulable if some agent $a_i$ has a profitable manipulation; otherwise $f$ is called truthful.

RAT

A manipulation is safe if it never harms the manipulator’s utility – it is weakly preferred over telling the truth for any possible preferences of the other agents:

(2) $\forall\mathbf{P}_{-i}\in\mathcal{D}_{-i}:\quad f(P_i,\mathbf{P}_{-i})\succeq_i f(T_i,\mathbf{P}_{-i})$

A mechanism $f$ is called safely-manipulable if some agent $a_i$ has a manipulation that is both profitable and safe; otherwise $f$ is called risk-avoiding truthful (RAT).

3. The RAT-Degree

Let $k\in\{0,\ldots,n-1\}$, let $K\subseteq N\setminus\{a_i\}$ with $|K|=k$, and let $\bar{K}:=N\setminus(\{a_i\}\cup K)$. We denote by $(P_i,\mathbf{P}_K,\mathbf{P}_{\bar{K}})$ the preference profile in which the preferences of agent $a_i$ are $P_i$, the preferences of the agents in $K$ are $\mathbf{P}_K$, and the preferences of the agents in $\bar{K}$ are $\mathbf{P}_{\bar{K}}$.

Definition 3.1.

A manipulation $P_i$ is profitable-and-safe-given-$k$-known-agents if for some subset $K\subseteq N\setminus\{a_i\}$ with $|K|=k$ and some preferences $\mathbf{P}_K\in\mathcal{D}_K$ for them, the following holds:

(3) $\exists\mathbf{P}_{\bar{K}}\in\mathcal{D}_{\bar{K}}:\quad f(P_i,\mathbf{P}_K,\mathbf{P}_{\bar{K}})\succ_i f(T_i,\mathbf{P}_K,\mathbf{P}_{\bar{K}})$

(4) and $\forall\mathbf{P}_{\bar{K}}\in\mathcal{D}_{\bar{K}}:\quad f(P_i,\mathbf{P}_K,\mathbf{P}_{\bar{K}})\succeq_i f(T_i,\mathbf{P}_K,\mathbf{P}_{\bar{K}})$

In words: the agents in $K$ are those whose preferences are Known to $a_i$; the agents in $\bar{K}$ are those whose preferences are unknown to $a_i$. Given that the preferences of the known agents are $\mathbf{P}_K$, (3) says that there exists a preference profile of the unknown agents that makes the manipulation profitable for agent $a_i$, while (4) says that the manipulation is safe – it is weakly preferred over telling the truth for any preference profile of the unknown agents.

A profitable-and-safe manipulation (with no known agents) is the special case in which $K=\emptyset$.

Definition 3.2.

A mechanism $f$ is called $k$-known-agents safely-manipulable if some agent $a_i$ has a profitable-and-safe manipulation given $k$ known agents.

Proposition.

Let $k\in\{0,\ldots,n-2\}$. If a mechanism is $k$-known-agents safely-manipulable, then it is also $(k+1)$-known-agents safely-manipulable.

Proof.

By definition, some agent $a_i$ has a profitable-and-safe manipulation given $k$ known agents. That is, there exist a subset $K\subseteq N\setminus\{a_i\}$ with $|K|=k$ and a preference profile $\mathbf{P}_K\in\mathcal{D}_K$ for them, such that (3) and (4) hold. Let $a_j\in\bar{K}$, and consider the preferences $P_j$ that $a_j$ has in some profile satisfying (3) (profitability). Define $K^+:=K\cup\{a_j\}$ and construct a preference profile in which the preferences of the agents in $K$ remain $\mathbf{P}_K$ and $a_j$'s preferences are set to $P_j$. Since (3) holds for $P_j$, the same manipulation remains profitable given the new set of known agents. Moreover, (4) continues to hold, as the set of unknown agents has only shrunk. Thus, the mechanism is also $(k+1)$-known-agents safely-manipulable. ∎

Definition 3.3.

The RAT-degree of a mechanism $f$ is the minimum $k$ for which the mechanism is $k$-known-agents safely-manipulable, or $n$ if there is no such $k$.

Observation.

A mechanism is truthful if and only if its RAT-degree is $n$.

Observation.

A mechanism is RAT if and only if its RAT-degree is at least $1$.

Figure 1 illustrates the relation between the classes of different RAT-degrees.

Figure 1. Hierarchy of the manipulability and RAT-degree classes (KA stands for known-agents). [Figure omitted.]

Table 1. A safe-and-profitable manipulation from an agent's perspective. Rows correspond to agent $a_i$'s possible reports ($T_i$ and $P_i\neq T_i$); columns correspond to possible preference profiles of the other agents ($\mathbf{P}_{-i}^1$, $\mathbf{P}_{-i}^2$, $\mathbf{P}_{-i}^3$); each cell holds the resulting outcome.

3.1. An Intuitive Point of View

Consider Table 1. When the risk-avoiding agent has no information ($0$ known agents), a profitable-and-safe manipulation is a row of the table, representing an alternative report $P_i\neq T_i$, that dominates $T_i$: in every column, the outcome in that row is at least as good as the outcome in the first row, and in at least one column it is strictly better. When the risk-avoiding agent has more information ($k$ known agents, with $k>0$), this is equivalent to considering only a strict subset of the columns. Lastly, when the risk-avoiding agent has full information ($n-1$ known agents), it is equivalent to considering a single column.
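To make this view concrete, here is a minimal brute-force sketch of ours (not from the paper) of the $k$-known-agents check over finite, explicitly enumerated domains, for a fixed profile of true types; the mechanism f, the utility oracles, and all names are illustrative assumptions. A full mechanism-level check would also enumerate the manipulator's possible true types.

```python
from itertools import combinations, product

def k_known_safely_manipulable(f, domains, utils, truths, k):
    """Brute-force test of Definition 3.1 on finite domains (illustrative).

    f:       mechanism, maps a tuple of n reports to an outcome
    domains: list of n finite report domains
    utils:   utils[i](outcome) -> utility of agent i
    truths:  tuple of true reports, one per agent
    """
    n = len(domains)
    for i in range(n):                                  # candidate manipulator
        others = [j for j in range(n) if j != i]
        for K in combinations(others, k):               # the k known agents
            unknown = [j for j in others if j not in K]
            for P_K in product(*(domains[j] for j in K)):
                for P_i in domains[i]:
                    if P_i == truths[i]:
                        continue
                    profitable, safe = False, True
                    # each P_U is one "column" of Table 1, restricted to
                    # the columns consistent with the known reports P_K
                    for P_U in product(*(domains[j] for j in unknown)):
                        prof = list(truths)
                        for j, p in zip(K, P_K):
                            prof[j] = p
                        for j, p in zip(unknown, P_U):
                            prof[j] = p
                        u_true = utils[i](f(tuple(prof)))
                        prof[i] = P_i
                        u_lie = utils[i](f(tuple(prof)))
                        if u_lie > u_true:
                            profitable = True           # inequality (3)
                        if u_lie < u_true:
                            safe = False                # violates (4)
                            break
                    if profitable and safe:
                        return True
    return False

def rat_degree(f, domains, utils, truths):
    n = len(domains)
    for k in range(n):
        if k_known_safely_manipulable(f, domains, utils, truths, k):
            return k
    return n       # no safe manipulation even with full knowledge
```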

4. Auction for a Single Good

We consider a seller owning a single good, and $n$ potential buyers (the agents). The true preferences $T_i$ are given by real values $v_i\geq 0$, representing the happiness of agent $i$ from receiving the good. The reported preferences $P_i$ are the "bids" $b_i\geq 0$. A mechanism in this context has to determine the winner – the agent who will receive the good – and the price – how much the winner will pay.

We assume that agents are quasi-linear, meaning that their valuations can be interpreted in monetary units. Accordingly, the utility of the winning agent is her valuation minus the price, while the utility of the other agents is zero.

Results.

The two most well-known mechanisms in this context are the first-price and the second-price auction. The first-price auction is known to be manipulable; moreover, it is easy to show that it is safely-manipulable, so its RAT-degree is $0$. We show that a first-price auction with a positive discount has RAT-degree $1$. The second-price auction is known to be truthful, so its RAT-degree is $n$. However, it has some decisive practical disadvantages (ausubel2006lovely); in particular, when buyers are risk-averse, the expected revenue of a second-price auction is lower than that of a first-price auction (nisan2007algorithmic), and even when the buyers are risk-neutral, a risk-averse seller would prefer the revenue distribution of a first-price auction (krishna2009auction).

This raises the question of whether it is possible to combine the advantages of both auction types. Indeed, we prove that any auction that applies a weighted average between the first price and the second price achieves a RAT-degree of $n-1$, which is very close to fully truthful (RAT-degree $n$). This implies that a manipulator would need to obtain information about all $n-1$ other agents in order to safely manipulate – a very challenging task. Importantly, the seller's revenue from such an auction is higher than in the second-price auction, giving this mechanism a significant advantage in this context. This result opens the door to exploring new mechanisms that are not truthful but come very close to it. Such mechanisms may enable desirable properties that are unattainable with truthful mechanisms.

4.1. First-Price Auction

In the first-price auction, the agent who bids the highest price wins the good and pays her bid; the other agents get and pay nothing.

It is well-known that the first-price auction is not truthful. We start by proving that the situation is even worse: the first-price auction is safely-manipulable, meaning that its RAT-degree is $0$.

Theorem.

First-price auction is safely manipulable (RAT-degree $=0$).

Proof sketch.

A truthful agent always gets a utility of $0$, whether she wins or loses. In contrast, an agent who manipulates by bidding slightly less than $v_i$ gets a utility of at least $0$ when she loses and a strictly positive utility when she wins. Thus, this manipulation is safe and profitable.

Proof.

To prove that the mechanism is safely manipulable, we need to show an agent and an alternative bid, such that the agent always weakly prefers the outcome that results from reporting the alternative bid over reporting her true valuation, and strictly prefers it in at least one scenario. We prove that the mechanism is safely-manipulable by every agent with a positive valuation.

Let $i\in N$ be an agent with valuation $v_i>0$. We now prove that bidding $b_i<v_i$ is a safe manipulation.

We need to show that for any combination of bids of the other agents, bidding $b_i$ does not harm agent $i$, and that there exists a combination where bidding $b_i$ strictly increases her utility. To do so, we consider the following cases according to the maximum bid of the other agents:

  • The maximum bid is smaller than $b_i$: if agent $i$ bids her valuation $v_i$, she wins the good and pays $v_i$, resulting in utility $0$. However, by bidding $b_i<v_i$, she still wins but pays only $b_i$, yielding a positive utility.

    Thus, in this case, agent $i$ strictly increases her utility by lying.

  • The maximum bid is between $b_i$ and $v_i$: if agent $i$ bids her valuation $v_i$, she wins the good and pays $v_i$, resulting in utility $0$. By bidding $b_i<v_i$, she loses the good but pays nothing, also resulting in utility $0$.

    Thus, in this case, bidding $b_i$ does not harm agent $i$.

  • The maximum bid is higher than $v_i$: regardless of whether agent $i$ bids her valuation $v_i$ or $b_i<v_i$, she does not win the good, resulting in utility $0$.

    Thus, in this case, bidding $b_i$ does not harm agent $i$. ∎
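The three cases above can be replayed numerically. The following is a small sketch of ours (agent indices, values and bids are arbitrary illustrations), with one manipulator against a single opposing bid:

```python
def first_price(bids):
    """Highest bid wins and pays her bid; ties go to the lowest index."""
    w = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return w, bids[w]

def utility(i, value, bids):
    w, price = first_price(bids)
    return value - price if w == i else 0.0

v1, b1 = 10.0, 9.0                    # true value, and a shaded bid b1 < v1
for rival in (8.0, 9.5, 11.0):        # the three cases of the proof
    print(rival, utility(0, v1, [v1, rival]), utility(0, v1, [b1, rival]))
# truthful bidding always yields 0; the shaded bid yields 1.0, 0.0, 0.0
```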

4.2. First-Price Auction with Discount

In the first-price auction with discount, the agent $i$ with the highest bid $b_i$ wins the item and pays $(1-t)b_i$, where $t\in(0,1)$ is a constant. As before, the other agents get and pay nothing.

We prove that, although this minor change does not make the mechanism truthful, it increases the degree of truthfulness for risk-avoiding agents. However, the mechanism is still quite vulnerable to manipulation, as knowing the strategy of one other agent might be sufficient to safely manipulate it.

Theorem 4.1.

The RAT-degree of the First-Price Auction with Discount is $1$.

To prove that the RAT-degree is $1$, we need to show that the mechanism is (a) not safely-manipulable, and (b) $1$-known-agents safely-manipulable. We prove each of these in a lemma.

Lemma.

First-Price Auction with Discount is not safely-manipulable.

Proof sketch.

Here, unlike in the first-price auction, whenever the agent wins the good, she gains positive utility. This implies that no manipulation can be safe. If she under-bids her value, she risks losing the item, which strictly decreases her utility. If she over-bids, she might end up paying more than necessary, reducing her utility without increasing her chances of winning.

Proof.

We need to show that, for each agent $i$ and any bid $b_i\neq v_i$, at least one of the following is true: either (1) for any combination of bids of the other agents, agent $i$ weakly prefers the outcome of bidding $v_i$; or (2) there exists such a combination for which $i$ strictly prefers the outcome of bidding $v_i$. We consider two cases.

Case 1: $v_i=0$. In this case condition (1) clearly holds, as bidding $v_i$ guarantees the agent a utility of $0$, and no potential outcome of the auction can give $i$ a positive utility.

Case 2: $v_i>0$. In this case we prove that condition (2) holds. We consider two sub-cases:

  • Under-bidding ($b_i<v_i$): whenever $\max_{j\neq i}b_j\in(b_i,v_i)$, when agent $i$ bids truthfully she wins the good and pays $(1-t)v_i$, resulting in utility $tv_i>0$; but when she bids $b_i$ she does not win, resulting in utility $0$.

  • Over-bidding ($v_i<b_i$): whenever $\max_{j\neq i}b_j<v_i$, when agent $i$ bids truthfully her utility is $tv_i$ as before; when she bids $b_i$ she wins and pays $(1-t)b_i>(1-t)v_i$, so her utility is less than $tv_i$.

In both cases lying may harm agent $i$. Thus, she has no safe manipulation. ∎

Lemma.

First-Price Auction with Discount is $1$-known-agents safely-manipulable.

Proof sketch.

Consider the case where the manipulator $a_i$ knows there is another agent $a_j$ bidding $b_j\in(v_i,v_i/(1-t))$. When bidding truthfully, she loses and gets zero utility. However, by bidding any value $b_i\in(b_j,v_i/(1-t))$, she either wins and gains positive utility or loses and remains with zero utility. Thus the manipulation is safe and profitable.

Proof.

We need to identify an agent $i$ for whom there exist another agent $j\neq i$ and a bid $b_j$, such that if $j$ bids $b_j$, then agent $i$ has a safe manipulation.

Indeed, let $i\in N$ be an agent with valuation $v_i>0$, and let $j\neq i$ be another agent who bids some value $b_j\in(v_i,v_i/(1-t))$. We prove that bidding any value $b_i\in(b_j,v_i/(1-t))$ is a safe manipulation for $i$.

If $i$ truthfully bids $v_i$, she loses the good (as $b_j>v_i$) and gets a utility of $0$.

If $i$ manipulates by bidding some $b_i\in(b_j,v_i/(1-t))$, then she gets either the same or a higher utility, depending on the maximum bid among the unknown agents ($b^{\max}:=\max_{\ell\neq i,j}b_\ell$):

  • If $b^{\max}<b_i$, then $i$ wins and pays $(1-t)b_i$, resulting in a utility of $v_i-(1-t)b_i>0$, as $(1-t)b_i<v_i$. Thus, $i$ strictly gains by lying.

  • If $b^{\max}>b_i$, then $i$ does not win the good, resulting in utility $0$. Thus, in this case, bidding $b_i$ does not harm $i$.

  • If $b^{\max}=b_i$, then one of the above two cases happens (depending on the tie-breaking rule).

In all cases $b_i$ is a safe manipulation, as claimed. ∎
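A small numeric sketch of this lemma (ours; the discount $t$ and all bids are arbitrary choices satisfying the stated intervals):

```python
t = 0.2                                   # discount; here v_i / (1 - t) = 12.5

def discounted_utility(value, my_bid, other_bids):
    bids = [my_bid] + other_bids
    w = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return value - (1 - t) * bids[w] if w == 0 else 0.0

v_i, b_j, b_i = 10.0, 11.0, 12.0          # b_j in (v_i, 12.5), b_i in (b_j, 12.5)
for b_unknown in (5.0, 11.5, 20.0):       # possible bids of the unknown agent
    u_true = discounted_utility(v_i, v_i, [b_j, b_unknown])
    u_lie = discounted_utility(v_i, b_i, [b_j, b_unknown])
    print(b_unknown, u_true, u_lie)
# truthful bidding always loses to b_j > v_i (utility 0); the manipulation
# yields v_i - (1 - t) * b_i = 0.4 when it wins, and 0 otherwise
```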

4.3. Average-First-Second-Price Auction

In the Average-First-Second-Price (AFSP) auction, the agent $i$ with the highest bid $b_i$ wins the item and pays $wb_i+(1-w)b^{\max}_{-i}$, where $b^{\max}_{-i}:=\max_{j\neq i}b_j$ is the second-highest bid and $w\in(0,1)$ is a fixed constant. That is, the price is a weighted average between the first price and the second price.
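A minimal sketch of the AFSP payment rule (ours; $w=0.5$ is an arbitrary choice), together with the bid-shading effect that the second lemma below exploits:

```python
w = 0.5                                          # averaging weight (illustrative)

def afsp(bids):
    """Winner pays w * (own bid) + (1 - w) * (second-highest bid)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    first, second = order[0], order[1]
    return first, w * bids[first] + (1 - w) * bids[second]

# If all other bids are known to be below v_i = 10, shading the bid to 9
# keeps the win but lowers the price from 9.0 to 8.5:
print(afsp([10.0, 8.0, 7.0]))                    # (0, 9.0)
print(afsp([9.0, 8.0, 7.0]))                     # (0, 8.5)
```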

We show that this simple change makes a significant difference – the RAT-degree increases to $n-1$. This means that a manipulator would need to obtain information about all other agents to safely manipulate the mechanism – a very challenging task in practice.

Theorem 4.2.

The RAT-degree of the AFSP auction is $n-1$.

The theorem is proved using the following two lemmas.

Lemma.

The AFSP mechanism is not $(n-2)$-known-agents safely-manipulable.

Proof sketch.

Let $a_i$ be a potential manipulator, $b_i$ a potential manipulation, $K$ a set of known agents with $|K|=n-2$, and $\mathbf{b}_K$ a vector representing their bids. We prove that the single unknown agent $a_j$ can make any manipulation either not profitable or not safe. We denote by $b^{\max}_K$ the maximum bid among the agents in $K$ and consider each of the six possible orderings of $v_i$, $b_i$ and $b^{\max}_K$. In two cases ($v_i<b_i<b^{\max}_K$ or $b_i<v_i<b^{\max}_K$) the manipulation is not profitable; in the other four cases, the manipulation is not safe.

Proof.

Let $i\in N$ be an agent with true value $v_i>0$, and let $b_i\neq v_i$ be a manipulation. We show that this manipulation is unsafe even when knowing the bids of $n-2$ of the other agents.

Let $K$ be a subset of $n-2$ of the remaining agents (the agents in $K$ are the "known agents"), and let $\mathbf{b}_K$ be a vector representing their bids. Lastly, let $j$ be the only agent in $N\setminus(K\cup\{i\})$. We need to prove that at least one of the following is true: either (1) the manipulation is not profitable – for any possible bid of agent $j$, agent $i$ weakly prefers the outcome of bidding $v_i$ over the outcome of bidding $b_i$; or (2) the manipulation is not safe – there exists a bid for agent $j$ such that agent $i$ strictly prefers the outcome of bidding $v_i$.

Let $b^{\max}_K:=\max_{\ell\in K}b_\ell$. We consider each of the six possible orderings of $v_i$, $b_i$ and $b^{\max}_K$ (cases with equalities are contained in cases with inequalities, according to the tie-breaking rule):

  • $v_i<b_i<b^{\max}_K$ or $b_i<v_i<b^{\max}_K$: In these cases (1) holds: for any bid of agent $j$, agent $i$ never wins, so the manipulation is not profitable.

  • $v_i<b^{\max}_K<b_i$: We show that (2) holds. Assume that $j$ bids any value $b_j\in(v_i,b_i)$. When $i$ bids truthfully, she does not win the good, so her utility is $0$. But when $i$ bids $b_i$, she wins and pays a weighted average of $b_i$ and $\max(b_j,b^{\max}_K)$. As both these numbers are strictly greater than $v_i$, the payment is larger than $v_i$ as well, resulting in a negative utility. Hence, the manipulation is not safe.

  • $b^{\max}_K<v_i<b_i$: We show that (2) holds. Assume that $j$ bids any value $b_j<b^{\max}_K$. When $i$ tells the truth, she wins and pays $wv_i+(1-w)b^{\max}_K$; but when $i$ bids $b_i$, she still wins but pays the higher price $wb_i+(1-w)b^{\max}_K$, so her utility decreases. Hence, the manipulation is not safe.

  • $b_i<b^{\max}_K<v_i$: We show that (2) holds. Assume that agent $j$ bids any value $b_j<b_i$. When $i$ tells the truth, she wins and pays $wv_i+(1-w)b^{\max}_K<v_i$, resulting in a positive utility. But when $i$ bids $b_i$, she does not win and her utility is $0$. Hence, the manipulation is not safe.

  • $b^{\max}_K<b_i<v_i$: We show that (2) holds. Assume that $j$ bids any value $b_j\in(b_i,v_i)$. When $i$ tells the truth, she wins and pays $wv_i+(1-w)b_j<v_i$, resulting in a positive utility. But when $i$ bids $b_i$, she does not win and her utility is $0$. Hence, the manipulation is not safe. ∎

Lemma.

The AFSP mechanism is $(n-1)$-known-agents safely-manipulable.

Proof sketch.

Consider any combination of bids of the other agents in which all bids are strictly smaller than $v_i$. Let $b^{\max}_{-i}$ be the highest bid among the other agents. Then any alternative bid $b_i\in(b^{\max}_{-i},v_i)$ is a safe manipulation.

Proof.

Given an agent $i\in N$, we need to show an alternative bid $b_i\neq v_i$ and a combination of $n-1$ bids of the other agents, such that, given these bids, the agent strictly prefers the outcome resulting from her untruthful bid over the outcome of bidding her true valuation.

Consider any combination of bids of the other agents in which all bids are strictly smaller than $v_i$. Let $b^{\max}_{-i}$ be the highest bid among the other agents. We prove that any alternative bid $b_i\in(b^{\max}_{-i},v_i)$ is a safe manipulation.

When agent $i$ bids her valuation $v_i$, she wins the good and pays $wv_i+(1-w)b^{\max}_{-i}$, yielding a (positive) utility of

$v_i-wv_i-(1-w)b^{\max}_{-i}=(1-w)(v_i-b^{\max}_{-i}).$

But when $i$ bids $b_i$, as $b_i>b^{\max}_{-i}$, she still wins the good but pays $wb_i+(1-w)b^{\max}_{-i}$, which is smaller since $b_i<v_i$; therefore her utility is higher. ∎

Conclusion.

By choosing a high value for the parameter $w$, the AFSP auction becomes similar to the first-price auction, and may therefore attain a similar revenue in practice, but with better strategic properties. The AFSP auction is sufficiently simple to test in practice; we find it very interesting to check how it fares in comparison to the more standard auction types.

5. Indivisible Goods Allocations

In this section, we consider several mechanisms for allocating $m$ indivisible goods $G=\{g_1,\ldots,g_m\}$ among the $n$ agents. Here, the true preferences $T_i$ are given by $m$ real values $v_{i,\ell}\geq 0$, one for each $g_\ell\in G$, representing the happiness of agent $a_i$ from receiving the good $g_\ell$. The reported preferences $P_i$ are real values $r_{i,\ell}\geq 0$. We assume the agents have additive valuations over the goods. Given a bundle $S\subseteq G$, let $v_i(S)=\sum_{g_\ell\in S}v_{i,\ell}$ be agent $a_i$'s utility upon receiving the bundle $S$. A mechanism in this context gets $n$ (potentially untruthful) reports from the agents and determines the allocation – a partition $(A_1,\ldots,A_n)$ of $G$, where $A_i$ is the bundle received by agent $a_i$.

Results.

We start by considering a simple mechanism, the utilitarian goods allocation, which assigns each good to the agent who reports the highest value for it. We prove that this mechanism is safely manipulable (RAT-degree $0$). We then show that the RAT-degree can be increased to $1$ by requiring normalization – the values reported by each agent are scaled such that the set of all items has the same value for all agents. The RAT-degree of the famous round-robin mechanism is also at most $1$. In contrast, we design a new mechanism that satisfies the common fairness notion of EF1 (envy-freeness up to one good) and attains a RAT-degree of $n-1$.

5.1. Utilitarian Goods Allocation

The utilitarian rule aims to maximize the sum of agents’ utilities. When agents have additive utilities, this goal can be achieved by assigning each good to an agent who reports the highest value for it. This section analyzes this mechanism.

For simplicity, we assume that agents' reports are bounded from above by some maximum possible value $r_{\max}>0$. Also, we assume that in cases where multiple agents report the same highest value for some good, the mechanism employs some tie-breaking rule to allocate the good to one of them. However, the tie-breaking rule must operate independently for each good, meaning that the allocation of one good cannot depend on the tie-breaking outcomes of other goods.

We further assume that there is at least one agent who has a value different from $0$ and $r_{\max}$ for at least one of the goods. Otherwise, for each good, all agents' (true) values are $0$ or $r_{\max}$, and in this case the mechanism is trivially truthful.
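A minimal sketch of the rule (ours; lowest-index tie-breaking is one instance of the independent per-good tie-breaking allowed above):

```python
def utilitarian(reports):
    """reports[i][l]: agent i's reported value for good l.
    Returns a list of bundles (sets of good indices), one per agent."""
    n, m = len(reports), len(reports[0])
    bundles = [set() for _ in range(n)]
    for l in range(m):
        # each good goes, independently of the others, to a highest reporter
        winner = max(range(n), key=lambda i: (reports[i][l], -i))
        bundles[winner].add(l)
    return bundles
```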

Theorem.

The utilitarian allocation rule is safely manipulable (RAT-degree $=0$).

Proof sketch.

Manipulating by reporting the highest possible value, $r_{\max}$, for all goods is both profitable and safe. It is profitable because if the maximum report among the other agents for a given good lies between the manipulator's true value and $r_{\max}$, the manipulator's utility strictly increases. It is safe because, in all other cases, the utility remains at least as high.

Proof.

To prove that the mechanism is safely manipulable, we need to show one agent who has an alternative report such that the agent always weakly prefers the outcome that results from the alternative report over reporting her true valuations, and strictly prefers it in at least one scenario.

Let $a_1\in N$ be an agent who has a value different from $0$ and $r_{\max}$ for at least one of the goods, and let $g_1$ be such a good – that is, $0<v_{1,1}<r_{\max}$. We prove that reporting the highest possible value, $r_{\max}$, for all goods is a safe manipulation. Notice that this report is indeed a manipulation, as it differs from the true report in at least one place.

We need to show that for any combination of reports of the other agents, this report does not harm agent $a_1$, and that there exists a combination where it strictly increases her utility.

Note that since the utilities are additive and tie-breaking is performed separately for each good, we can analyze each good independently. The following case analysis proves that, for every good, agent $a_1$ always weakly prefers the outcome of the manipulation. Case 4 (marked by *) proves that, for the good $g_1$, there exists a combination of reports of the others for which agent $a_1$ strictly prefers the outcome of the manipulation. Together, this proves that it is indeed a safe manipulation.

Let $g_\ell\in G$ be a good. We consider the following cases, according to agent $a_1$'s value $v_{1,\ell}$ for the good $g_\ell$ and the maximum report among the other agents for this good:

  • $v_{1,\ell}=r_{\max}$: In this case, the truthful and the untruthful reports are the same.

  • $v_{1,\ell}=0$: Agent $a_1$ does not care about this good, so regardless of the reports, which determine whether or not agent $a_1$ wins the good, her utility from it is $0$.

    Thus, in this case, agent $a_1$ is indifferent between telling the truth and manipulating.

  • $0<v_{1,\ell}<r_{\max}$ and the maximum report of the others is strictly smaller than $v_{1,\ell}$: Agent $a_1$ wins the good in both cases – whether she reports her true value or bids $r_{\max}$.

    Thus, in this case, agent $a_1$ is indifferent between telling the truth and manipulating.

  • (*) $0<v_{1,\ell}<r_{\max}$ and the maximum report of the others is greater than $v_{1,\ell}$ and smaller than $r_{\max}$: When agent $a_1$ reports her true value for the good $g_\ell$, she does not win it. However, by bidding $r_{\max}$, she does.

    Thus, in this case, agent $a_1$ strictly increases her utility by lying (as her value for this good is positive).

  • $0<v_{1,\ell}<r_{\max}$ and the maximum report of the others equals $r_{\max}$: When agent $a_1$ reports her true value for the good, she does not win it. However, by bidding $r_{\max}$, she gets a chance to win the good. Even for a risk-avoiding agent, a chance of winning is better than no chance.

    Thus, in this case, agent $a_1$ strictly increases her (expected) utility by lying. ∎

5.2. Normalized Utilitarian Goods Allocation

In the normalized utilitarian allocation rule, the agents' reports are first normalized such that each agent's values sum to a given constant $V>0$. Then, each good is given to the agent with the highest normalized value. We focus on the case of at least three goods.
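A sketch of the normalized variant, reusing the utilitarian sketch above (ours; $V=1$ is an arbitrary normalization constant, and we assume each agent reports a positive total):

```python
V = 1.0

def normalized_utilitarian(reports):
    # rescale each agent's report to sum to V, then allocate as before
    scaled = [[V * r / sum(report) for r in report] for report in reports]
    return utilitarian(scaled)
```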

Theorem 5.1.

For $m\geq 3$ goods, the RAT-degree of the normalized utilitarian allocation rule is $1$.

We prove Theorem 5.1 using several lemmas that analyze different cases.

The first two lemmas prove that the rule is not safely-manipulable, so its RAT-degree is at least $1$. The first addresses agents who value only one good positively, while the second covers agents who value at least two goods positively.

Lemma.

An agent who values only one good positively cannot safely manipulate the normalized utilitarian allocation rule.

Proof sketch.

Due to the normalization requirement, any manipulation by such an agent involves reporting a lower value for the only good she values positively, while reporting higher values for some goods she values at $0$. This reduces her chances of winning her desired good and raises her chances of winning goods she values at zero, ultimately decreasing her utility. Thus, the manipulation is neither profitable nor safe.

Proof.

Let $a_1$ be an agent who values only one good positively, and let $g_1$ be that good. That is, her true valuation is $v_{1,1}=V$ and $v_{1,\ell}=0$ for every $\ell\neq 1$ (every good $g_\ell$ other than $g_1$).

To prove that agent $a_1$ has no safe manipulation, we need to show that, for any report of hers, either (1) for any reports of the other agents, agent $a_1$ weakly prefers the outcome of telling the truth; or (2) there exist reports of the other agents for which agent $a_1$ strictly prefers the outcome of telling the truth.

Let $(r_{1,1},\ldots,r_{1,m})$ be an alternative report for the $m$ goods. We assume that the values are already normalized. We shall now prove that the second condition holds (lying may harm the agent).

First, as the alternative report is different, we can conclude that $r_{1,1}<V$. We denote the difference by $\epsilon=V-r_{1,1}$. Next, consider the following reports of the other agents (all agents except $a_1$): $V-\frac{1}{2}\epsilon$ for good $g_1$ and $\frac{1}{2}\epsilon$ for some other good.

When agent $a_1$ reports her true values, she wins her desired good $g_1$, which gives her utility $V>0$. However, when she lies, she loses good $g_1$ and her utility decreases to $0$ (winning goods other than $g_1$ does not increase her utility).

That is, lying may harm the agent. ∎

Lemma.

An agent who values at least two goods positively cannot safely manipulate the normalized utilitarian allocation rule.

Proof sketch.

Since values are normalized, any manipulation by such an agent must involve increasing the reported value of at least one good $g_{\mathrm{inc}}$ while decreasing the value of at least one other good $g_{\mathrm{dec}}$. We show that such a manipulation is not safe, by considering the case where all other agents report as follows: they assign a value of $0$ to $g_{\mathrm{inc}}$, a value between the manipulator's true and reported value for $g_{\mathrm{dec}}$, and a value slightly higher than the manipulator's report for all other goods (it is possible to construct such reports that are non-negative and normalized).

With these reports, the manipulation causes the manipulator to lose the good $g_{\mathrm{dec}}$, which has a positive value for her, whereas she wins the good $g_{\mathrm{inc}}$ with or without the manipulation, and does not win any other good. Hence, the manipulation strictly decreases her total value.

Proof.

Let $a_1$ be an agent who values at least two goods positively, let $v_{1,1},\ldots,v_{1,m}$ be her true values for the $m$ goods, and let $r_{1,1},\ldots,r_{1,m}$ be a manipulation for $a_1$. We need to show that the manipulation is either not safe – there exists a combination of the other agents' reports for which agent $a_1$ strictly prefers the outcome of telling the truth; or not profitable – for any combination of reports of the other agents, agent $a_1$ weakly prefers the outcome of telling the truth. We will show that the manipulation is not safe by providing an explicit combination of the other agents' reports.

First, notice that since the true values and the untruthful report of agent $a_1$ are different and both sum to the same constant $V$, there must be a good $g_{\mathrm{inc}}$ whose value was increased (i.e., $v_{1,\mathrm{inc}}<r_{1,\mathrm{inc}}$) and a good $g_{\mathrm{dec}}$ whose value was decreased (i.e., $v_{1,\mathrm{dec}}>r_{1,\mathrm{dec}}$).

Next, let $\epsilon:=\min\{\frac{1}{m-1}r_{1,\mathrm{inc}},\ \frac{1}{2}(v_{1,\mathrm{dec}}-r_{1,\mathrm{dec}})\}$; notice that $\epsilon>0$. Also, let $c:=r_{1,\mathrm{inc}}-\epsilon$; notice that $c>0$ as well (here we use the condition $m\geq 3$).

We consider the combination of reports in which all agents except $a_1$ report the following values, denoted by $r(1),\ldots,r(m)$:

  • For good $g_{\mathrm{inc}}$ they report $r(\mathrm{inc}):=0$.

  • For good $g_{\mathrm{dec}}$ they report $r(\mathrm{dec}):=r_{1,\mathrm{dec}}+\epsilon$.

  • For the rest of the goods, $g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}$, they report $r(\ell):=r_{1,\ell}+\frac{1}{m-2}c$.

We prove that the above values constitute a legal report – they are non-negative and normalized to $V$.

First, we show that the sum of values in this report is $V$:

$\sum_{\ell=1}^m r(\ell) = r(\mathrm{inc}) + r(\mathrm{dec}) + \sum_{g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}} r(\ell)$

$= 0 + (r_{1,\mathrm{dec}}+\epsilon) + \sum_{g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}} \left(r_{1,\ell}+\frac{1}{m-2}c\right)$

$= (r_{1,\mathrm{dec}}+\epsilon) + \sum_{g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}} r_{1,\ell} + (m-2)\cdot\frac{1}{m-2}c$

$= (r_{1,\mathrm{dec}}+\epsilon) + \sum_{g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}} r_{1,\ell} + (r_{1,\mathrm{inc}}-\epsilon) = \sum_{\ell=1}^m r_{1,\ell} = V.$

Second, we show that all the values are non-negative:

  • Good $g_{\mathrm{inc}}$: this is clear, as $r(\mathrm{inc})=0$.

  • Good $g_{\mathrm{dec}}$: since $r(\mathrm{dec})$ exceeds the (non-negative) report of agent $a_1$ by $\epsilon>0$, it is clearly non-negative.

  • Rest of the goods, $g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}$: since $\epsilon=\min\{\frac{1}{m-1}r_{1,\mathrm{inc}},\ \frac{1}{2}(v_{1,\mathrm{dec}}-r_{1,\mathrm{dec}})\}$, we have $\epsilon\leq\frac{1}{m-1}r_{1,\mathrm{inc}}$. As $m\geq 3$ and $r_{1,\mathrm{inc}}>0$, we get that $c=r_{1,\mathrm{inc}}-\epsilon\geq\frac{m-2}{m-1}r_{1,\mathrm{inc}}>0$. As $r(\ell)$ exceeds the (non-negative) report of agent $a_1$ by $\frac{1}{m-2}c>0$, it is clearly non-negative.

Now, we prove that, given these reports of the other $n-1$ agents, agent $a_1$ strictly prefers the outcome of reporting truthfully to the outcome of manipulating.

We look at the two possible outcomes for each good – the one from telling the truth and the one from lying – and show that the outcome of telling the truth is always at least as good, and that for at least one good that agent $a_1$ wants (specifically, $g_{\mathrm{dec}}$) it is strictly better.

  • For good $g_{\mathrm{inc}}$ we consider two cases.

    1. If $v_{1,\mathrm{inc}}=0$: when agent $a_1$ is truthful, there is a tie for this good, as $r(\mathrm{inc})=0$. When agent $a_1$ manipulates, she wins the good (as $r_{1,\mathrm{inc}}>v_{1,\mathrm{inc}}=0=r(\mathrm{inc})$). However, as $v_{1,\mathrm{inc}}=0$, her utility from this good is $0$ in both cases.

    2. If $v_{1,\mathrm{inc}}>0$: whether agent $a_1$ is truthful or not, she wins the good, as $r_{1,\mathrm{inc}}>v_{1,\mathrm{inc}}>0=r(\mathrm{inc})$. Thus, for this good, the agent receives the same utility (of $v_{1,\mathrm{inc}}$) whether telling the truth or lying.

  • For good $g_{\mathrm{dec}}$: when agent $a_1$ is truthful, she wins the good, since $r(\mathrm{dec})<v_{1,\mathrm{dec}}$:

    $r(\mathrm{dec}) = r_{1,\mathrm{dec}}+\epsilon = r_{1,\mathrm{dec}}+\min\{\frac{1}{m-1}r_{1,\mathrm{inc}},\ \frac{1}{2}(v_{1,\mathrm{dec}}-r_{1,\mathrm{dec}})\} \leq r_{1,\mathrm{dec}}+\frac{1}{2}(v_{1,\mathrm{dec}}-r_{1,\mathrm{dec}}) = \frac{1}{2}(v_{1,\mathrm{dec}}+r_{1,\mathrm{dec}}) < \frac{1}{2}(v_{1,\mathrm{dec}}+v_{1,\mathrm{dec}}) = v_{1,\mathrm{dec}}$ (as $r_{1,\mathrm{dec}}<v_{1,\mathrm{dec}}$).

    But when agent $a_1$ manipulates, she loses the good, since $r(\mathrm{dec})>r_{1,\mathrm{dec}}$ (as $r(\mathrm{dec})=r_{1,\mathrm{dec}}+\epsilon$ and $\epsilon>0$).

    As agent $a_1$'s true value for this good is positive, she strictly prefers telling the truth for this good.

  • Rest of the goods, $g_\ell\in G\setminus\{g_{\mathrm{inc}},g_{\mathrm{dec}}\}$: when agent $a_1$ is truthful, all outcomes are possible – the agent either wins or loses, or there is a tie.

    However, as the other agents report $r(\ell)=r_{1,\ell}+\frac{1}{m-2}c>r_{1,\ell}$ for these goods, when agent $a_1$ manipulates she always loses the good. Thus, her utility from lying is either the same or smaller (since losing the good is the worst outcome).

Thus, the manipulation may harm the agent. ∎

The last lemma shows that the RAT-degree is at most $1$, thus completing the proof of the theorem.

Lemma.

With $m\geq 3$ goods, normalized utilitarian is $1$-known-agent safely-manipulable.

Proof sketch.

Consider a scenario where the known agent reports $0$ for some good $g$ that the manipulator wants, and slightly more than the manipulator's true values for all other goods. In this case, by telling the truth, the manipulator has no chance of winning any good except $g$. Therefore, reporting a value of $V$ for $g$ and a value of $0$ for all other goods is a safe manipulation.

The same manipulation is also profitable, since it is possible that the reports of all $n-2$ unknown agents for $g$ lie between the manipulator's true value and $V$. In such a case, the manipulation causes the manipulator to win $g$, which strictly increases her utility.

Proof.

Let $a_1$ be an agent, and let $v_{1,1},\ldots,v_{1,m}$ be her values for the $m$ goods. We need to show (1) an alternative report for agent $a_1$, (2) another agent $a_2$, and (3) a report for agent $a_2$, such that for any combination of reports of the remaining $n-2$ (unknown) agents, agent $a_1$ weakly prefers the outcome of lying, and there exists a combination for which agent $a_1$ strictly prefers the outcome of lying.

Let $g_{\ell^+}$ be a good that agent $a_1$ values positively (i.e., $v_{1,\ell^+}>0$).

Let $a_2$ be an agent different from $a_1$. We consider the following report for agent $a_2$: $r_{2,\ell^+}:=0$, and $r_{2,\ell}:=v_{1,\ell}+\epsilon$ for every good $g_\ell$ different from $g_{\ell^+}$, where $\epsilon:=\frac{1}{m-1}v_{1,\ell^+}$. Notice that $\epsilon>0$, and that this report is normalized.

We shall now prove that reporting $V$ for the good $g_{\ell^+}$ (and $0$ for the rest of the goods) is a safe manipulation for $a_1$, given that $a_2$ reports as described above.

When agent $a_1$ reports her true values, she does not win any good other than $g_{\ell^+}$ – this holds regardless of the reports of the remaining $n-2$ agents, as agent $a_2$ reports the higher value $v_{1,\ell}<v_{1,\ell}+\epsilon=r_{2,\ell}$. For the good $g_{\ell^+}$, we only know that $r_{2,\ell^+}=0<v_{1,\ell^+}<r_{1,\ell^+}=V$, so the outcome depends on the reports of the $n-2$ remaining agents. We consider the following cases, according to the maximum report for $g_{\ell^+}$ among the remaining agents:

  • If the maximum is smaller than $v_{1,\ell^+}$: agent $a_1$ wins the good $g_{\ell^+}$ in both cases (telling the truth or lying).

    (The same.)

  • If the maximum is greater than $v_{1,\ell^+}$ but smaller than $r_{1,\ell^+}=V$: when agent $a_1$ tells the truth, she does not win the good. However, when she lies, she does.

    (Lying is strictly better.)

  • If the maximum equals $r_{1,\ell^+}=V$: when agent $a_1$ tells the truth, she does not win the good. However, when she lies, there is a tie for this good. Although agent $a_1$ is risk-avoiding, having a chance to win the good is strictly better than no chance at all.

    (Lying is strictly better.) ∎

5.3. Round-Robin Goods Allocation

In round-robin goods allocation, the agents are arranged according to some predetermined order $\pi$. There are $m$ rounds, corresponding to the number of goods. In each round, the agent whose turn it is (based on $\pi$) takes her most preferred good from the set of goods that remain unallocated at that point. When there are multiple goods that are equally most preferred, the agent takes the good with the smallest index. Note that normalization is not important for round-robin.
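A minimal sketch of this rule (ours), with the stated smallest-index tie-breaking:

```python
def round_robin(reports):
    """reports[i][l]: agent i's reported value for good l.
    The order pi is simply 0, 1, ..., n-1."""
    n, m = len(reports), len(reports[0])
    remaining = list(range(m))
    bundles = [[] for _ in range(n)]
    for turn in range(m):
        i = turn % n
        # most preferred remaining good; ties broken by the smallest index
        best = max(remaining, key=lambda l: (reports[i][l], -l))
        remaining.remove(best)
        bundles[i].append(best)
    return bundles
```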

If there are at most $n$ goods, then the rule is clearly truthful. Thus, we assume that there are $m\geq n+1$ goods. Even the slight increment to $m=n+1$ makes a significant difference:

Lemma 5.2.

With $m\geq n+1$ goods, round-robin is $1$-known-agent safely-manipulable.

Proof.

Let $\pi$ be the order according to the agents' indices: $a_1,a_2,\ldots,a_n$. Let agent $a_1$'s valuation be such that $v_{1,1}=v_{1,2}=1$ and $v_{1,3}=\cdots=v_{1,m}=0$. Suppose agent $a_1$ knows that agent $a_2$ will report the valuation with $v_{2,1}=0$, $v_{2,2}=1$, and $v_{2,3}=\cdots=v_{2,m}=0.5$. We show that agent $a_1$ has a safe and profitable manipulation by reporting $v'_{1,1}=0.5$, $v'_{1,2}=1$, and $v'_{1,3}=\cdots=v'_{1,m}=0$.

Firstly, we note that agent $a_1$'s utility is always $1$ when reporting her valuation truthfully, regardless of the valuations of agents $a_3,\ldots,a_n$. This is because agent $a_1$ will receive item $g_1$ (by the item-index tie-breaking rule) and agent $a_2$ will receive item $g_2$ in the first two rounds. The allocation of the remaining items does not affect agent $a_1$'s utility.

Secondly, after misreporting, agent $a_1$ will receive item $g_2$ in the first round, which already secures agent $a_1$ a utility of at least $1$. Therefore, agent $a_1$'s manipulation is safe.

Lastly, if the remaining $n-2$ agents report the same valuations as agent $a_2$ does, it is easy to verify that agent $a_1$ will receive item $g_1$ in the $(n+1)$-th round. In this case, agent $a_1$'s utility is $2$. Thus, the manipulation is profitable. ∎
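The construction in this proof can be replayed with the round-robin sketch above; $n=3$, $m=4$ is our arbitrary instantiation (goods are 0-indexed in the code):

```python
n, m = 3, 4
truth = [1, 1] + [0] * (m - 2)        # a_1's true values
lie = [0.5, 1] + [0] * (m - 2)        # the manipulation
v2 = [0, 1] + [0.5] * (m - 2)         # a_2's known report; unknowns mimic it

print(round_robin([truth, v2, v2]))   # a_1 gets goods 0 and 3: true utility 1
print(round_robin([lie, v2, v2]))     # a_1 gets goods 1 and 0: true utility 2
```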

We show a partial converse.

Lemma.

With $m=n+1$ goods, round-robin is not safely-manipulable.

Proof.

Clearly, the only agent who could have a profitable manipulation is $a_1$.

Order the goods in descending order of $a_1$'s values and, subject to that, in ascending index; w.l.o.g., $v_{1,1}\geq v_{1,2}\geq\cdots\geq v_{1,m}$. If all values are equal ($v_{1,1}=v_{1,m}$), then $a_1$ always gets the same total value (two goods), so no manipulation is profitable. Therefore, we assume that $v_{1,1}>v_{1,m}$.

If $a_1$ is truthful, she gets $g_1$ first, and then the last good remaining after all other agents have picked a good. If, after the manipulation, $g_1$ still has the highest value, then $a_1$ still gets $g_1$ first and receives exactly the same bundle, so the manipulation is not profitable.

Hence, we assume that $a_1$ manipulates such that some other good, say $g_t\neq g_1$, becomes the most valuable good: $v'_{1,t}\geq v'_{1,j}$ for all goods $g_j$, and $v'_{1,t}>v'_{1,1}$. We prove that the manipulation is not safe.

Suppose the $n-1$ unknown agents all assign the lowest value to $g_t$ and the next-lowest value to $g_m$. If $a_1$ is truthful, she gets $g_1$ and then $g_t$, so her value is $v_{1,1}+v_{1,t}$. But if $a_1$ manipulates, she gets $g_t$ and then $g_m$, so her value is $v_{1,t}+v_{1,m}$, which is strictly lower. ∎

Hence, the RAT-degree of round-robin is $1$ when $m=n+1$, and at most $1$ when $m\geq n+1$.

The proof of Lemma 5.2 uses weak preferences (preferences with ties). In particular, if a1a_{1}’s valuations for the top two goods are different, then picking g2g_{2} first is risky. With strict preferences, we could only prove a much weaker upper bound of n2n-2. This raises the following question. {open} What is the RAT-degree of round-robin when agents report strict preferences?

5.4. An EF1 Mechanism with RAT-degree n1n-1

In this section, we focus on mechanisms that always output fair allocations (with respect to the reported valuation profile). We consider the widely used fairness criterion envy-freeness up to one item (EF1), which intuitively means that no agent envies another agent if one item is (hypothetically) removed from that agent’s bundle.

Definition 5.3.

Given a valuation profile (v_1,…,v_n), an allocation (A_1,…,A_n) is envy-free up to one item (EF1) if for every pair of agents i,j ∈ [n] there exists a good g such that v_i(A_i) ≥ v_i(A_j ∖ {g}).
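For additive valuations, the condition can be checked directly by comparing each agent against every other bundle with the single good she values most removed. A minimal sketch, with names of our own choosing:

```python
# Check the EF1 condition of Definition 5.3; valuations[i][g] is agent i's
# value for good g, bundles[i] is the list of goods held by agent i.
def value(v_i, bundle):
    return sum(v_i[g] for g in bundle)

def is_ef1(valuations, bundles):
    n = len(valuations)
    for i in range(n):
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            # EF1 from i to j holds iff v_i(A_i) >= v_i(A_j) minus the
            # good in A_j that agent i values most
            best = max(valuations[i][g] for g in bundles[j])
            if value(valuations[i], bundles[i]) < value(valuations[i], bundles[j]) - best:
                return False
    return True
```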

It is well-known and easy to see that the round-robin mechanism always outputs an EF1 allocation. However, as we have seen in the previous section, the round-robin mechanism has a very low RAT-degree. In this section, we propose a new EF1 mechanism that has RAT-degree n1n-1.

5.4.1. Description of Mechanism

The mechanism has two components: an agent selection rule Γ and an allocation rule Ψ. The agent selection rule Γ takes the valuation profile (v_1,…,v_n) as input and outputs the indices i^+, i^- of two agents, where agent a_{i^+} is called the mechanism-favored agent and agent a_{i^-} is called the mechanism-unfavored agent (the reason for both names will become clear soon). The allocation rule Ψ takes the valuation profile (v_1,…,v_n) and the two indices i^+, i^- output by Γ as inputs, and outputs an EF1 allocation (A_1,…,A_n). We complete the description of our mechanism by defining Γ and Ψ.

We first define Ψ. Given the input valuation profile (v_1,…,v_n), let 𝒳^{EF1}_{i^-} be the set of all allocations (A_1,…,A_n) that (1) are EF1, and (2) leave agent a_{i^-} unenvied, i.e., for every agent a_i we have v_i(A_i) ≥ v_i(A_{i^-}). Notice that 𝒳^{EF1}_{i^-} is nonempty: the allocation output by the round-robin mechanism with a_{i^-} being the last agent under π satisfies both (1) and (2).

The rule Ψ\Psi then outputs an allocation (A1,,An)(A_{1},\ldots,A_{n}) in 𝒳iEF1\mathcal{X}^{\mathrm{EF1}}_{i^{-}} that maximizes vi+(Ai+)v_{i^{+}}(A_{i^{+}}). When there are multiple maximizers, the rule breaks the tie in an arbitrary consistent way. This finishes the description of Ψ\Psi.
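For intuition, Ψ can be realized by brute force on tiny instances, enumerating all n^m allocations; this sketch of ours reuses is_ef1 and value from the sketch above and breaks ties by the first maximizer found.

```python
from itertools import product

# Brute-force version of Psi: among allocations that are EF1 and in which
# nobody envies agent i_minus, return one maximizing agent i_plus's utility.
def psi(valuations, i_plus, i_minus):
    n, m = len(valuations), len(valuations[0])
    best, best_val = None, -1.0
    for owners in product(range(n), repeat=m):   # owners[g] = owner of good g
        bundles = [[g for g in range(m) if owners[g] == i] for i in range(n)]
        if not is_ef1(valuations, bundles):
            continue
        if any(value(valuations[i], bundles[i]) < value(valuations[i], bundles[i_minus])
               for i in range(n)):               # someone envies the unfavored agent
            continue
        val = value(valuations[i_plus], bundles[i_plus])
        if val > best_val:
            best, best_val = bundles, val
    return best
```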

To describe Γ, we first state a key property called volatility that we want from Γ, and then show that a volatile rule can be constructed. Informally, volatility says the following. If an arbitrary agent a_i changes her reported valuation from v_i to v_i', we can construct a valuation function v_j for another agent a_j such that v_j has a positive value on only one pre-specified good, and the two agents output by Γ switch from some pre-specified i^+, i^- to some pre-specified i^{+'}, i^{-'}.

Definition 5.4.

A selection rule Γ is called volatile if for any six indices of agents i, j, i^+, i^-, i^{+'}, i^{-'} with i ≠ j, i^+ ≠ i^-, and i^{+'} ≠ i^{-'}, any good g_{ℓ*} ∈ G, any set of n−2 valuation functions {v_k}_{k ∉ {i,j}}, and any two reported valuation functions v_i, v_i' of agent a_i with v_i ≠ v_i' (i.e., v_{i,ℓ} ≠ v'_{i,ℓ} for at least one good g_ℓ), there exists a valuation function v_j of agent a_j such that

  • vj,>0v_{j,\ell^{\ast}}>0, and vj,=0v_{j,\ell}=0 for any \ell\neq\ell^{\ast}, and

  • Γ\Gamma outputs i+i^{+} and ii^{-} for the valuation profile {vk}k{i,j}{vi}{vj}\{v_{k}\}_{k\notin\{i,j\}}\cup\{v_{i}\}\cup\{v_{j}\}, and Γ\Gamma outputs i+i^{+^{\prime}} and ii^{-^{\prime}} for the valuation profile {vk}k{i,j}{vi}{vj}\{v_{k}\}_{k\notin\{i,j\}}\cup\{v_{i}^{\prime}\}\cup\{v_{j}\}.

In other words, a manipulation of agent ii from viv_{i} to viv_{i}^{\prime} can affect the output of Γ\Gamma in any possible way (from any pair i+,ii^{+},i^{-} to any other pair i+,ii^{+^{\prime}},i^{-^{\prime}}), depending on the report of agent jj.

We will use an arbitrary volatile rule Γ\Gamma for our mechanism. We conclude the description of Γ\Gamma by proving (in the appendix) that such a rule exists.

{propositionrep}

There exists a volatile agent selection rule Γ\Gamma.

Proof.

The rule Γ\Gamma does the following. It first finds the maximum value among all agents and all goods: v:=maxi[n],[m]vi,\displaystyle v^{\ast}:=\max_{i\in[n],\ell\in[m]}v_{i,\ell}. It then views the value vv^{\ast} as a binary string that encodes the following information:

  • the index ii of an agent aia_{i};

  • a non-negative integer tt,

  • two non-negative integers a and b, each smaller than {n\choose 2}.

We pad v^* with 0’s as most significant bits if its binary representation is too short for this format; if it is longer than needed, we keep only the required number of least significant bits.

The mechanism-favored agent a_{i^+} and the mechanism-unfavored agent a_{i^-} are then decided in the following way. Let s ∈ {0,1} be the bit at the t-th position of the binary encoding of agent a_i's reported valuation v_i.

Let p:=(as+b)mod(n2)p:=(a\cdot s+b)\bmod{n\choose 2}. Each value of pp corresponds to a pair of different agents (i+,i)(i^{+},i^{-}).

To see that Γ\Gamma is volatile, suppose viv_{i} and viv_{i}^{\prime} are different in the tt-th bits of their binary encoding. We construct a value vv^{*} that encodes the integers i,t,a,bi,t,a,b where

  1. (1)

    the tt-th bit of viv_{i} is ss and the tt-th bit of viv_{i}^{\prime} is ss^{\prime} for sss\neq s^{\prime};

  2. (2)

    The pair (i+,i)(i^{+},i^{-}) corresponds to the integer (as+b)mod(n2)(a\cdot s+b)\bmod{n\choose 2}.

  3. (3)

    The pair (i+,i)(i^{+^{\prime}},i^{-^{\prime}}) corresponds to the integer (as+b)mod(n2)(a\cdot s^{\prime}+b)\bmod{n\choose 2}.

(1) can always be achieved by some encoding rule. To see (2) and (3) can always be achieved, assume s=1s=1 and s=0s^{\prime}=0 without loss of generality. We can then take b:=b:= the integer corresponding to the pair (i+,i)(i^{+^{\prime}},i^{-^{\prime}}), and a:=b+a:=-b+ the integer corresponding to the pair (i+,i)(i^{+},i^{-}), modulo (n2){n\choose 2}.

We then construct a valuation v_j such that v_{j,ℓ*} is the largest value in the profile and is equal to v^*. In case v^* is not large enough, we increase it as needed by adding most significant bits. ∎
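The bit-level bookkeeping in this proof can be made concrete with fixed-width fields. The following schematic sketch is ours: the field width W, the packing layout, and the abstraction of the t-th bit of agent i's reported valuation as a lookup table report_bits are all illustrative choices, not prescribed by the paper.

```python
from math import comb

W = 16  # width in bits of each encoded field (an illustrative choice)

def encode(i, t, a, b):
    # pack the four fields i, t, a, b into one integer v_star
    return (i << (3 * W)) | (t << (2 * W)) | (a << W) | b

def decode(v_star):
    mask = (1 << W) - 1
    return ((v_star >> (3 * W)) & mask, (v_star >> (2 * W)) & mask,
            (v_star >> W) & mask, v_star & mask)

def select_pair(v_star, report_bits, n):
    i, t, a, b = decode(v_star)
    s = report_bits[i][t]            # t-th bit of agent i's reported valuation
    return (a * s + b) % comb(n, 2)  # index identifying the pair (i^+, i^-)
```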

5.4.2. Proving RAT-degree of n1n-1

Before we proceed to the proof, we first define some additional notions. We say that (A_1,…,A_n) is a partial allocation if A_i ∩ A_j = ∅ for every pair of distinct i,j ∈ [n] and ⋃_{i=1}^n A_i ⊆ G. The definition of EF1 extends straightforwardly to partial allocations. Given a possibly partial allocation (A_1,…,A_n), we say that agent a_i strongly envies agent a_j if v_i(A_i) < v_i(A_j ∖ {g}) for every g ∈ A_j, i.e., the EF1 criterion from a_i to a_j fails. Given t ∈ [n], we say that a (possibly partial) allocation (A_1,…,A_n) is EF1 except for t if for every pair i,j ∈ [n] with i ≠ t we have v_i(A_i) ≥ v_i(A_j ∖ {g}) for some g ∈ G. In words, the allocation is EF1 except that agent a_t is allowed to strongly envy others.

We first prove some lemmas which will be used later.

Lemma 5.5.

Fix a valuation profile. Let (A1,,An)(A_{1},\ldots,A_{n}) be a partial EF1 allocation. There exists a complete EF1 allocation (A1+,,An+)(A_{1}^{+},\ldots,A_{n}^{+}) such that vi(Ai+)vi(Ai)v_{i}(A_{i}^{+})\geq v_{i}(A_{i}) for each i[n]i\in[n].

Proof.

Construct the envy-graph for the partial allocation (A_1,…,A_n) and then perform the envy-graph procedure proposed by lipton2004approximately to obtain a complete allocation (A_1^+,…,A_n^+). The monotonicity property of the procedure (no agent's utility ever decreases) directly implies this lemma. ∎
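For completeness, a compact sketch of the envy-graph procedure under additive valuations, written by us from the description in lipton2004approximately: each unallocated good goes to an unenvied agent, and when every agent is envied, bundles are rotated along an envy cycle (which strictly decreases the number of envy edges, so the loop terminates).

```python
# Envy-cycle elimination: complete a partial allocation while never
# decreasing any agent's utility and preserving EF1.
def envy_cycle_elimination(valuations, goods, bundles):
    n = len(valuations)
    val = lambda i, B: sum(valuations[i][g] for g in B)
    for g in goods:                      # allocate the remaining goods one by one
        while True:
            envied = {j for i in range(n) for j in range(n)
                      if i != j and val(i, bundles[i]) < val(i, bundles[j])}
            free = [i for i in range(n) if i not in envied]
            if free:
                bundles[free[0]].append(g)   # give g to an unenvied agent
                break
            # everyone is envied, so everyone envies someone: follow the
            # envy edges until a vertex repeats, which closes a cycle
            succ = {i: next(j for j in range(n)
                            if j != i and val(i, bundles[i]) < val(i, bundles[j]))
                    for i in range(n)}
            i, seen = 0, set()
            while i not in seen:
                seen.add(i)
                i = succ[i]
            cycle = [i]
            while succ[cycle[-1]] != i:
                cycle.append(succ[cycle[-1]])
            old = {c: bundles[c] for c in cycle}
            for c in cycle:              # each agent takes the bundle she envies
                bundles[c] = old[succ[c]]
    return bundles
```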

{lemmarep}

Fix a valuation profile and an arbitrary agent a_t. Let 𝒳^{EF1} be the set of all complete EF1 allocations. Let (𝒳^{EF1})^{-t} be the set of all possibly partial allocations that are EF1 except for possibly a_t. The allocation in 𝒳^{EF1} that maximizes agent a_t's utility is also the one in (𝒳^{EF1})^{-t} that maximizes a_t's utility. In other words, if a_t gets the maximum possible value subject to EF1, she cannot get a higher value by agreeing to give up the EF1 guarantee for herself. This claim is trivially true for share-based fairness notions such as proportionality, but quite challenging to prove for EF1; see appendix.

Proof.

Assume t=1 without loss of generality. The allocation space 𝒳^{EF1} is clearly a subset of (𝒳^{EF1})^{-1}. Assume for the sake of contradiction that some allocation in (𝒳^{EF1})^{-1} gives agent a_1 a strictly higher utility than every allocation in 𝒳^{EF1}. Then, in every allocation of (𝒳^{EF1})^{-1} that maximizes agent a_1's utility, agent a_1 must strongly envy someone else: otherwise, such an allocation would be a partial EF1 allocation, and by Lemma 5.5 it could be completed into an allocation of 𝒳^{EF1} that is weakly better for a_1, a contradiction. Among the allocations in (𝒳^{EF1})^{-1} that maximize agent a_1's utility, let (A_1,…,A_n) be one that minimizes |⋃_{i=1}^n A_i| (the total number of goods allocated).

For each i ≠ 1, agent a_i will strongly envy some other agent if any single good is removed from A_i; otherwise, removing that good would yield an allocation in (𝒳^{EF1})^{-1} with the same utility for a_1 and fewer allocated goods, violating the minimality. We say that an agent a_i champions agent a_j if the following holds.

  • for i = 1: agent a_1 champions some agent a_j whom she strongly envies;

  • for i ≠ 1: let A_k^- denote the bundle A_k (for k ≠ i) with the good that a_i values most removed; the championed agent a_j is the one whose index k maximizes v_i(A_k^-).

We then construct a champion graph which is a directed graph with nn vertices where the vertices represent the nn agents and an edge from aia_{i} to aja_{j} represents that agent aia_{i} champions agent aja_{j}. By our definition, each vertex in the graph has at least one outgoing edge, so the graph must contain a directed cycle CC.

Consider a new allocation (A_1',…,A_n') defined as follows. For every edge (a_i,a_j) in C, let agent a_i remove the most valuable good from A_j and then take the resulting bundle. We will show that v_1(A_1') ≥ v_1(A_1) and (A_1',…,A_n') ∈ (𝒳^{EF1})^{-1}; since the new allocation allocates strictly fewer goods, this will contradict the choice of (A_1,…,A_n).

It is easy to see that v_1(A_1') ≥ v_1(A_1). If agent a_1 is not in the cycle C, then her utility is unchanged. Otherwise, she receives a bundle that she previously strongly envied, with one good removed; the definition of strong envy guarantees that v_1(A_1') > v_1(A_1).

To show that (A1,,An)𝒳EF11(A_{1}^{\prime},\ldots,A_{n}^{\prime})\in{\mathcal{X}^{\mathrm{EF1}}}^{-1}, first consider any agent aia_{i} with i1i\neq 1 that is not in CC. Agent aia_{i}’s bundle is unchanged, and she will not strongly envy anyone else as before (as only item-removals happen during the update).

Next consider any agent aia_{i} with i1i\neq 1 that is in CC. Let aja_{j} be the agent such that (ai,aj)(a_{i},a_{j}) is an edge in CC. Let AjA_{j}^{-} be the bundle with the most valuable good (according to viv_{i}) removed from AjA_{j}. By our definition, agent aia_{i} receives AjA_{j}^{-} in the new allocation. We will prove that agent aia_{i}, by receiving AjA_{j}^{-}, does not strongly-envy any of the original bundles AkA_{k}, for any k[n]k\in[n].

Our definition of championship ensures that this is true for any k ≠ i, as the new bundle of a_i is at least as valuable to a_i as every other bundle with an item removed.

It remains to show that this holds for k=ik=i. As we have argued at the beginning, in the original allocation (A1,,An)(A_{1},\ldots,A_{n}), removing any item from AiA_{i} would make agent aia_{i} strongly envy some other agent. By our definition of championship, when one good gg^{\prime} is removed from AiA_{i}, agent aia_{i} strongly envies aja_{j}, which implies aia_{i} thinks AjA_{j}^{-} is more valuable than Ai{g}A_{i}\setminus\{g^{\prime}\}. Therefore, in the new allocation, by receiving AjA_{j}^{-}, agent aia_{i} does not strongly envy AiA_{i}.

We have proved that agent aia_{i}, by receiving the bundle AjA_{j}^{-}, does not strongly envy any of the nn original bundles A1,,AnA_{1},\ldots,A_{n}. Since the new allocation only involves item removals, agent aia_{i} does not strongly envy anyone else in the new allocation.

Hence, the new allocation is in (𝒳^{EF1})^{-1}. If a_1 is in C, her utility strictly increases, contradicting the maximality of a_1's utility; otherwise, her utility is unchanged and strictly fewer goods are allocated, contradicting the minimality of (A_1,…,A_n). ∎

{theoremrep}

The Γ-Ψ mechanism in Sect. 5.4.1 has a RAT-degree of n−1. {proofsketch} For every profitable manipulation by a_i, and for every unknown agent a_j, the volatility of Γ implies that, for some possible valuation v_j, a truthful report by a_i leads to a_i being the favored agent and a_j being the unfavored agent, whereas the manipulation leads to a_i being the unfavored agent and a_j being the favored agent. We use this fact, combined with Lemma 5.5 and the lemma above (an agent cannot gain by giving up her own EF1 guarantee), to prove that the manipulation may be harmful for a_i.

Proof.

Let i,ji,j be two arbitrary agents. Fix n2n-2 arbitrary valuations {vk}k{i,j}\{v_{k}\}_{k\notin\{i,j\}} for the remaining n2n-2 agents. Consider two arbitrary valuations for agent aia_{i}, viv_{i} and viv_{i}^{\prime}, with viviv_{i}\neq v_{i}^{\prime}, where viv_{i} is aia_{i}’s true valuation. We will show that switching from viv_{i} to viv_{i}^{\prime} is not a safe manipulation.

Let \ell^{\ast} be some good that aia_{i} values positively, that is, vi,>0v_{i,\ell^{\ast}}>0. By the volatility of Γ\Gamma, we can construct the valuation of agent aja_{j} such that

  • vj,>0v_{j,\ell^{\ast}}>0, and vj,=0v_{j,\ell}=0 for any \ell\neq\ell^{\ast};

  • if agent aia_{i} truthfully reports viv_{i}, then agent aia_{i} is mechanism-favored and agent aja_{j} is mechanism-unfavored; if agent aia_{i} reports viv_{i}^{\prime} instead, then agent aja_{j} is mechanism-favored and agent aia_{i} is mechanism-unfavored.

Let (A1,,An)(A_{1},\ldots,A_{n}) be the allocation output by Ψ\Psi when agent aia_{i} reports viv_{i} truthfully, and (A1,,An)(A_{1}^{\prime},\ldots,A_{n}^{\prime}) be the allocation output by Ψ\Psi when agent aia_{i} reports viv_{i}^{\prime}. Our objective is to show that vi(Ai)>vi(Ai)v_{i}(A_{i})>v_{i}(A_{i}^{\prime}).

Let us consider (A_1',…,A_n') first. We claim that g_{ℓ*} ∈ A_j'. Since agent a_j is mechanism-favored, Ψ maximizes v_j(A_j'), and a_j's utility is positive if and only if she receives g_{ℓ*}. Moreover, the feasible set contains an allocation in which a_j receives g_{ℓ*}: consider the round-robin allocation with agent a_j first and agent a_i last under the order π.

Consider a new allocation (A_1'',…,A_n'') in which g_{ℓ*} is moved from a_j to a_i, that is,

  • Ai′′=Ai{g}A_{i}^{\prime\prime}=A_{i}^{\prime}\cup\{g_{\ell^{\ast}}\},

  • Aj′′=Aj{g}A_{j}^{\prime\prime}=A_{j}^{\prime}\setminus\{g_{\ell^{\ast}}\},

  • Ak′′=AkA_{k}^{\prime\prime}=A_{k}^{\prime} for k{i,j}k\notin\{i,j\}.

Notice that (A1′′,,An′′)(A_{1}^{\prime\prime},\ldots,A_{n}^{\prime\prime}) is EF1 except for ii:

  • agent aja_{j} will not envy any agent aka_{k} with kik\neq i (as each bundle Ak′′A_{k}^{\prime\prime} has a zero value for jj), and agent aja_{j} will not envy agent aia_{i} upon removing the item gg_{\ell^{\ast}} from Ai′′A_{i}^{\prime\prime};

  • no other agent kk strongly envies agent ii: given that no one envies agent ii in (A1,,An)(A_{1}^{\prime},\ldots,A_{n}^{\prime}) (as agent ii is mechanism-unfavored), no one strongly envies agent ii in (A1′′,,An′′)(A_{1}^{\prime\prime},\ldots,A_{n}^{\prime\prime});

  • no agent strongly envies agent aja_{j}, as Aj′′AjA_{j}^{\prime\prime}\subsetneq A_{j}^{\prime};

  • any two agents in N{ai,aj}N\setminus\{a_{i},a_{j}\} do not strongly envy each other, as their allocations are not changed.

Now, consider (A_1,…,A_n), which is the allocation that favors agent a_i when agent a_i truthfully reports v_i. We can assume A_j = ∅ without loss of generality. If not, we can reallocate the goods in A_j to the remaining n−1 agents while keeping the EF1 property among those n−1 agents (Lemma 5.5). Agent a_j will not strongly envy anyone: she values only g_{ℓ*}, so removing that single good from whichever bundle contains it eliminates her envy. Thus, the resultant allocation is still EF1, and no one envies the empty bundle A_j. In addition, by Lemma 5.5, the utility of each agent a_k with k ≠ j does not decrease after the reallocation of A_j.

Let 𝒴^{EF1} be the set of all EF1 allocations of the item-set G to the agent-set N ∖ {a_j}. Let (𝒴^{EF1})^{-i} be the set of all possibly partial allocations of the item-set G to the agent-set N ∖ {a_j} that are EF1 except for agent i. The above argument shows that (A_1,…,A_{j-1},A_{j+1},…,A_n) is an allocation in 𝒴^{EF1} that maximizes agent a_i's utility. We have also proved that (A_1'',…,A_{j-1}'',A_{j+1}'',…,A_n'') ∈ (𝒴^{EF1})^{-i}. By the lemma above, we have v_i(A_i) ≥ v_i(A_i''). In addition, we have A_i'' = A_i' ∪ {g_{ℓ*}}, g_{ℓ*} ∉ A_i', and v_{i,ℓ*} > 0 (by our assumption), which together imply v_i(A_i'') > v_i(A_i'). Therefore, v_i(A_i) > v_i(A_i'). ∎

6. Cake Cutting

In this section, we study the cake cutting problem: the allocation of divisible heterogeneous resources to nn agents. The cake cutting problem was proposed by Steinhaus48; Steinhaus49, and it is a widely studied subject in mathematics, computer science, economics, and political science.

In the cake cutting problem, the resource/cake is modeled as an interval [0,1][0,1], and it is to be allocated among a set of nn agents N={a1,,an}N=\{a_{1},\ldots,a_{n}\}. An allocation is denoted by (A1,,An)(A_{1},\ldots,A_{n}) where Ai[0,1]A_{i}\subseteq[0,1] is the share allocated to agent aia_{i}. We require that each AiA_{i} is a union of finitely many closed non-intersecting intervals, and, for each pair of i,j[n]i,j\in[n], AiA_{i} and AjA_{j} can only intersect at interval endpoints, i.e., the measure of AiAjA_{i}\cap A_{j} is 0. We say an allocation is complete if i=1nAi=[0,1]\bigcup_{i=1}^{n}A_{i}=[0,1]. Otherwise, it is partial.

The true preferences TiT_{i} of agent aia_{i} are given by a value density function vi:[0,1]0v_{i}:[0,1]\to\mathbb{R}_{\geq 0} that describes agent aia_{i}’s preference over the cake. To enable succinct encoding of the value density function, we adopt the widely considered assumption that each viv_{i} is piecewise constant: there exist finitely many points xi0,xi1,xi2,,xikix_{i0},x_{i1},x_{i2},\ldots,x_{ik_{i}} with 0=xi0<xi1<xi2<<xiki=10=x_{i0}<x_{i1}<x_{i2}<\cdots<x_{ik_{i}}=1 such that viv_{i} is a constant on every interval (xi,xi(+1))(x_{i\ell},x_{i(\ell+1)}), =0,1,,ki1\ell=0,1,\ldots,k_{i}-1. Given a subset S[0,1]S\subseteq[0,1] that is a union of finitely many closed non-intersecting intervals, agent aia_{i}’s value for receiving SS is then given by Vi(S)=Svi(x)𝑑x.V_{i}(S)=\int_{S}v_{i}(x)dx.
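A piecewise-constant density and its induced value V_i(S) are easy to represent and integrate exactly; the following representation as (right-endpoint, density) pieces is our own illustration.

```python
# value_of integrates a piecewise-constant density over a union of intervals.
# pieces = [(x_1, d_1), ..., (x_k, d_k)] with 0 < x_1 < ... < x_k = 1 means
# density d_t on (x_{t-1}, x_t); intervals is a list of pairs (a, b).
def value_of(pieces, intervals):
    total, left = 0.0, 0.0
    for right, density in pieces:
        for a, b in intervals:
            overlap = min(b, right) - max(a, left)
            if overlap > 0:
                total += density * overlap
        left = right
    return total

# V([0, 0.5]) for density 1.5 on [0, 0.5] and 0.5 on (0.5, 1]:
print(value_of([(0.5, 1.5), (1.0, 0.5)], [(0.0, 0.5)]))  # 0.75
```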

Fairness and efficiency are two natural goals for allocating the cake. For efficiency, we consider two commonly used criteria: social welfare and Pareto-optimality. Given an allocation (A1,,An)(A_{1},\ldots,A_{n}), its social welfare is given by i=1nVi(Ai)\sum_{i=1}^{n}V_{i}(A_{i}). This is a natural measurement of efficiency that represents the overall happiness of all agents. Pareto-optimality is a yes-or-no criterion for efficiency. An allocation (A1,,An)(A_{1},\ldots,A_{n}) is Pareto-optimal if there does not exist another allocation (A1,,An)(A_{1}^{\prime},\ldots,A_{n}^{\prime}) such that Vi(Ai)Vi(Ai)V_{i}(A_{i}^{\prime})\geq V_{i}(A_{i}) for each agent aia_{i} and at least one of these nn inequalities is strict.

For fairness, we study the two arguably most important notions: envy-freeness and proportionality. An allocation (A_1,…,A_n) is proportional if each agent receives her average share, i.e., for each i ∈ [n], V_i(A_i) ≥ (1/n)·V_i([0,1]). An allocation (A_1,…,A_n) is envy-free if every agent weakly prefers her own allocated share, i.e., for every pair i,j ∈ [n], V_i(A_i) ≥ V_i(A_j). A complete envy-free allocation is always proportional (summing V_i(A_i) ≥ V_i(A_j) over all j gives n·V_i(A_i) ≥ V_i([0,1])), but this implication does not hold for partial allocations.

Before we discuss our results, we define an additional notion, uniform segment, which will be used throughout this section. Given n value density functions v_1,…,v_n (piecewise constant by our assumptions), we identify the set of points of discontinuity of each v_i and take the union of the n sets. Sorting these points in ascending order, we let x_1,…,x_{m-1} be all points of discontinuity of the n value density functions. Let x_0 = 0 and x_m = 1. These points define m intervals, (x_0,x_1),(x_1,x_2),…,(x_{m-1},x_m), such that each v_i is constant on each of these intervals. We call each of these intervals a uniform segment, and we denote X_t = (x_{t-1},x_t) for each t = 1,…,m. For each agent a_i, we slightly abuse notation by using v_i(X_t) to denote v_i(x) for x ∈ X_t.
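Computing the uniform segments from the reported densities is a one-liner given the representation above; uniform_segments below (our naming) takes a list of such piece-lists, one per agent.

```python
# The uniform segments X_1, ..., X_m are delimited by the union of all
# agents' breakpoints (every piece-list ends at 1.0, so 1.0 is included).
def uniform_segments(profiles):
    cuts = sorted({x for pieces in profiles for x, _ in pieces} | {0.0})
    return list(zip(cuts, cuts[1:]))   # [(x_0, x_1), (x_1, x_2), ...]
```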

Since all agents’ valuations on each uniform segment are uniform, it is tempting to think of the cake-cutting problem as the problem of allocating m divisible homogeneous goods. However, this interpretation is inaccurate where agents’ strategic behavior is concerned: in the cake-cutting setting, an agent can report a value density function with a different set of points of discontinuity, which changes how the divisible goods are defined. To see a significant difference between the two models: in the divisible-goods setting, the equal-division rule that allocates each divisible good evenly to the n agents is truthful (with RAT-degree n), envy-free and proportional, while, in the cake-cutting setting, it is proved in tao2022existence that truthfulness and proportionality are incompatible even for two agents.

Results

In Sect. 6.1, we start by considering the simple mechanism that outputs an allocation with maximum social welfare. We show that the RAT-degree of this mechanism is 0. As in the case of indivisible goods, we also consider the normalized variant of this mechanism, and we show that its RAT-degree is 1. In Sect. 6.2, we consider mechanisms that output fair allocations. We revisit the mechanisms studied in BU2023Rat and determine their RAT-degrees. We will see that one of those mechanisms, which always outputs envy-free allocations, has a RAT-degree of n−1. However, this mechanism performs very poorly on efficiency. Finally, in Sect. 6.5, we propose a new mechanism with RAT-degree n−1 that always outputs proportional and Pareto-optimal allocations.

6.1. Maximum Social Welfare Mechanisms

It is easy to find an allocation that maximizes social welfare: allocate each uniform segment X_t to an agent a_i with the maximum v_i(X_t). When multiple agents have the same largest value v_i(X_t) on the segment X_t, we need to specify a tie-breaking rule. However, as we will see, the choice of the tie-breaking rule does not affect the RAT-degree of the mechanism.
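A sketch of this rule under our piecewise-constant representation, reusing uniform_segments from above and breaking ties toward the smallest index (one arbitrary choice among many):

```python
# Allocate each uniform segment to an agent with the highest density on it.
def density_on(pieces, x):
    left = 0.0
    for right, density in pieces:
        if left <= x < right:
            return density
        left = right
    return pieces[-1][1]                 # x = 1 falls in the last piece

def utilitarian(profiles):
    n = len(profiles)
    allocation = {i: [] for i in range(n)}
    for a, b in uniform_segments(profiles):
        mid = (a + b) / 2                # densities are constant inside (a, b)
        winner = max(range(n), key=lambda i: (density_on(profiles[i], mid), -i))
        allocation[winner].append((a, b))
    return allocation
```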

It is easy to see that, whatever the tie-breaking rule is, the maximum social welfare mechanism is safely manipulable. Reporting higher values on every uniform segment is safe, as the agent can only gain segments and never lose one; for example, doubling the values on all uniform segments is a safe manipulation, and it is profitable whenever it wins the agent a segment she would otherwise lose.

{observation}

Utilitarian Cake-Cutting with any tie-breaking rule has RAT-degree 0.

We next consider the following variant of the maximum social welfare mechanism: first rescale each viv_{i} such that Vi([0,1])=01vi(x)𝑑x=1V_{i}([0,1])=\int_{0}^{1}v_{i}(x)dx=1, and then output the allocation with the maximum social welfare. We will show that the RAT-degree is 11. The proof is similar to the one for indivisible items (Theorem 5.1) and is given in the appendix.

{theoremrep}

When there are at least three agents, Normalized Utilitarian Cake-Cutting with any tie-breaking rule has RAT-degree 11.

Proof.

We assume without loss of generality that the value density function reported by each agent is normalized (as, otherwise, the mechanism will normalize the function for the agent).

We first show that the mechanism is not 0-known-agents safely-manipulable. Consider an arbitrary agent a_i and let v_i be her true value density function. Consider an arbitrary misreport v_i' of agent a_i with v_i' ≠ v_i. Since the value density functions are normalized, there must exist an interval (a,b) on which v_i' and v_i are constant and v_i'(x) < v_i(x) for x ∈ (a,b). Choose ε > 0 such that v_i(x) > v_i'(x) + ε for x ∈ (a,b). Consider the following two value density functions (note that both are normalized):

v(1)(x)={vi(x)+εif x(a,b)vi(x)εba1+abotherwiseandv(2)(x)={vi(x)εif x(a,b)vi(x)+εba1+abotherwise.v^{(1)}(x)=\left\{\begin{array}[]{ll}v_{i}^{\prime}(x)+\varepsilon&\mbox{if }x\in(a,b)\\ v_{i}^{\prime}(x)-\varepsilon\cdot\frac{b-a}{1+a-b}&\mbox{otherwise}\end{array}\right.\quad\mbox{and}\quad v^{(2)}(x)=\left\{\begin{array}[]{ll}v_{i}^{\prime}(x)-\varepsilon&\mbox{if }x\in(a,b)\\ v_{i}^{\prime}(x)+\varepsilon\cdot\frac{b-a}{1+a-b}&\mbox{otherwise}\end{array}\right..

Suppose the remaining n1n-1 agents’ reported value density functions are either v(1)v^{(1)} or v(2)v^{(2)} and each of v(1)v^{(1)} and v(2)v^{(2)} is reported by at least one agent (here we use the assumption n3n\geq 3). In this case, agent aia_{i} will receive the empty set by reporting viv_{i}^{\prime}. On the other hand, when reporting viv_{i}, agent aia_{i} will receive an allocation that at least contains (a,b)(a,b) as a subset. Since viv_{i} has a positive value on (a,b)(a,b), reporting viv_{i}^{\prime} is not a safe manipulation.

We next show that the mechanism is 11-known-agent safely-manipulable. Suppose agent a1a_{1}’s true value density function is

v1(x)={1.5if x[0,0.5]0.5otherwise,v_{1}(x)=\left\{\begin{array}[]{ll}1.5&\mbox{if }x\in[0,0.5]\\ 0.5&\mbox{otherwise}\end{array}\right.,

and agent a1a_{1} knows that agent a2a_{2} reports the uniform value density function v2(x)=1v_{2}(x)=1 for x[0,1]x\in[0,1]. We will show that the following manipulation of agent a1a_{1} is safe and profitable.

v1(x)={2if x[0,0.5]0otherwisev_{1}^{\prime}(x)=\left\{\begin{array}[]{ll}2&\mbox{if }x\in[0,0.5]\\ 0&\mbox{otherwise}\end{array}\right.

Firstly, regardless of the reports of the remaining n−2 agents, the final allocation received by agent a_1 must be a subset of [0,0.5], as agent a_2's value is higher on the other half (0.5,1]. Since v_1' is larger than v_1 on [0,0.5], any interval received by agent a_1 when reporting v_1 would also be received if v_1' were reported. Thus, the manipulation is safe.

Secondly, if the remaining n2n-2 agents’ value density functions are

v3(x)=v4(x)==vn(x)={1.75if x[0,0.5]0.25otherwise,v_{3}(x)=v_{4}(x)=\cdots=v_{n}(x)=\left\{\begin{array}[]{ll}1.75&\mbox{if }x\in[0,0.5]\\ 0.25&\mbox{otherwise}\end{array}\right.,

it is easy to verify that agent a1a_{1} receives the empty set when reporting truthfully and she receives [0,0.5][0,0.5] by reporting v1v_{1}^{\prime}. Therefore, the manipulation is profitable. ∎

6.2. Fair Mechanisms

In this section, we focus on mechanisms that always output fair (envy-free or proportional) allocations. As we have mentioned earlier, it is proved in tao2022existence that truthfulness and proportionality are incompatible even for two agents and even if partial allocations are allowed. This motivates the search for fair cake-cutting algorithms with a high RAT-degree.

The mechanisms discussed in this section were already considered in BU2023Rat. However, they were studied only with respect to whether or not they are risk-averse truthful (in our language, whether the RAT-degree is positive). With our new notion of RAT-degree, we are now able to provide a more fine-grained view of their resistance to manipulation.

One natural envy-free mechanism is to evenly allocate each uniform segment XtX_{t} to all agents. Specifically, each XtX_{t} is partitioned into nn intervals of equal length, and each agent receives exactly one of them. It is easy to see that Vi(Aj)=1nVi([0,1])V_{i}(A_{j})=\frac{1}{n}V_{i}([0,1]) for any i,j[n]i,j\in[n] under this allocation, so the allocation is envy-free and proportional.

To completely define the mechanism, we need to specify the order of evenly allocating each Xt=(xt1,xt)X_{t}=(x_{t-1},x_{t}) to the nn agents. A natural tie-breaking rule is to let agent a1a_{1} get the left-most interval and agent ana_{n} get the right-most interval. Specifically, agent aia_{i} receives the ii-th interval of XtX_{t}, which is [xt1+i1n(xtxt1),xt1+in(xtxt1)][x_{t-1}+\frac{i-1}{n}(x_{t}-x_{t-1}),x_{t-1}+\frac{i}{n}(x_{t}-x_{t-1})]. However, it was proved in BU2023Rat that the equal division mechanism under this ordering rule is safely-manipulable, i.e., its RAT-degree is 0. In particular, agent a1a_{1}, knowing that she will always receive the left-most interval in each XtX_{t}, can safely manipulate by deleting a point of discontinuity in her value density function if her value on the left-hand side of this point is higher.

To avoid this type of manipulation, a different ordering rule was considered by BU2023Rat (see Mechanism 3 in their paper): at the t-th segment, the n equal-length subintervals of X_t are allocated to the n agents in the left-to-right order a_t, a_{t+1},…,a_n, a_1, a_2,…,a_{t-1}. Under this ordering rule, an agent does not know her position in the left-to-right order of X_t without knowing the others' value density functions. Indeed, even if only one agent's value density function is unknown, an agent cannot know the index t of any segment X_t. This suggests that the mechanism has a RAT-degree of n−1.
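A sketch of this rotating-order equal division, reusing uniform_segments from above (indices are 0-based in the code, so segment number t hands its left-most subinterval to agent (t−1) mod n):

```python
# Equal division with the rotating left-to-right order a_t, a_{t+1}, ...,
# a_n, a_1, ..., a_{t-1} on the t-th uniform segment.
def equal_division_rotating(profiles):
    n = len(profiles)
    allocation = {i: [] for i in range(n)}
    for t, (a, b) in enumerate(uniform_segments(profiles), start=1):
        step = (b - a) / n
        for k in range(n):                   # k-th subinterval from the left
            agent = (t - 1 + k) % n
            allocation[agent].append((a + k * step, a + (k + 1) * step))
    return allocation
```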

{theoremrep}

Consider the mechanism that evenly partitions each uniform segment XtX_{t} into nn equal-length subintervals and allocates these nn subintervals to the nn agents with the left-to-right order at,at+1,,an,a1,a2,,at1a_{t},a_{t+1},\ldots,a_{n},a_{1},a_{2},\ldots,a_{t-1}. It has RAT-degree n1n-1 and always outputs envy-free allocations. {proofsketch} Envy-freeness is trivial: for any i,j[n]i,j\in[n], we have Vi(Aj)=1nVi([0,1])V_{i}(A_{j})=\frac{1}{n}V_{i}([0,1]). The general impossibility result in tao2022existence shows that no mechanism with the envy-freeness guarantee can be truthful, so the RAT-degree is at most n1n-1.

To show that the RAT-degree is exactly n1n-1, we show that, if even a single agent is not known to the manipulator, it is possible that this agent’s valuation adds discontinuity points in a way that the ordering in each uniform segment is unfavorable for the manipulator.

Proof.

Envy-freeness is trivial: for any i,j[n]i,j\in[n], we have Vi(Aj)=1nVi([0,1])V_{i}(A_{j})=\frac{1}{n}V_{i}([0,1]). The general impossibility result in tao2022existence shows that no mechanism with the envy-freeness guarantee can be truthful, so the RAT-degree is at most n1n-1.

To show that the RAT-degree is exactly n1n-1, consider an arbitrary agent aia_{i} with true value density function viv_{i} and an arbitrary agent aja_{j} whose report is unknown to agent aia_{i}. Fix n2n-2 arbitrary value density functions {vk}k{i,j}\{v_{k}\}_{k\notin\{i,j\}} that are known by agent aia_{i} to be the reports of the remaining n2n-2 agents. For any viv_{i}^{\prime}, we will show that agent aia_{i}’s reporting viv_{i}^{\prime} is either not safe or not profitable.

Let TT be the set of points of discontinuity for v1,v2,,vj1,vj+1,,vnv_{1},v_{2},\ldots,v_{j-1},v_{j+1},\ldots,v_{n}, and TT^{\prime} be the set of points of discontinuity with viv_{i} replaced by viv_{i}^{\prime}. If TTT\subseteq T^{\prime} (i.e., the uniform segment partition defined by TT^{\prime} is “finer” than the partition defined by TT), the manipulation is not profitable, as agent aia_{i} will receive her proportional share 1nVi([0,1])\frac{1}{n}V_{i}([0,1]) in both cases.

It remains to consider the case where there exists a point of discontinuity yy of viv_{i} such that yTy\in T and yTy\notin T^{\prime}. This implies that yy is a point of discontinuity in viv_{i}, but not in viv_{i}^{\prime} nor in the valuation of any other agent. We will show that the manipulation is not safe in this case.

Choose a sufficiently small ε>0\varepsilon>0 such that (yε,y+ε)(y-\varepsilon,y+\varepsilon) is contained in a uniform segment defined by TT^{\prime}. We consider two cases, depending on whether the “jump” of viv_{i} in its discontinuity point yy is upwards or downwards.

Case 1: lim_{x→y^-} v_i(x) < lim_{x→y^+} v_i(x). We can construct v_j such that: 1) y−ε and y+ε are points of discontinuity of v_j, and 2) the uniform segment (y−ε, y+ε) under the profile (v_1,…,v_{i-1},v_i',v_{i+1},…,v_n) is the t-th segment, where n divides t−i (i.e., agent a_i receives the left-most subinterval of this uniform segment). Notice that 2) is always achievable by inserting a suitable number of points of discontinuity for v_j before y−ε. Given that lim_{x→y^-} v_i(x) < lim_{x→y^+} v_i(x), agent a_i's allocated subinterval on the segment (y−ε, y+ε) has value strictly less than (1/n)·V_i((y−ε, y+ε)).

Case 2: lim_{x→y^-} v_i(x) > lim_{x→y^+} v_i(x). We can construct v_j such that 1) (y−ε, y+ε) is a uniform segment under the profile (v_1,…,v_{i-1},v_i',v_{i+1},…,v_n), and 2) agent a_i receives the right-most subinterval on this segment. In this case, agent a_i again receives a value of strictly less than (1/n)·V_i((y−ε, y+ε)) on the segment (y−ε, y+ε).

We can do this for every point yy of discontinuity of viv_{i} that is in TTT\setminus T^{\prime}. By a suitable choice of vjv_{j} (with a suitable number of points of discontinuity of vjv_{j} inserted in between), we can make sure agent aia_{i} receives a less-than-average value on every such segment (yε,y+ε)(y-\varepsilon,y+\varepsilon). Moreover, agent aia_{i} receives exactly the average value on each of the remaining segments, because the remaining discontinuity points of TT are contained in TT^{\prime}. Therefore, the overall utility of aia_{i} by reporting viv_{i}^{\prime} is strictly less than 1nVi([0,1])\frac{1}{n}V_{i}([0,1]). Given that aia_{i} receives value exactly 1nVi([0,1])\frac{1}{n}V_{i}([0,1]) for truthfully reporting viv_{i}, reporting viv_{i}^{\prime} is not safe. ∎

Although the equal division mechanism with the above-mentioned carefully designed ordering rule is envy-free and has a high RAT-degree of n1n-1, it is undesirable in at least two aspects:

  1. (1)

it requires many cuts on the cake, making n−1 cuts on each uniform segment; this is particularly undesirable if piecewise-constant functions are used to approximate more general value density functions;

  2. (2)

it is highly inefficient: each agent a_i's utility is never more than her minimum proportionality requirement (1/n)·V_i([0,1]).

Regarding point (1), researchers have studied allocations with connected pieces, i.e., allocations with only n−1 cuts on the cake. A well-known mechanism in this category is the moving-knife procedure, first proposed by dubins1961cut, which always returns a proportional connected allocation. Unfortunately, it was shown by BU2023Rat that Dubins and Spanier's moving-knife procedure is safely-manipulable, for some very subtle reasons.

BU2023Rat proposed a variant of the moving-knife procedure that is RAT. In addition, they showed that another variant of the moving-knife procedure, proposed by ortega2022obvious, is also RAT. (When v_i is allowed to take the value 0, tie-breaking needs to be handled carefully to ensure RAT; see BU2023Rat for more details. Here, for simplicity, we assume v_i(x) > 0 for each i ∈ [n] and x ∈ [0,1].) In the appendix, we describe both mechanisms and show that both of them have RAT-degree 1. {toappendix}

6.3. Moving-knife mechanisms: descriptions and proofs

Hereafter, we assume vi(x)>0v_{i}(x)>0 for each i[n]i\in[n] and x[0,1]x\in[0,1].

Dubins and Spanier’s moving-knife procedure

Let u_i = (1/n)·V_i([0,1]) be the value of agent a_i's proportional share. In the first iteration, each agent a_i marks a point x_i^{(1)} on [0,1] such that the interval [0, x_i^{(1)}] has value exactly u_i to agent a_i. Take x^{(1)} = min_{i∈[n]} x_i^{(1)}; the agent a_{i_1} with x_{i_1}^{(1)} = x^{(1)} takes the piece [0, x^{(1)}] and leaves the game. In the second iteration, each of the remaining n−1 agents a_i marks a point x_i^{(2)} on the cake such that [x^{(1)}, x_i^{(2)}] has value exactly u_i. Take x^{(2)} = min_{i∈[n]∖{i_1}} x_i^{(2)}; the agent a_{i_2} with x_{i_2}^{(2)} = x^{(2)} takes the piece [x^{(1)}, x^{(2)}] and leaves the game. This is done iteratively until n−1 agents have left the game with their allocated pieces. Finally, the only remaining agent takes the remaining part of the cake. It is easy to see that each of the first n−1 agents receives exactly her proportional share, while the last agent receives weakly more than her proportional share; hence the procedure always returns a proportional allocation.
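A direct-revelation sketch of the procedure for piecewise-constant densities, reusing value_of from the earlier sketch; the helper mark (our naming) locates the point where the value accumulated from a given start reaches a target, relying on the strictly-positive-density assumption above.

```python
# Find x such that the integral of the density over [start, x] equals target.
def mark(pieces, start, target):
    prev, acc = 0.0, 0.0
    for right, density in pieces:
        lo = max(prev, start)
        if right > lo:
            gain = density * (right - lo)
            if acc + gain >= target:
                return lo + (target - acc) / density
            acc += gain
        prev = right
    return 1.0

def dubins_spanier(profiles):
    n = len(profiles)
    shares = [value_of(p, [(0.0, 1.0)]) / n for p in profiles]   # u_i
    cut, remaining, allocation = 0.0, set(range(n)), {}
    while len(remaining) > 1:
        marks = {i: mark(profiles[i], cut, shares[i]) for i in remaining}
        winner = min(marks, key=lambda i: (marks[i], i))   # left-most mark
        allocation[winner] = (cut, marks[winner])
        cut = marks[winner]
        remaining.remove(winner)
    allocation[remaining.pop()] = (cut, 1.0)               # last agent
    return allocation
```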

Notice that, although the mechanism is described in an iterative, interactive way that resembles an extensive-form game, we consider direct-revelation mechanisms in this paper, where the n value density functions are reported to the mechanism at the beginning. In the above description of Dubins and Spanier's moving-knife procedure, as well as in its two variants mentioned later, by “asking an agent to mark a point” we mean that the mechanism computes such a point based on the reported value density function. In particular, we do not consider the scenario where agents can adaptively choose their next marks based on the allocations in the previous iterations.

Unfortunately, it was shown by BU2023Rat that Dubins and Spanier’s moving-knife procedure is safely-manipulable for some very subtle reasons.

BU2023Rat proposed a variant of the moving-knife procedure that is risk-averse truthful. In addition, BU2023Rat showed that another variant of the moving-knife procedure, proposed by ortega2022obvious, is also risk-averse truthful. (When v_i is allowed to take the value 0, tie-breaking needs to be handled carefully to ensure risk-averse truthfulness; see BU2023Rat for more details.) Below, we first describe both mechanisms and then show that both of them have RAT-degree 1.

Ortega and Segal-Halevi’s moving knife procedure

The first iteration of Ortega and Segal-Halevi's moving-knife procedure is the same as in Dubins and Spanier's. After that, the interval [x^{(1)}, 1] is allocated recursively among the n−1 agents [n]∖{i_1}. That is, in the second iteration, each agent a_i marks a point x_i^{(2)} such that the interval [x^{(1)}, x_i^{(2)}] has value exactly (1/(n−1))·V_i([x^{(1)}, 1]) (instead of (1/n)·V_i([0,1]) as in Dubins and Spanier's moving-knife procedure). The remaining part is the same: the agent with the left-most mark takes the corresponding piece and leaves the game. After the second iteration, the remaining part of the cake is again recursively allocated to the remaining n−2 agents. This continues until the entire cake is allocated.

Bu, Song, and Tao’s moving knife procedure

Each agent a_i is asked to mark all the n−1 “equal-division points” x_i^{(1)},…,x_i^{(n-1)} at the beginning, such that V_i([x_i^{(t-1)}, x_i^{(t)}]) = (1/n)·V_i([0,1]) for each t = 1,…,n, where we set x_i^{(0)} = 0 and x_i^{(n)} = 1. The remaining part is similar to Dubins and Spanier's moving-knife procedure: in the first iteration, the agent i_1 with the minimum x_{i_1}^{(1)} takes [0, x_{i_1}^{(1)}] and leaves the game; in the second iteration, the agent i_2 with the minimum x_{i_2}^{(2)} among the remaining n−1 agents takes [x_{i_1}^{(1)}, x_{i_2}^{(2)}] and leaves the game; and so on. The difference from Dubins and Spanier's moving-knife procedure is that each x_i^{(t)} is computed at the beginning, instead of depending on the position of the previous cut.
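A sketch of this variant, reusing mark and value_of from the sketches above: all equal-division points are computed once from the reports, and the pick-order loop then only compares precomputed marks.

```python
# All n-1 equal-division points of one agent, computed up front.
def equal_division_points(pieces, n):
    total, points, x = value_of(pieces, [(0.0, 1.0)]), [], 0.0
    for _ in range(n - 1):
        x = mark(pieces, x, total / n)   # next point worth exactly 1/n
        points.append(x)
    return points

def bu_song_tao(profiles):
    n = len(profiles)
    pts = {i: equal_division_points(profiles[i], n) for i in range(n)}
    cut, remaining, allocation = 0.0, set(range(n)), {}
    for t in range(n - 1):               # the t-th iteration uses the t-th marks
        winner = min(remaining, key=lambda i: (pts[i][t], i))
        allocation[winner] = (cut, pts[winner][t])
        cut = pts[winner][t]
        remaining.remove(winner)
    allocation[remaining.pop()] = (cut, 1.0)
    return allocation
```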

Theorem 6.1.

The RAT-degree of Ortega and Segal-Halevi’s moving knife procedure is 11.

Proof.

It was proved in BU2023Rat that the mechanism is not 0-known-agent safely-manipulable. It remains to show that it is 1-known-agent safely-manipulable. Suppose agent a_1's value density function is uniform, v_1(x) = 1 for x ∈ [0,1], and agent a_1 knows that agent a_2 will report v_2 such that v_2(x) = 1 for x ∈ [1−ε, 1] and v_2(x) = 0 for x ∈ [0, 1−ε), for some very small ε > 0 with ε ≪ 1/n. We show that the following v_1' is a safe manipulation.

v1(x)={1x[0,n2n]2nεx[12ε,1ε]0otherwisev_{1}^{\prime}(x)=\left\{\begin{array}[]{ll}1&x\in[0,\frac{n-2}{n}]\\ \frac{2}{n\varepsilon}&x\in[1-2\varepsilon,1-\varepsilon]\\ 0&\mbox{otherwise}\end{array}\right.

Before we move on, note an important property of v1v_{1}^{\prime}: for any tn2nt\leq\frac{n-2}{n}, we have V1([t,1])=V1([t,1])V_{1}([t,1])=V_{1}^{\prime}([t,1]).

Let [a,b] be the piece received by agent a_1 when she reports v_1 truthfully. If b ≤ (n−2)/n, the above-mentioned property implies that she will also receive exactly [a,b] when reporting v_1'. If b > (n−2)/n, then we know that agent a_1 is the (n−1)-th agent in the procedure. To see this, we have V_1([a,b]) ≥ (1/n)·V_1([0,1]) = 1/n by the property of Ortega and Segal-Halevi's moving-knife procedure, and we also have V_1([b,1]) = 1−b < 2/n. This implies b cannot be the 1/k cut point of [a,1] for k ≥ 3. On the other hand, it is obvious that agent a_2 takes a piece after agent a_1. Thus, by the time agent a_1 takes [a,b], the only remaining agent is agent a_2.

Since there are exactly two remaining agents in the game before agent a1a_{1} takes [a,b][a,b], we have V1([a,1])2nV1([0,1])=2nV_{1}([a,1])\geq\frac{2}{n}V_{1}([0,1])=\frac{2}{n}. This implies an2na\leq\frac{n-2}{n} and b=a+12n1nb=\frac{a+1}{2}\leq\frac{n-1}{n}. On the other hand, by reporting v1v_{1}^{\prime}, agent a1a_{1} can then get the piece [a,b][a,b^{\prime}] with b[12ε,1ε]b^{\prime}\in[1-2\varepsilon,1-\varepsilon]. We see that b>bb^{\prime}>b. Thus, the manipulation is safe and profitable. ∎

Theorem 6.2.

The RAT-degree of Bu, Song, and Tao’s moving knife procedure is 11.

Proof.

It was proved in BU2023Rat that the mechanism is not 0-known-agent safely-manipulable. The proof that it is 1-known-agent safely-manipulable is similar to the proof for Ortega and Segal-Halevi's moving-knife procedure, with the same v_1, v_1' and v_2. It suffices to notice that the first n−2 equal-division points are the same for v_1 and v_1', while the last equal-division point of v_1' is to the right of v_1's. Given that agent a_2 will always receive a piece after agent a_1, the same analysis as in the previous proof shows that the manipulation is safe and profitable. ∎


These results raise the following question. {open} Is there a proportional connected cake-cutting rule with RAT-degree at least 2?

We handle point (2) from above in the following subsection.

6.5. A Proportional and Pareto-Optimal Mechanism with RAT-degree n1n-1

In this section, we provide a mechanism with RAT-degree n−1 that always outputs proportional and Pareto-optimal allocations. In addition, we show that the mechanism can be implemented in polynomial time. The mechanism uses ideas similar to those of the mechanism in Section 5.4.

6.5.1. Description of Mechanism

The mechanism has two components: an order selection rule Γ\Gamma and an allocation rule Ψ\Psi. The order selection rule Γ\Gamma takes the valuation profile (v1,,vn)(v_{1},\ldots,v_{n}) as an input and outputs an order π\pi of the nn agents. We use πi\pi_{i} to denote the ii-th agent in the order. The allocation rule Ψ\Psi then outputs an allocation based on π\pi.

We first define the allocation rule Ψ\Psi. Let 𝒳PROP\mathcal{X}^{\mathrm{PROP}} be the set of all proportional allocations. Then Ψ\Psi outputs an allocation in 𝒳PROP\mathcal{X}^{\mathrm{PROP}} in the following “leximax” way:

  1. (1)

    the allocation maximizes agent π1\pi_{1}’s utility;

  2. (2)

    subject to (1), the allocation maximizes agent π2\pi_{2}’s utility;

  3. (3)

    subject to (1) and (2), the allocation maximizes agent π3\pi_{3}’s utility;

  4. (4)

    \cdots

We next define Γ\Gamma. We first adapt the volatility property of Γ\Gamma (defined in Sect. 5.4) to the cake-cutting setting.

Definition 6.3.

A function Γ (from the set of valuation profiles to the set of orders on agents) is called volatile if for any two agents a_i ≠ a_j and any two orders π and π', any set of n−2 value density functions {v_k}_{k ∉ {i,j}}, any value density function v̄_j, and any two reported value density functions v_i, v_i' of agent a_i with v_i ≠ v_i', there exists a valuation function v_j of agent a_j such that

  • vjv_{j} is a rescaled version of v¯j\bar{v}_{j}, i.e., there exists α\alpha such that vj(x)=αv¯j(x)v_{j}(x)=\alpha\bar{v}_{j}(x) for all x[0,1]x\in[0,1];

  • Γ\Gamma outputs π\pi for the valuation profile {vk}k{i,j}{vi}{vj}\{v_{k}\}_{k\notin\{i,j\}}\cup\{v_{i}\}\cup\{v_{j}\}, and Γ\Gamma outputs π\pi^{\prime} for the valuation profile {vk}k{i,j}{vi}{vj}\{v_{k}\}_{k\notin\{i,j\}}\cup\{v_{i}^{\prime}\}\cup\{v_{j}\}.

In other words, a manipulation of agent ii from viv_{i} to viv_{i}^{\prime} can affect the output of Γ\Gamma in any possible way (from any order π\pi to any order π\pi^{\prime}), depending on the report of agent jj.

{propositionrep}

There exists a volatile function Γ\Gamma.

Proof.

The function does the following. It first finds the maximum value among all the value density functions (over all uniform segments): v^* = max_{i∈[n], ℓ∈[m]} v_i(X_ℓ). It then views v^* as a binary string that encodes the following information:

  • the index ii of an agent aia_{i},

  • a non-negative integer tt,

  • two non-negative integers aa and bb that are at most n!1n!-1.

We pad v^* with 0’s as most significant bits if its binary representation is too short for this format; if it is longer than needed, we keep only the required number of least significant bits.

The order π is chosen in the following way. Firstly, we use an integer between 0 and n!−1 to encode an order. Then, let s be the t-th bit of the binary encoding of agent a_i's value density function. The order is the one encoded by the integer (a·s + b) mod n!.

We now prove that Γ is volatile. Suppose v_i and v_i' differ at their t-th bits, so that the t-th bit of v_i is s and the t-th bit of v_i' is s' ≠ s. We construct a number v^* that encodes the index i, the integer t, and two integers a, b such that (a·s + b) mod n! encodes π and (a·s' + b) mod n! encodes π'.

Then, we construct vjv_{j} by rescaling v¯j\bar{v}_{j} such that the maximum value among all density functions is attained by vjv_{j}, and this number is exactly vv^{\ast}, that is, v=vj(X)v^{\ast}=v_{j}(X_{\ell}) for some uniform segment XX_{\ell}. If the encoded vv^{*} is not large enough to be a maximum value, we enlarge it as needed by adding most significant bits.

By definition, Γ returns π when a_i reports v_i, and returns π' when a_i reports v_i'. ∎
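The proof only needs some fixed bijection between {0,…,n!−1} and orders of the n agents; one standard choice (ours, not prescribed by the paper) is Lehmer-code unranking:

```python
from math import factorial

# Map an integer p in {0, ..., n!-1} to a distinct order of n agents.
def unrank_permutation(p, n):
    agents, order = list(range(n)), []
    for k in range(n - 1, -1, -1):       # one factorial digit per position
        f = factorial(k)
        order.append(agents.pop(p // f))
        p %= f
    return order

print(unrank_permutation(0, 3))   # [0, 1, 2]
print(unrank_permutation(5, 3))   # [2, 1, 0]
```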

6.5.2. Properties of the Mechanism

The mechanism always outputs a proportional allocation by definition. It is straightforward to check that it outputs a Pareto-efficient allocation. {propositionrep} The Γ\Gamma-Ψ\Psi mechanism for cake-cutting always returns a Pareto-efficient allocation.

Proof.

Suppose for the sake of contradiction that (A1,,An)(A_{1},\ldots,A_{n}) output by the mechanism is Pareto-dominated by (A1,,An)(A_{1}^{\prime},\ldots,A_{n}^{\prime}), i.e., we have

  1. (1)

    Vi(Ai)Vi(Ai)V_{i}(A_{i}^{\prime})\geq V_{i}(A_{i}) for each agent aia_{i}, and

  2. (2)

    for at least one agent aia_{i}, Vi(Ai)>Vi(Ai)V_{i}(A_{i}^{\prime})>V_{i}(A_{i}).

Property (1) above ensures that (A_1',…,A_n') is proportional and thus is also in 𝒳^{PROP}: for each i ∈ [n], V_i(A_i') ≥ V_i(A_i) ≥ (1/n)·V_i([0,1]) (as the allocation (A_1,…,A_n) is proportional). Based on property (2), find the smallest index i such that V_{π_i}(A_{π_i}') > V_{π_i}(A_{π_i}); for every smaller index, the two utilities are equal (they cannot decrease, by (1)). We see that (A_1,…,A_n) does not maximize the utility of the i-th agent in the order π subject to the utilities of the earlier agents, which contradicts the definition of the mechanism. ∎

It then remains to show that the mechanism has RAT-degree n1n-1. We need the following proposition; it follows from known results on super-proportional cake-cutting (dubins1961cut; woodall1986note); for completeness we provide a proof in the appendix.

{propositionrep}

Let 𝒳PROP\mathcal{X}^{\mathrm{PROP}} be the set of all proportional allocations for the valuation profile (v1,,vn)(v_{1},\ldots,v_{n}). Let (A1,,An)(A_{1},\ldots,A_{n}) be the allocation in 𝒳PROP\mathcal{X}^{\mathrm{PROP}} that maximizes agent aia_{i}’s utility. If there exists j[n]{i}j\in[n]\setminus\{i\} such that viv_{i} and vjv_{j} are not identical up to scaling, then Vi(Ai)>1nVi([0,1])V_{i}(A_{i})>\frac{1}{n}V_{i}([0,1]).

Proof.

We will explicitly construct a proportional allocation (B_1,…,B_n) with V_i(B_i) > (1/n)·V_i([0,1]) when the precondition in the statement is satisfied. Notice that this implies the proposition, as we are considering the allocation maximizing a_i's utility. To construct such an allocation, assume without loss of generality that v_i and v_j are normalized (so v_i ≠ v_j), and consider the equal-division allocation where each uniform segment X_t is evenly divided. This already guarantees that agent a_i receives a value of (1/n)·V_i([0,1]). Since v_i and v_j are normalized and v_i ≠ v_j, there exist two uniform segments X_{t_1} and X_{t_2} such that v_i(X_{t_1}) > v_j(X_{t_1}) and v_i(X_{t_2}) < v_j(X_{t_2}). Agents a_i and a_j can then exchange parts of their allocations on X_{t_1} and X_{t_2} to improve the utility of both, which keeps the resultant allocation proportional. For example, let ε > 0 be a very small number. Agent a_i can give a piece of length ε/(v_i(X_{t_2})+v_j(X_{t_2})) of X_{t_2} to agent a_j, in exchange for a piece of length ε/(v_i(X_{t_1})+v_j(X_{t_1})) of X_{t_1}. This describes the allocation (B_1,…,B_n). ∎

The proof of the following theorem is similar to the one for indivisible goods (Section 5.4.2). {theoremrep} The Γ\Gamma-Ψ\Psi mechanism for cake-cutting has RAT-degree n1n-1.

Proof.

Consider an arbitrary agent a_i with true value density function v_i, and an arbitrary agent a_j whose reported value density function is unknown to a_i. Fix n−2 arbitrary value density functions {v_k}_{k ∉ {i,j}} for the remaining n−2 agents. Consider an arbitrary manipulation v_i' ≠ v_i.

Choose a uniform segment XtX_{t} with respect to (v1,,vj1,vj+1,,vn)(v_{1},\ldots,v_{j-1},v_{j+1},\ldots,v_{n}), satisfying vi(Xt)>0v_{i}(X_{t})>0. Choose a very small interval EXtE\subseteq X_{t}, such that the value density function

v¯j={0if xEvi(x)otherwise\bar{v}_{j}=\left\{\begin{array}[]{ll}0&\mbox{if }x\in E\\ v_{i}(x)&\mbox{otherwise}\end{array}\right.

is not a scaled version of some vkv_{k} with k[n]{i,j}k\in[n]\setminus\{i,j\}. Apply the volatility of Γ\Gamma to find a value density function vjv_{j} for agent aja_{j} that rescales v¯j\bar{v}_{j} such that

  1. (1)

    when agent aia_{i} reports viv_{i}, agent aia_{i} is the first in the order output by Γ\Gamma;

  2. (2)

    when agent aia_{i} reports viv_{i}^{\prime}, agent aja_{j} is the first in the order output by Γ\Gamma.

Let (A_1,…,A_n) and (A_1',…,A_n') be the allocations output for the profiles {v_k}_{k∉{i,j}} ∪ {v_i} ∪ {v_j} and {v_k}_{k∉{i,j}} ∪ {v_i'} ∪ {v_j}, respectively. Since v̄_j is not a scaled version of any v_k with k ∈ [n]∖{i,j}, neither is its rescaled version v_j. By Proposition 6.5.2, V_j(A_j') > (1/n)·V_j([0,1]), as a_j is the highest-priority agent when a_i reports v_i'. Let D be some subset of A_j' with V_j(D) > 0 and V_j(A_j' ∖ D) ≥ (1/n)·V_j([0,1]), and consider the allocation (A_1^+,…,A_n^+) in which D is moved from a_j to a_i, that is,

  • for k{i,j}k\notin\{i,j\}, Ak+=AkA_{k}^{+}=A_{k}^{\prime};

  • Ai+=AiDA_{i}^{+}=A_{i}^{\prime}\cup D;

  • Aj+=AjDA_{j}^{+}=A_{j}^{\prime}\setminus D.

It is clear by our construction that the new allocation is still proportional with respect to $\{v_k\}_{k\notin\{i,j\}}\cup\{v_i'\}\cup\{v_j\}$. In addition, since $v_j$ is a rescaling of $\bar{v}_j$, which agrees with $v_i$ outside $E$ and vanishes on $E$, the fact that $V_j(D)>0$ implies $V_i(D)>0$ under agent $a_i$'s true value density function $v_i$. Therefore, under agent $a_i$'s true valuation, $V_i(A_i^+)>V_i(A_i')$.

If the allocation $(A_1^+,\ldots,A_n^+)$ is not proportional under the profile $\{v_k\}_{k\notin\{i,j\}}\cup\{v_i\}\cup\{v_j\}$ (where $v_i'$ is changed back to $v_i$), then the only agent for whom proportionality can be violated is agent $a_i$, that is, $V_i(A_i^+)<\frac{1}{n}V_i([0,1])$. This in turn implies $V_i(A_i')<\frac{1}{n}V_i([0,1])$. On the other hand, agent $a_i$ receives at least her proportional share when truthfully reporting her value density function $v_i$. This already implies that the manipulation is not safe.

If the allocation (A1+,,An+)(A_{1}^{+},\ldots,A_{n}^{+}) is proportional under the profile {vk}k{i,j}{vi}{vj}\{v_{k}\}_{k\notin\{i,j\}}\cup\{v_{i}\}\cup\{v_{j}\}, then it is in 𝒳PROP\mathcal{X}^{\mathrm{PROP}}. Since agent aia_{i} is the first agent in the order when reporting viv_{i} truthfully, we have Vi(Ai)Vi(Ai+)V_{i}(A_{i})\geq V_{i}(A_{i}^{+}), which further implies Vi(Ai)>Vi(Ai)V_{i}(A_{i})>V_{i}(A_{i}^{\prime}). Again, the manipulation is not safe. ∎

Finally, we analyze the run-time of our mechanism. {propositionrep} The Γ\Gamma-Ψ\Psi mechanism for cake-cutting can be computed in polynomial time.

Proof.

We first note that $\Gamma$ can be computed in polynomial time. Finding $v^\ast$ and reading the information of $i,t,a$, and $b$ can be done in linear time, as this mostly requires only reading the input of the instance. In particular, the lengths of $a$ and $b$ are both less than the input length, so $as+b$ has at most linear length and can also be computed in linear time. Finally, the bit-length of $n!$ is $\Theta(n\log n)$, so $as+b\bmod(n!)$ can be computed in polynomial time. We conclude that $\Gamma$ can be computed in polynomial time.

We next show that $\Psi$ can be computed by solving linear programs. Let $x_{it}$ be the length of the $t$-th uniform segment allocated to agent $a_i$. Then agent $a_i$'s utility is the linear expression $\sum_{t=1}^{m}v_i(X_t)x_{it}$, and requiring that an agent's utility be at least some value (e.g., her proportional share) is a linear constraint. We can use a linear program to find the maximum possible utility $u_{\pi_1}^{\ast}$ for agent $\pi_1$ among all proportional allocations. In the second iteration, we write the constraint $\sum_{t=1}^{m}v_{\pi_1}(X_t)x_{\pi_1 t}\geq u_{\pi_1}^{\ast}$ for agent $\pi_1$, the proportionality constraints for the $n-2$ agents $[n]\setminus\{\pi_1,\pi_2\}$, and maximize agent $\pi_2$'s utility. This is another linear program, and it gives us the maximum possible utility $u_{\pi_2}^{\ast}$ for agent $\pi_2$. Iterating in this way, we obtain all of $u_{\pi_1}^{\ast},u_{\pi_2}^{\ast},\ldots,u_{\pi_n}^{\ast}$ by solving $n$ linear programs. ∎
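To make the iteration concrete, here is a minimal sketch of the $\Psi$ stage, assuming scipy is available; V[i][t] stands for agent $a_i$'s value for the whole uniform segment $X_t$, pi is the priority order output by $\Gamma$, and all names are ours:

import numpy as np
from scipy.optimize import linprog

def psi_utilities(V, pi):
    """Iteratively maximize utilities in priority order over proportional
    allocations; returns the locked-in utilities u*_{pi_1}, ..., u*_{pi_n}."""
    n, m = V.shape
    # each segment is fully allocated: sum_i x_{it} = 1 (x is flattened as i*m+t)
    A_eq = np.zeros((m, n * m))
    for t in range(m):
        A_eq[t, t::m] = 1.0
    b_eq = np.ones(m)
    locked = {}                                    # agent -> locked-in utility
    for agent in pi:
        A_ub, b_ub = [], []
        for i in range(n):
            row = np.zeros(n * m)
            row[i * m:(i + 1) * m] = -V[i]         # encodes -utility_i
            bound = locked.get(i, V[i].sum() / n)  # u*_i, or proportional share
            A_ub.append(row)
            b_ub.append(-bound)                    # i.e., utility_i >= bound
        c = np.zeros(n * m)
        c[agent * m:(agent + 1) * m] = -V[agent]   # maximize this agent's utility
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
        locked[agent] = -res.fun
    return [locked[a] for a in pi]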

6.6. Towards An Envy-Free and Pareto-Optimal Mechanism with RAT-degree n1n-1

Given the result in the previous section, it is natural to ask whether the fairness guarantee can be strengthened to envy-freeness. A compelling candidate is the mechanism that always outputs an allocation with maximum Nash welfare. The Nash welfare of an allocation $(A_1,\ldots,A_n)$ is defined as the product of the agents' utilities: $\prod_{i=1}^{n}V_i(A_i)$.

It is well-known that such an allocation is envy-free and Pareto-optimal. However, computing its RAT-degree turns out to be very challenging for us. We conjecture the answer is n1n-1. {open} What is the RAT-degree of the maximum Nash welfare mechanism?
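While the RAT-degree question remains open, computing the allocation itself is tractable in our setting: with piecewise-constant valuations over uniform segments, maximizing the product of utilities is equivalent to maximizing the sum of their logarithms, a convex program. A minimal sketch, assuming scipy (all names ours):

import numpy as np
from scipy.optimize import minimize

def max_nash_welfare(V):
    """Maximize prod_i V_i(A_i); x[i*m + t] is the fraction of uniform
    segment t given to agent i."""
    n, m = V.shape
    def neg_log_nw(x):
        utils = (V * x.reshape(n, m)).sum(axis=1)
        return -np.log(utils).sum()
    cons = {"type": "eq",   # each segment is fully allocated
            "fun": lambda x: x.reshape(n, m).sum(axis=0) - 1.0}
    x0 = np.full(n * m, 1.0 / n)   # equal division: a strictly interior start
    res = minimize(neg_log_nw, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * (n * m), constraints=cons)
    return res.x.reshape(n, m)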

7. Single-Winner Ranked Voting

We consider nn voters (the agents) who need to elect one winner from a set CC of mm candidates. The agents’ preferences are given by strict linear orderings i\succ_{i} over the candidates.

When there are only two candidates, the majority rule and its variants (weighted majority rules) are truthful. With three or more candidates, the Gibbard–Satterthwaite theorem implies that the only truthful rules are dictatorships. Our goal is to find non-dictatorial rules with a high RAT-degree.

Throughout the analysis, we consider a specific agent Alice, who looks for a safe profitable manipulation. Her true ranking is $c_m\succ_A\cdots\succ_A c_1$. We assume that, for any $j>i$, Alice strictly prefers a victory of $c_j$ to a tie between $c_j$ and $c_i$, and strictly prefers this tie to a victory of $c_i$. (We could also assume that ties are broken at random, but this would require us to define preferences on lotteries, which we prefer to avoid in this paper.)

7.1. Positional voting rules: general upper bound

A positional voting rule is parameterized by a vector of scores, 𝐬=(s1,,sm)\mathbf{s}=(s_{1},\ldots,s_{m}), where s1sms_{1}\leq\cdots\leq s_{m} and s1<sms_{1}<s_{m}. Each voter reports his entire ranking of the mm candidates. Each such ranking is translated to an assignment of a score to each candidate: the lowest-ranked candidate is given a score of s1s_{1}, the second-lowest candidate is given s2s_{2}, etc., and the highest-ranked candidate is given a score of sms_{m}. The total score of each candidate is the sum of scores he received from the rankings of all nn voters. The winner is the candidate with the highest total score.

Formally, for any subset NNN^{\prime}\subseteq N and any candidate cCc\in C, we denote by scoreN(c)\operatorname{score}_{N^{\prime}}(c) the total score that cc receives from the votes of the agents in NN^{\prime}. Then the winner is argmaxcCscoreN(c)\arg\max_{c\in C}\operatorname{score}_{N}(c). If there are several agents with the same maximum score, then the outcome is considered a tie.
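For concreteness, a positional rule can be sketched in a few lines (a hedged sketch; the names and the convention of returning a tie as a set are ours):

def positional_winners(rankings, s):
    """s = (s_1, ..., s_m), lowest to highest; each ranking is listed from
    the highest-ranked candidate to the lowest-ranked one."""
    scores = {}
    for ranking in rankings:
        for pos, cand in enumerate(ranking):   # pos 0 is the top position
            scores[cand] = scores.get(cand, 0) + s[len(s) - 1 - pos]
    top = max(scores.values())
    return {c for c, sc in scores.items() if sc == top}

# plurality: s = (0, ..., 0, 1); anti-plurality: s = (0, 1, ..., 1)
print(positional_winners([["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]],
                         (0, 0, 1)))           # {'a'}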

Common special cases of positional voting are plurality voting, in which 𝐬=(0,0,0,,0,1)\mathbf{s}=(0,0,0,\ldots,0,1), and anti-plurality voting, in which 𝐬=(0,1,1,,1,1)\mathbf{s}=(0,1,1,\ldots,1,1). By the Gibbard–Satterthwaite theorem, all positional voting rules are manipulable, so their RAT-degree is smaller than nn. But, as we will show next, some positional rules have a higher RAT-degree than others.

In the upcoming lemmas, we identify the manipulations that are safe and profitable for Alice under various conditions on the score vector $\mathbf{s}$. We assume throughout that there are $m\geq 3$ candidates and that $n$ is sufficiently large. We allow an agent to abstain, which means that his vote gives the same score to all candidates. (We need the option to abstain in order to avoid having different constructions for even $n$ and odd $n$; see the proofs for details.)

{lemmarep}

Let m3m\geq 3 and n4n\geq 4. If sm>sm1s_{m}>s_{m-1} and there are kn/2+1k\geq\left\lceil n/2\right\rceil+1 known agents, then switching the top two candidates (cmc_{m} and cm1c_{m-1}) may be a safe profitable manipulation for Alice. {proofsketch} For some votes by the known agents, the manipulation is safe since cmc_{m} has no chance to win, and it is profitable as it may help cm1c_{m-1} win over worse candidates.

Proof.

Suppose there is a subset KK of n/2+1\left\lceil n/2\right\rceil+1 known agents, who vote as follows:

  • n/21\left\lfloor n/2\right\rfloor-1 known agents rank cm2cm1cmc_{m-2}\succ c_{m-1}\succ\cdots\succ c_{m}.

  • One known agent ranks cm2cmcm1c_{m-2}\succ c_{m}\succ c_{m-1}\succ\cdots.

  • One known agent ranks cm1cm2cmc_{m-1}\succ c_{m-2}\succ\cdots\succ c_{m}.

  • In case nn is odd, the remaining known agent abstains.

We first show that cmc_{m} cannot win. To this end, we show that the difference in scores between cm2c_{m-2} and cmc_{m} is always strictly positive.

  • The difference in scores given by the known agents is

    $\operatorname{score}_K(c_{m-2})-\operatorname{score}_K(c_m)=(\left\lfloor n/2\right\rfloor-1)(s_m-s_1)+(s_m-s_{m-1})+(s_{m-1}-s_1)=\left\lfloor n/2\right\rfloor(s_m-s_1)$.
  • There are n/21\left\lfloor n/2\right\rfloor-1 agents not in KK (including Alice). These agents can reduce the score-difference by at most (n/21)(sms1)(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{1}). Therefore,

    scoreN(cm2)scoreN(cm)(sms1),\displaystyle\operatorname{score}_{N}(c_{m-2})-\operatorname{score}_{N}(c_{m})\geq(s_{m}-s_{1}),

    which is positive for any score vector. So cmc_{m} has no chance to win or even tie.

Therefore, switching cmc_{m} and cm1c_{m-1} can never harm Alice — the manipulation is safe.

Next, we show that the manipulation can help cm1c_{m-1} win. We compute the score-difference between cm1c_{m-1} and the other candidates with and without the manipulation.

Suppose that the agents not in KK vote as follows:

  • The $\left\lfloor n/2\right\rfloor-2$ unknown agents (here we use the assumption $n\geq 4$) rank $c_{m-1}\succ c_{m-2}\succ\cdots$.

  • Alice votes truthfully $c_m\succ c_{m-1}\succ c_{m-2}\succ\cdots\succ c_1$.

Then,

scoreN(cm2)scoreN(cm1)=\displaystyle\operatorname{score}_{N}(c_{m-2})-\operatorname{score}_{N}(c_{m-1})= (n/21)(smsm1)+(smsm2)+(sm1sm)\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{m-1})+(s_{m}-s_{m-2})+(s_{m-1}-s_{m})
+(n/22)(sm1sm)+(sm2sm1)\displaystyle+(\left\lfloor n/2\right\rfloor-2)(s_{m-1}-s_{m})+(s_{m-2}-s_{m-1})
=\displaystyle= (smsm1),\displaystyle(s_{m}-s_{m-1}),

which is positive by the assumption sm>sm1s_{m}>s_{m-1}. The candidates cj<m2c_{j<m-2} are ranked even lower than cm1c_{m-1} by all agents. Therefore the winner is cm2c_{m-2}.

If Alice switches cm1c_{m-1} and cmc_{m}, then the score of cm1c_{m-1} increases by smsm1s_{m}-s_{m-1} and the scores of all other candidates except cmc_{m} do not change. Therefore, scoreN(cm2)scoreN(cm1)\operatorname{score}_{N}(c_{m-2})-\operatorname{score}_{N}(c_{m-1}) becomes 0, and there is a tie between cm2c_{m-2} and cm1c_{m-1}, which is better for Alice by assumption. Therefore, the manipulation is profitable.
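As a quick sanity check of this construction, one can instantiate it with $m=3$, $n=6$ and Borda scores $\mathbf{s}=(0,1,2)$ (our choice; any score vector with $s_m>s_{m-1}$ works). Here $c_3$ is Alice's favorite and $c_1$ her worst:

def scores(profile, s_desc=(2, 1, 0)):     # s_desc[0] is the top score
    sc = {}
    for ranking in profile:                # rankings listed top-first
        for pos, cand in enumerate(ranking):
            sc[cand] = sc.get(cand, 0) + s_desc[pos]
    return sc

# the known agents' votes from the proof (n is even, so nobody abstains)
known = [["c1", "c2", "c3"]] * 2 + [["c1", "c3", "c2"], ["c2", "c1", "c3"]]
unknown = [["c2", "c1", "c3"]]             # the floor(n/2)-2 = 1 unknown agent
truth = ["c3", "c2", "c1"]                 # Alice's true ranking
manip = ["c2", "c3", "c1"]                 # the top two candidates switched

print(scores(known + unknown + [truth]))   # c1 wins; c3 cannot win or tie
print(scores(known + unknown + [manip]))   # tie {c1, c2}: better for Alice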

{lemmarep}

Let m3m\geq 3 and n2mn\geq 2m. If s2>s1s_{2}>s_{1} and there are kn/2+1k\geq\left\lceil n/2\right\rceil+1 known agents, then switching the bottom two candidates (c2c_{2} and c1c_{1}) may be a safe profitable manipulation for Alice. {proofsketch} For some votes by the known agents, c1c_{1} has no chance to win, so the worst candidate for Alice that could win is c2c_{2}. Therefore, switching c1c_{1} and c2c_{2} cannot harm, but may help a better candidate win over c2c_{2}.

Proof.

Suppose there is a subset KK of n/2+1\left\lceil n/2\right\rceil+1 known agents, who vote as follows:

  • n/21\left\lfloor n/2\right\rfloor-1 known agents rank c2cmc1c_{2}\succ c_{m}\succ\cdots\succ c_{1}.

  • Two known agents rank cmc2c1c_{m}\succ c_{2}\succ\cdots\succ c_{1}.

  • In case nn is odd, the remaining known agent abstains.

We first show that c1c_{1} cannot win. To this end, we show that the difference in scores between c2c_{2} and c1c_{1} is always strictly positive.

  • The difference in scores given by the known agents is

    scoreK(c2)scoreK(c1)=\displaystyle\operatorname{score}_{K}(c_{2})-\operatorname{score}_{K}(c_{1})= (n/21)(sms1)+2(sm1s1).\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{1})+2(s_{m-1}-s_{1}).
  • There are n/21\left\lfloor n/2\right\rfloor-1 agents not in KK (including Alice). These agents can reduce the score-difference by at most (n/21)(sms1)(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{1}). Therefore,

    scoreN(c2)scoreN(c1)2(sm1s1),\displaystyle\operatorname{score}_{N}(c_{2})-\operatorname{score}_{N}(c_{1})\geq 2(s_{m-1}-s_{1}),

    which is positive: since $m\geq 3$, the assumption $s_2>s_1$ implies $s_{m-1}\geq s_2>s_1$. So $c_1$ has no chance to win or even tie.

Therefore, switching c2c_{2} and c1c_{1} can never harm Alice — the manipulation is safe.

Next, we show that the manipulation can help cmc_{m} win, when the agents not in KK vote as follows:

  • n/23\left\lfloor n/2\right\rfloor-3 unknown agents rank cmc2c_{m}\succ c_{2}\succ\cdots, where each candidate except c1,c2,cmc_{1},c_{2},c_{m} is ranked last by at least one voter (here we use the assumption n2mn\geq 2m).

  • One unknown agent ranks c2cmc1c_{2}\succ\cdots\succ c_{m}\succ c_{1};

  • Alice votes truthfully cmc2c1c_{m}\succ\cdots\succ c_{2}\succ c_{1}.

Then,

scoreN(c2)scoreN(cm)=\displaystyle\operatorname{score}_{N}(c_{2})-\operatorname{score}_{N}(c_{m})= (n/21)(smsm1)+2(sm1sm)\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{m-1})+2(s_{m-1}-s_{m})
+(n/23)(sm1sm)+(sms2)+(s2sm)\displaystyle+(\left\lfloor n/2\right\rfloor-3)(s_{m-1}-s_{m})+(s_{m}-s_{2})+(s_{2}-s_{m})
=\displaystyle= 0.\displaystyle 0.

Moreover, for any j{1,2,m}j\not\in\{1,2,m\}, the score of cjc_{j} is even lower (here we use the assumption that cjc_{j} is ranked last by at least one unknown agent):

scoreN(c2)scoreN(cj)\displaystyle\operatorname{score}_{N}(c_{2})-\operatorname{score}_{N}(c_{j})\geq (n/21)(smsm2)+2(sm1sm2)\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{m-2})+2(s_{m-1}-s_{m-2})
+(n/24)(sm1sm2)+(sm1s1)+(smsm1)+(s2sm1)\displaystyle+(\left\lfloor n/2\right\rfloor-4)(s_{m-1}-s_{m-2})+(s_{m-1}-s_{1})+(s_{m}-s_{m-1})+(s_{2}-s_{m-1})
\displaystyle\geq (sm1s1)+(s2sm1)\displaystyle(s_{m-1}-s_{1})+(s_{2}-s_{m-1})
=\displaystyle= s2s1,\displaystyle s_{2}-s_{1},

which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between cmc_{m} and c2c_{2}.

If Alice switches c1c_{1} and c2c_{2}, then the score of c2c_{2} decreases by s2s1s_{2}-s_{1}, which is positive by the lemma assumption, and the scores of all other candidates except c1c_{1} do not change. So cmc_{m} wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎

The preceding lemma can be generalized as follows.

{lemmarep}

Let m3m\geq 3 and n2mn\geq 2m. For every integer t{1,,m2}t\in\{1,\ldots,m-2\}, if st+1>st==s1s_{t+1}>s_{t}=\cdots=s_{1}, and there are kn/2+1k\geq\left\lceil n/2\right\rceil+1 known agents, then switching ct+1c_{t+1} and ctc_{t} may be a safe profitable manipulation.

{proofsketch}

For some votes by the known agents, all candidates c1,,ctc_{1},\ldots,c_{t} have no chance to win, so the worst candidate for Alice that could win is ct+1c_{t+1}. Therefore, switching ctc_{t} and ct+1c_{t+1} cannot harm, but can help better candidates win over ct+1c_{t+1}.

Proof.

Suppose there is a subset KK of n/2+1\left\lceil n/2\right\rceil+1 known agents, who vote as follows:

  • n/21\left\lfloor n/2\right\rfloor-1 known agents rank ct+1cmc_{t+1}\succ c_{m} first and rank ctc1c_{t}\succ\cdots\succ c_{1} last.

  • Two known agents rank cmct+1c_{m}\succ c_{t+1} first and rank ctc1c_{t}\succ\cdots\succ c_{1} last.

  • In case nn is odd, the remaining known agent abstains.

We first show that the $t$ worst candidates for Alice ($c_1,\ldots,c_t$) cannot win. Note that, by the lemma assumption $s_t=\cdots=s_1$, all these candidates receive exactly the same score from every known agent. We show that the difference in scores between $c_{t+1}$ and $c_t$ (and hence between $c_{t+1}$ and each of the $t$ worst candidates) is always strictly positive.

  • The difference in scores given by the known agents is

    $\operatorname{score}_K(c_{t+1})-\operatorname{score}_K(c_t)=(\left\lfloor n/2\right\rfloor-1)(s_m-s_1)+2(s_{m-1}-s_1).$
  • There are n/21\left\lfloor n/2\right\rfloor-1 agents not in KK (including Alice). These agents can reduce the score-difference by at most (n/21)(sms1)(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{1}). Therefore,

    $\operatorname{score}_N(c_{t+1})-\operatorname{score}_N(c_t)\geq 2(s_{m-1}-s_1),$

    which is positive: since $t\leq m-2$, we have $s_{m-1}\geq s_{t+1}>s_t=s_1$. So no candidate in $c_1,\ldots,c_t$ has a chance to win or even tie.

Therefore, switching ct+1c_{t+1} and ctc_{t} can never harm Alice — the manipulation is safe.

Next, we show that the manipulation can help cmc_{m} win. We compute the score-difference between cmc_{m} and the other candidates with and without the manipulation.

Suppose that the agents not in KK vote as follows:

  • n/23\left\lfloor n/2\right\rfloor-3 unknown agents rank cmct+1c_{m}\succ c_{t+1}\succ\cdots, where each candidate in ct+2,,cm1c_{t+2},\ldots,c_{m-1} is ranked last by at least one voter (here we use the assumption n2mn\geq 2m).

  • One unknown agent ranks ct+1cmctc1c_{t+1}\succ\cdots\succ c_{m}\succ c_{t}\succ\cdots\succ c_{1};

  • Alice votes truthfully cmct+1ctc1c_{m}\succ\cdots\succ c_{t+1}\succ c_{t}\succ\cdots\succ c_{1}.

Then,

scoreN(ct+1)scoreN(cm)=\displaystyle\operatorname{score}_{N}(c_{t+1})-\operatorname{score}_{N}(c_{m})= (n/21)(smsm1)+2(sm1sm)\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{m-1})+2(s_{m-1}-s_{m})
+(n/23)(sm1sm)+(smst+1)+(st+1sm)\displaystyle+(\left\lfloor n/2\right\rfloor-3)(s_{m-1}-s_{m})+(s_{m}-s_{t+1})+(s_{t+1}-s_{m})
=\displaystyle= 0.\displaystyle 0.

Moreover, for any j{t+2,,m1}j\in\{t+2,\ldots,m-1\}, the score of cjc_{j} is even lower (here we use the assumption that cjc_{j} is ranked last by at least one unknown agent):

scoreN(ct+1)scoreN(cj)\displaystyle\operatorname{score}_{N}(c_{t+1})-\operatorname{score}_{N}(c_{j})\geq (n/21)(smsm2)+2(sm1sm2)\displaystyle(\left\lfloor n/2\right\rfloor-1)(s_{m}-s_{m-2})+2(s_{m-1}-s_{m-2})
+(n/24)(sm1sm2)+(sm1s1)+(smsm1)+(st+1sm1)\displaystyle+(\left\lfloor n/2\right\rfloor-4)(s_{m-1}-s_{m-2})+(s_{m-1}-s_{1})+(s_{m}-s_{m-1})+(s_{t+1}-s_{m-1})
\displaystyle\geq (sm1s1)+(st+1sm1)\displaystyle(s_{m-1}-s_{1})+(s_{t+1}-s_{m-1})
=\displaystyle= st+1s1,\displaystyle s_{t+1}-s_{1},

which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between cmc_{m} and ct+1c_{t+1}.

If Alice switches ctc_{t} and ct+1c_{t+1}, then the score of ct+1c_{t+1} decreases by st+1sts_{t+1}-s_{t}, which is positive by the lemma assumption, and the scores of all other candidates except ctc_{t} do not change. As ctc_{t} cannot win, cmc_{m} wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎

Combining the lemmas leads to an upper bound on the RAT-degree of any positional voting rule:

Theorem 7.1.

The RAT-degree of any positional voting rule for m3m\geq 3 candidates and n2mn\geq 2m agents is at most n/2+1\left\lceil n/2\right\rceil+1.

Proof.

Consider a positional voting rule with score-vector 𝐬\mathbf{s}. Let t{1,,m1}t\in\{1,\ldots,m-1\} be the smallest index for which st+1>sts_{t+1}>s_{t} (there must be such an index by definition of a score-vector).

If $t\leq m-2$, then the generalized lemma above implies that, for some votes by the $\left\lceil n/2\right\rceil+1$ known agents, switching $c_{t+1}$ and $c_t$ may be a safe and profitable manipulation for Alice.

Otherwise $t=m-1$, so $s_m>s_{m-1}$, and the first lemma of this section (switching the top two candidates) implies the same.

In all cases, Alice has a safe profitable manipulation. ∎

Next, we show that the plurality voting rule, which is the positional voting rule with score-vector (0,0,0,,0,1)(0,0,0,\ldots,0,1), attains the upper bound of Theorem 7.1 (at least when nn is even).

{lemmarep}

Let n4n\geq 4 be an even number. For the plurality voting rule, if there are at most n/2n/2 known agents, then Alice has no safe profitable manipulation.

Proof.

For any potential manipulation by Alice we have to prove that, for any set KK of n/2n/2 agents and any combination of votes by the agents of KK, either (1) the manipulation is not profitable (for any preference profile for the (n/21)(n/2-1) unknown agents, Alice weakly prefers to tell the truth); or (2) the manipulation is not safe (there exists a preference profile for the unknown agents such that Alice strictly prefers to tell the truth).

If the manipulation does not involve Alice’s top candidate cmc_{m}, then it does not affect the outcome and cannot be profitable. So let us consider a manipulation in which Alice ranks another candidate caltAcmc^{A}_{alt}\neq c_{m} at the top.

Let caltK=argmaxj[m1]scoreK(cj)\displaystyle c^{K}_{alt}=\operatorname*{arg\,max}_{j\in[m-1]}\operatorname{score}_{K}(c_{j}) denote the candidate with the highest number of votes among the known agents, except Alice’s top candidate (cmc_{m}). Consider the following two cases.

Case 1: scoreK(caltK)=0\operatorname{score}_{K}(c^{K}_{alt})=0.

Since $c^K_{alt}$ is a candidate who received the maximum number of votes from $K$ other than $c_m$, all $n/2$ known agents either vote for $c_m$ or abstain. If $\operatorname{score}_K(c_m)\geq 1$, it is possible that the $n/2-1$ unknown agents vote for $c^A_{alt}$ or abstain so that, before Alice's vote, $\operatorname{score}(c_m)-\operatorname{score}(c^A_{alt})=1$. Then, when Alice tells the truth, her favorite candidate $c_m$ wins, as $\operatorname{score}_N(c_m)-\operatorname{score}_N(c^A_{alt})=2$ and the scores of all other candidates are $0$. But when Alice manipulates and votes for $c^A_{alt}$, the outcome is a tie between $c_m$ and $c^A_{alt}$, which is worse for Alice. If $\operatorname{score}_K(c_m)=0$, let all unknown agents abstain; then truth-telling makes $c_m$ win outright, while the manipulation makes $c^A_{alt}$ win. In both cases, Alice's manipulation is not safe.

Case 2: scoreK(caltK)1\operatorname{score}_{K}(c^{K}_{alt})\geq 1.

Then, again, the manipulation is not safe, as it is possible that the unknown agents vote as follows:

  • Some scoreK(cm)\operatorname{score}_{K}(c_{m}) agents vote for caltKc^{K}_{alt};

  • Some scoreK(caltK)1\operatorname{score}_{K}(c^{K}_{alt})-1 agents vote for cmc_{m}.

    Note that this is possible as both values are non-negative and scoreK(cm)+scoreK(caltK)j=1mscoreK(cj)n/2\operatorname{score}_{K}(c_{m})+\operatorname{score}_{K}(c^{K}_{alt})\leq\sum_{j=1}^{m}\operatorname{score}_{K}(c_{j})\leq n/2, which means that scoreK(cm)+(scoreK(caltK)1)n/21\operatorname{score}_{K}(c_{m})+\left(\operatorname{score}_{K}(c^{K}_{alt})-1\right)\leq n/2-1 (the number of unknown agents);

  • The remaining unknown agents (if any) are split evenly between cmc_{m} and caltKc^{K}_{alt}; if the number of remaining unknown agents is odd, then the extra agent votes for cmc_{m}.

We now prove that, with these votes, the manipulation is harmful for Alice. We distinguish three sub-cases. Denote by $R$ the set of remaining unknown agents mentioned in the third bullet above:

  • If $R=\emptyset$ (which means that $\operatorname{score}_K(c_m)+\operatorname{score}_K(c^K_{alt})-1=n/2-1$), then the scores of $c_m$ and $c^K_{alt}$ without Alice's vote are $n/2-1$ and $n/2$ respectively, which are at least $1$ and $2$ respectively (as $n\geq 4$). The scores of all other candidates are $0$.

    When Alice is truthful, the outcome is a tie between cmc_{m} and caltKc^{K}_{alt}; when Alice manipulates and votes for caltAc^{A}_{alt}, caltKc^{K}_{alt} wins, which is worse for Alice.

  • If |R|>0|R|>0 and it is even, then without Alice’s vote, score(caltK)\operatorname{score}(c^{K}_{alt}) is strictly higher than the scores of all other candidates, and higher than score(cm)\operatorname{score}(c_{m}) by exactly 11.

    When Alice is truthful, the outcome is a tie between cmc_{m} and caltKc^{K}_{alt}. But when Alice manipulates and votes for caltAc^{A}_{alt}, either caltKc^{K}_{alt} wins or there is a tie between caltAc^{A}_{alt} and caltKc^{K}_{alt}; both outcomes are worse for Alice.

  • If |R|>0|R|>0 and it is odd, then without Alice’s vote, score(cm)=score(caltK)\operatorname{score}(c_{m})=\operatorname{score}(c^{K}_{alt}), and both scores are at least the maximum score among the other candidates. When Alice is truthful, her favorite candidate cmc_{m} wins. But when Alice manipulates and votes for caltAc^{A}_{alt}, then either caltAc^{A}_{alt} wins, or there is a tie between cmc_{m} and caltKc^{K}_{alt} (and possibly some other candidates); both outcomes are worse for Alice.

Thus, in all cases, Alice does not have a safe profitable manipulation. ∎

Combining the first lemma of this section with the lemma above gives the exact RAT-degree of plurality voting.

Corollary 7.2.

When m3m\geq 3 and n4n\geq 4 and nn is even, the RAT-degree of plurality voting is n/2+1n/2+1.

7.2. Positional voting rules: tighter upper bounds

We now show that positional voting rules may have a RAT-degree substantially lower than plurality.

The following lemma strengthens the bottom-two lemma of Section 7.1.

{lemmarep}

Let {2,,m1}\ell\in\{2,\ldots,m-1\} be an integer. Consider a positional voting setting with m3m\geq 3 candidates and n(+1)mn\geq(\ell+1)m agents. Denote stop::=j=m+1msj=s_{\mathrm{top:}\ell}:=\sum_{j=m-\ell+1}^{m}s_{j}= the sum of the \ell highest scores and sbot::=j=1sj=s_{\mathrm{bot:}\ell}:=\sum_{j=1}^{\ell}s_{j}= the sum of the \ell lowest scores.

If s2>s1s_{2}>s_{1} and there are kk known agents, where

k>smsbot:sm+stop:sbot:s1n,\displaystyle k>\frac{\ell s_{m}-s_{\mathrm{bot:}\ell}}{\ell s_{m}+s_{\mathrm{top:}\ell}-s_{\mathrm{bot:}\ell}-\ell s_{1}}n,

then switching the bottom two candidates ($c_2$ and $c_1$) may be a safe profitable manipulation for Alice. {proofsketch} The proof has a similar structure to that of the bottom-two lemma in Section 7.1. Note that the expression on the right-hand side can be as small as $\frac{1}{\ell+1}n$ (for the anti-plurality rule), which is much smaller than the $\left\lceil n/2\right\rceil+1$ known agents required there. Still, we can prove that, for some reports of the known agents, the score of $c_1$ is necessarily lower than the arithmetic mean of the scores of the $\ell$ candidates $\{c_m,c_2,\ldots,c_\ell\}$; hence, it is lower than at least one of these scores. Therefore, $c_1$ still cannot win, so switching $c_1$ and $c_2$ is safe.

Proof.

Suppose there is a subset KK of kk known agents, who vote as follows:

  • k2k-2 known agents rank c2cmc_{2}\succ c_{m}, then all candidates {c3,,c}\{c_{3},\cdots,c_{\ell}\} in an arbitrary order, then the rest of the candidates in an arbitrary order, and lastly c1c_{1}.

  • Two known agents rank cmc2c_{m}\succ c_{2}, then all candidates {c3,,c}\{c_{3},\cdots,c_{\ell}\} in an arbitrary order, then the rest of the candidates in an arbitrary order, and lastly c1c_{1}.

We first show that c1c_{1} cannot win. Denote L:={cm,c2,c3,,c}L:=\{c_{m},c_{2},c_{3},\ldots,c_{\ell}\}. We show that the difference in scores between some of the \ell candidates in LL and c1c_{1} is always strictly positive.

  • The known agents rank all candidates in LL at the top \ell positions. Therefore, each agent gives all of them together a total score of stop:s_{\mathrm{top:}\ell}. So

    cL(scoreK(c)scoreK(c1))=\displaystyle\sum_{c\in L}(\operatorname{score}_{K}(c)-\operatorname{score}_{K}(c_{1}))= k(stop:s1).\displaystyle k(s_{\mathrm{top:}\ell}-\ell s_{1}).
  • There are nkn-k agents not in KK (including Alice). Each of these agents gives all candidates in LL together at least sbot:s_{\mathrm{bot:}\ell}, and gives c1c_{1} at most sms_{m} points. Therefore, we can bound the sum of score differences as follows:

    cL(scoreN(c)scoreN(c1))\displaystyle\sum_{c\in L}(\operatorname{score}_{N}(c)-\operatorname{score}_{N}(c_{1}))\geq k(stop:s1)+(nk)(sbot:sm)\displaystyle k(s_{\mathrm{top:}\ell}-\ell s_{1})+(n-k)(s_{\mathrm{bot:}\ell}-\ell s_{m})
    =\displaystyle= k(sm+stop:sbot:s1)+n(sbot:sm).\displaystyle k(\ell s_{m}+s_{\mathrm{top:}\ell}-s_{\mathrm{bot:}\ell}-\ell s_{1})+n(s_{\mathrm{bot:}\ell}-\ell s_{m}).

    The assumption on kk implies that this expression is positive. Therefore, for at least one cLc\in L, scoreN(c)scoreN(c1)>0\operatorname{score}_{N}(c)-\operatorname{score}_{N}(c_{1})>0. So c1c_{1} has no chance to win or even tie. Therefore, switching c2c_{2} and c1c_{1} is a safe manipulation.

Next, we show that the manipulation can help cmc_{m} win, when the agents not in KK vote as follows:

  • k4k-4 unknown agents rank cmc2c_{m}\succ c_{2}\succ\cdots, where each candidate except c1,c2,cmc_{1},c_{2},c_{m} is ranked last by at least one voter (here we use the assumption n(+1)mn\geq(\ell+1)m: the condition on kk implies k>n/(+1)mk>n/(\ell+1)\geq m, so km+1k\geq m+1 and k4m3k-4\geq m-3).

  • One unknown agent ranks c2cmc1c_{2}\succ\cdots\succ c_{m}\succ c_{1};

  • Alice votes truthfully cmc2c1c_{m}\succ\cdots\succ c_{2}\succ c_{1}.

  • If there are remaining unknown agents, then they are split evenly between cmc2c_{m}\succ c_{2}\succ\cdots and c2cmc_{2}\succ c_{m}\succ\cdots (if the number of remaining agents is odd, then the last one abstains).

Then,

scoreN(c2)scoreN(cm)=\displaystyle\operatorname{score}_{N}(c_{2})-\operatorname{score}_{N}(c_{m})= (k2)(smsm1)+2(sm1sm)\displaystyle(k-2)(s_{m}-s_{m-1})+2(s_{m-1}-s_{m})
+(k4)(sm1sm)+(sms2)+(s2sm)\displaystyle+(k-4)(s_{m-1}-s_{m})+(s_{m}-s_{2})+(s_{2}-s_{m})
=\displaystyle= 0.\displaystyle 0.

Moreover, for any j{1,2,m}j\not\in\{1,2,m\}, the score of cjc_{j} is even lower (here we use the assumption that cjc_{j} is ranked last by at least one unknown agent):

scoreN(c2)scoreN(cj)\displaystyle\operatorname{score}_{N}(c_{2})-\operatorname{score}_{N}(c_{j})\geq (k2)(smsm2)+2(sm1sm2)\displaystyle(k-2)(s_{m}-s_{m-2})+2(s_{m-1}-s_{m-2})
+(k5)(sm1sm2)+(sm1s1)+(smsm1)+(s2sm1)\displaystyle+(k-5)(s_{m-1}-s_{m-2})+(s_{m-1}-s_{1})+(s_{m}-s_{m-1})+(s_{2}-s_{m-1})
\displaystyle\geq (sm1s1)+(s2sm1)\displaystyle(s_{m-1}-s_{1})+(s_{2}-s_{m-1})
=\displaystyle= s2s1,\displaystyle s_{2}-s_{1},

which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between cmc_{m} and c2c_{2}.

If Alice switches c1c_{1} and c2c_{2}, then the score of c2c_{2} decreases by s2s1s_{2}-s_{1}, which is positive by the lemma assumption, and the scores of all other candidates except c1c_{1} do not change. So cmc_{m} wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎

In particular, for the anti-plurality rule, the condition in the lemma above with $\ell=m-1$ becomes $k>n/m$.

Corollary 7.3.

Let m3m\geq 3 and nm2n\geq m^{2}. The RAT-degree of anti-plurality is at most n/m+1\left\lfloor n/m\right\rfloor+1.
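The bound is easy to tabulate for concrete score vectors; the following small check (names ours) recovers the $1/m$ fraction for anti-plurality:

def threshold_fraction(s, ell):
    """The coefficient of n in the bound of the lemma; s is sorted ascending."""
    m, s1, sm = len(s), s[0], s[-1]
    s_top = sum(s[m - ell:])          # sum of the ell highest scores
    s_bot = sum(s[:ell])              # sum of the ell lowest scores
    return (ell * sm - s_bot) / (ell * sm + s_top - s_bot - ell * s1)

m = 5
anti_plurality = [0] + [1] * (m - 1)
print(threshold_fraction(anti_plurality, m - 1))   # 0.2, i.e., 1/m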

Intuitively, the reason that anti-plurality fares worse than plurality is that, even with a small number of known agents, it is possible to deduce that some candidate has no chance to win, and therefore there is a safe manipulation.

While we do not yet have a complete characterization of the RAT-degree of positional voting rules, our current results already show the strategic importance of the choice of scores.

7.3. Higher RAT-degree?

Theorem 7.1 raises the question of whether some other, non-positional voting rules have RAT-degrees substantially higher than n/2n/2. Using ideas similar to those in Section 5.4, we could use a selection rule Γ\Gamma to choose a “dictator”, and implement the dictator’s first choice. This deterministic mechanism has RAT-degree n1n-1, as without knowledge of all other agents’ inputs, every manipulation might cause the manipulator to lose the chance of being a dictator. However, besides the fact that this is an unnatural mechanism, it suffers from other problems such as the no-show paradox (a participating voter might affect the selection rule in a way that will make another agent a dictator, which might be worse than not participating at all).

Our main open problem is therefore to devise natural voting rules with a high RAT-degree. {open} Does there exist a non-dictatorial voting rule that satisfies the participation criterion (i.e. does not suffer from the no-show paradox), with RAT-degree larger than n/2+1\left\lceil n/2\right\rceil+1?

8. Stable Matchings

In this section, we consider mechanisms for stable matchings. Here, the nn agents are divided into two disjoint subsets, MM and WW, that need to be matched to each other. The most common examples are men and women or students and universities. Each agent has a strict preference order over the agents in the other set and being unmatched – for each mMm\in M, an order m\succ_{m} over W{ϕ}W\cup\{\phi\}; and for each wWw\in W an order, w\succ_{w}, over M{ϕ}M\cup\{\phi\}.

A matching between $M$ and $W$ is a mapping $\mu$ from $M\cup W$ to $M\cup W\cup\{\phi\}$ such that (1) $\mu(m)\in W\cup\{\phi\}$ for each $m\in M$, (2) $\mu(w)\in M\cup\{\phi\}$ for each $w\in W$, and (3) $\mu(m)=w$ if and only if $\mu(w)=m$ for any $(m,w)\in M\times W$. When $\mu(a)=\phi$, agent $a$ is unmatched under $\mu$. A matching is said to be stable if (1) no agent prefers being unmatched over her assigned match, and (2) there is no pair $(m,w)\in M\times W$ such that $m$ prefers $w$ over his assigned match while $w$ prefers $m$ over her assigned match, that is, $w\succ_m\mu(m)$ and $m\succ_w\mu(w)$.

A mechanism in this context gets the preference orders of all agents and returns a stable matching.
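The two stability conditions can be checked directly. A minimal sketch, assuming each agent's preference list is complete and contains None for the option of being unmatched (names ours):

def is_stable(mu, prefs, M, W):
    def prefers(a, x, y):                # does agent a rank x above y?
        return prefs[a].index(x) < prefs[a].index(y)
    for a in list(M) + list(W):          # (1) no agent prefers being unmatched
        if mu[a] is not None and prefers(a, None, mu[a]):
            return False
    for m in M:                          # (2) no blocking pair (m, w)
        for w in W:
            if mu[m] != w and prefers(m, w, mu[m]) and prefers(w, m, mu[w]):
                return False
    return True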

Results

Our results for this problem are preliminary, so we provide only a brief overview here, with full descriptions and proofs in the appendix. We believe, however, that this is an important problem and that our new definition opens the door to many further questions.

We first analyze the deferred acceptance mechanism and prove that its RAT-degree is at most 33, showing that it is 33-known-agents safely manipulable. The proof relies on truncation, where an agent in WW falsely reports preferring to remain unmatched over certain options. We further show that even without truncation, the RAT-degree is at most 55.

Finally, we examine the Boston mechanism and establish an upper bound of 22 on its RAT-degree.

8.1. Deferred Acceptance (Gale-Shapley)

The deferred acceptance algorithm (gale1962college) is one of the most well-known mechanisms for computing a stable matching. In this algorithm, one side of the market (here, $M$) proposes, while the other side ($W$) accepts or rejects offers iteratively. {toappendix}

8.2. Deferred Acceptance (Gale-Shapley): descriptions and proofs

The algorithm proceeds as follows:

  1. Each $m\in M$ proposes to his most preferred alternative according to $\succ_m$ that has not rejected him yet and that he prefers over being unmatched.

  2. Each $w\in W$ tentatively accepts her most preferred proposal according to $\succ_w$ among those she prefers over being unmatched, and rejects the rest.

  3. The rejected agents propose to their next most preferred choice, as in step 1.

  4. The process repeats until no rejected agent wishes to make a new proposal.

  5. The final matching is determined by the last set of accepted proposals.
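For concreteness, here is a minimal runnable sketch of the $M$-proposing algorithm (names ours; preference lists are ordered from most to least preferred, and an agent omitted from a woman's list is treated as worse than being unmatched, which also lets us express the truncation manipulations used below):

def deferred_acceptance(pref_m, pref_w):
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in pref_w.items()}
    next_idx = {m: 0 for m in pref_m}    # next woman on each man's list
    match = {}                           # w -> tentatively accepted m
    free = list(pref_m)
    while free:
        m = free.pop()
        while next_idx[m] < len(pref_m[m]):
            w = pref_m[m][next_idx[m]]
            next_idx[m] += 1
            if m not in rank[w]:         # w prefers staying unmatched
                continue
            if w not in match:           # w tentatively accepts m
                match[w] = m
                break
            if rank[w][m] < rank[w][match[w]]:
                free.append(match[w])    # w switches to m; her old match is free
                match[w] = m
                break
            # otherwise w rejects m, and he tries his next choice
    return {m: w for w, m in match.items()}

pref_m = {"m1": ["w2", "w1"], "m2": ["w1", "w2"]}
pref_w = {"w1": ["m1", "m2"], "w2": ["m2", "m1"]}
print(deferred_acceptance(pref_m, pref_w))   # m1 gets w2, m2 gets w1
print(deferred_acceptance(pref_m, {"w1": ["m1"], "w2": ["m2", "m1"]}))
# after w1 truncates below m1, she is matched to m1, her top choice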

It is well known that the mechanism is truthful for the proposing side (MM) but untruthful for the other side (WW). That is, the agents in WW may have an incentive to misreport their preferences to obtain a better match. We prove that:

{theoremrep}

The RAT-degree of deferred acceptance is at most 33.

Proof.

To prove that the RAT-degree is at most $3$, we show that the mechanism is $3$-known-agents safely manipulable.

Let w1Ww_{1}\in W and assume without loss of generality that m1w1m2w1m_{1}\succ_{w_{1}}m_{2}\succ_{w_{1}}\cdots (the preferences between the other alternatives are irrelevant).

Consider the case where the $3$ known agents are as follows:

  • Let w2Ww_{2}\in W be an agent whose preferences are m2w2m1w2m_{2}\succ_{w_{2}}m_{1}\succ_{w_{2}}\cdots.

  • The preferences of m1m_{1} are w2m1w1m1w_{2}\succ_{m_{1}}w_{1}\succ_{m_{1}}\cdots.

  • The preferences of m2m_{2} are w1m2w2m2w_{1}\succ_{m_{2}}w_{2}\succ_{m_{2}}\cdots.

In this case, when w1w_{1} tells the truth, the resulting matching includes the pairs (m1,w2)(m_{1},w_{2}) and (m2,w1)(m_{2},w_{1}), since in this case it proceeds as follows:

  • In the first step, all the agents in MM propose to their most preferred option: m1m_{1} proposes to w2w_{2} and m2m_{2} proposes to w1w_{1}.

    Then, the agents in WW tentatively accept their most preferred proposal among those received, as long as she prefers it to remaining unmatched: w1w_{1} tentatively accepts m2m_{2} since she prefers him over being unmatched, and since m2m_{2} must be her most preferred option among the proposers as m1m_{1} (her top choice) did not propose to her. Similarly, w2w_{2} tentatively accepts m1m_{1}.

  • In the following steps, more rejected agents in MM might propose to w1w_{1} and w2w_{2}, but they will not switch their choices, as they prefer m2m_{2} and m1m_{1}, respectively.

    Thus, when the algorithm terminates m1m_{1} is matched to w2w_{2} and m2m_{2} is matched to w1w_{1}.

This means that $w_1$ is matched to her second-best option.

We shall now see that w1w_{1} can increase her utility by misreporting that her preference order is: m1w1ϕw1m2w1m_{1}\succ^{\prime}_{w_{1}}\phi\succ^{\prime}_{w_{1}}m_{2}\succ^{\prime}_{w_{1}}\cdots. The following shows that in this case, the resulting matching includes the pairs (m1,w1)(m_{1},w_{1}) and (m2,w2)(m_{2},w_{2}), meaning that w1w_{1} is matched to her most preferred option (instead of her second-best).

  • In the first step, as before, m1m_{1} proposes to w2w_{2} and m2m_{2} proposes to w1w_{1}.

    However, here, w1w_{1} rejects m2m_{2} because, according to her false report, she prefers being unmatched over being matched to m2m_{2}. As before, w2w_{2} tentatively accepts m1m_{1}.

  • In the second step, m2m_{2}, having been rejected by w1w_{1}, proposes to his second-best choice w2w_{2}.

    Since w2w_{2} prefers m2m_{2} over m1m_{1}, she rejects m1m_{1} and tentatively accepts m2m_{2}.

  • In the third step, m1m_{1}, having been rejected by w2w_{2}, proposes to his second-best choice w1w_{1}.

    w1w_{1} now tentatively accepts m1m_{1} since according to her false report, she prefers him over being unmatched.

  • In the following steps, other rejected agents in $M$ might propose to $w_1$ and $w_2$, but both will reject them, as each of them is already (tentatively) matched to her best option according to her report.

    Thus, when the algorithm terminates, $m_1$ is matched to $w_1$ and $m_2$ is matched to $w_2$.

Thus, regardless of what the remaining $(n-4)$ (unknown) agents report, $w_1$ strictly prefers to misreport her preferences. ∎

The proof is based on a key type of manipulation in this setting called truncation (roth1999truncation; ehlers2008truncation; coles2014optimal), where agents in $W$ falsely report that they prefer being unmatched over being matched with certain agents, even though they actually prefer these matches to being unmatched.

However, in some settings, it is reasonable to assume that agents always prefer being matched if possible. In such cases, the mechanism is designed to accept only preferences over agents from the opposite set (or equivalently, orders where being unmatched is always the least preferred option). Clearly, under this restriction, truncation is not a possible manipulation. We prove that even when truncation is not possible, the RAT-degree is bounded.

{theoremrep}

Without truncation, the RAT-degree of deferred acceptance is at most 55.

Proof.

To prove that the RAT-degree is at most $5$, we show that the mechanism is $5$-known-agents safely manipulable.

Let w1Ww_{1}\in W and assume without loss of generality that m1w1m2w1m3w1m_{1}\succ_{w_{1}}m_{2}\succ_{w_{1}}m_{3}\succ_{w_{1}}\cdots (the preferences between the other alternatives are irrelevant).

Consider the case where the $5$ known agents are as follows:

  • Let w2Ww_{2}\in W be an agent whose preferences are m1w2m2w2m3w2m_{1}\succ_{w_{2}}m_{2}\succ_{w_{2}}m_{3}\succ_{w_{2}}\cdots.

  • Let $w_3\in W$ be an agent whose preferences are $m_2\succ_{w_3}m_1\succ_{w_3}m_3\succ_{w_3}\cdots$.

  • The preferences of m1m_{1} are w3m1w1m1w2m1w_{3}\succ_{m_{1}}w_{1}\succ_{m_{1}}w_{2}\succ_{m_{1}}\cdots.

  • The preferences of m2m_{2} are w1m2w3m2w2m2w_{1}\succ_{m_{2}}w_{3}\succ_{m_{2}}w_{2}\succ_{m_{2}}\cdots.

  • The preferences of $m_3$ are $w_1\succ_{m_3}w_3\succ_{m_3}w_2\succ_{m_3}\cdots$.

In this case, when w1w_{1} tells the truth, the resulting matching includes the pairs (m1,w3)(m_{1},w_{3}), (m2,w1)(m_{2},w_{1}) and (m3,w2)(m_{3},w_{2}), since in this case it proceeds as follows:

  • In the first step, all the agents in $M$ propose to their most preferred option: $m_1$ proposes to $w_3$, while $m_2$ and $m_3$ propose to $w_1$.

    Then, the agents in $W$ tentatively accept their most preferred proposal among those received. $w_1$ tentatively accepts $m_2$ and rejects $m_3$: $m_2$ must be her most preferred proposer, as he is her second-best and her top choice, $m_1$, did not propose to her. Similarly, $w_3$ tentatively accepts $m_1$. $w_2$ receives no proposals.

  • In the second step, m3m_{3}, having been rejected by w1w_{1}, proposes to his second-best choice w3w_{3}.

    Since w3w_{3} prefers her current match m1m_{1} over m3m_{3}, she rejects m3m_{3}.

  • In the third step, m3m_{3}, having been rejected by w3w_{3}, proposes to his third-best choice w2w_{2}.

    Since w2w_{2} does not have a match, she tentatively accepts m3m_{3}.

  • In the following steps, more rejected agents in $M$ (other than $m_1$, $m_2$ and $m_3$) might propose to $w_1$, $w_2$ and $w_3$, but the women will not switch their choices, as any new proposer is less preferred than their current match.

    Thus, when the algorithm terminates m1m_{1} is matched to w3w_{3}, m2m_{2} is matched to w1w_{1}, and m3m_{3} is matched to w2w_{2}.

This means that $w_1$ is matched to her second-best option.

However, suppose that $w_1$ misreports her preference order as $m_1\succ'_{w_1}m_3\succ'_{w_1}m_2\succ'_{w_1}\cdots$. The following shows that in this case, the resulting matching includes the pairs $(m_1,w_1)$, $(m_2,w_3)$ and $(m_3,w_2)$, meaning that $w_1$ is matched to her most preferred option (instead of her second-best).

  • In the first step, as before, $m_1$ proposes to $w_3$, while $m_2$ and $m_3$ propose to $w_1$.

    However, here, $w_1$ tentatively accepts $m_3$ and rejects $m_2$. As before, $w_3$ tentatively accepts $m_1$, and $w_2$ receives no proposals.

  • In the second step, m2m_{2}, having been rejected by w1w_{1}, proposes to his second-best choice w3w_{3}.

    Since w3w_{3} prefers m2m_{2} over m1m_{1}, she rejects m1m_{1} and tentatively accepts m2m_{2}.

  • In the third step, m1m_{1}, having been rejected by w3w_{3}, proposes to his second-best choice w1w_{1}.

    w1w_{1} tentatively accepts m1m_{1} since according to her false report, she prefers him over m3m_{3}.

  • In the fourth step, m3m_{3}, having been rejected by w1w_{1}, proposes to his second-best choice w3w_{3}.

    w3w_{3} prefers her current match m2m_{2}, and thus rejects m3m_{3}.

  • In the fifth step, m3m_{3}, having been rejected by w3w_{3}, proposes to his third-best choice w2w_{2}.

    As w2w_{2} does not have a match, she tentatively accepts m3m_{3}.

  • In the following steps, more rejected agents in $M$ (other than $m_1$, $m_2$ and $m_3$) might propose to $w_1$, $w_2$ and $w_3$, but the women will not switch their choices, as any new proposer is less preferred than their current match.

    Thus, when the algorithm terminates m1m_{1} is matched to w1w_{1}, m2m_{2} is matched to w3w_{3}, and m3m_{3} is matched to w2w_{2}.

Thus, regardless of what the remaining $(n-6)$ (unknown) agents report, $w_1$ strictly prefers to misreport her preferences. ∎

8.3. Boston Mechanism

The Boston mechanism (abdulkadirouglu2003school) is a widely used mechanism for assigning students to schools. {toappendix}

8.4. Boston Mechanism: descriptions and proofs

The mechanism proceeds in rounds as follows:

  1. Each $m\in M$ proposes to his most preferred alternative according to $\succ_m$ that has not yet rejected him and is still available.

  2. Each $w\in W$ (permanently) accepts her most preferred proposal according to $\succ_w$ and rejects the rest. Those who accept a proposal become unavailable.

  3. The rejected agents propose to their next most preferred choice, as in step 1.

  4. The process repeats until all agents are either assigned or have exhausted their preference lists.
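A minimal sketch of these rounds (names ours; women's preference lists are assumed to contain all their potential proposers):

def boston(pref_m, pref_w):
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in pref_w.items()}
    matched = {}                      # permanent matches, stored both ways
    free = set(pref_m)
    nxt = {m: 0 for m in pref_m}      # next index on each man's list
    while True:
        proposals = {}                # w -> men proposing to her this round
        for m in list(free):
            lst = pref_m[m]
            while nxt[m] < len(lst) and lst[nxt[m]] in matched:
                nxt[m] += 1           # skip women who are already unavailable
            if nxt[m] == len(lst):
                free.discard(m)       # m has exhausted his preference list
                continue
            proposals.setdefault(lst[nxt[m]], []).append(m)
            nxt[m] += 1
        if not proposals:
            return matched
        for w, props in proposals.items():
            best = min(props, key=lambda m: rank[w][m])
            matched[w] = best         # acceptance is permanent, unlike DA
            matched[best] = w
            free.discard(best)

pm = {"m1": ["w1", "w2"], "m2": ["w1", "w2"], "m3": ["w2", "w1"]}
pw = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m3", "m2"]}
print(boston(pm, pw))                 # m1 ends up unmatched
pm["m1"] = ["w2", "w1"]               # analogous to the manipulation below
print(boston(pm, pw))                 # now m1 is (permanently) matched to w2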

We prove that:

{theoremrep}

The RAT-degree of the Boston mechanism is at most 22.

Proof.

To prove that the RAT-degree is at most $2$, we show that the mechanism is $2$-known-agents safely manipulable.

Let $m_1\in M$ be an agent with preferences $w_1\succ_{m_1}w_2\succ_{m_1}\cdots$. Consider the case where the two known agents are as follows:

  • Let m2Mm_{2}\in M be an agent whose preferences are similar, w1m2w2m2w_{1}\succ_{m_{2}}w_{2}\succ_{m_{2}}\cdots.

  • The preferences of w1w_{1} are m2w1m1w1m_{2}\succ_{w_{1}}m_{1}\succ_{w_{1}}\cdots.

When $m_1$ reports truthfully, the mechanism proceeds as follows: in the first round, both $m_1$ and $m_2$ propose to $w_1$. Since $w_1$ prefers $m_2$, she rejects $m_1$ and becomes unavailable. Thus, in the second round, $m_1$ proposes to $w_2$.

However, we prove that $m_1$ has a safe and profitable manipulation: he misreports his preferences as $w_2\succ'_{m_1}w_3\succ'_{m_1}\cdots\succ'_{m_1}w_1$. The manipulation is safe since $m_1$ never had a chance to be matched with $w_1$, regardless of his report: $m_2$ proposes to $w_1$ in the first round and she permanently accepts him. The manipulation is profitable since there exists a case where it improves $m_1$'s outcome. Consider the case where the top choice of $w_2$ is $m_1$, and there is another agent $m_3$ whose top choice is $w_2$ (so, in particular, $w_2$ prefers $m_1$ over $m_3$). When $m_1$ reports truthfully, then in the first round $m_3$ proposes to $w_2$ and gets accepted, making her unavailable by the time $m_1$ reaches her in the second round. However, if $m_1$ misreports and proposes to $w_2$ in the first round, she accepts him (as she prefers him over $m_3$). This guarantees that $m_1$ is matched to $w_2$, improving his outcome compared to truthful reporting. ∎

9. Discussion and Future Work

Our main goal in this paper is to encourage a more quantitative approach to truthfulness that can be applied to various problems. When truthfulness is incompatible with other desirable properties, we aim to find mechanisms that are “as hard to manipulate as possible”, where hardness is measured by the amount of knowledge required for a safe manipulation. Below, we discuss several alternatives towards the same goal.

Using Randomization

If we used randomness, we could eliminate all safe manipulations. Consider for example the following voting rule:

  • With some small probability p>0p>0, run random dictatorship, that is, choose a random voter and elect his top candidate;

  • Otherwise, run plurality voting (or any other voting rule with desirable properties).

Due to the small probability of being a dictator, each voter might lose from manipulation, so no manipulation is safe. This rule is not entirely far-fetched: governments do sometimes use randomization to encourage truth-telling in tax reports (see e.g. (haan2012sound)). However, randomization is very unlikely to be acceptable in high-stakes voting scenarios.
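A minimal sketch of such a rule (the names and the value p=0.05 are ours; here, for simplicity, plurality ties are broken uniformly at random):

import random

def randomized_plurality(ballots, p=0.05):
    """ballots: one ranking per voter, most preferred candidate first."""
    if random.random() < p:
        return random.choice(ballots)[0]         # random dictatorship
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1     # plurality on top choices
    best = max(tally.values())
    return random.choice([c for c, v in tally.items() if v == best])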

Beyond Worst-Case

We defined the RAT-degree as a “worst case” concept: to prove an upper bound, we find a single example of a safe manipulation. This is similar to the situation with the classic truthfulness notion, where to prove non-truthfulness, it is sufficient to find a single example of a manipulation. To go beyond the worst case, one could follow relaxations of truthfulness, such as truthful-in-expectation (lavi2011truthful) or strategyproofness-in-the-large (azevedo2019strategy), and define similarly “RAT-degree in expectation” or “RAT-degree in the large”.

Alternative Information Measurements

Another avenue for future work is to study other ways to quantify truthfulness. For example, instead of counting the number of known agents, one could count the number of bits that an agent should know about other agents’ preferences in order to have a safe manipulation. The disadvantage of this approach is that different domains have different input formats, and therefore it would be hard to compare numbers of bits in different domains (see Appendix A for more details).

Changing Quantifiers

One could argue for a stronger definition requiring that a safe manipulation exist for every possible set of $k$ known agents, rather than for some set, or similarly for every possible preference profile of the known agents rather than just for some profile. However, we believe such definitions would be less informative, as in many cases a manipulation that is possible for some set of $k$ known agents is not possible for every such set. For example, in the first-price auction with discount (see Section 4.2), the RAT-degree is $1$ under our definition. But if we required that knowledge of any single agent's bid allow a safe manipulation, rather than just some agent's bid, the degree would automatically jump to $n-1$, making the measure far less meaningful.

Combining the “Known Agents” Concept with Other Notions

We believe that the “known agents” approach can be used to quantify the degree to which a mechanism is robust to other types of manipulations (besides safe manipulations), such as “always-profitable” manipulations or “obvious” manipulations. Accordingly, one can define the “max-min-strategyproofness degree” or the “NOM degree” (see Appendix A).

Appendix A Related Work: Extended Discussion

A.1. Truthfulness Relaxations

The large number of impossibility results has led to extensive research into relaxations of truthfulness. A relaxed truthfulness notion usually focuses on a certain subset of all possible manipulations, which are considered more “likely”, and requires that no manipulation from this subset be profitable. Different relaxations consider different subsets of “likely” manipulations.

RAT

Closest to our work is the recent paper by (BU2023Rat), which introduces the definition that this paper builds upon: risk-avoiding truthfulness (RAT). The definition assumes that agents avoid risk – they manipulate only when the manipulation is sometimes beneficial and never harmful. We first note that their original term for this concept is risk-averse truthfulness. However, since the definition assumes that agents completely avoid any element of risk, we adopt the new name, aiming to reflect this assumption more accurately.

We extend their work in two key directions. First, we generalize the definition from cake-cutting to any social choice problem. Second, we move beyond a binary classification of whether a mechanism is RAT to a quantitative measure of its robustness to manipulation by such agents. Our new definition provides deeper insight into a mechanism's robustness. In Section 6, we also analyze the mechanisms proposed in their paper, alongside additional mechanisms for cake-cutting.

Their paper (in Section 5) provides an extensive comparison between RAT and related relaxations of truthfulness. For completeness, we include some of these comparisons here as well.

Maximin Strategy-Proofness

Another related relaxation of truthfulness, introduced by brams2006better, assumes the opposite extreme to the standard definition regarding the agents’ willingness to manipulate. In the standard definition, an agent is assumed to manipulate if for some combination of the other agents’ reports, it is beneficial. In contrast, this relaxation assumes that an agent will manipulate only if the manipulation is always beneficial — i.e., for any combination of the other agents’ reports.

This definition is not only weaker than the standard one but also weaker than RAT, as it assumes that agents manipulate in only a subset of the cases where RAT predicts manipulation. We believe that, in this sense, RAT provides a more realistic and balanced assumption. However, we note that a similar approach can be applied to this definition as well: measuring the degree to which a mechanism is robust to always-profitable manipulations (rather than to safe manipulations).

NOM

troyan2020obvious introduce the notion of not-obvious manipulability (NOM), which focuses on obvious manipulations – informally, manipulations that benefit the agent either in the worst case or in the best case. It presumes that agents are boundedly rational and consider only these extreme situations. A mechanism is NOM if there are no obvious manipulations.

This notion is a relaxation of truthfulness since the existence of an obvious manipulation imposes a stronger requirement than merely having a profitable one.

However, an obvious manipulation is not necessarily a safe manipulation, nor is a safe manipulation necessarily an obvious one (see Appendix D in (BU2023Rat) for more details). Therefore, NOM and RAT are independent notions.

Regret-Free Truth-telling (RFTT)

regret2018Fernandez proposes another relaxation of truthfulness, called regret-free truth-telling. This concept also considers the agent’s knowledge about other agents but does so in a different way. Specifically, a mechanism satisfies RFTT if no agent ever regrets telling the truth after observing the outcome – meaning that she could not have safely manipulated given that only the reports that are consistent with the observed outcome are possible. Note that the agent does not observe the actual reports of others but only the final outcome (the agents are assumed to know how the mechanism works).

An ex-post-profitable manipulation is always profitable, but the opposite is not true, as it is possible that the profiles in which a manipulation could be profitable are not consistent with the outcome. This means that RFTT does not imply RAT. A safe manipulation is always ex-post-safe, but the opposite is not true, as it is possible that the manipulation is harmful for some profiles that are inconsistent with the outcome. This means that RAT does not imply RFTT.

Different Types of Risk

slinko2008nondictatorial; slinko2014ever and hazon2010complexity study “safe manipulations” in voting. Their concept of safety is similar to ours, but allows a simultaneous manipulation by a coalition of voters. They focus on a different type of risk for the manipulators: the risk that too many or too few of them will perform the exact same safe manipulation.

A.2. Alternative Measurements

Various measures can be used to quantify the manipulability of a mechanism. Below, we compare some existing approaches with our RAT-degree measure.

Computational Complexity

Even when an agent knows the reports of all other agents, it might be hard to compute a profitable manipulation. Mechanisms can be ranked according to the run-time complexity of this computation: mechanisms in which a profitable manipulation can be computed in polynomial time are arguably more manipulable than mechanisms in which this problem is NP-hard. This approach was pioneered by bartholdi1989computational; bartholdi1991single for voting, and applied in other social choice settings. See faliszewski2010ai; veselova2016computational for surveys.

Nowadays, with the advent of efficient SAT and CSP solvers, NP-hardness does not seem a very good defense against manipulation. Moreover, some empirical studies show that voting rules that are hard to manipulate in theory may be easy to manipulate in practice (walsh2011hard). We claim that the main difficulty in manipulation is not the computational hardness, but rather the informational hardness – learning the other agents' preferences.

Queries and Bits

Instead of counting the number of agents, we could count the number of bits that an agent needs to know in order to have a safe manipulation (this is similar in spirit to the concepts of communication complexity - e.g., (nisan2002communication; grigorieva2006communication; Communication2019Branzei; Babichenko2019communication) and compilation complexity - e.g., (chevaleyre2009compiling; xia2010compilation; karia2021compilation)). But this definition may be incomparable between different domains, as the bit length of the input is different. We could also measure the number of basic queries, rather than the number of agents. But then we have to define what “basic query” means, which may require a different definition for each setting. The number of agents is an objective measure that is relevant for all settings.

Probability of Having Profitable Manipulation.

Truthfulness requires that not even a single preference profile allows a profitable manipulation. Mechanisms can be ranked according to the probability that such a profile exists – e.g., (barrot2017manipulation; lackner2018approval; lackner2023free). For a truthful mechanism this probability is zero, and the lower the probability, the more resistant the mechanism is to profitable manipulations.

A disadvantage of this approach is that it requires knowledge of the distribution over preference profiles; our RAT-degree does not require it.

Incentive Ratio

Another common measure is the incentive ratio of a mechanism (chen2011profitable), which describes the extent to which an agent can increase her utility by manipulating. For each agent, it considers the maximum ratio between the utility obtained by manipulating and the utility obtained by being truthful, given the same reports of the others. The incentive ratio is then defined as the maximum of these values across all agents.

This measure is widely used – e.g., (chen2022incentive; li2024bounding; cheng2022tight; cheng2019improved). However, it is meaningful only when utility values have a clear numerical interpretation, such as in monetary settings. In contrast, our RAT-degree measure applies to any social choice setting, regardless of how utilities are represented.

Degree of Manipulability

aleskerov1999degree define four different indices of “manipulability” of ranked voting rules, based on the fraction of profiles in which a profitable manipulation exists; the fraction of manipulations that are profitable; and the average and maximum gain per manipulation. As the indices are very hard to compute, they estimate them on 26 voting rules using computer experiments. These indices are useful under the assumption that all profiles are equally probable (“impartial culture”), or under a certain mapping between ranks of candidates and their values for the agent.

andersson2014budget; andersson2014least measure the degree of manipulability by counting the number of agents who have a profitable manipulation. They define a rule as minimally manipulable with respect to a set of acceptable mechanisms if, for each preference profile, the number of agents with a profitable manipulation is minimal across all acceptable mechanisms.