It’s Not All Black and White: Degree of Truthfulness for Risk-Avoiding Agents
Abstract.
The classic notion of truthfulness requires that no agent has a profitable manipulation – an untruthful report that, for some combination of reports of the other agents, increases her utility. This strong notion implicitly assumes that the manipulating agent either knows what all other agents are going to report, or is willing to take the risk and act as-if she knows their reports.
Without knowledge of the others’ reports, most manipulations are risky – they might decrease the manipulator’s utility for some other combinations of reports by the other agents. Accordingly, a recent paper (Bu, Song and Tao, “On the existence of truthful fair cake cutting mechanisms”, Artificial Intelligence 319 (2023), 103904) suggests a relaxed notion, which we refer to as risk-avoiding truthfulness (RAT), which requires only that no agent can gain from a safe manipulation – one that is sometimes beneficial and never harmful.
Truthfulness and RAT are two extremes: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some — but not all — of the other agents. This paper introduces the RAT-degree of a mechanism, defined as the smallest number of agents whose reports, if known, may allow another agent to safely manipulate, or $n$ if there is no such number. This notion interpolates between classic truthfulness (degree $n$) and RAT (degree at least $1$): a mechanism with a higher RAT-degree is harder to manipulate safely.
To illustrate the generality and applicability of this concept, we analyze the RAT-degree of prominent mechanisms across various social choice settings, including auctions, indivisible goods allocations, cake-cutting, voting, and stable matchings.
1. Introduction
The Holy Grail of mechanism design is the truthful mechanism — a mechanism in which the (weakly) dominant strategy of each agent is truthfully reporting her type. But in most settings, there is provably no truthful mechanism that satisfies other desirable properties such as budget-balance, efficiency or fairness. Practical mechanisms are thus manipulable in the sense that some agent has a profitable manipulation – for some combination of reports by the other agents, the agent can induce the mechanism to yield an outcome (strictly) better for her by reporting non-truthfully.
This notion of manipulability implicitly assumes that the manipulating agent either knows the reports made by all other agents, or is willing to take the risk and act as-if she knows their reports. Without knowledge of the others’ reports, most manipulations are risky – they might decrease the manipulator’s utility for some other combinations of reports by the other agents. In practice, many agents are risk-avoiding and will not manipulate in such cases. This highlights a gap between the standard definition and the nature of such agents.
To illustrate, consider a simple example in the context of voting. Under the Plurality rule, agents vote for their favorite candidate, and the candidate with the most votes wins. If an agent knows that her preferred candidate has no chance of winning, she may find it beneficial to vote for her second-choice candidate to prevent an even less preferred candidate from winning. However, if the agent lacks precise knowledge of the other votes and decides to vote for her second choice, it may backfire – she might inadvertently cause an outcome worse than if she had voted truthfully; for instance, if the other agents vote in a way that makes her the tie-breaker.
Indeed, the literature on cake-cutting (e.g. (brams2006better; BU2023Rat)), voting (e.g. (slinko2008nondictatorial; slinko2014ever; hazon2010complexity)) and recently also stable matching (e.g. regret2018Fernandez; chen2024regret) studies truthfulness among such agents. In particular, BU2023Rat introduced a weaker notion of truthfulness, suitable for risk-avoiding agents, for cake-cutting. Their definition can be adapted to any problem as follows.
Let us first define a safe manipulation as a non-truthful report that can never harm the agent's utility. Based on that, a mechanism is safely manipulable if some agent has a manipulation that is both profitable and safe; otherwise, the mechanism is Risk-Avoiding Truthful (RAT).
Standard truthfulness and RAT can be seen as two extremes with respect to safe-and-profitable manipulations: the former considers manipulators with complete knowledge of others, whereas the latter considers manipulators with no knowledge at all. In reality, agents often know about some — but not all — of the other agents.
This paper introduces the RAT-Degree — a new measurement that quantifies how robust a mechanism is to such safe-and-profitable manipulations. The RAT-degree of a mechanism is an integer between $0$ and $n$ (the number of agents), which represents — roughly — the smallest number of agents whose reports, if known, may allow another agent to safely manipulate; or $n$ if there is no such number. (See Section 3 for the formal definition.)
This measure allows us to position mechanisms along a spectrum. A higher degree implies that an agent has to work harder in order to collect the information required for a successful manipulation; therefore it is less likely that the mechanism will be manipulated. On one end of the spectrum are truthful mechanisms – where no agent can safely manipulate even with complete knowledge of all the other agents; the RAT-degree of such mechanisms is $n$. On the other end are mechanisms that are safely manipulable – no knowledge about other agents is required to safely manipulate; the RAT-degree of such mechanisms is $0$.
Importantly, the RAT-degree is determined by the worst-case scenario for the mechanism designer, which corresponds to the best-case scenario for the manipulating agents. The way we measure the amount of knowledge is based on a general objective applicable to all social choice settings.
Contributions.
Our main contribution is the definition of the RAT-degree.
To illustrate the generality and usefulness of this concept, we selected several different social choice domains, and analyzed the RAT-degree of some prominent mechanisms in each domain. As our goal is mainly to illustrate the new concepts, we did not attempt to analyze all mechanisms and all special cases of each mechanism, but rather focused on some cases that allow for a simpler analysis. To prove an upper bound on the RAT-degree, we need to show a profitable manipulation. However, in contrast to usual proofs of manipulability, we also have to analyze more carefully how much knowledge about the other agents is sufficient in order to guarantee the safety of the manipulation.
Organization.
Section 2 introduces the model and required definitions. Section 3 presents the definition of the RAT-degree. Section 4 explores auctions for a single good. Section 5 examines indivisible goods allocations. Section 6 focuses on cake cutting. Section 7 addresses single-winner ranked voting. Section 8 considers stable matchings. Section 9 concludes with some future work directions.
Due to space constraints, most proofs are delegated to appendices, but we have attempted to provide intuitive proof sketches in the paper body.
1.1. Related Work
There is a vast body of work on truthfulness relaxations and alternative measurements of manipulability. Due to space constraints, we provide only a brief overview here; a more in-depth discussion of these works and their relation to RAT-degree can be found in Appendix A.
Truthfulness Relaxations.
Various truthfulness relaxations focus on a certain subset of all possible manipulations, which are considered more "likely", and require that no manipulation from this subset is profitable. Different relaxations consider different subsets of "likely" manipulations.
BU2023Rat introduce the definition upon which this paper builds: risk-avoiding truthfulness (RAT), assuming agents manipulate only when it is sometimes beneficial but never harmful. (The original term of BU2023Rat is risk-averse truthfulness; however, since the definition assumes that agents completely avoid any element of risk, we adopt this new name, aiming to more accurately reflect this assumption.) We extend their work by generalizing RAT from cake cutting to any social choice problem, and by suggesting a quantitative measure of the robustness of a mechanism to such manipulations.
brams2006better propose maximin strategy-proofness, where an agent manipulates only if it is always beneficial, making it a weaker condition than RAT. troyan2020obvious introduce not-obvious manipulability (NOM), which assumes agents consider only extreme best or worst cases. RAT and NOM are independent notions. regret2018Fernandez define regret-free truth-telling (RFTT), where agents never regret truth-telling after observing the outcome. RAT and RFTT do not imply each other. Additionally, slinko2008nondictatorial; slinko2014ever; hazon2010complexity study "safe manipulations" in voting, but they consider coalitions of voters and a different type of risk – that too many or too few participants will perform the same safe manipulation.
Alternative Measurements.
There are many approaches to quantifying manipulability from different perspectives. One approach considers the computational complexity of finding a profitable manipulation — e.g., (bartholdi1989computational; bartholdi1991single) (see (faliszewski2010ai; veselova2016computational) for surveys). Another measurement is the number of bits an agent needs to know in order to have a safe manipulation – similar in spirit to the concepts of communication complexity – e.g., (nisan2002communication; grigorieva2006communication; Communication2019Branzei; Babichenko2019communication) and compilation complexity – e.g., (chevaleyre2009compiling; xia2010compilation; karia2021compilation). A third approach evaluates the probability that a profitable manipulation exists — e.g., (barrot2017manipulation; lackner2018approval; lackner2023free). The incentive ratio, which measures how much an agent can improve her utility by manipulating, is also widely studied — e.g., (chen2011profitable; chen2022incentive; li2024bounding; cheng2022tight; cheng2019improved). Other metrics include assessing the average and maximum gain per manipulation (aleskerov1999degree) and counting the number of agents who benefit from manipulating (andersson2014budget; andersson2014least).
2. Preliminaries
We consider a generic social choice setting, with a set of agents $N = \{1, \ldots, n\}$, and a set of potential outcomes $X$. Each agent $i \in N$ has preferences over the set of outcomes $X$, which can be described in one of two ways: (1) a linear ordering of the outcomes, or (2) a utility function from $X$ to $\mathbb{R}$. The set of all possible preferences for agent $i$ is denoted by $D_i$, and is referred to as the agent's domain. We denote the agent's true preferences by $T_i \in D_i$. Unless otherwise stated, when agent $i$ weakly prefers the outcome $x$ over $y$, it is denoted by $x \succeq_i y$; and when she strictly prefers $x$ over $y$, it is denoted by $x \succ_i y$.
A mechanism or rule is a function $\mathcal{M} : D_1 \times \cdots \times D_n \to X$, which takes as input a list of reported preferences (which may differ from the true preferences), and returns the chosen outcome. In this paper, we focus on deterministic and single-valued mechanisms.
For any agent $i$, we denote by $(P_i, P_{-i})$ the preference profile in which agent $i$ reports $P_i$ and the other agents report $P_{-i}$.
Truthfulness
A manipulation for a mechanism $\mathcal{M}$ and agent $i$ is an untruthful report $P_i \in D_i$ with $P_i \neq T_i$. A manipulation is profitable if there exists a combination of preferences of the other agents for which it increases the manipulator's utility:
(1)  $\exists P_{-i} \in D_{-i}: \quad \mathcal{M}(P_i, P_{-i}) \succ_i \mathcal{M}(T_i, P_{-i}).$
A mechanism is called manipulable if some agent has a profitable manipulation; otherwise, it is called truthful.
RAT
A manipulation is safe if it never harms the manipulator’s utility – it is weakly preferred over telling the truth for any possible preferences of the other agents:
(2)  $\forall P_{-i} \in D_{-i}: \quad \mathcal{M}(P_i, P_{-i}) \succeq_i \mathcal{M}(T_i, P_{-i}).$
A mechanism is called safely-manipulable if some agent has a manipulation that is both profitable and safe; otherwise, it is called risk-avoiding truthful (RAT).
3. The RAT-Degree
Let $K, U \subseteq N \setminus \{i\}$, with $K \cap U = \emptyset$ and $K \cup U = N \setminus \{i\}$. We denote by $(P_i, P_K, P_U)$ the preference profile in which the preferences of agent $i$ are $P_i$, the preferences of the agents in $K$ are $P_K$, and the preferences of the agents in $U$ are $P_U$.
Definition 3.1.
A manipulation $P_i$ is profitable-and-safe-given-$k$-known-agents if for some subset $K \subseteq N \setminus \{i\}$ with $|K| = k$ and some preferences $P_K$ for the agents in $K$, the following holds:
(3)  $\exists P_U: \quad \mathcal{M}(P_i, P_K, P_U) \succ_i \mathcal{M}(T_i, P_K, P_U)$
(4)  $\forall P_U: \quad \mathcal{M}(P_i, P_K, P_U) \succeq_i \mathcal{M}(T_i, P_K, P_U)$
In words: the agents in $K$ are those whose preferences are Known to $i$; the agents in $U$ are those whose preferences are Unknown to $i$. Given that the preferences of the known-agents are $P_K$, (3) says that there exists a preference profile of the unknown-agents that makes the manipulation profitable for agent $i$; while (4) says that the manipulation is safe – it is weakly preferred over telling the truth for any preference profile of the unknown-agents.
A profitable-and-safe manipulation (with no known-agents) is a special case in which $k = 0$.
Definition 3.2.
A mechanism is called $k$-known-agents safely-manipulable if some agent has a profitable-and-safe-manipulation-given-$k$-known-agents.
Let $0 \leq k < n-1$. If a mechanism is $k$-known-agents safely-manipulable, then it is also $(k+1)$-known-agents safely-manipulable.
Proof.
By definition, some agent $i$ has a profitable-and-safe-manipulation-given-$k$-known-agents. That is, there exists a subset $K \subseteq N \setminus \{i\}$ with $|K| = k$ and some preference profile $P_K$ for the agents in $K$, such that (3) and (4) hold. Let $j$ be an agent not in $K \cup \{i\}$. Consider the preferences $P_j$ that $j$ has in some profile satisfying (3) (profitable). Define $K' := K \cup \{j\}$ and construct a preference profile for $K'$ in which the preferences of the agents in $K$ remain $P_K$, and $j$'s preferences are set to $P_j$. Since (3) holds for this $P_j$, the same manipulation remains profitable given the new set of known-agents. Moreover, (4) continues to hold, as the set of unknown agents has only shrunk. Thus, the mechanism is also $(k+1)$-known-agents safely manipulable. ∎
Definition 3.3.
The RAT-degree of a mechanism is the minimum $k$ for which the mechanism is $k$-known-agents safely manipulable, or $n$ if there is no such $k$.
A mechanism is truthful if-and-only-if its RAT-degree is $n$.
A mechanism is RAT if-and-only-if its RAT-degree is at least $1$.
Figure 1 illustrates the relation between classes of different RAT-degree.

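To make the definition concrete, the following is a minimal brute-force sketch of how the RAT-degree could be computed for a mechanism over small finite preference domains; the interface (a `mechanism` function, explicit `domains` and `truth` lists, and `better`/`weakly_better` predicates encoding the true preferences) is an illustrative assumption rather than part of the formal model.

```python
# Minimal brute-force sketch for small finite domains only.
# domains[i] lists agent i's possible reports, truth[i] is her true report,
# better(i, x, y) / weakly_better(i, x, y) mean "agent i strictly / weakly prefers x to y".
from itertools import combinations, product

def rat_degree(mechanism, domains, truth, better, weakly_better):
    n = len(domains)
    for k in range(n):                                  # smallest k first
        for i in range(n):                              # candidate manipulator
            others = [j for j in range(n) if j != i]
            for K in combinations(others, k):           # known agents
                U = [j for j in others if j not in K]   # unknown agents
                for p_K in product(*(domains[j] for j in K)):
                    for p_i in domains[i]:
                        if p_i == truth[i]:
                            continue                    # only untruthful reports
                        profitable, safe = False, True
                        for p_U in product(*(domains[j] for j in U)):
                            profile = list(truth)
                            for j, r in zip(K, p_K):
                                profile[j] = r
                            for j, r in zip(U, p_U):
                                profile[j] = r
                            honest = mechanism(tuple(profile))
                            profile[i] = p_i
                            lie = mechanism(tuple(profile))
                            if better(i, lie, honest):
                                profitable = True
                            if not weakly_better(i, lie, honest):
                                safe = False
                                break
                        if profitable and safe:
                            return k                    # k-known-agents safely-manipulable
    return n                                            # no safe manipulation: degree n
```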
3.1. An Intuitive Point of View
Consider a table in which each row corresponds to a possible report of a risk-avoiding agent $i$ (the first row being the truthful report $T_i$), and each column corresponds to a possible combination of reports of the other agents. When the risk-avoiding agent has no information ($0$-known-agents), a profitable-and-safe manipulation is a row that represents an alternative report $P_i$ that dominates $T_i$ – this means that for each one of the columns, the outcome in the corresponding row is at least as good as the outcome of the first row, and for at least one column it is strictly better. When the risk-avoiding agent has more information ($k$-known-agents, for $0 < k < n-1$), it is equivalent to considering only a strict subset of the columns. Lastly, when the risk-avoiding agent has full information ($(n-1)$-known-agents), it is equivalent to considering only one column.
4. Auction for a Single Good
We consider a seller owning a single good, and $n$ potential buyers (the agents). The true preferences are given by real values $v_1, \ldots, v_n \geq 0$, where $v_i$ represents the happiness of agent $i$ from receiving the good. The reported preferences are the "bids" $b_1, \ldots, b_n$. A mechanism in this context has to determine the winner — the agent who will receive the good, and the price — how much the winner will pay.
We assume that agents are quasi-linear – meaning their valuations can be interpreted in monetary units. Accordingly, the utility of the winning agent is her valuation minus the price, while the utility of the other agents is zero.
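In formula form, with $p$ denoting the price charged by the mechanism, this assumption reads:

\[
u_i \;=\;
\begin{cases}
v_i - p & \text{if agent } i \text{ wins the good and pays } p,\\
0 & \text{otherwise.}
\end{cases}
\]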
Results.
The two most well-known mechanisms in this context are the first-price and second-price auctions. The first-price auction is known to be manipulable; moreover, it is easy to show that it is safely-manipulable, so its RAT-degree is $0$. We show that a first-price auction with a positive discount has RAT-degree $1$. The second-price auction is known to be truthful, so its RAT-degree is $n$. However, it has some decisive practical disadvantages (ausubel2006lovely); in particular, when buyers are risk-averse, the expected revenue of a second-price auction is lower than that of a first-price auction (nisan2007algorithmic); even when the buyers are risk-neutral, a risk-averse seller would prefer the revenue distribution of a first-price auction (krishna2009auction).
This raises the question of whether it is possible to combine the advantages of both auction types. Indeed, we prove that any auction that applies a weighted average between the first-price and the second-price achieves a RAT-degree of $n-1$, which is very close to being fully truthful (RAT-degree $n$). This implies that a manipulator would need to obtain information about all other agents in order to safely manipulate – which is a very challenging task. Importantly, the seller's revenue from such an auction is higher than that of the second-price auction, giving this mechanism a significant advantage in this context. This result opens the door to exploring new mechanisms that are not truthful but come very close to it. Such mechanisms may enable desirable properties that are unattainable with truthful mechanisms.
4.1. First-Price Auction
In the first-price auction, the agent who bids the highest price wins the good and pays her bid; the other agents get nothing and pay nothing.
It is well-known that the first-price auction is not truthful. We start by proving that the situation is even worse: the first-price auction is safely-manipulable, meaning that its RAT-degree is $0$.
First-price auction is safely manipulable (RAT-degree = $0$). {proofsketch} A truthful agent always gets a utility of $0$, whether she wins or loses. On the other hand, an agent who manipulates by bidding slightly less than her value $v_i$ gets a utility of $0$ when she loses and a strictly positive utility when she wins. Thus, this manipulation is safe and profitable.
Proof.
To prove the mechanism is safely manipulable, we need to show an agent and an alternative bid, such that the agent always weakly prefers the outcome that results from reporting the alternative bid over reporting her true valuation, and strictly prefers it in at least one scenario. We prove that the mechanism is safely-manipulable by all agents with positive valuations.
Let $i$ be an agent with valuation $v_i > 0$. We shall now prove that bidding any $b_i$ with $0 < b_i < v_i$ is a safe manipulation.
We need to show that for any combination of bids of the other agents, bidding $b_i$ does not harm agent $i$, and that there exists a combination where bidding $b_i$ strictly increases her utility. To do so, we consider the following cases according to the maximum bid of the other agents:
• The maximum bid is smaller than $b_i$: if agent $i$ bids her valuation $v_i$, she wins the good and pays $v_i$, resulting in utility $0$. However, by bidding $b_i$, she still wins but pays only $b_i$, yielding a positive utility. Thus, in this case, agent $i$ strictly increases her utility by lying.
• The maximum bid is between $b_i$ and $v_i$: if agent $i$ bids her valuation $v_i$, she wins the good and pays $v_i$, resulting in utility $0$. By bidding $b_i$, she loses the good but pays nothing, also resulting in utility $0$. Thus, in this case, bidding $b_i$ does not harm agent $i$.
• The maximum bid is higher than $v_i$: regardless of whether agent $i$ bids her valuation $v_i$ or $b_i$, she does not win the good, resulting in utility $0$. Thus, in this case, bidding $b_i$ does not harm agent $i$.
∎
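The case analysis above is easy to check numerically; the following minimal sketch (with arbitrarily chosen example numbers) verifies that shading the bid is never harmful and is sometimes strictly beneficial:

```python
# Illustrative check of the three cases above (example numbers are assumptions).
def first_price_utility(value, bid, other_bids):
    wins = bid > max(other_bids)          # ties ignored for simplicity
    return value - bid if wins else 0.0

value, shaded = 10.0, 9.0                 # true value v_i and a shaded bid b_i < v_i
for top_other in (5.0, 9.5, 12.0):        # the three cases of the proof
    truthful = first_price_utility(value, value, [top_other])
    lying = first_price_utility(value, shaded, [top_other])
    assert lying >= truthful              # safe: lying is never worse
print("shading is safe, and strictly profitable when all other bids are below it")
```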
4.2. First-Price Auction with Discount
In the first-price auction with discount, the agent with the highest bid wins the item and pays $(1-d) \cdot b_i$, where $b_i$ is her bid and $d \in (0,1)$ is a constant. As before, the other agents get nothing and pay nothing.
We prove that, although this minor change does not make the mechanism truthful, it increases the degree of truthfulness for risk-averse agents. However, it is still quite vulnerable to manipulation, as knowing the strategy of one other agent might be sufficient to safely-manipulate it.
Theorem 4.1.
The RAT-degree of the First-Price Auction with Discount is $1$.
To prove that the RAT-degree is $1$, we need to show that the mechanism is (a) not safely-manipulable, and (b) $1$-known-agent safely-manipulable. We prove each of these in a lemma.
First-Price Auction with Discount is not safely-manipulable. {proofsketch} Here, unlike in the first-price auction, whenever the agent wins the good, she gains positive utility. This implies that no manipulation can be safe. If she under-bids her value, she risks losing the item, which strictly decreases her utility. If she over-bids, she might end up paying more than necessary, reducing her utility without increasing her chances of winning.
Proof.
We need to show that, for each agent $i$ and any bid $b_i \neq v_i$, at least one of the following is true: either (1) for any combination of bids of the other agents, agent $i$ weakly prefers the outcome from bidding $v_i$; or (2) there exists such a combination for which $i$ strictly prefers the outcome from bidding $v_i$. We consider two cases.
Case 1: $v_i = 0$. In this case condition (1) clearly holds, as bidding $v_i$ guarantees the agent a utility of $0$, and no potential outcome of the auction can give her a positive utility.
Case 2: $v_i > 0$. In this case we prove that condition (2) holds. We consider two sub-cases:
• Under-bidding ($b_i < v_i$): whenever the maximum bid of the other agents is strictly between $b_i$ and $v_i$, bidding truthfully wins the good at the discounted price $(1-d) \cdot v_i$, resulting in the positive utility $d \cdot v_i$; but bidding $b_i$ loses the good, resulting in utility $0$.
• Over-bidding ($b_i > v_i$): whenever the maximum bid of the other agents is smaller than $v_i$, bidding truthfully yields utility $d \cdot v_i$ as before; bidding $b_i$ also wins, but pays $(1-d) \cdot b_i > (1-d) \cdot v_i$, so the utility is less than $d \cdot v_i$.
In both cases lying may harm agent $i$. Thus, she has no safe manipulation. ∎
First-Price Auction with Discount is $1$-known-agent safely-manipulable. {proofsketch} Consider the case where the manipulator knows that another agent bids some value above her own valuation, but below the point at which the discounted price would exceed her valuation. When bidding truthfully, she loses and gets zero utility. However, by bidding any value slightly above the known bid (but still below that point), she either wins and gains positive utility (thanks to the discount) or loses and remains with zero utility. Thus the manipulation is safe and profitable.
Proof.
We need to identify an agent $i$, for whom there exists another agent $j$ and a bid $b_j$, such that if $j$ bids $b_j$, then agent $i$ has a safe manipulation.
Indeed, let $i$ be an agent with valuation $v_i > 0$, and let $j$ be another agent who bids some value $b_j$ with $v_i < b_j < \frac{v_i}{1-d}$. We prove that bidding any value $b_i \in \left(b_j, \frac{v_i}{1-d}\right)$ is a safe manipulation for $i$.
If $i$ truthfully bids $v_i$, she loses the good (as $b_j > v_i$), and gets a utility of $0$.
If $i$ manipulates by bidding some $b_i \in \left(b_j, \frac{v_i}{1-d}\right)$, then she gets either the same or a higher utility, depending on the maximum bid $b_{\max}$ among the unknown agents:
• If $b_{\max} < b_i$, then $i$ wins and pays $(1-d) \cdot b_i$, resulting in a utility of $v_i - (1-d) \cdot b_i > 0$, as $b_i < \frac{v_i}{1-d}$. Thus, $i$ strictly gains by lying.
• If $b_{\max} > b_i$, then $i$ does not win the good, resulting in utility $0$. Thus, in this case, bidding $b_i$ does not harm $i$.
• If $b_{\max} = b_i$, then one of the above two cases happens (depending on the tie-breaking rule).
In all cases $b_i$ is a safe manipulation, as claimed. ∎
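A small numeric sanity check of this argument, assuming the multiplicative discount form defined above and arbitrarily chosen example numbers:

```python
# Agent with value 10 and discount d = 0.2 knows another agent bids 10.5;
# over-bidding 11.0 (< 10/(1-d) = 12.5) is safe and sometimes profitable.
def discounted_utility(value, bid, other_bids, d):
    return value - (1 - d) * bid if bid > max(other_bids) else 0.0

d, value, known_bid, manipulation = 0.2, 10.0, 10.5, 11.0
for unknown_max in (3.0, 11.5, 20.0):     # possible maximum bid of the unknown agents
    truthful = discounted_utility(value, value, [known_bid, unknown_max], d)
    lying = discounted_utility(value, manipulation, [known_bid, unknown_max], d)
    assert truthful == 0.0 and lying >= truthful   # truth always loses; lying never hurts
```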
4.3. Average-First-Second-Price Auction
In the Average-First-Second-Price (AFSP) Auction, the agent with the highest bid wins the item and pays $\alpha \cdot b_{1} + (1-\alpha) \cdot b_{2}$, where $b_{1}$ is the highest bid, $b_{2}$ is the second-highest bid, and $\alpha \in (0,1)$ is a fixed constant. That is, the price is a weighted average between the first price and the second price.
We show that this simple change makes a significant difference – the RAT-degree increases to $n-1$. This means that a manipulator would need to obtain information about all other agents to safely manipulate the mechanism — a very challenging task in practice.
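For reference, a minimal sketch of the AFSP winner and payment computation, assuming the weight constant is denoted alpha as above and ignoring ties:

```python
def afsp_winner_and_price(bids, alpha):
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    first, second = bids[order[0]], bids[order[1]]
    return order[0], alpha * first + (1 - alpha) * second   # weighted-average price

# Example: bids (10, 7, 4) with alpha = 0.5 -> agent 0 wins and pays 8.5.
print(afsp_winner_and_price([10, 7, 4], 0.5))
```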
Theorem 4.2.
The RAT-degree of the Average-Price Auction is $n-1$.
The theorem is proved using the following two lemmas.
The AFSP mechanism is not $(n-2)$-known-agents safely-manipulable.
Let $i$ be a potential manipulator, $b_i \neq v_i$ a potential manipulation, $K$ a set of known agents with $|K| = n-2$, and $b_K$ a vector that represents their bids. We prove that the only unknown agent can make any manipulation either not profitable or not safe. We denote by $b_{\max}$ the maximum bid among the agents in $K$ and consider each of the six possible orderings of $v_i$, $b_i$ and $b_{\max}$. In two cases the manipulation is not profitable; in the other four cases, the manipulation is not safe.
Proof.
Let be an agent with true value , and a manipulation . We show that this manipulation is unsafe even when knowing the bids of of the other agents.
Let be a subset of of the remaining agents (the agents in are the “known agents”), and let be a vector that represents their bids. Lastly, let be the only agent in . We need to prove that at least one of the following is true: either (1) the manipulation is not profitable — for any possible bid of agent , agent weakly prefers the outcome from bidding over the outcome from bidding ; or (2) the manipulation is not safe — there exists a bid for agent , such that agent strictly prefers the outcome from bidding .
Let . We consider each of the six possible orderings of , and (cases with equalities are contained in cases with inequalities, according to the tie-breaking rule):
-
•
or : In these cases (1) holds, as for any bid of agent , agent never wins. Therefore the manipulation is not profitable.
-
•
: We show that (2) holds. Assume that bids any value . When bids truthfully, she does not win the good so her utility is . But when bids , she wins and pays a weighted average between and . As both these numbers are strictly greater than , the payment is larger than as well, resulting in a negative utility. Hence, the manipulation is not safe.
-
•
: We show that (2) holds. Assume that bids any value . When tells the truth, she wins and pays ; but when bids , she still wins but pays a higher price, , so her utility decreases. Hence, the manipulation is not safe.
-
•
: We show that (2) holds. Assume that agent bids any value . When tells the truth, she wins and pays , resulting in a positive utility. But when bids , she does not win and her utility is . Hence, the manipulation is not safe.
-
•
: We show that (2) holds. Assume that bids any value . When tells the truth, she wins and pays , resulting in a positive utility. But when bids , she does not win and her utility is . Hence, the manipulation is not safe.
-
•
or : We show that (2) holds. Assume that bids any value . When tells the truth, she wins and pays , which is smaller than as both and are smaller than . Therefore, ’s utility is positive. But when bids , she does not win and her utility is . Hence, the manipulation is not safe.
∎
The AFSP mechanism is $(n-1)$-known-agents safely-manipulable. {proofsketch} Consider any combination of bids of the other agents in which all the bids are strictly smaller than $v_i$. Let $b_{\max}$ be the highest bid among the other agents. Then any alternative bid $b_i \in (b_{\max}, v_i)$ is a safe manipulation.
Proof.
Given an agent $i$, we need to show an alternative bid and a combination of bids of the other agents, such that agent $i$ strictly prefers the outcome resulting from the untruthful bid over the outcome resulting from bidding her true valuation.
Consider any combination of bids of the other agents in which all the bids are strictly smaller than $v_i$. Let $b_{\max}$ be the highest bid among the other agents. We prove that any alternative bid $b_i \in (b_{\max}, v_i)$ is a safe manipulation.
When agent $i$ bids her valuation $v_i$, she wins the good and pays a weighted average of $v_i$ and $b_{\max}$, yielding a positive utility.
But when $i$ bids $b_i$, as $b_i > b_{\max}$, she still wins the good but pays a weighted average of $b_i$ and $b_{\max}$, which is smaller as $b_i < v_i$; therefore her utility is higher. ∎
Conclusion.
By choosing a high value for the parameter $\alpha$, the Average-Price Auction becomes similar to the first-price auction, and therefore may attain a similar revenue in practice, but with better strategic properties. The average-price auction is sufficiently simple to test in practice; we find it very interesting to check how it fares in comparison to the more standard auction types.
5. Indivisible Goods Allocations
In this section, we consider several mechanisms for allocating a set $M$ of $m$ indivisible goods among the agents. Here, the true preferences are given by real values $v_{i,g} \geq 0$ for any $i \in N$ and $g \in M$, representing the happiness of agent $i$ from receiving the good $g$. The reported preferences are real values $b_{i,g} \geq 0$. We assume the agents have additive valuations over the goods. Given a bundle $S \subseteq M$, let $v_i(S) := \sum_{g \in S} v_{i,g}$ be agent $i$'s utility upon receiving the bundle $S$. A mechanism in this context gets (potentially untruthful) reports from all the agents and determines the allocation – a partition $(A_1, \ldots, A_n)$ of $M$, where $A_i$ is the bundle received by agent $i$.
Results.
We start by considering a simple mechanism, the utilitarian goods allocation – which assigns each good to the agent who reports the highest value for it. We prove that this mechanism is safely manipulable (RAT-degree = $0$). We then show that the RAT-degree can be increased to $1$ by requiring normalization — the values reported by each agent are scaled such that the set of all items has the same value for all agents. The RAT-degree of the famous round-robin mechanism is also at most $1$. In contrast, we design a new mechanism that satisfies the common fairness notion called EF1 (envy-freeness up to one good), and attains a RAT-degree of $n-1$.
5.1. Utilitarian Goods Allocation
The utilitarian rule aims to maximize the sum of agents’ utilities. When agents have additive utilities, this goal can be achieved by assigning each good to an agent who reports the highest value for it. This section analyzes this mechanism.
For simplicity, we assume that agents’ reports are bounded from above by some maximum possible value . Also, we assume that in cases where multiple agents report the same highest value for some good, the mechanism employs some tie-breaking rule to allocate the good to one of them. However, the tie-breaking rule must operate independently for each good, meaning that the allocation for one good cannot depend on the tie-breaking outcomes of other goods.
We further assume that there is at least one agent who has a value different than $0$ and $V$ for at least one of the goods. Otherwise, for each good, all agents' (true) values are $0$ or $V$; and in this case the mechanism is trivially truthful.
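A minimal sketch of the rule under these assumptions (additive reports, per-good allocation, and one possible per-good tie-breaking rule – toward the lowest agent index):

```python
def utilitarian_allocation(reports):          # reports[i][g] = agent i's report for good g
    n, m = len(reports), len(reports[0])
    bundles = [[] for _ in range(n)]
    for g in range(m):                        # each good is allocated independently
        winner = max(range(n), key=lambda i: (reports[i][g], -i))   # ties -> lowest index
        bundles[winner].append(g)
    return bundles
```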
The Utilitarian allocation rule is safely manipulable (RAT-degree = 0).
Manipulating by reporting the highest possible value, $V$, for all goods is both profitable and safe. It is profitable because, if the maximum report among the other agents for a given good lies between the manipulator's true value and the alternative bid $V$, the manipulator's utility strictly increases. It is safe because, in all other cases, the utility remains at least as high.
Proof.
To prove the mechanism is safely manipulable, we need to show one agent that has an alternative report, such that the agent always weakly prefers the outcome that results from reporting the alternative report over reporting her true valuations, and strictly prefers it in at least one scenario.
Let be an agent who has a value different than and for at least one of the goods. Let be such good – that is, . We prove that reporting the highest possible value, , for all goods is a safe manipulation. Notice that this report is indeed a manipulation as it is different than the true report in at least one place.
We need to show that for any combination of reports of the other agents, this report does not harm agent , and that there exists a combination where it strictly increases her utility.
Note that, since the utilities are additive and tie-breaking is performed separately for each good, we can analyze each good independently. The following shows that, for every good, agent $i$ always weakly prefers the manipulation over reporting truthfully. Case 4 (marked by *) shows that, for the good whose true value is strictly between $0$ and $V$, there exists a combination of reports of the others for which agent $i$ strictly prefers the outcome of the manipulation. This proves that it is indeed a safe and profitable manipulation.
Let be a good. We consider the following cases according to the value of agent for the good , ; and the maximum report among the other agents for this good:
-
•
: In this case, both the truthful and the untruthful reports are the same.
-
•
: Agent does not care about this good, so regardless of the reports which determines whether or not agent wins the good, her utility from it is .
Thus, in this case, agent is indifferent between telling the truth and manipulating.
-
•
and the maximum report of the others is strictly smaller than : Agent wins the good in both cases – when she reports her true value or bids .
Thus, in this case, agent is indifferent between telling the truth and manipulating.
-
•
(*) and the maximum report of the others is greater than and smaller than : when agent reports her true value for the good , then she does not win it. However, by bidding , she does.
Thus, in this case, agent strictly increases her utility by lying (as her value for this good is positive).
-
•
and the maximum report of the others equals : when agent reports her true value for the good, she does not win it. However, by bidding , she gets a chance to win the good. Even for risk-averse agent, a chance of winning is better than no chance.
Thus, in this case, agent strictly increases her (expected) utility by lying.
∎
5.2. Normalized Utilitarian Goods Allocation
In the normalized utilitarian allocation rule, the agents' reports are first normalized such that each agent's values sum to a given constant $C$. Then, each good is given to the agent with the highest normalized value. We focus on the case of at least three goods.
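A minimal sketch of the normalization step, reusing the utilitarian_allocation sketch from Section 5.1 and assuming, for illustration, that the normalization constant is 1:

```python
def normalize(report, total=1.0):             # rescale so the values sum to `total`
    s = sum(report)
    return [v * total / s for v in report] if s > 0 else list(report)

def normalized_utilitarian(reports):
    return utilitarian_allocation([normalize(r) for r in reports])
```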
Theorem 5.1.
For $m \geq 3$ goods, the RAT-degree of the Normalized Utilitarian allocation rule is $1$.
We prove Theorem 5.1 using several lemmas that analyze different cases.
The first two lemmas prove that the rule is not safely-manipulable, so its RAT-degree is at least $1$. The first addresses agents who value only one good positively, while the second covers agents who value at least two goods positively.
An agent who values only one good positively cannot safely manipulate the Normalized Utilitarian allocation rule.
Due to the normalization requirement, any manipulation by such an agent involves reporting a lower value for the only good she values positively, while reporting higher values for some goods she values at $0$. This reduces her chances of winning her desired good and raises her chances of winning goods she values at zero, ultimately decreasing her utility. Thus, the manipulation is neither profitable nor safe.
Proof.
Let be an agent who values only one good and let be the only good she likes. That is, her true valuation is and for any (any good different than ).
To prove that agent does not have a safe manipulation, we need to show that for any report for her either (1) for any reports of the other agents, agent weakly prefers the outcome from telling the truth; or (2) there exists a reports of the other agents, for which agent strictly prefers the outcome from telling the truth.
Let be an alternative report for the goods. We assume that the values are already normalized. We shall now prove that the second condition holds (lying may harm the agent).
First, as the alternative report is different, we can conclude that . We denote the difference by . Next, consider the following reports of the other agents (all agents except ): for item and for some other good.
When agent reports her true value she wins her desired good , which gives her utility . However, when she lies, she loses good and her utility decreases to (winning goods different than does not increases her utility).
That is, lying may harm the agent. ∎
An agent who values positively at least two goods cannot safely manipulate the Normalized Utilitarian allocation rule.
Since values are normalized, any manipulation by such an agent must involve increasing the reported value of at least one good while decreasing the value of at least one other good . We show that such a manipulation is not safe, by considering the case where all other agents report as follows: they assign a value of to , a value between the manipulator’s true and reported value for , and a value slightly higher than the manipulator’s report for all other goods (it is possible to construct such reports that are non-negative and normalized).
With these reports, the manipulation causes the manipulator to lose the good which has a positive value for her, whereas she wins the good with or without the manipulation, and does not win any other good. Hence, the manipulation strictly decreases her total value.
Proof.
Let be an agent who values at least two good, her true values for the goods, and a manipulation for . We need to show that the manipulation is either not safe – there exists a combination of the other agents’ reports for which agent strictly prefers the outcome from telling the truth; or not profitable – for any combination of reports of the other agents, agent weakly prefers the outcome from telling the truth. We will show that the manipulation is not safe by providing an explicit combination of other agents’ reports.
First, notice that since the true values and the untruthful report of agent $i$ are different but sum to the same constant $C$, there must be a good whose reported value is higher than its true value, and a good whose reported value is lower than its true value.
Next, let , notice that . Also, let , notice that as well (here we use the condition ).
We consider the combination of reports in which all agents except report the following values, denoted by :
-
•
For good they report .
-
•
For good they report .
-
•
For the rest of the goods, , they report .
We prove that the above values constitute a legal report — they are non-negative and normalized to .
First, we show that the sum of values in this report is :
Second, we show that all the values are non-negative:
-
•
Good : it is clear as .
-
•
Good : since is strictly higher than the (non-negative) report of agent by , it is clearly non-negative.
-
•
Rest of the goods, : since , it is clear that . As and , we get that is higher than . As is strictly higher than the (non-negative) report of agent by , it is clearly non-negative.
Now, we prove that, given these reports for the unknown agents, agent strictly prefers the outcome from reporting truthfully to the outcome from manipulating.
We look at the two possible outcomes for each good – the one from telling and truth and the other from lying, and show that the outcome of telling the truth is always either the same or better, and that for at least one of the goods that agent wants (specifically, ) it is strictly better.
-
•
For good we consider two cases.
-
(1)
If : when agent is truthful we have a tie for this good as . When agent manipulates, she wins the good (as ). However, as , in both cases, her utility from this good is .
-
(2)
If : Whether agent says is truthful or not, she wins the good as . Thus, for this good, the agent receives the same utility (of ) when telling the truth or lying.
-
(1)
-
•
For good : when agent is truthful, she wins the good since :
(as ) But when agent manipulates, she loses the good since (as and ).
As the real value of agent for this good is positive, the agent strictly prefers telling the truth for this good.
-
•
Rest of the goods, : When agent is truthful, all the outcomes are possible – the agent either wins or loses or that there is a tie.
However, as for this set of goods the reports of the other agents are , when agent manipulates, she always loses the good. Thus, her utility from lying is either the same or smaller (since losing the good is the worst outcome).
Thus, the manipulation may harm the agent. ∎
The last lemma shows that the RAT-degree is at most $1$, thus completing the proof of the theorem. {lemmarep} With $m \geq 3$ goods, Normalized Utilitarian is $1$-known-agent safely-manipulable.
Consider a scenario where there is a known agent who reports $0$ for some good $g$ that the manipulator values positively, and slightly more than the manipulator's true value for every other good. In this case, by telling the truth, the manipulator has no chance to win any good except $g$. Therefore, reporting the entire normalization constant $C$ for $g$ and a value of $0$ for all other goods is a safe manipulation.
The same manipulation is also profitable, since it is possible that the reports of all unknown agents for $g$ are strictly between the manipulator's true value and $C$. In such a case, the manipulation causes the manipulator to win $g$, which strictly increases her utility.
Proof.
Let be an agent and let be her values for the goods. We need to show (1) an alternative report for agent , (2) another agent , and (3) a report agent ; such that for any combination of reports of the remaining (unknown) agents, agent weakly prefers the outcome from lying, and that there exists a combination for which agent strictly prefers the outcome from lying.
Let be a good that agent values (i.e., ).
Let be an agent different than . We consider the following report for agent : first, , and for any good different than , where . Notice that .
We shall now prove that reporting for the good (and for the rest of the goods) is a safe manipulation for given that reports the described above.
When agent reports her true values, then she does not win the goods different than – this is true regardless of the reports of the remaining agents, as agent reports a higher value . For the good , we only know that , meaning that it depends on the reports of the remaining agents. We consider the following cases, according to the maximum report for among the remaining agents:
-
•
If the maximum is smaller than : agent wins the good in both cases (when telling the truth or lies).
(the same)
-
•
If the maximum is greater than but smaller than : when agent tells the truth, she does not win the good. However, when she lies, she does.
(lying is strictly better).
-
•
If the maximum equals : when agent tells the truth, she does not win the good. However, when she lies, we have tie for this good. Although agent is risk-averse, having a chance to win the good is strictly better than no chance at all.
(lying is strictly better).
∎
5.3. Round-Robin Goods Allocation
In round-robin goods allocation, the agents are arranged according to some predetermined order . There are rounds, corresponding to the number of goods. In each round, the agent whose turn it is (based on ) takes their most preferred good from the set of goods that remain unallocated at that point. When there are multiple goods that are equally most preferred, the agent takes the good with the smallest index. Note that normalization is not important for Round-Robin.
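A minimal sketch of this procedure, assuming the predetermined order is simply $1, \ldots, n$ (cycling through the agents) and using the smallest-index tie-breaking described above:

```python
def round_robin(reports):                     # reports[i][g] = agent i's value for good g
    n, m = len(reports), len(reports[0])
    remaining, bundles = list(range(m)), [[] for _ in range(n)]
    for r in range(m):                        # one pick per round
        agent = r % n                         # the order cycles through 1,...,n
        pick = max(remaining, key=lambda g: (reports[agent][g], -g))  # ties -> smallest index
        remaining.remove(pick)
        bundles[agent].append(pick)
    return bundles
```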
If there are at most $n$ goods, then the rule is clearly truthful. Thus, we assume that there are at least $n+1$ goods. Even the slight increment from $n$ to $n+1$ goods makes a significant difference:
Lemma 5.2.
With at least $n+1$ goods, round-robin is $1$-known-agent safely-manipulable.
Proof.
Let be the order according to agents’ indices: . Let agent ’s valuation be such that and . Suppose agent knows that agent will report the valuation with , , and . We show that agent has a safe and profitable manipulation by reporting , , and .
Firstly, we note that agent ’s utility is always when reporting her valuation truthfully, regardless of the valuations of agents . This is because agent will receive item (by the item-index tie-breaking rule) and agent will receive item in the first two rounds. The allocation of the remaining items does not affect agent ’s utility.
Secondly, after misreporting, agent will receive item in the first round, which already secures agent a utility of at least . Therefore, agent ’s manipulation is safe.
Lastly, if the remaining agents report the same valuations as agent does, it is easy to verify that agent will receive item in the -th round. In this case, agent ’s utility is . Thus, the manipulation is profitable. ∎
We show a partial converse. {lemmarep} With exactly $n+1$ goods, round-robin is not safely-manipulable.
Proof.
Clearly, the only agent that could have a profitable manipulation is .
Order the goods by descending order of ’s values, and subject to that, by ascending index. W.l.o.g., assume that . If all valuations are equal (), then always gets the same total value (two goods), so no manipulation is profitable. Therefore we assume that .
If is truthful, he gets first, and then the last good remaining after all other agents picked a good. If, after the manipulation, still has the highest value, then still gets first, and has exactly the same bundle, so the manipulation is not profitable.
Hence, we assume that manipulates such that some other good, say , becomes the most valuable good: for all goods , and . We prove that the manipulation is not safe.
Suppose the unknown agents all assign the lowest value to , and the next-lowest value to . If is truthful, he gets and then , so his value is . But if manipulates, he gets and then , so his value is , which is strictly lower. ∎
Hence, the RAT-degree of round-robin is $1$ when there are exactly $n+1$ goods, and at most $1$ when there are more.
The proof of Lemma 5.2 uses weak preferences (preferences with ties). In particular, if ’s valuations for the top two goods are different, then picking first is risky. With strict preferences, we could only prove a much weaker upper bound of . This raises the following question. {open} What is the RAT-degree of round-robin when agents report strict preferences?
5.4. An EF1 Mechanism with RAT-degree $n-1$
In this section, we focus on mechanisms that always output fair allocations (with respect to the reported valuation profile). We consider the widely used fairness criterion envy-freeness up to one item (EF1), which intuitively means that no agent envies another agent if one item is (hypothetically) removed from that agent’s bundle.
Definition 5.3.
Given a valuation profile $(v_1, \ldots, v_n)$, an allocation $(A_1, \ldots, A_n)$ is envy-free up to one item (EF1) if for every pair of agents $i, j$ with $A_j \neq \emptyset$ there exists a good $g \in A_j$ such that $v_i(A_i) \geq v_i(A_j \setminus \{g\})$.
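For additive valuations, the EF1 condition can be checked directly; the following is a minimal sketch (the representation of valuations and bundles is an illustrative assumption):

```python
def is_ef1(value, bundles):                   # value[i][g] = agent i's value for good g
    def v(i, B):
        return sum(value[i][g] for g in B)
    n = len(value)
    for i in range(n):
        for j in range(n):
            if i == j or v(i, bundles[i]) >= v(i, bundles[j]):
                continue                      # no envy from i towards j
            # otherwise, removing some single good from j's bundle must kill the envy
            if not any(v(i, bundles[i]) >= v(i, bundles[j]) - value[i][g] for g in bundles[j]):
                return False
    return True
```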
It is well-known and easy to see that the round-robin mechanism always outputs an EF1 allocation. However, as we have seen in the previous section, the round-robin mechanism has a very low RAT-degree. In this section, we propose a new EF1 mechanism that has RAT-degree $n-1$.
5.4.1. Description of Mechanism
The mechanism has two components: an agent selection rule and an allocation rule . The agent selection rule takes the valuation profile as an input and outputs the indices of two agents, where agent is called the mechanism-favored agent and agent is called the mechanism-unfavored agent (the reason of both names will be clear soon). The allocation rule takes the valuation profile and the indices of the two agents output by as inputs and then outputs an EF1 allocation . We will complete the description of our mechanism by defining and .
We first define . Given the input valuation profile , let be the set of all EF1 allocations in which agent is not envied by anyone else, i.e., for any agent , we have . Notice that is nonempty: the allocation output by the round-robin mechanism with being the last agent under satisfies both (1) and (2) above.
The rule then outputs an allocation in that maximizes . When there are multiple maximizers, the rule breaks the tie in an arbitrary consistent way. This finishes the description of .
To describe , we first state a key property called volatility that we want from , and then show that a volatile rule can be constructed. Informally, volatility says the following. If an arbitrary agent changes the reported valuation profile from to , we can construct a valuation profile for another agent such that has a positive value on only one pre-specified good and the two agents output by switch from some pre-specified to some pre-specified .
Definition 5.4.
A selection rule is called volatile if for any six indices of agents with , , and , any good , any set of valuation profiles , and any two reported valuation profiles of agent with (i.e., for at least one good ), there exists a valuation function of agent such that
-
•
, and for any , and
-
•
outputs and for the valuation profile , and outputs and for the valuation profile .
In other words, a manipulation of agent from to can affect the output of in any possible way (from any pair to any other pair ), depending on the report of agent .
We will use an arbitrary volatile rule for our mechanism. We conclude the description of by proving (in the appendix) that such a rule exists.
There exists a volatile agent selection rule .
Proof.
The rule does the following. It first finds the maximum value among all agents and all goods: . It then views the value as a binary string that encodes the following information:
-
•
the index of an agent ;
-
•
a non-negative integer ,
-
•
two non-negative integers , between and .
We append ’s as most significant bits to if the length of the binary string is not long enough to support the format of the encoding. If the encoding of is longer than the length enough for encoding the above-mentioned information, we take only the least significant bits in the amount required for the encoding.
The mechanism-favored agent and the mechanism-unfavored agent are then decided in the following way. Let be the bit at the -th position of the binary encoding of the value .
Let . Each value of corresponds to a pair of different agents .
To see that is volatile, suppose and are different in the -th bits of their binary encoding. We construct a value that encodes the integers where
-
(1)
the -th bit of is and the -th bit of is for ;
-
(2)
The pair corresponds to the integer .
-
(3)
The pair corresponds to the integer .
(1) can always be achieved by some encoding rule. To see (2) and (3) can always be achieved, assume and without loss of generality. We can then take the integer corresponding to the pair , and the integer corresponding to the pair , modulo .
We then construct a valuation such that is the largest and is equal to . In case is not large enough, we increase it as needed by adding nmost significant digits. ∎
5.4.2. Proving RAT-degree of
Before we proceed to the proof, we first define some additional notions. We say that is a partial allocation if for any pair of and . The definition of EF1 can be straightforwardly extended to partial allocations. Given a possibly partial allocation , we say that agent strongly envies agent if for any , i.e., the EF1 criterion from to fails. Given , we say that a (possibly partial) allocation is EF1 except for if for any pair with we have for some . In words, the allocation is EF1 except that agent is allowed to strongly-envy others.
We first prove some lemmas which will be used later.
Lemma 5.5.
Fix a valuation profile. Let be a partial EF1 allocation. There exists a complete EF1 allocation such that for each .
Proof.
Construct the envy-graph for the partial allocation and then perform the envy-graph procedure proposed by lipton2004approximately to obtain a complete allocation . The monotonic property of the procedure directly implies this proposition. ∎
Fix a valuation profile and an arbitrary agent . Let be the set of all complete EF1 allocations. Let be the set of all possibly partial allocations that are EF1 except for possibly . The allocation in that maximizes agent ’s utility is also the one in that maximizes ’s utility. In other words, if gets the maximum possible value subject to EF1, he cannot get a higher value by agreeing to give up the EF1 guarantee for himself. This claim is trivially true for share-based fairness notions such as proportionality, but quite challenging to prove for EF1; see appendix.
Proof.
Assume without loss of generality. The allocation space is clearly a subset of . Assume for the sake of contradiction that, for all possibly partial allocations in , agent strongly envies someone else. Let be an allocation that minimizes (minimizes the total number of goods allocated) among all allocations in .
For each , agent will strongly envy some other agent if an arbitrary good is removed from , for otherwise, the minimality is violated. We say that an agent champions agent if the following holds.
-
•
strongly envies for ;
-
•
for , let agent removes the most valuable good from each (for ) and let be the resultant bundle; then the championed agent, agent , is defined by the index with the maximum .
We then construct a champion graph which is a directed graph with vertices where the vertices represent the agents and an edge from to represents that agent champions agent . By our definition, each vertex in the graph has at least one outgoing edge, so the graph must contain a directed cycle .
Consider a new allocation defined as follows. For every edge in , let agent remove the most valuable good from and then take the bundle. We will show that and , which will contradict the minimality of .
It is easy to see . If agent is not in the cycle , then her utility is unchanged. Otherwise, she receives a bundle that she previously strongly envies, and one good is then removed from the bundle. The property of strong envy guarantees that .
To show that , first consider any agent with that is not in . Agent ’s bundle is unchanged, and she will not strongly envy anyone else as before (as only item-removals happen during the update).
Next consider any agent with that is in . Let be the agent such that is an edge in . Let be the bundle with the most valuable good (according to ) removed from . By our definition, agent receives in the new allocation. We will prove that agent , by receiving , does not strongly-envy any of the original bundles , for any .
Our definition of championship ensures that this is true for any , as the new bundle of is at least as valuable for than every other bundle with an item removed.
It remains to show that this holds for . As we have argued at the beginning, in the original allocation , removing any item from would make agent strongly envy some other agent. By our definition of championship, when one good is removed from , agent strongly envies , which implies thinks is more valuable than . Therefore, in the new allocation, by receiving , agent does not strongly envy .
We have proved that agent , by receiving the bundle , does not strongly envy any of the original bundles . Since the new allocation only involves item removals, agent does not strongly envy anyone else in the new allocation.
Hence, the new allocation is in , which contradicts the minimality of . ∎
The mechanism defined in Sect. 5.4.1 has a RAT-degree of $n-1$. {proofsketch} For every profitable manipulation by an agent, and for every unknown agent, the volatility of the selection rule implies that, for some possible valuation of the unknown agent, the truthful report and the manipulation lead to opposite choices of the mechanism-favored and mechanism-unfavored agents. We use this fact, combined with Lemma 5.5 and the claim above, to prove that the manipulation may be harmful for the manipulator.
Proof.
Let be two arbitrary agents. Fix arbitrary valuations for the remaining agents. Consider two arbitrary valuations for agent , and , with , where is ’s true valuation. We will show that switching from to is not a safe manipulation.
Let be some good that values positively, that is, . By the volatility of , we can construct the valuation of agent such that
-
•
, and for any ;
-
•
if agent truthfully reports , then agent is mechanism-favored and agent is mechanism-unfavored; if agent reports instead, then agent is mechanism-favored and agent is mechanism-unfavored.
Let be the allocation output by when agent reports truthfully, and be the allocation output by when agent reports . Our objective is to show that .
Let us consider first. We know that . To see this, notice that maximizes agent ’s utility as long as . In addition, there exists a valid allocation output by with : consider the round-robin mechanism with agent be the first and agent be the last under the order .
Consider a new allocation in which is moved from to , that is,
-
•
,
-
•
,
-
•
for .
Notice that is EF1 except for :
-
•
agent will not envy any agent with (as each bundle has a zero value for ), and agent will not envy agent upon removing the item from ;
-
•
no other agent strongly envies agent : given that no one envies agent in (as agent is mechanism-unfavored), no one strongly envies agent in ;
-
•
no agent strongly envies agent , as ;
-
•
any two agents in do not strongly envy each other, as their allocations are not changed.
Now, consider , which is the allocation that favors agent when agent truthfully reports . We can assume without loss of generality. If not, we can reallocate goods in to the remaining agents while keeping the EF1 property among the remaining agents (Lemma 5.5). Agent will not strongly envy anyone, as removing the good kills the envy. Thus, the resultant allocation is still EF1 and no one envies the empty bundle . In addition, by Lemma 5.5, the utility of each agent with does not decrease after the reallocation of .
Let be the set of all EF1 allocations of the item-set to the agent-set . Let be the set of all possibly partial allocations of the item-set to the agent-set that are EF1 except for agent . The above argument shows that is an allocation in that maximizes agent ’s utility. We have also proved that . By Section 5.4.2, we have . In addition, we have , , and (by our assumption), which imply . Therefore, . ∎
6. Cake Cutting
In this section, we study the cake cutting problem: the allocation of divisible heterogeneous resources to agents. The cake cutting problem was proposed by Steinhaus48; Steinhaus49, and it is a widely studied subject in mathematics, computer science, economics, and political science.
In the cake cutting problem, the resource/cake is modeled as an interval , and it is to be allocated among a set of agents . An allocation is denoted by where is the share allocated to agent . We require that each is a union of finitely many closed non-intersecting intervals, and, for each pair of , and can only intersect at interval endpoints, i.e., the measure of is . We say an allocation is complete if . Otherwise, it is partial.
The true preferences of agent are given by a value density function that describes agent ’s preference over the cake. To enable succinct encoding of the value density function, we adopt the widely considered assumption that each is piecewise constant: there exist finitely many points with such that is a constant on every interval , . Given a subset that is a union of finitely many closed non-intersecting intervals, agent ’s value for receiving is then given by
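(In illustrative notation, not fixed by the text above: if $f_i$ denotes agent $i$'s value density function and $v_i$ her valuation, then the value of such a subset $X$ would be written as)
\[
  v_i(X) \;=\; \int_{X} f_i(x)\,\mathrm{d}x .
\]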
Fairness and efficiency are two natural goals for allocating the cake. For efficiency, we consider two commonly used criteria: social welfare and Pareto-optimality. Given an allocation , its social welfare is given by . This is a natural measurement of efficiency that represents the overall happiness of all agents. Pareto-optimality is a yes-or-no criterion for efficiency. An allocation is Pareto-optimal if there does not exist another allocation such that for each agent and at least one of these inequalities is strict.
For fairness, we study the two arguably most important notions: envy-freeness and proportionality. An allocation is proportional if each agent receives her average share, i.e., for each , . An allocation is envy-free if every agent weakly prefers her own allocated share, i.e., for every pair , . A complete envy-free allocation is always proportional, but this implication does not hold for partial allocations.
Before we discuss our results, we define an additional notion, uniform segment, which will be used throughout this section. Given value density functions (that are piecewise constant by our assumptions), we identify the set of points of discontinuity for each and take the union of the sets. Sorting these points by ascending order, we let be all points of discontinuity for all the value density functions. Let and . These points define intervals, , such that each is a constant on each of these intervals. We will call each of these intervals a uniform segment, and we will denote for each . For each agent , we will slightly abuse the notation by using to denote with .
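The construction of uniform segments is simply the common refinement of all agents' breakpoints. The following is a minimal sketch under illustrative assumptions (the cake is $[0,1]$ and each value density is encoded as a sorted list of (breakpoint, height) pairs); the representation and names are ours, not part of the paper's formal notation.

```python
def uniform_segments(densities):
    """Return the uniform segments induced by piecewise-constant densities.

    Each density is a sorted list of (breakpoint, height) pairs, meaning the
    density equals `height` from that breakpoint up to the next one; the cake
    is the interval [0, 1].
    """
    points = {0.0, 1.0}
    for density in densities:
        for breakpoint, _height in density:
            if 0.0 < breakpoint < 1.0:
                points.add(breakpoint)
    cuts = sorted(points)
    return list(zip(cuts[:-1], cuts[1:]))
```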
Since all agents’ valuations on each uniform segment are uniform, it is tempting to think about the cake cutting problem as the problem of allocating divisible homogeneous goods. However, this interpretation is inaccurate when concerning agents’ strategic behaviors, as, in the cake cutting setting, an agent can manipulate her value density function with a different set of points of discontinuity, which affects how the divisible goods are defined. To see a significant difference between these two models, in the divisible goods setting, the equal division rule that allocates each divisible good evenly to the agents is truthful (with RAT-degree ), envy-free and proportional, while, in the cake cutting setting, it is proved in tao2022existence that truthfulness and proportionality are incompatible even for two agents.
Results
In Sect. 6.1, we start by considering the simple mechanism that outputs an allocation with the maximum social welfare. We show that the RAT-degree of this mechanism is . As in the case of indivisible goods, we also consider the normalized variant of this mechanism, and we show that its RAT-degree is . In Sect. 6.2, we consider mechanisms that output fair allocations. We revisit the mechanisms studied in BU2023Rat and determine their RAT-degrees. We will see that one of those mechanisms, which always outputs envy-free allocations, has a RAT-degree of . However, this mechanism performs very poorly in terms of efficiency. Finally, in Sect. 6.5, we propose a new mechanism with RAT-degree that always outputs proportional and Pareto-optimal allocations.
6.1. Maximum Social Welfare Mechanisms
It is easy to find an allocation that maximizes the social welfare: for each uniform segment , allocate it to an agent with the maximum . When multiple agents have the same largest value of on the segment , we need to specify a tie-breaking rule. However, as we will see later, the choice of the tie-breaking rule does not affect the RAT-degree of the mechanism.
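A minimal sketch of this mechanism, building on the uniform_segments helper above and breaking ties by the lowest agent index (any fixed tie-breaking rule could be substituted); the input format and names are illustrative assumptions.

```python
def value_on(density, left, right):
    """Value of the interval [left, right] under a piecewise-constant density
    (sorted (breakpoint, height) pairs whose first breakpoint is 0.0)."""
    total = 0.0
    extended = density + [(1.0, 0.0)]
    for (start, height), (end, _h) in zip(extended, extended[1:]):
        lo, hi = max(left, start), min(right, end)
        if hi > lo:
            total += height * (hi - lo)
    return total


def utilitarian_allocation(densities):
    """Give each uniform segment to an agent with maximum value for it
    (ties broken by the lowest agent index, as one possible tie-breaking rule)."""
    allocation = {i: [] for i in range(len(densities))}
    for left, right in uniform_segments(densities):
        values = [value_on(d, left, right) for d in densities]
        winner = max(range(len(densities)), key=lambda i: (values[i], -i))
        allocation[winner].append((left, right))
    return allocation
```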
It is easy to see that, whatever the tie-breaking rule is, the maximum social welfare mechanism is safely manipulable. It is safe for an agent to report higher values on every uniform segment. For example, doubling the values on all uniform segments is clearly a safe manipulation.
Utilitarian Cake-Cutting with any tie-breaking rule has RAT-degree .
We next consider the following variant of the maximum social welfare mechanism: first rescale each such that , and then output the allocation with the maximum social welfare. We will show that the RAT-degree is . The proof is similar to the one for indivisible items (Theorem 5.1) and is given in the appendix.
When there are at least three agents, Normalized Utilitarian Cake-Cutting with any tie-breaking rule has RAT-degree .
Proof.
We assume without loss of generality that the value density function reported by each agent is normalized (as, otherwise, the mechanism will normalize the function for the agent).
We first show that the mechanism is not -known-agents safely-manipulable. Consider an arbitrary agent and let be her true value density function. Consider an arbitrary misreport of agent with . Since the value density functions are normalized, there must exist an interval where and are constant and for . Choose such that . Consider the following two value density functions (note that both are normalized):
Suppose the remaining agents’ reported value density functions are either or and each of and is reported by at least one agent (here we use the assumption ). In this case, agent will receive the empty set by reporting . On the other hand, when reporting , agent will receive an allocation that at least contains as a subset. Since has a positive value on , reporting is not a safe manipulation.
We next show that the mechanism is -known-agent safely-manipulable. Suppose agent ’s true value density function is
and agent knows that agent reports the uniform value density function for . We will show that the following manipulation of agent is safe and profitable.
Firstly, regardless of the reports of the remaining agents, the final allocation received by agent must be a subset of , as agent ’s value is higher on the other half . Since is larger than on , any interval received by agent when reporting will also be received if were reported. Thus, the manipulation is safe.
Secondly, if the remaining agents’ value density functions are
it is easy to verify that agent receives the empty set when reporting truthfully and she receives by reporting . Therefore, the manipulation is profitable. ∎
6.2. Fair Mechanisms
In this section, we focus on mechanisms that always output fair (envy-free or proportional) allocations. As we have mentioned earlier, it is proved in tao2022existence that truthfulness and proportionality are incompatible even for two agents and even if partial allocations are allowed. This motivates the search for fair cake-cutting algorithms with a high RAT-degree.
The mechanisms discussed in this section have been considered in BU2023Rat. However, they were only studied with respect to whether or not they are risk-averse truthful (in our language, whether the RAT-degree is positive). With our new notion of RAT-degree, we are now able to provide a more fine-grained view of their performance on strategyproofness.
One natural envy-free mechanism is to evenly allocate each uniform segment to all agents. Specifically, each is partitioned into intervals of equal length, and each agent receives exactly one of them. It is easy to see that for any under this allocation, so the allocation is envy-free and proportional.
To completely define the mechanism, we need to specify the order of evenly allocating each to the agents. A natural tie-breaking rule is to let agent get the left-most interval and agent get the right-most interval. Specifically, agent receives the -th interval of , which is . However, it was proved in BU2023Rat that the equal division mechanism under this ordering rule is safely-manipulable, i.e., its RAT-degree is . In particular, agent , knowing that she will always receive the left-most interval in each , can safely manipulate by deleting a point of discontinuity in her value density function if her value on the left-hand side of this point is higher.
To avoid this type of manipulation, a different ordering rule was considered by BU2023Rat (See Mechanism 3 in their paper): at the -th segment, the equal-length subintervals of are allocated to the agents with the left-to-right order . By using this ordering rule, an agent does not know her position in the left-to-right order of without knowing others’ value density functions. Indeed, even if only one agent’s value density function is unknown, an agent cannot know the index of any segment . This suggests that the mechanism has a RAT-degree of .
Consider the mechanism that evenly partitions each uniform segment into equal-length subintervals and allocates these subintervals to the agents with the left-to-right order . It has RAT-degree and always outputs envy-free allocations. {proofsketch} Envy-freeness is trivial: for any , we have . The general impossibility result in tao2022existence shows that no mechanism with the envy-freeness guarantee can be truthful, so the RAT-degree is at most .
To show that the RAT-degree is exactly , we show that, if even a single agent is not known to the manipulator, it is possible that this agent’s valuation adds discontinuity points in a way that the ordering in each uniform segment is unfavorable for the manipulator.
Proof.
Envy-freeness is trivial: for any , we have . The general impossibility result in tao2022existence shows that no mechanism with the envy-freeness guarantee can be truthful, so the RAT-degree is at most .
To show that the RAT-degree is exactly , consider an arbitrary agent with true value density function and an arbitrary agent whose report is unknown to agent . Fix arbitrary value density functions that are known by agent to be the reports of the remaining agents. For any , we will show that agent ’s reporting is either not safe or not profitable.
Let be the set of points of discontinuity for , and be the set of points of discontinuity with replaced by . If (i.e., the uniform segment partition defined by is “finer” than the partition defined by ), the manipulation is not profitable, as agent will receive her proportional share in both cases.
It remains to consider the case where there exists a point of discontinuity of such that and . This implies that is a point of discontinuity in , but not in nor in the valuation of any other agent. We will show that the manipulation is not safe in this case.
Choose a sufficiently small such that is contained in a uniform segment defined by . We consider two cases, depending on whether the “jump” of in its discontinuity point is upwards or downwards.
Case 1: . We can construct such that: 1) and are points of discontinuity of , and 2) the uniform segment under the profile is the -th segment where divides (i.e., agent receives the left-most subinterval of this uniform segment). Notice that 2) is always achievable by inserting a suitable number of points of discontinuity for before . Given that , agent ’s allocated subinterval on the segment has value strictly less than .
Case 2: . We can construct such that 1) is a uniform segment under the profile , and 2) agent receives the right-most subinterval on this segment. In this case, agent again receives a value of strictly less than on the segment .
We can do this for every point of discontinuity of that is in . By a suitable choice of (with a suitable number of points of discontinuity of inserted in between), we can make sure agent receives a less-than-average value on every such segment . Moreover, agent receives exactly the average value on each of the remaining segments, because the remaining discontinuity points of are contained in . Therefore, the overall utility of by reporting is strictly less than . Given that receives value exactly for truthfully reporting , reporting is not safe. ∎
Although the equal division mechanism with the above-mentioned carefully designed ordering rule is envy-free and has a high RAT-degree of , it is undesirable in at least two aspects:
(1) it requires a large number of cuts on the cake, as it makes cuts on each uniform segment; this is particularly undesirable if piecewise constant functions are used to approximate more general value density functions;
(2) it is highly inefficient: each agent ’s utility is never more than her minimum proportionality requirement .
Regarding point (1), researchers have been looking at allocations with connected pieces, i.e., allocations with only cuts on the cake. A well-known mechanism in this category is the moving-knife procedure, first proposed by dubins1961cut, which always returns a proportional connected allocation. Unfortunately, it was shown by BU2023Rat that Dubins and Spanier’s moving-knife procedure is safely-manipulable, for rather subtle reasons.
BU2023Rat proposed a variant of the moving-knife procedure that is RAT. In addition, they showed that another variant of the moving-knife procedure, proposed by ortega2022obvious, is also RAT.222It should be noticed that, when is allowed to take value, tie-breaking needs to be handled carefully to ensure RAT. See BU2023Rat for more details. Here, for simplicity, we assume for each and . In the appendix, we describe both mechanisms and show that both of them have RAT-degree . {toappendix}
6.3. Moving-knife mechanisms: descriptions and proofs
Hereafter, we assume for each and .
Dubins and Spanier’s moving-knife procedure
Let be the value of agent ’s proportional share. In the first iteration, each agent marks a point on such that the interval has value exactly to agent . Take , and the agent with takes the piece and leaves the game. In the second iteration, let each of the remaining agents mark a point on the cake such that has value exactly . Take , and the agent with takes the piece and leaves the game. This is done iteratively until agents have left the game with their allocated pieces. Finally, the only remaining agent takes the remaining part of the cake. It is easy to see that each of the first agents receives exactly her proportional share, while the last agent receives weakly more than her proportional share; hence the procedure always returns a proportional allocation.
Notice that, although the mechanism is described in an iterative, interactive way that resembles an extensive-form game, we consider direct-revelation mechanisms in this paper, where the value density functions are reported to the mechanism at the beginning. In the above description of Dubins and Spanier’s moving-knife procedure, as well as its two variants mentioned later, by saying “asking an agent to mark a point”, we mean that the mechanism computes such a point based on the reported value density function. In particular, we do not consider the scenario where agents can adaptively choose the next marks based on the allocations in the previous iterations.
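The following is a rough sketch of this direct-revelation computation for Dubins and Spanier's procedure, assuming strictly positive piecewise-constant densities (as in this section) and reusing the value_on helper from the earlier sketch; it is illustrative only, not the authors' implementation.

```python
def mark_point(density, left, target):
    """Leftmost point m >= left such that the value of [left, m] equals `target`,
    for a strictly positive piecewise-constant density."""
    accumulated = 0.0
    extended = density + [(1.0, 0.0)]
    for (start, height), (end, _h) in zip(extended, extended[1:]):
        lo, hi = max(left, start), end
        if hi <= lo:
            continue
        piece = height * (hi - lo)
        if accumulated + piece >= target:
            return lo + (target - accumulated) / height
        accumulated += piece
    return 1.0


def dubins_spanier(densities):
    """Direct-revelation Dubins--Spanier moving-knife procedure (sketch)."""
    n = len(densities)
    shares = [value_on(d, 0.0, 1.0) / n for d in densities]  # proportional shares
    remaining = list(range(n))
    left = 0.0
    allocation = {}
    while len(remaining) > 1:
        marks = {i: mark_point(densities[i], left, shares[i]) for i in remaining}
        winner = min(remaining, key=lambda i: marks[i])  # leftmost mark takes the piece
        allocation[winner] = (left, marks[winner])
        left = marks[winner]
        remaining.remove(winner)
    allocation[remaining[0]] = (left, 1.0)  # the last agent takes the rest
    return allocation
```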
Unfortunately, it was shown by BU2023Rat that Dubins and Spanier’s moving-knife procedure is safely-manipulable for some very subtle reasons.
BU2023Rat proposed a variant of the moving-knife procedure that is risk-averse truthful. In addition, BU2023Rat shows that another variant of moving-knife procedure proposed by ortega2022obvious is also risk-averse truthful.333It should be noticed that, when is allowed to take value, tie-breaking needs to be handled very properly to ensure risk-averse truthfulness. See BU2023Rat for more details. Below, we will first describe both mechanisms and then show that both of them have RAT-degree .
Ortega and Segal-Halevi’s moving knife procedure
The first iteration of Ortega and Segal-Halevi’s moving knife procedure is the same as it is in Dubins and Spanier’s. After that, the interval is then allocated recursively among the agents . That is, in the second iteration, each agent marks a point such that the interval has value exactly (instead of as it is in Dubins and Spanier’s moving-knife procedure). The remaining part is the same: the agent with the left-most mark takes the corresponding piece and leaves the game. After the second iteration, the remaining part of the cake is again recursively allocated to the remaining agents. This is continued until the entire cake is allocated.
Bu, Song, and Tao’s moving knife procedure
Each agent is asked to mark all the “equal-division-points” at the beginning such that for each , where we set and . The remaining part is similar to Dubins and Spanier’s moving-knife procedure: in the first iteration, agent with the minimum takes and leaves the game; in the second iteration, agent with the minimum among the remaining agents takes and leaves the game; and so on. The difference from Dubins and Spanier’s moving-knife procedure is that each is computed at the beginning, instead of depending on the position of the previous cut.
Theorem 6.1.
The RAT-degree of Ortega and Segal-Halevi’s moving knife procedure is .
Proof.
It was proved in BU2023Rat that the mechanism is not -known-agent safely-manipulable. It remains to show that it is -known-agents safely-manipulable. Suppose agent ’s value density function is uniform, for , and agent knows that agent will report such that for and for for some very small with . We show that the following is a safe manipulation.
Before we move on, note an important property of : for any , we have .
Let be the piece received by agent when she reports truthfully. If , the above-mentioned property implies that she will also receive exactly for reporting . If , then we know that agent is the -th agent in the procedure. To see this, we have by the property of Ortega and Segal-Halevi’s moving knife procedure, and we also have . This implies cannot be the cut point of for . On the other hand, it is obvious that agent takes a piece after agent . Thus, by the time agent takes , the only remaining agent is agent .
Since there are exactly two remaining agents in the game before agent takes , we have . This implies and . On the other hand, by reporting , agent can then get the piece with . We see that . Thus, the manipulation is safe and profitable. ∎
Theorem 6.2.
The RAT-degree of Bu, Song, and Tao’s moving knife procedure is .
Proof.
It was proved in BU2023Rat that the mechanism is not -known-agent safely-manipulable. The proof that it is -known-agents safely-manipulable is similar to the proof for Ortega and Segal-Halevi’s moving knife procedure, with the same and . It suffices to notice that the first equal-division-points are the same for and , where the last equal-division-point of is to the right of ’s. Given that agent will always receive a piece after agent , the same analysis in the previous proof can show that the manipulation is safe and profitable. ∎
6.4. Additional proofs
These results raise the following question. {open} Is there a proportional connected cake-cutting rule with RAT-degree at least ?
We handle point (2) from above in the following subsection.
6.5. A Proportional and Pareto-Optimal Mechanism with RAT-degree
In this section, we provide a mechanism with RAT-degree that always outputs proportional and Pareto-optimal allocations. In addition, we show that the mechanism can be implemented in polynomial time. The mechanism uses ideas similar to those in Section 5.4.
6.5.1. Description of Mechanism
The mechanism has two components: an order selection rule and an allocation rule . The order selection rule takes the valuation profile as an input and outputs an order of the agents. We use to denote the -th agent in the order. The allocation rule then outputs an allocation based on .
We first define the allocation rule . Let be the set of all proportional allocations. Then outputs an allocation in in the following “leximax” way:
(1) the allocation maximizes agent ’s utility;
(2) subject to (1), the allocation maximizes agent ’s utility;
(3) subject to (1) and (2), the allocation maximizes agent ’s utility;
(4) and so on.
We next define . We first adapt the volatility property of (defined in Sect. 5.4) to the cake-cutting setting.
Definition 6.3.
A function (from the set of valuation profiles to the set of orders on agents) is called volatile if for any two agents and any two orders and , any set of value density functions , any value density function , and any two reported valuation profiles of agent with , there exists a valuation function of agent such that
• is a rescaled version of , i.e., there exists such that for all ;
• outputs for the valuation profile , and outputs for the valuation profile .
In other words, a manipulation of agent from to can affect the output of in any possible way (from any order to any order ), depending on the report of agent .
There exists a volatile function .
Proof.
The function does the following. It first finds the maximum value among all the value density functions (over all uniform segments): . It then views as a binary string that encodes the following information:
• the index of an agent ,
• a non-negative integer ,
• two non-negative integers and that are at most .
We append ’s as the most significant bits of if the binary string is not long enough to support the format of the encoding. If the encoding of is longer than necessary, we keep only the number of least significant bits required for the encoding.
The order is chosen in the following way. Firstly, we use an integer between and to encode an order. Then, let be the -th bit that encodes agent ’s value density function. The order is defined to be .
We now prove that is volatile. Suppose and differ at their -th bits, such that the -th bit of is and the -th bit of is . We construct a number that encodes the index , the integer , and two integers such that encodes and encodes .
Then, we construct by rescaling such that the maximum value among all density functions is attained by , and this number is exactly , that is, for some uniform segment . If the encoded is not large enough to be a maximum value, we enlarge it as needed by adding most significant bits.
By definition, returns when reports and returns when reports .
∎
6.5.2. Properties of the Mechanism
The mechanism always outputs a proportional allocation by definition. It is straightforward to check that it outputs a Pareto-efficient allocation. {propositionrep} The - mechanism for cake-cutting always returns a Pareto-efficient allocation.
Proof.
Suppose for the sake of contradiction that output by the mechanism is Pareto-dominated by , i.e., we have
(1) for each agent , and
(2) for at least one agent , .
Property (1) above ensures that is proportional and thus is also in : for each , (as the allocation is proportional). Based on property (2), find the smallest index such that . We see that does not maximize the utility of the -th agent in the order , which contradicts the definition of the mechanism. ∎
It then remains to show that the mechanism has RAT-degree . We need the following proposition; it follows from known results on super-proportional cake-cutting (dubins1961cut; woodall1986note); for completeness we provide a proof in the appendix.
Let be the set of all proportional allocations for the valuation profile . Let be the allocation in that maximizes agent ’s utility. If there exists such that and are not identical up to scaling, then .
Proof.
We will explicitly construct a proportional allocation where if the pre-condition in the statement is satisfied. Notice that this will imply the proposition, as we are finding the allocation maximizing ’s utility. To construct such an allocation, we assume and are normalized without loss of generality (then ), and consider the equal division allocation where each uniform segment is evenly divided. This already guarantees that agent receives a value of . Since and are normalized and , there exist two uniform segments and such that and . Agents and can then exchange parts of their allocations on and to improve the utility of both of them, which guarantees that the resultant allocation is still proportional. For example, let be a very small number. Agent can give a length of from to agent , in exchange for a length of from . This describes the allocation . ∎
The proof of the following theorem is similar to the one for indivisible goods (Section 5.4.2). {theoremrep} The - mechanism for cake-cutting has RAT-degree .
Proof.
Consider an arbitrary agent with the true value density function , and an arbitrary agent whose reported value density function is unknown to . Fix arbitrary value density function for the remaining agents. Consider an arbitrary manipulation .
Choose a uniform segment with respect to , satisfying . Choose a very small interval , such that the value density function
is not a scaled version of some with . Apply the volatility of to find a value density function for agent that rescales such that
(1) when agent reports , agent is the first in the order output by ;
(2) when agent reports , agent is the first in the order output by .
Let and be the output allocation for the profiles and respectively. Since is not a scaled version of some , its rescaled version is also different. By Proposition 6.5.2, , as is the highest-priority agent when reports . Let be some subset of with and , and consider the allocation in which is moved from to , that is,
• for , ;
• ;
• .
It is clear by our construction that the new allocation is still proportional with respect to . In addition, by the relation between and (and thus the relation between and ), we have based on agent ’s true value density function . Therefore, under agent ’s true valuation, .
If the allocation is not proportional under the profile (where is changed to ), then the only agent for whom proportionality is violated must be agent ; that is, . It then implies . On the other hand, agent receives at least her proportional share when truthfully reporting her value density function . This already implies that the manipulation is not safe.
If the allocation is proportional under the profile , then it is in . Since agent is the first agent in the order when reporting truthfully, we have , which further implies . Again, the manipulation is not safe. ∎
Finally, we analyze the run-time of our mechanism. {propositionrep} The - mechanism for cake-cutting can be computed in polynomial time.
Proof.
We first note that can be computed in polynomial time. Finding and reading the information of , and can be performed in linear time, as it mostly only requires reading the input of the instance. In particular, the lengths of and are both less than the input length, so is of at most linear length and can also be computed in linear time. Finally, the length of is , so can be computed in polynomial time. We conclude that can be computed in polynomial time.
We next show that can be computed by solving linear programs. Let be the length of the -th uniform segment allocated to agent . Then an agent ’s utility is a linear expression , and requiring that an agent’s utility be at least some value (e.g., her proportional share) is a linear constraint. We can use a linear program to find the maximum possible utility for agent among all proportional allocations. In the second iteration, we write the constraint for agent , the proportionality constraints for the agents , and maximize agent ’s utility. This can be done by another linear program and gives us the maximum possible utility for agent . We can iterate this process to compute all of using linear programs. ∎
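A minimal sketch of the iterative linear-programming computation described in the proof, using scipy; the encoding of allocations as per-segment fractions, the tolerance, and all names are illustrative assumptions rather than the paper's formal construction.

```python
import numpy as np
from scipy.optimize import linprog


def leximax_proportional(values, order):
    """Leximax allocation over all proportional allocations (sketch).

    `values[i, t]` is agent i's value for the entire uniform segment t, and
    `order` is the priority order returned by the order-selection rule.
    The variable x[i, t] is the fraction of segment t given to agent i.
    """
    n, m = values.shape
    idx = lambda i, t: i * m + t

    # Every segment is allocated completely.
    A_eq = np.zeros((m, n * m))
    for t in range(m):
        for i in range(n):
            A_eq[t, idx(i, t)] = 1.0
    b_eq = np.ones(m)

    # Proportionality: each agent's utility is at least 1/n of her total value.
    A_ub = np.zeros((n, n * m))
    b_ub = np.zeros(n)
    for i in range(n):
        for t in range(m):
            A_ub[i, idx(i, t)] = -values[i, t]
        b_ub[i] = -values[i].sum() / n

    fixed_rows, fixed_rhs = [], []
    x = None
    for agent in order:
        c = np.zeros(n * m)
        for t in range(m):
            c[idx(agent, t)] = -values[agent, t]  # linprog minimizes, so negate
        A = np.vstack([A_ub] + fixed_rows)
        b = np.concatenate([b_ub, np.array(fixed_rhs)])
        res = linprog(c, A_ub=A, b_ub=b, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
        x = res.x
        # Freeze this agent's utility at its maximum before optimizing the next one.
        fixed_rows.append(c.reshape(1, -1))
        fixed_rhs.append(res.fun + 1e-9)
    return x.reshape(n, m)
```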
6.6. Towards An Envy-Free and Pareto-Optimal Mechanism with RAT-degree
Given the result in the previous section, it is natural to ask if the fairness guarantee can be strengthened to envy-freeness. A compelling candidate is the mechanism that always outputs allocations with maximum Nash welfare. The Nash welfare of an allocation is defined as the product of the agents’ utilities:
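(in illustrative notation, with $A_i$ denoting agent $i$'s piece and $v_i$ her valuation)
\[
  \mathrm{NW}(A) \;=\; \prod_{i=1}^{n} v_i(A_i).
\]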
It is well-known that such an allocation is envy-free and Pareto-optimal. However, computing its RAT-degree turns out to be very challenging for us. We conjecture the answer is . {open} What is the RAT-degree of the maximum Nash welfare mechanism?
7. Single-Winner Ranked Voting
We consider voters (the agents) who need to elect one winner from a set of candidates. The agents’ preferences are given by strict linear orderings over the candidates.
When there are only two candidates, the majority rule and its variants (weighted majority rules) are truthful. With three or more candidates, the Gibbard–Satterthwaite Theorem implies that the only truthful rules are dictatorships. Our goal is to find non-dictatorial rules with a high RAT-degree.
Throughout the analysis, we consider a specific agent Alice, who looks for a safe profitable manipulation. Her true ranking is . We assume that, for any , Alice strictly prefers a victory of to a tie between and , and strictly prefers this tie to a victory of .444We could also assume that ties are broken at random, but this would require us to define preferences on lotteries, which we prefer to avoid in this paper.
7.1. Positional voting rules: general upper bound
A positional voting rule is parameterized by a vector of scores, , where and . Each voter reports his entire ranking of the candidates. Each such ranking is translated to an assignment of a score to each candidate: the lowest-ranked candidate is given a score of , the second-lowest candidate is given , etc., and the highest-ranked candidate is given a score of . The total score of each candidate is the sum of scores he received from the rankings of all voters. The winner is the candidate with the highest total score.
Formally, for any subset and any candidate , we denote by the total score that receives from the votes of the agents in . Then the winner is . If several candidates have the same maximum score, then the outcome is considered a tie.
Common special cases of positional voting are plurality voting, in which , and anti-plurality voting, in which . By the Gibbard–Satterthwaite theorem, all positional voting rules are manipulable, so their RAT-degree is smaller than . But, as we will show next, some positional rules have a higher RAT-degree than others.
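As a concrete illustration (the input format, names, and example profile are our own choices), a positional voting rule can be computed as follows; a multi-element result denotes a tie.

```python
def positional_winner(ballots, scores):
    """Positional voting rule: returns the set of winning candidates.

    `ballots` is a list of rankings (most preferred first); `scores` is the
    non-increasing score vector, one score per position.
    """
    totals = {}
    for ballot in ballots:
        for candidate, score in zip(ballot, scores):
            totals[candidate] = totals.get(candidate, 0) + score
    best = max(totals.values())
    return {candidate for candidate, total in totals.items() if total == best}


# Examples with four candidates:
plurality_scores = [1, 0, 0, 0]
antiplurality_scores = [1, 1, 1, 0]
print(positional_winner([["a", "b", "c", "d"], ["b", "a", "c", "d"]], plurality_scores))
```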
In the upcoming lemmas, we identify the manipulations that are safe and profitable for Alice under various conditions on the score vector . We assume throughout that there are candidates, and that is sufficiently large. We allow an agent to abstain, which means that his vote gives the same score to all candidates.555 We need the option to abstain in order to avoid having different constructions for even and odd ; see the proofs for details.
Let and . If and there are known agents, then switching the top two candidates ( and ) may be a safe profitable manipulation for Alice. {proofsketch} For some votes by the known agents, the manipulation is safe since has no chance to win, and it is profitable as it may help win over worse candidates.
Proof.
Suppose there is a subset of known agents, who vote as follows:
• known agents rank .
• One known agent ranks .
• One known agent ranks .
• In case is odd, the remaining known agent abstains.
We first show that cannot win. To this end, we show that the difference in scores between and is always strictly positive.
• The difference in scores given by the known agents is
• There are agents not in (including Alice). These agents can reduce the score-difference by at most . Therefore,
which is positive for any score vector. So has no chance to win or even tie.
Therefore, switching and can never harm Alice — the manipulation is safe.
Next, we show that the manipulation can help win. We compute the score-difference between and the other candidates with and without the manipulation.
Suppose that the agents not in vote as follows:
• the unknown agents666Here we use the assumption . rank .
• Alice votes truthfully .
Then,
which is positive by the assumption . The candidates are ranked even lower than by all agents. Therefore the winner is .
If Alice switches and , then the score of increases by and the scores of all other candidates except do not change. Therefore, becomes , and there is a tie between and , which is better for Alice by assumption. Therefore, the manipulation is profitable.
∎
Let and . If and there are known agents, then switching the bottom two candidates ( and ) may be a safe profitable manipulation for Alice. {proofsketch} For some votes by the known agents, has no chance to win, so the worst candidate for Alice that could win is . Therefore, switching and cannot harm, but may help a better candidate win over .
Proof.
Suppose there is a subset of known agents, who vote as follows:
• known agents rank .
• Two known agents rank .
• In case is odd, the remaining known agent abstains.
We first show that cannot win. To this end, we show that the difference in scores between and is always strictly positive.
• The difference in scores given by the known agents is
• There are agents not in (including Alice). These agents can reduce the score-difference by at most . Therefore,
which is positive by the assumption . So has no chance to win or even tie.
Therefore, switching and can never harm Alice — the manipulation is safe.
Next, we show that the manipulation can help win, when the agents not in vote as follows:
• unknown agents rank , where each candidate except is ranked last by at least one voter (here we use the assumption ).
• One unknown agent ranks ;
• Alice votes truthfully .
Then,
Moreover, for any , the score of is even lower (here we use the assumption that is ranked last by at least one unknown agent):
which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between and .
If Alice switches and , then the score of decreases by , which is positive by the lemma assumption, and the scores of all other candidates except do not change. So wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎
Section 7.1 can be generalized as follows.
Let and . For every integer , if , and there are known agents, then switching and may be a safe profitable manipulation.
For some votes by the known agents, all candidates have no chance to win, so the worst candidate for Alice that could win is . Therefore, switching and cannot harm, but can help better candidates win over .
Proof.
Suppose there is a subset of known agents, who vote as follows:
• known agents rank first and rank last.
• Two known agents rank first and rank last.
• In case is odd, the remaining known agent abstains.
We first show that the worst candidates for Alice () cannot win. Note that, by the lemma assumption , all these candidates receive exactly the same score by all known agents. We show that the difference in scores between and (and hence all worst candidates) is always strictly positive.
• The difference in scores given by the known agents is
• There are agents not in (including Alice). These agents can reduce the score-difference by at most . Therefore,
which is positive by the assumption and . So no candidate in has a chance to win or even tie.
Therefore, switching and can never harm Alice — the manipulation is safe.
Next, we show that the manipulation can help win. We compute the score-difference between and the other candidates with and without the manipulation.
Suppose that the agents not in vote as follows:
• unknown agents rank , where each candidate in is ranked last by at least one voter (here we use the assumption ).
• One unknown agent ranks ;
• Alice votes truthfully .
Then,
Moreover, for any , the score of is even lower (here we use the assumption that is ranked last by at least one unknown agent):
which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between and .
If Alice switches and , then the score of decreases by , which is positive by the lemma assumption, and the scores of all other candidates except do not change. As cannot win, wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎
Combining the lemmas leads to an upper bound on the RAT-degree of any positional voting rule:
Theorem 7.1.
The RAT-degree of any positional voting rule for candidates and agents is at most .
Proof.
Consider a positional voting rule with score-vector . Let be the smallest index for which (there must be such an index by definition of a score-vector).
If , then Section 7.1 implies that, for some votes by the known agents, switching and may be a safe and profitable manipulation for Alice.
Otherwise, , and Section 7.1 implies the same.
In all cases, Alice has a safe profitable manipulation. ∎
Next, we show that the plurality voting rule, which is the positional voting rule with score-vector , attains the upper bound of Theorem 7.1 (at least when is even).
Let be an even number. For the plurality voting rule, if there are at most known agents, then Alice has no safe profitable manipulation.
Proof.
For any potential manipulation by Alice we have to prove that, for any set of agents and any combination of votes by the agents of , either (1) the manipulation is not profitable (for any preference profile for the unknown agents, Alice weakly prefers to tell the truth); or (2) the manipulation is not safe (there exists a preference profile for the unknown agents such that Alice strictly prefers to tell the truth).
If the manipulation does not involve Alice’s top candidate , then it does not affect the outcome and cannot be profitable. So let us consider a manipulation in which Alice ranks another candidate at the top.
Let denote the candidate with the highest number of votes among the known agents, except Alice’s top candidate (). Consider the following two cases.
Case 1: .
Since is a candidate who got the maximum number of votes from except , this implies that all known agents either vote for or abstain. Then, it is possible that the unknown agents vote for or abstain, such that the score-difference . Then, when Alice tells the truth, her favorite candidate wins, as and the scores of all other candidates are . But when Alice manipulates and votes for , the outcome is a tie between and , which is worse for Alice. Hence, Alice’s manipulation is not safe.
Case 2: .
Then, again, the manipulation is not safe, as it is possible that the unknown agents vote as follows:
• Some agents vote for ;
• Some agents vote for . Note that this is possible as both values are non-negative and , which means that (the number of unknown agents);
• The remaining unknown agents (if any) are split evenly between and ; if the number of remaining unknown agents is odd, then the extra agent votes for .
We now prove that the manipulation is harmful for Alice. We distinguish three sub-cases. Denote by the set of remaining unknown agents mentioned in the third bullet above:
• If (which means that ), then the scores of and without Alice’s vote are and respectively, which are at least and respectively (as ). The scores of all other candidates are . When Alice is truthful, the outcome is a tie between and ; when Alice manipulates and votes for , wins, which is worse for Alice.
• If and it is even, then without Alice’s vote, is strictly higher than the scores of all other candidates, and higher than by exactly . When Alice is truthful, the outcome is a tie between and . But when Alice manipulates and votes for , either wins or there is a tie between and ; both outcomes are worse for Alice.
• If and it is odd, then without Alice’s vote, , and both scores are at least the maximum score among the other candidates. When Alice is truthful, her favorite candidate wins. But when Alice manipulates and votes for , then either wins, or there is a tie between and (and possibly some other candidates); both outcomes are worse for Alice.
Thus, in all cases, Alice does not have a safe profitable manipulation. ∎
Combining Section 7.1 and Section 7.1 gives the exact RAT-degree of plurality voting.
Corollary 7.2.
When and and is even, the RAT-degree of plurality voting is .
7.2. Positional voting rules: tighter upper bounds
We now show that positional voting rules may have a RAT-degree substantially lower than plurality.
The following lemma strengthens Section 7.1.
Let be an integer. Consider a positional voting setting with candidates and agents. Denote the sum of the highest scores and the sum of the lowest scores.
If and there are known agents, where
then switching the bottom two candidates ( and ) may be a safe profitable manipulation for Alice. {proofsketch} The proof has a similar structure to that of Section 7.1. Note that the expression at the right-hand side can be as small as (for the anti-plurality rule), which is much smaller than the known agents required in Section 7.1. Still, we can prove that, for some reports of the known agents, the score of is necessarily lower than the arithmetic mean of the scores of the candidates . Hence, it is lower than at least one of these scores. Therefore , still cannot win, so switching and is safe.
Proof.
Suppose there is a subset of known agents, who vote as follows:
• known agents rank , then all candidates in an arbitrary order, then the rest of the candidates in an arbitrary order, and lastly .
• Two known agents rank , then all candidates in an arbitrary order, then the rest of the candidates in an arbitrary order, and lastly .
We first show that cannot win. Denote . We show that the difference in scores between some of the candidates in and is always strictly positive.
• The known agents rank all candidates in at the top positions. Therefore, each agent gives all of them together a total score of . So
• There are agents not in (including Alice). Each of these agents gives all candidates in together at least , and gives at most points. Therefore, we can bound the sum of score differences as follows:
The assumption on implies that this expression is positive. Therefore, for at least one , . So has no chance to win or even tie. Therefore, switching and is a safe manipulation.
Next, we show that the manipulation can help win, when the agents not in vote as follows:
• unknown agents rank , where each candidate except is ranked last by at least one voter (here we use the assumption : the condition on implies , so and ).
• One unknown agent ranks ;
• Alice votes truthfully .
• If there are remaining unknown agents, then they are split evenly between and (if the number of remaining agents is odd, then the last one abstains).
Then,
Moreover, for any , the score of is even lower (here we use the assumption that is ranked last by at least one unknown agent):
which is positive by the lemma assumption. Therefore, when Alice is truthful, the outcome is a tie between and .
If Alice switches and , then the score of decreases by , which is positive by the lemma assumption, and the scores of all other candidates except do not change. So wins, which is better for Alice than a tie. Therefore, the manipulation is profitable. ∎
In particular, for the anti-plurality rule the condition in Section 7.2 for is .
Corollary 7.3.
Let and . The RAT-degree of anti-plurality is at most .
Intuitively, the reason that anti-plurality fares worse than plurality is that, even with a small number of known agents, it is possible to deduce that some candidate has no chance to win, and therefore there is a safe manipulation.
While we do not yet have a complete characterization of the RAT-degree of positional voting rules, our current results already show the strategic importance of the choice of scores.
7.3. Higher RAT-degree?
Theorem 7.1 raises the question of whether some other, non-positional voting rules have RAT-degrees substantially higher than . Using ideas similar to those in Section 5.4, we could use a selection rule to choose a “dictator”, and implement the dictator’s first choice. This deterministic mechanism has RAT-degree , as without knowledge of all other agents’ inputs, every manipulation might cause the manipulator to lose the chance of being a dictator. However, besides the fact that this is an unnatural mechanism, it suffers from other problems such as the no-show paradox (a participating voter might affect the selection rule in a way that will make another agent a dictator, which might be worse than not participating at all).
Our main open problem is therefore to devise natural voting rules with a high RAT-degree. {open} Does there exist a non-dictatorial voting rule that satisfies the participation criterion (i.e. does not suffer from the no-show paradox), with RAT-degree larger than ?
8. Stable Matchings
In this section, we consider mechanisms for stable matchings. Here, the agents are divided into two disjoint subsets, and , that need to be matched to each other. The most common examples are men and women or students and universities. Each agent has a strict preference order over the agents in the other set and being unmatched – for each , an order over ; and for each an order, , over .
A matching between and is a mapping from to such that (1) for each , (2) and for each , and (3) if and only if for any . When it means that agent is unmatched under . A matching is said to be stable if (1) no agent prefers being unmatched over their assigned match, and (2) there is no pair such that prefers over his assigned match while prefers over her assigned match – and .
A mechanism in this context gets the preference orders of all agents and returns a stable matching.
Results
Our results for this problem are preliminary, so we provide only a brief overview here, with full descriptions and proofs in the appendix. We believe, however, that this is an important problem and that our new definition opens the door to many further questions.
We first analyze the deferred acceptance mechanism and prove that its RAT-degree is at most , showing that it is -known-agents safely manipulable. The proof relies on truncation, where an agent in falsely reports preferring to remain unmatched over certain options. We further show that even without truncation, the RAT-degree is at most .
Finally, we examine the Boston mechanism and establish an upper bound of on its RAT-degree.
8.1. Deferred Acceptance (Gale-Shapley)
The deferred acceptance algorithm (gale1962college) is one of the most well-known mechanisms for computing a stable matching. In this algorithm, one side of the market - here, - proposes, while the other side - - accepts or rejects offers iteratively. {toappendix}
8.2. Deferred Acceptance (Gale-Shapley): descriptions and proofs
The algorithm proceeds as follows (a code sketch is given after the list):
(1) Each proposes to his most preferred alternative according to that has not rejected him yet and that he prefers over being unmatched.
(2) Each tentatively accepts her most preferred proposal according to that she prefers over being unmatched, and rejects the rest.
(3) The rejected agents propose to their next most preferred choice as in step 1.
(4) The process repeats until none of the rejected agents wishes to make a new proposal.
(5) The final matching is determined by the last set of accepted proposals.
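A minimal code sketch of the deferred acceptance procedure described above, under the assumption that preference lists omit unacceptable partners (so an agent prefers being unmatched to anyone not on her list); the names and data structures are illustrative, not part of the paper's formal setup.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Deferred acceptance with one side proposing (sketch).

    `proposer_prefs[x]` and `receiver_prefs[y]` are preference lists over the
    other side; anyone omitted from a list is considered worse than being
    unmatched. Returns a dict mapping matched proposers to their partners.
    """
    tentative = {y: None for y in receiver_prefs}   # receiver -> current proposer
    next_choice = {x: 0 for x in proposer_prefs}    # index of x's next proposal
    free = list(proposer_prefs)

    def rank(y, x):
        prefs = receiver_prefs[y]
        return prefs.index(x) if x in prefs else None

    while free:
        x = free.pop()
        if next_choice[x] >= len(proposer_prefs[x]):
            continue                                # x has exhausted his list
        y = proposer_prefs[x][next_choice[x]]
        next_choice[x] += 1
        current = tentative[y]
        if rank(y, x) is None:                      # y prefers staying unmatched
            free.append(x)
        elif current is None or rank(y, x) < rank(y, current):
            tentative[y] = x                        # y tentatively accepts x
            if current is not None:
                free.append(current)                # the displaced proposer is free again
        else:
            free.append(x)                          # y rejects x
    return {x: y for y, x in tentative.items() if x is not None}
```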
It is well known that the mechanism is truthful for the proposing side () but untruthful for the other side (). That is, the agents in may have an incentive to misreport their preferences to obtain a better match. We prove that:
The RAT-degree of deferred acceptance is at most .
Proof.
To prove that the RAT-degree is at most , we show that it is -known-agents manipulable.
Let and assume without loss of generality that (the preferences between the other alternatives are irrelevant).
Consider the case where the -known-agents are as follows:
• Let be an agent whose preferences are .
• The preferences of are .
• The preferences of are .
In this case, when tells the truth, the resulting matching includes the pairs and , since in this case it proceeds as follows:
• In the first step, all the agents in propose to their most preferred option: proposes to and proposes to . Then, the agents in tentatively accept their most preferred proposal among those received, as long as they prefer it to remaining unmatched: tentatively accepts since she prefers him over being unmatched, and since must be her most preferred option among the proposers as (her top choice) did not propose to her. Similarly, tentatively accepts .
• In the following steps, more rejected agents in might propose to and , but they will not switch their choices, as they prefer and , respectively.
Thus, when the algorithm terminates, is matched to and is matched to , which means that is matched to her second-best option.
We shall now see that can increase her utility by misreporting that her preference order is: . The following shows that in this case, the resulting matching includes the pairs and , meaning that is matched to her most preferred option (instead of her second-best).
• In the first step, as before, proposes to and proposes to . However, here, rejects because, according to her false report, she prefers being unmatched over being matched to . As before, tentatively accepts .
• In the second step, , having been rejected by , proposes to his second-best choice . Since prefers over , she rejects and tentatively accepts .
• In the third step, , having been rejected by , proposes to his second-best choice . now tentatively accepts since, according to her false report, she prefers him over being unmatched.
• In the following steps, more rejected agents in might propose to and , but they will not switch their choices, as they prefer and , respectively.
Thus, when the algorithm terminates, will be matched to , and will be matched to .
Hence, regardless of the reports of the remaining (unknown) agents, strictly prefers to misreport her preferences. ∎
The proof is based on a key type of manipulation in this setting called truncation (roth1999truncation; ehlers2008truncation; coles2014optimal) – where agents in falsely report that they prefer being unmatched rather than being matched with certain agents — even though they actually prefer these matches to being unmatched.
However, in some settings, it is reasonable to assume that agents always prefer being matched if possible. In such cases, the mechanism is designed to accept only preferences over agents from the opposite set (or equivalently, orders where being unmatched is always the least preferred option). Clearly, under this restriction, truncation is not a possible manipulation. We prove that even when truncation is not possible, the RAT-degree is bounded.
Without truncation, the RAT-degree of deferred acceptance is at most .
Proof.
To prove that the RAT-degree is at most , we show that it is -known-agents manipulable.
Let and assume without loss of generality that (the preferences between the other alternatives are irrelevant).
Consider the case where the -known-agents are as follows:
• Let be an agent whose preferences are .
• Let be an agent whose preferences are .
• The preferences of are .
• The preferences of are .
• The preferences of are .
In this case, when tells the truth, the resulting matching includes the pairs , and , since in this case it proceeds as follows:
• In the first step, all the agents in propose to their most preferred option: proposes to , while and propose to . Then, the agents in tentatively accept their most preferred proposal among those received. tentatively accepts since he must be her most preferred option among the proposers – as he is her second-best and her top choice, , did not propose to her – and rejects . Similarly, tentatively accepts . does not receive any proposals.
• In the second step, , having been rejected by , proposes to his second-best choice . Since prefers her current match over , she rejects .
• In the third step, , having been rejected by , proposes to his third-best choice . Since does not have a match, she tentatively accepts .
• In the following steps, more rejected agents in – other than and – might propose to and , but they will not switch their choices, as any new proposer can only be less preferred than their current match.
Thus, when the algorithm terminates, is matched to , is matched to , and is matched to , which means that is matched to her second-best option.
However, suppose misreports that her preference order is . The following shows that in this case, the resulting matching includes the pairs , and , meaning that is matched to her most preferred option (instead of her second-best).
• In the first step, as before, proposes to , while and propose to . However, here, tentatively accepts and rejects . As before, tentatively accepts , and does not receive any proposals.
• In the second step, , having been rejected by , proposes to his second-best choice . Since prefers over , she rejects and tentatively accepts .
• In the third step, , having been rejected by , proposes to his second-best choice . tentatively accepts since, according to her false report, she prefers him over .
• In the fourth step, , having been rejected by , proposes to his second-best choice . prefers her current match , and thus rejects .
• In the fifth step, , having been rejected by , proposes to his third-best choice . As does not have a match, she tentatively accepts .
• In the following steps, more rejected agents in – other than and – might propose to and , but they will not switch their choices, as any new proposer can only be less preferred than their current match.
Thus, when the algorithm terminates, is matched to , is matched to , and is matched to .
Hence, regardless of the reports of the remaining (unknown) agents, strictly prefers to misreport her preferences. ∎
8.3. Boston Mechanism
The Boston mechanism (abdulkadirouglu2003school) is a widely used mechanism for assigning students to schools. {toappendix}
8.4. Boston Mechanism: descriptions and proofs
The mechanism proceeds in rounds as follows (a code sketch is given after the list):
(1) Each proposes to his most preferred alternative according to that has not yet rejected him and is still available.
(2) Each (permanently) accepts her most preferred proposal according to and rejects the rest. Those who accept a proposal become unavailable.
(3) The rejected agents propose to their next most preferred choice as in step .
(4) The process repeats until all agents are either assigned or have exhausted their preference lists.
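A minimal code sketch of the rounds described above, assuming one seat per agent on the accepting side and that omission from a preference list means “unacceptable”; the names and data structures are illustrative, not part of the paper's formal setup.

```python
def boston_mechanism(proposer_prefs, receiver_prefs):
    """Boston (immediate-acceptance) mechanism (sketch).

    In each round, every still-unmatched proposer proposes to his most preferred
    receiver that is still available and has not rejected him; each available
    receiver permanently accepts her most preferred acceptable proposal.
    """
    matched = {}                                     # receiver -> proposer (final)
    rejected_by = {x: set() for x in proposer_prefs}
    unmatched = set(proposer_prefs)

    def rank(y, x):
        prefs = receiver_prefs[y]
        return prefs.index(x) if x in prefs else None

    while True:
        proposals = {}                               # receiver -> proposers this round
        for x in unmatched:
            options = [y for y in proposer_prefs[x]
                       if y not in matched and y not in rejected_by[x]]
            if options:
                proposals.setdefault(options[0], []).append(x)
        if not proposals:
            break
        for y, proposers in proposals.items():
            acceptable = [x for x in proposers if rank(y, x) is not None]
            if acceptable:
                chosen = min(acceptable, key=lambda x: rank(y, x))
                matched[y] = chosen                  # acceptance is permanent
                unmatched.discard(chosen)
            for x in proposers:
                if not acceptable or x != chosen:
                    rejected_by[x].add(y)            # everyone else is rejected
    return {x: y for y, x in matched.items()}
```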
We prove that:
The RAT-degree of the Boston mechanism is at most .
Proof.
To prove that the RAT-degree is at most , we show that it is -known-agents manipulable.
Let be an agent with preferences . Consider the case where the two known-agents are as follows:
• Let be an agent whose preferences are similar, .
• The preferences of are .
When reports truthfully, the mechanism proceeds as follows: In the first round, both and propose to . Since prefers , she rejects , and becomes unavailable. Thus, in the second round, proposes to .
However, we prove that has a safe and profitable manipulation: misreports his preference as . The manipulation is safe since never had a chance to be matched with (regardless of his report). The manipulation is profitable, as there exists a case where it improves ’s outcome. Consider the case where the top choice of is and there is another agent, , whose top choice is . Notice that prefers over . When reports truthfully, in the first round proposes to and gets accepted, making her unavailable by the time reaches her in the second round. However, if misreports and proposes to in the first round, she will accept him (as she prefers him over ). This guarantees that is matched to , improving his outcome compared to truthful reporting. ∎
9. Discussion and Future Work
Our main goal in this paper is to encourage a more quantitative approach to truthfulness that can be applied to various problems. When truthfulness is incompatible with other desirable properties, we aim to find mechanisms that are “as hard to manipulate as possible”, where hardness is measured by the amount of knowledge required for a safe manipulation. We have considered several alternatives towards the same goal.
Using Randomization
If we used randomness, we could eliminate all safe manipulations. Consider for example the following voting rule:
-
•
With some small probability , run random dictatorship, that is, choose a random voter and elect his top candidate;
-
•
Otherwise, run plurality voting (or any other voting rule with desirable properties).
Due to the small probability of being a dictator, each voter might lose from manipulation, so no manipulation is safe. This rule is not entirely far-fetched: governments do sometimes use randomization to encourage truth-telling in tax reports (see e.g. (haan2012sound)). However, randomization is very unlikely to be acceptable in high-stakes voting scenarios.
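As a concrete illustration, here is a minimal sketch of such a rule, assuming ballots are strict rankings; the function name randomized_plurality and the parameter eps are ours and purely illustrative.

import random

def randomized_plurality(ballots, candidates, eps=0.01):
    """With probability eps, run random dictatorship; otherwise run plurality.

    ballots[i]  -- voter i's ranking of candidates, best first
    candidates  -- list of candidates (also used for deterministic tie-breaking)
    """
    if random.random() < eps:
        # Random dictatorship: a uniformly random voter's top choice wins.
        return random.choice(ballots)[0]
    # Plurality: the candidate with the most first-place votes wins.
    counts = {c: 0 for c in candidates}
    for ranking in ballots:
        counts[ranking[0]] += 1
    return max(candidates, key=lambda c: counts[c])

Under such a rule, a voter who misreports her top choice risks a worse outcome in the event that she is chosen as the dictator, which is why no manipulation is safe.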
Beyond Worst-Case
We defined the RAT-degree as a “worst case” concept: to prove an upper bound, we find a single example of a safe manipulation. This is similar to the situation with the classic truthfulness notion, where to prove non-truthfulness, it is sufficient to find a single example of a manipulation. To go beyond the worst case, one could follow relaxations of truthfulness, such as truthful-in-expectation (lavi2011truthful) or strategyproofness-in-the-large (azevedo2019strategy), and define similarly “RAT-degree in expectation” or “RAT-degree in the large”.
Alternative Information Measurements
Another avenue for future work is to study other ways to quantify truthfulness. For example, instead of counting the number of known agents, one could count the number of bits that an agent should know about other agents’ preferences in order to have a safe manipulation. The disadvantage of this approach is that different domains have different input formats, and therefore it would be hard to compare numbers of bits in different domains (see Appendix A for more details).
Changing Quantifiers
One could argue for a stronger definition requiring that a safe manipulation exists for every possible set of known-agents, rather than for some set, or similarly for every possible preference profile of the known agents rather than just for some profile. However, we believe such definitions would be less informative, as in many cases, a manipulation that is possible for some set of known-agents is not possible for every such set. For example, in the first-price auction with discount (see Section 4.2), the RAT-degree is under our definition. But if we required that knowledge of any agent’s bid (rather than just one specific agent’s) would allow a safe manipulation, the degree would automatically jump to , making the measure far less meaningful.
Combining the “known agents” concept with other notions
We believe that the “known agents” approach can be used to quantify the degree to which a mechanism is robust to other types of manipulations (besides safe manipulations), such as “always-profitable” manipulations or “obvious” manipulations. Accordingly, one can define the “max-min-strategyproofness degree” or the “NOM degree” (see Appendix A).
Appendix A Related Work: Extended Discussion
A.1. Truthfulness Relaxations
The large number of impossibility results has led to extensive research into relaxations of truthfulness. A relaxed truthfulness notion usually focuses on a certain subset of all possible manipulations, which are considered more “likely”. It requires that none of the manipulations from this subset is profitable. Different relaxations consider different subsets of “likely” manipulations.
RAT
Closest to our work is the recent paper by (BU2023Rat), which introduces the definition upon which this paper builds – risk-avoiding truthfulness (RAT). The definition assumes that agents avoid risk – they manipulate only when the manipulation is sometimes beneficial but never harmful. We first note that their original term for this concept is risk-averse truthfulness. However, since the definition assumes that agents completely avoid any element of risk, we adopt this new name, aiming to more accurately reflect this assumption.
We extend their work in two key directions. First, we generalize the definition from cake-cutting to any social choice problem. Second, we move beyond a binary classification of whether a mechanism is RAT to a quantitative measure of its robustness to manipulation by such agents. Our new definition provides deeper insight into the mechanism’s robustness. In Section 6, we also analyze the mechanisms proposed in their paper alongside additional mechanisms for cake-cutting.
Their paper (in Section 5) provides an extensive comparison between RAT and related relaxations of truthfulness. For completeness, we include some of these comparisons here as well.
Maximin Strategy-Proofness
Another related relaxation of truthfulness, introduced by brams2006better, assumes the opposite extreme to the standard definition regarding the agents’ willingness to manipulate. In the standard definition, an agent is assumed to manipulate if for some combination of the other agents’ reports, it is beneficial. In contrast, this relaxation assumes that an agent will manipulate only if the manipulation is always beneficial — i.e., for any combination of the other agents’ reports.
This definition is not only weaker than the standard one but also weaker than RAT, as it assumes that agents manipulate in only a subset of the cases where RAT predicts manipulation. We believe that, in this sense, RAT provides a more realistic and balanced assumption. However, we note that a similar approach can be applied to this definition as well — the degree to which a mechanism is robust to always-profitable manipulations (rather than to safe manipulations).
NOM
troyan2020obvious introduce the notion of not-obvious manipulability (NOM), which focuses on obvious manipulations – informally, manipulations that benefit the agent either in the worst case or in the best case. It presumes that agents are boundedly rational and consider only extreme situations – the best and worst cases. A mechanism is NOM if there are no obvious manipulations.
This notion is a relaxation of truthfulness: being an obvious manipulation is a stronger requirement than merely being a profitable one, so ruling out obvious manipulations is weaker than ruling out all profitable manipulations.
However, an obvious manipulation is not necessarily a safe manipulation, nor is a safe manipulation necessarily an obvious one (see Appendix D in (BU2023Rat) for more details). Therefore, NOM and RAT are independent notions.
Regret-Free Truth-telling (RFTT)
regret2018Fernandez proposes another relaxation of truthfulness, called regret-free truth-telling. This concept also considers the agent’s knowledge about other agents but does so in a different way. Specifically, a mechanism satisfies RFTT if no agent ever regrets telling the truth after observing the outcome – meaning that she could not have safely manipulated given that only the reports that are consistent with the observed outcome are possible. Note that the agent does not observe the actual reports of others but only the final outcome (the agents are assumed to know how the mechanism works).
An ex-post-profitable manipulation is always profitable, but the opposite is not true, as it is possible that the profiles in which a manipulation could be profitable are not consistent with the outcome. This means that RFTT does not imply RAT. A safe manipulation is always ex-post-safe, but the opposite is not true, as it is possible that the manipulation is harmful for some profiles that are inconsistent with the outcome. This means that RAT does not imply RFTT.
Different Types of Risk
slinko2008nondictatorial; slinko2014ever and hazon2010complexity study “safe manipulations” in voting. Their concept of safety is similar to ours, but allows a simultaneous manipulation by a coalition of voters. They focus on a different type of risk for the manipulators: the risk that too many or too few of them will perform the exact same manipulation.
A.2. Alternative Measurements
Various measures can be used to quantify the manipulability of a mechanism. Below, we compare some existing approaches with our RAT-degree measure.
Computational Complexity
Even when an agent knows the reports of all other agents, it might be hard to compute a profitable manipulation. Mechanisms can be ranked according to the run-time complexity of this computation: mechanisms in which a profitable manipulation can be computed in polynomial time are arguably more manipulable than mechanisms in which this problem is NP-hard. This approach was pioneered by bartholdi1989computational; bartholdi1991single for voting, and applied in other social choice settings. See faliszewski2010ai; veselova2016computational for surveys.
Nowadays, with the advent of efficient SAT and CSP solvers, NP-hardness does not seem a very good defense against manipulation. Moreover, some empirical studies show that voting rules that are hard to manipulate in theory, may be easy to manipulate in practice (walsh2011hard). We claim that the main difficulty in manipulation is not the computational hardness, but rather the informational hardness — learning the other agents’ preferences.
Queries and Bits
Instead of counting the number of agents, we could count the number of bits that an agent needs to know in order to have a safe manipulation (this is similar in spirit to the concepts of communication complexity - e.g., (nisan2002communication; grigorieva2006communication; Communication2019Branzei; Babichenko2019communication) and compilation complexity - e.g., (chevaleyre2009compiling; xia2010compilation; karia2021compilation)). But this definition may be incomparable between different domains, as the bit length of the input is different. We could also measure the number of basic queries, rather than the number of agents. But then we have to define what “basic query” means, which may require a different definition for each setting. The number of agents is an objective measure that is relevant for all settings.
Probability of Having a Profitable Manipulation
Truthfulness requires that not even a single preference-profile allows a profitable manipulation. Mechanisms can be ranked according to the probability that such a profile occurs – e.g., (barrot2017manipulation; lackner2018approval; lackner2023free). For a truthful mechanism this probability is zero, and the lower the probability, the more resistant the mechanism is to profitable manipulations.
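One natural formalization (in our notation): given a distribution $\mathcal{D}$ over preference profiles and a mechanism $f$, the measure is
\[
\Pr_{v \sim \mathcal{D}}\Big[\exists\, i,\ \exists\, v_i' :\ u_i\big(f(v_i', v_{-i})\big) > u_i\big(f(v_i, v_{-i})\big)\Big],
\]
that is, the probability that the realized profile admits a profitable manipulation for some agent.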
A disadvantage of this approach is that it requires knowledge of the distribution over preference profiles; our RAT-degree does not require it.
Incentive Ratio
Another common measure is the incentive ratio of a mechanism (chen2011profitable), which describes the extent to which an agent can increase her utility by manipulating. For each agent, it considers the maximum ratio between the utility obtained by manipulating and the utility received by being truthful, given the same profile of the others. The incentive ratio is then defined as the maximum of these values across all agents.
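In symbols (our notation, assuming strictly positive utilities so that the ratio is well defined), with $f$ the mechanism, $v_i$ agent $i$'s true report, $v_i'$ a possible misreport, and $v_{-i}$ the reports of the others:
\[
\mathrm{IR}(f) \;=\; \max_{i}\ \max_{v_i,\, v_i',\, v_{-i}} \frac{u_i\big(f(v_i', v_{-i})\big)}{u_i\big(f(v_i, v_{-i})\big)}.
\]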
This measure is widely used - e.g., (chen2022incentive; li2024bounding; cheng2022tight; cheng2019improved). However, it is meaningful only when utility values have a clear numerical interpretation, such as in monetary settings. In contrast, our RAT-degree measure applies to any social choice setting, regardless of how utilities are represented.
Degree of Manipulability
aleskerov1999degree define four different indices of “manipulability” of ranked voting rules, based on the fraction of profiles in which a profitable manipulation exists; the fraction of manipulations that are profitable; and the average and maximum gain per manipulation. As the indices are very hard to compute, they estimate them on 26 voting rules using computer experiments. These indices are useful under the assumption that all profiles are equally probable (“impartial culture”), or assuming a certain mapping between ranks of candidates and their values for the agent.
andersson2014budget; andersson2014least measure the degree of manipulability by counting the number of agents who have a profitable manipulation. They define a rule as minimally manipulable with respect to a set of acceptable mechanisms if for each preference profile, the number of agents with a profitable manipulation is minimal across all acceptable mechanisms.