Byzantine Spectral Ranking
Abstract
We study the problem of rank aggregation where the goal is to obtain a global ranking by aggregating pair-wise comparisons of voters over a set of objects. We consider an adversarial setting where the voters are partitioned into two sets. The first set votes in a stochastic manner according to the popular score-based Bradley-Terry-Luce (BTL) model for pairwise comparisons. The second set comprises malicious Byzantine voters trying to deteriorate the ranking. We consider a strongly-adversarial scenario where the Byzantine voters know the BTL scores, the votes of the good voters, the algorithm, and can collude with each other. We first show that the popular spectral ranking based Rank-Centrality algorithm, though optimal for the BTL model, does not perform well even when a small constant fraction of the voters are Byzantine.
We introduce the Byzantine Spectral Ranking algorithm (and a faster variant of it), which produces a reliable ranking whenever the number of good voters exceeds the number of Byzantine voters. We also show that no algorithm can, with constant probability, produce a satisfactory ranking for all BTL weights when the Byzantine voters outnumber the good voters, so our algorithm works for essentially all possible population fractions. We support our theoretical results with experiments on synthetic and real datasets, demonstrating the failure of the Rank-Centrality algorithm under several adversarial scenarios and the robustness of the proposed Byzantine Spectral Ranking algorithm in obtaining good rankings.
1 Introduction
Rank aggregation is a fundamental task in a wide spectrum of learning and social contexts such as social choice [Soufiani et al., 2014, Caplin and Nalebuff, 1991], web search and information retrieval [Brin and Page, 1998, Dwork et al., 2001], recommendation systems [Baltrunas et al., 2010] and crowdsourcing [Chen et al., 2013]. Given pairwise comparison information over a set of objects, the goal is to identify a ranking that best respects the revealed preferences. Frequently, we are also interested in the scores associated with the objects to deduce the intensity of the resulting preference. There have been several solutions to this problem including Spectral Ranking methods such as Rank-Centrality [Negahban et al., 2017] and MLE methods [Ford, 1957].
The Bradley-Terry-Luce (BTL) model [Bradley and Terry, 1952, Luce, 1959] has been prevalent for a vast variety of ranking algorithms. The BTL model assumes that there are $n$ objects to be compared and that each object $i$ has a positive weight $w_i$ associated with it. Whenever a voter is asked to compare a pair $(i, j)$, the voter claims $i$ is better than $j$ with probability $\frac{w_i}{w_i + w_j}$, independently at random. However, the BTL model assumes that all voters are identical and use the same probabilistic strategy for voting. Generally, in crowd-sourced settings, voters differ significantly from each other. Some of the voters may be spammers or even direct competitors of the entity ranking the objects, leading to data poisoning [Sun et al., 2021]. To model this situation, we assume the voter population is split into two groups: good voters, who vote according to the BTL model, and Byzantine voters, who can potentially be controlled by an adversary that centrally coordinates their behavior.
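As a concrete illustration of the BTL voting model described above, here is a minimal simulation sketch (the helper name and parameters are ours, not from the paper):

```python
import numpy as np

def sample_btl_votes(w, pairs, k, rng=None):
    """Simulate k independent BTL votes for each pair (i, j).

    w     : array of positive BTL weights, one per object
    pairs : list of (i, j) index pairs to compare
    k     : number of votes per pair
    Returns a dict mapping (i, j) -> number of votes for i over j.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    votes = {}
    for i, j in pairs:
        p_i = w[i] / (w[i] + w[j])        # P(a good voter prefers i over j)
        votes[(i, j)] = rng.binomial(k, p_i)
    return votes

w = np.array([4.0, 2.0, 1.0])
votes = sample_btl_votes(w, [(0, 1), (0, 2), (1, 2)], k=1000)
```

With $k = 1000$ votes per pair, the empirical win fractions concentrate around the BTL probabilities, e.g. around $4/6$ for the pair $(0, 1)$.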
1.1 Overview of our Results
We first consider the tolerance of the Rank-Centrality algorithm against a constant fraction of Byzantine voters. Negahban et al. [2017] showed the following for an Erdős-Rényi pairwise-comparison graph $G(n, p)$:
Theorem 1 (Informal from Negahban et al. [2017]).
If $p = \Omega\!\left(\frac{\log n}{n}\right)$ and $kpn = \Omega(\log n)$, then the Rank-Centrality algorithm outputs a weight distribution $\tilde{\pi}$ such that, with high probability (whenever we say with high probability, we mean with probability at least $1 - n^{-c}$ for some constant $c > 0$),
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = O\!\left(\sqrt{\frac{\log n}{kpn}}\right).$$
Here $\pi$ denotes the true (normalized) weights of the objects and $k$ is the number of voter queries asked per edge in $G$. When $kpn = \omega(\log n)$, the RHS goes to $0$ as $n$ goes to $\infty$. We show that a strategy as simple as all Byzantine voters voting the reverse of the weights causes the Rank-Centrality algorithm to output weights far from the true weights with high probability.
Theorem 2 (Informal).
There exists a strategy that the Byzantine voters can use such that the Rank-Centrality algorithm outputs a distribution $\tilde{\pi}$ with
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = \Omega(1)$$
with high probability, provided a constant fraction of the voters are Byzantine.
Even simple algorithms that output the same weights regardless of the voters' responses incur $\Omega(1)$ relative error. To tackle this situation, we propose Byzantine Spectral Ranking, a polynomial-time algorithm that removes some of the voters so that the Rank-Centrality algorithm can then be safely applied. As long as the fraction of Byzantine voters is bounded strictly below $1/2$, the algorithm removes voters in such a way that the remaining Byzantine voters are unlikely to cause the Rank-Centrality algorithm to deviate significantly.
Theorem 3 (Informal).
For a range of values of $k$, there exists an algorithm, running in time exponential in $d_{\max}$ (and hence polynomial in $n$ for the graphs we consider), that removes voters such that with high probability we have:
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = O\!\left(\sqrt{\frac{\log n}{kpn}}\right).$$
Here $d_{\max}$ is the maximum degree of the comparison graph $G$. Finally, we also show that when the Byzantine voters outnumber the good voters, there is no algorithm that can solve this problem for all weights with probability greater than $1/2$: we exhibit a strategy that the Byzantine voters can use under two different weight distributions, and show that any algorithm must therefore fail on at least one of them.
Theorem 4 (Informal).
If the Byzantine voters are at least as numerous as the good voters, then there is no algorithm that can, for all weights $w$, output a weight distribution $\tilde{\pi}$ satisfying $\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = o(1)$ with probability greater than $\frac{1}{2}$.
1.2 Related Works
In recent times, there has been a lot of work on building robust machine learning algorithms [Blanchard et al., 2017, El Mhamdi et al., 2018, Li et al., 2020, El-Mhamdi et al., 2021, Chen et al., 2018, Wu et al., 2021] that are resilient to Byzantine faults.
Ranking from pairwise comparisons has been a very active field of research, and multiple algorithms have been proposed for the BTL model [Negahban et al., 2017, Agarwal et al., 2018, Rajkumar and Agarwal, 2014, Chen and Suh, 2015]. One of the most popular algorithms proposed in recent times has been the Rank-Centrality algorithm. Negahban et al. [2017] showed an upper bound on the relative $\ell_2$-error (between the true weights and the proposed weights) that goes to $0$ for larger $n$. They did this by finding the stationary distribution of the Markov chain transition matrix $P$, which was defined as
$$P_{ij} = \begin{cases} \frac{1}{d_{\max}}\, A_{ij} & \text{if } i \neq j \\[2pt] 1 - \frac{1}{d_{\max}} \sum_{l \neq i} A_{il} & \text{if } i = j \end{cases} \qquad (1)$$
Here $A_{ij}$ is the number of times $j$ beats $i$ in the $k$ queries divided by $k$ if $(i, j) \in E$, and $A_{ij} = 0$ otherwise. The pseudocode can be found in Algorithm 1. They also showed that their algorithm outperformed other algorithms on various datasets. As our work builds on Negahban et al. [2017], we have included a primer on Rank-Centrality in Appendix A.1.
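To make this concrete, here is a minimal, hedged sketch of spectral ranking via the stationary distribution of the matrix in Equation 1; the helper name and arguments are ours, and power iteration is just one of several ways to compute the stationary distribution:

```python
import numpy as np

def rank_centrality(n, wins, d_max, iters=10000, tol=1e-10):
    """Estimate scores as the stationary distribution of the empirical
    transition matrix of Equation 1.

    wins[(i, j)] = fraction of the comparisons of pair (i, j) won by j.
    """
    P = np.zeros((n, n))
    for (i, j), frac_j in wins.items():
        P[i, j] = frac_j / d_max            # chain moves toward the winner
        P[j, i] = (1.0 - frac_j) / d_max
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))  # self-loops keep rows summing to 1
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                  # power iteration for stationarity
        nxt = pi @ P
        done = np.abs(nxt - pi).sum() < tol
        pi = nxt
        if done:
            break
    return pi / pi.sum()

# exact BTL win fractions for weights w = (4, 2, 1): wins[(i, j)] = w_j / (w_i + w_j)
wins = {(0, 1): 2 / 6, (0, 2): 1 / 5, (1, 2): 1 / 3}
pi = rank_centrality(3, wins, d_max=2)
```

With the exact BTL win fractions, the recovered stationary distribution is proportional to the true weights; this reversibility property is what Rank-Centrality exploits.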
Besides research on adversarial attacks in ranking, there has also been research on the robustness of ranking algorithms when exposed to noise. Cucuringu [2016] proposed Sync-Rank, viewing the ranking problem as an instance of the group synchronization problem over the group of planar rotations. This led to a robust algorithm for ranking from pairwise comparisons, as was verified on synthetic and real-world data. Mohajer and Suh [2016] worked on a stochastic noise model for finding top-$K$ rankings. They derived a bound on the minimum number of samples required for ranking and provided a linear-time algorithm that achieves it. However, both of these works [Cucuringu, 2016, Mohajer and Suh, 2016] primarily tackle ranking from noisy and incomplete data, as opposed to an adversary actively trying to disrupt the algorithm.
Suh et al. [2017] worked on the adversarial ranking problem, considering two cases: one where the fraction of adversarial voters is known and one where it is unknown. They obtained results asymptotically equivalent to Chen and Suh [2015]'s Spectral-MLE algorithm despite the presence of adversarial voters. However, they worked under the significantly weaker assumption that the adversarial voters vote exactly opposite to the faithful voters (i.e. if the good voters vote that $i$ is better than $j$ with probability $p$, then the Byzantine voters do so with probability $1 - p$). Our setting is much more general in that we make no assumptions about the Byzantine voters.
Agarwal et al. [2020] worked on rank aggregation when edges in the comparison graph are compromised. They proved fundamental results on the identifiability of weights for both general graphs and Erdős-Rényi graphs, and gave polynomial-time algorithms for identifying these weights. However, their model is significantly different from ours. They assume that some of the comparisons in the comparison graph may be completely compromised while the others are free from any kind of corruption. In practice this is unlikely to happen; we assume a stronger model where every edge can have both good and Byzantine voters. A detailed comparison with Agarwal et al. [2020] is made in Section 3.5.
2 Problem Setting
Given a set of $n$ objects, our goal is to obtain a ranking from a set of pairwise comparisons over the objects. We assume that the pairs to be compared are determined by an Erdős-Rényi comparison graph $G(n, p)$. Each edge in the graph corresponds to a pair that can be compared, and we allow at most $k$ comparisons per pair.
The pairs can be compared by voters who fall into one of two categories - good or Byzantine. Out of the total pool of voters, a fraction are assumed to be Byzantine. Every good voter, when queried to compare a pair of objects $(i, j)$, votes stochastically according to a Bradley-Terry-Luce (BTL) preference model parametrized by a weight/score vector $w$, where the probability of preferring object $i$ over object $j$ is given by $\frac{w_i}{w_i + w_j}$. The Byzantine voters, on the other hand, can vote however they wish (stochastically or deterministically). To make the problem general, we consider a powerful central adversary that can potentially control all of the Byzantine voters. We assume that the Byzantine adversary knows (1) the queries asked to the good voters and their corresponding votes, (2) the comparison graph $G$, (3) the learning algorithm that is used to produce the ranking, and (4) the underlying true BTL weights of the objects that the good voters use to produce their comparisons.
We assume that both the good voters and the Byzantine voters will produce the same output when asked to compare the same pair multiple times. (This ensures that simple algorithms which query all voters on the same pair multiple times and take the median preference will not work.)
The goal of a learning algorithm is to use the fewest voter comparisons to output a score vector that is as close as possible to the true scores. The learning algorithm is allowed to choose any voter to compare any pair of objects. However, it does not know the type of a voter - good or Byzantine - and naturally, a good algorithm must be robust to the presence of the powerful Byzantine voters. Finally, we work in the passive setting, where the learning algorithm must fix the pair-to-voter mapping a priori, before the votes are collected.
3 Results for the Erdős-Rényi Graph
We focus our efforts on the Erdős-Rényi graph primarily because it has been heavily studied in the context of ranking. We pay careful attention to the degrees of the comparison graph, since our algorithms turn out to be exponential in the maximum degree. This makes the Erdős-Rényi graph a natural candidate for our algorithms: in the sparse regime $p = \Theta\!\left(\frac{\log n}{n}\right)$, its degrees are logarithmic in $n$ with high probability, giving us polynomial-time algorithms.
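Since the analysis leans on the degrees of $G(n, p)$ being logarithmic in the sparse regime, here is a small sketch (our own helper, not from the paper) of sampling such a comparison graph and checking its degrees:

```python
import numpy as np

def er_comparison_graph(n, p, rng):
    """Sample an Erdős-Rényi comparison graph G(n, p) as a list of edges."""
    mask = rng.random((n, n)) < p
    return [(i, j) for i in range(n) for j in range(i + 1, n) if mask[i, j]]

rng = np.random.default_rng(0)
n = 200
p = 10 * np.log(n) / n          # sparse regime: expected degree Theta(log n)
edges = er_comparison_graph(n, p, rng)
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
```

At this density the degrees concentrate tightly around $pn = \Theta(\log n)$, which is what keeps the subset enumeration in our algorithms polynomial in $n$.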
3.1 Rank-Centrality Failure
We first motivate the need for a robust version of the Rank-Centrality algorithm by showing that even a simple strategy by a Byzantine adversary leads to an unsuitable ranking. In this scenario, the Byzantine adversary does not know the comparison graph, nor does it know the votes of the good voters, and it can still beat the Rank-Centrality algorithm. Formally, we show that:
Theorem 5.
Given $n$ objects, let the comparison graph be $G(n, p)$ with $p = \Omega\!\left(\frac{\log n}{n}\right)$. Suppose each pair in $E$ is compared $k$ times by voters chosen uniformly at random amongst the voters, a constant fraction of whom are Byzantine. If all Byzantine voters vote according to the opposite-weight strategy, then there exists a weight vector $w$ for which the Rank-Centrality algorithm outputs a $\tilde{\pi}$ whose relative error $\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|}$ is lower-bounded by a positive constant depending on the Byzantine fraction and the skew $b$, with probability $1 - o(1)$. Here $b$ is the skew in object weights, i.e. $b = \max_{i,j} \frac{w_i}{w_j}$.
Here, by the opposite-weight strategy we mean the strategy in which a Byzantine voter asked to compare $(i, j)$ votes for $i$ if $w_i < w_j$ and otherwise votes for $j$. The lower bound remains bounded away from zero whenever a constant fraction of the voters is Byzantine, which establishes the existence of a strategy that, with high probability, significantly deviates the weights given by the Rank-Centrality algorithm. The detailed analysis can be found in Appendix A.2.
Proof Sketch.
We initially show that
$$\|\tilde{\pi} - \pi\| \;\geq\; \frac{\|\pi^T \Delta\|}{1 + \|\Delta\|_2}. \qquad (2)$$
Here $\tilde{\pi}$ is the stationary distribution of the observed transition matrix $\tilde{P}$ (cf. Equation 1) and $\Delta$ denotes the fluctuation of the transition matrix around its mean $P$, so that $\tilde{P} = P + \Delta$. We first prove that $\|\Delta\|_2$ is at most $1$, since there are at most $d_{\max}$ nonzero terms in any row or column and each term can deviate by at most $\frac{1}{d_{\max}}$. Following this, we show that with high probability $\|\pi^T \Delta\|$ is bounded away from zero, by constructing a weight distribution in which half the weights are low and half are high. An arbitrary weight distribution could not be used: since the row sums of both $\tilde{P}$ and $P$ are always $1$, the row sums of $\Delta$ vanish, and for some weight distributions this forces $\pi^T \Delta$ to be negligible. ∎
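The effect of the opposite-weight strategy on an edge's empirical win fraction can be sketched directly; the helper below is illustrative (our own, with $F$ denoting the Byzantine fraction), not part of the paper's analysis:

```python
def mixed_win_fraction(w_i, w_j, F):
    """Expected fraction of votes for i over j when a fraction F of the
    voters use the opposite-weight strategy (deterministically backing the
    lower-weight object) and the rest vote according to the BTL model."""
    p_good = w_i / (w_i + w_j)              # BTL probability of i beating j
    byz_vote_for_i = 1.0 if w_i < w_j else 0.0
    return (1 - F) * p_good + F * byz_vote_for_i

# with w_i = 4, w_j = 1: BTL alone gives 0.8, but 30% Byzantine voters pull
# the observed fraction down to 0.7 * 0.8 = 0.56
skewed = mixed_win_fraction(4.0, 1.0, 0.3)
```

As $F$ grows, the observed fractions drift toward the reversed ordering, which is exactly the perturbation $\Delta$ that the proof lower-bounds.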
3.2 Byzantine Spectral Ranking
In this section, we propose a novel algorithm called Byzantine Spectral Ranking (BSR). The BSR algorithm relies on asking multiple voters multiple queries and, based on the collective responses, decides whether a voter is acting suspiciously or whether it is plausible that the voter is good. For each object $u$ to be compared, we take a set $Q$ of $d$ objects to compare it against and have voters compare $u$ with all of the objects in $Q$ (leading to $d$ comparisons for each voter). Based on the voters' responses, we try to eliminate voters who are likely to be Byzantine. The BSR algorithm ensures that a constant fraction of good voters remains and that any remaining Byzantine voters have votes similar to good voters. Intuitively, the BSR algorithm goes over every subset $S \subseteq Q$ and tries to detect whether the Byzantine voters have collectively voted strongly in favor of or against this subset. The pseudocode can be found in Algorithm 2.
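The subset-based filtering idea can be sketched in a few lines. This is a simplified illustration only: the threshold function and the use of a plain empirical mean are our simplifications, not the paper's exact removal rule in Algorithm 2:

```python
import itertools
import numpy as np

def bsr_filter(votes, thresh):
    """Simplified sketch of the BSR filtering step for one object u.

    votes[v][q] = 1 if voter v preferred u over object q, else 0
    (an m x d 0/1 array; d = |Q|).  For every subset S of the d comparison
    objects, voters whose vote count on S strays from the mean over the
    surviving voters by more than thresh(|S|) are removed.
    Returns the indices of retained voters.
    """
    m, d = votes.shape
    keep = np.ones(m, dtype=bool)
    for r in range(1, d + 1):
        for S in itertools.combinations(range(d), r):
            if not keep.any():
                break
            counts = votes[:, list(S)].sum(axis=1)        # per-voter votes for u on S
            center = counts[keep].mean()                  # mean over surviving voters
            keep &= np.abs(counts - center) <= thresh(r)  # drop outliers on S
    return np.flatnonzero(keep)

votes = np.array([[1, 0, 1, 0]] * 8 + [[0, 0, 0, 0]] * 2)  # 8 good voters, 2 extreme
kept = bsr_filter(votes, thresh=lambda r: 1.0)
```

In this toy run the two voters who uniformly vote against $u$ are flagged on a subset where their aggregate deviates strongly, while the eight consistent voters survive.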
3.2.1 Analysis
The detailed analysis can be found in Appendix A.3. In the proof of Rank-Centrality's convergence, the only result that does not hold in the presence of Byzantine voters is their Lemma 3, which bounds the 2-norm of the fluctuation matrix $\Delta$. We show that:
Lemma 6.
For an Erdős-Rényi graph with $p = \Omega\!\left(\frac{\log n}{n}\right)$ and $k$ comparisons per edge, Algorithm 2 removes some voters such that we have:
$$\|\Delta\|_2 = O\!\left(\sqrt{\frac{\log n}{kpn}}\right)$$
with high probability.
Proof Sketch.
Proving that $\|\Delta\|_2$ is bounded with high probability turns out to be equivalent to proving that the maximum row sum $D_u$ is bounded, where $D_u$ is defined as $D_u = \sum_{v} |\Delta_{uv}|$ with $\Delta_{uv} = \tilde{P}_{uv} - P_{uv}$. Since it is a sum of absolute values, it can be upper-bounded by taking a union bound over all $2^d$ sign combinations of the summands.
We initially show that, with high probability, for every subset $S$ the mean (over voters) that we compute in the BSR algorithm is only slightly away from its expectation. This gives us a good check for whether a voter is acting out of the ordinary. The algorithm then filters out voters that are collectively acting out of the ordinary for a particular $S$. We show that at the end of these removals, the number of remaining voters is within a constant factor of the number of good voters, i.e. most of the good voters are retained. Finally, we show an upper bound on the row sum of the absolute values of the deviations of the remaining Byzantine voters, which completes the proof. ∎
Intuitively, the extra terms in Lemma 6 arise by considering the maximum average deviation of the row sum that the Byzantine voters can cause for any subset $S$: a small set of surviving voters can each deviate by up to the filtering threshold, while the remaining voters deviate no more than typical good voters. Summing the two contributions, substituting the parameter values, and choosing an appropriate threshold yields the bound. Finally, using Lemma 6 and a few lemmas from Negahban et al. [2017], we conclude that:
Theorem 7.
Given $n$ objects, let the comparison graph be $G(n, p)$. Each pair in $E$ is compared $k$ times, with the outcomes of the comparisons produced as per the Byzantine-BTL model with weights $w$. Then there exists an algorithm such that, when $p = \Omega\!\left(\frac{\log n}{n}\right)$, $k = \Omega(\log n)$, and the fraction of Byzantine voters is bounded strictly below $\frac{1}{2}$, the following bound on the error
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = O\!\left(\sqrt{\frac{\log n}{kpn}}\right)$$
holds with high probability.
Remark. The BSR algorithm gives a suitable ranking in polynomial time, but is fairly slow, as its complexity is exponential in the maximum degree $d_{\max}$; this is not suitable for larger values of $n$ and $k$.
Remark. Our process of filtering out Byzantine voters is tightly tied to the analysis of the Rank-Centrality algorithm and the procedure to remove voters may not be suitable for other algorithms. The advantage of tying the removal procedure to the ranking algorithm is that we would not remove voters who may vote in an adversarial fashion, but do not significantly affect the row sum of the absolute difference between the empirical and true Markov chain transition matrices. An algorithm that tries to explicitly identify Byzantine voters might end up removing such voters too (and hence their votes for other useful pairs) which might be counter-productive.
3.3 Fast Byzantine Spectral Ranking
While Algorithm 2 performs asymptotically similarly to the Rank-Centrality algorithm, it is fairly slow, requiring time exponential in the degree $d$. We now present the Fast Byzantine Spectral Ranking (FBSR) algorithm for more practical settings. Essentially, in our previous algorithm we ensure that the row sum $\sum_{v} |\Delta_{uv}|$ is upper-bounded, which is expensive because we iterate over all $2^d$ subsets $S \subseteq Q$. This can be improved: we bucket the $d$ entries into $K$ buckets of size $\frac{d}{K}$ each, and upper-bound $\sum_{b=1}^{K} \big|\sum_{v \in b} \Delta_{uv}\big|$ instead. Here, instead of summing up absolute values of individual entries, we only sum $K$ bucket values, so only $2^K$ sign patterns need to be checked.
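The bucketing idea can be sketched as follows (an illustrative helper of ours): replacing the per-entry absolute sum by a sum of bucket-level absolute sums relaxes the quantity being controlled, but shrinks the number of sign patterns to check from $2^d$ to $2^K$:

```python
import numpy as np

def bucketed_checks(delta, num_buckets):
    """Sum of absolute bucket sums: a cheaper surrogate for the full
    absolute row sum sum_v |delta_v|.  Signed deviations that cancel
    within a bucket are no longer counted."""
    d = len(delta)
    buckets = np.array_split(np.arange(d), num_buckets)
    sums = np.array([delta[idx].sum() for idx in buckets])
    return np.abs(sums).sum()

delta = np.array([0.5, -0.5, 0.3, -0.3])
coarse = bucketed_checks(delta, num_buckets=2)   # deviations cancel inside buckets
fine = bucketed_checks(delta, num_buckets=4)     # singleton buckets: full absolute sum
```

With two buckets the signed deviations cancel and the surrogate is $0$, while four singleton buckets recover the full absolute sum $1.6$, illustrating the speed-versus-tightness trade-off behind FBSR.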
Theorem 8.
Given $n$ objects, let the comparison graph be $G(n, p)$. Each pair in $E$ is compared $k$ times, with the outcomes of the comparisons produced as per the Byzantine-BTL model with weights $w$. Then there exists an algorithm such that, when $p = \Omega\!\left(\frac{\log n}{n}\right)$, $k = \Omega(\log n)$, and the fraction of Byzantine voters is bounded strictly below $\frac{1}{2}$, the following bound on the error
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} = O\!\left(\sqrt{\frac{\log n}{kpn}}\right)$$
holds with high probability, with an algorithm that runs in time polynomial in $n$ and $k$.
The detailed analysis can be found in Appendix A.4.
3.4 An Impossibility Result and Optimality of FBSR
The main idea in the BSR algorithm was the estimation of an accurate mean for every subset $S$. However, finding this mean becomes very challenging if the Byzantine voters outnumber the good voters: the Byzantine voters can always create a shifted binomial distribution and trick us into believing that an entirely different value is the mean.
We answer a natural question: is it even possible to find algorithms when the Byzantine fraction crosses $\frac{1}{2}$? We prove a fundamental result showing that for any ranking algorithm to give a satisfactory result, a majority of good voters is always required. Formally, we show that:
Theorem 9.
If the Byzantine voters are at least as numerous as the good voters, then no algorithm can, for all weights $w$, output weights $\tilde{\pi}$ such that
$$\frac{\|\tilde{\pi} - \pi\|}{\|\pi\|} \leq f(n)$$
with probability greater than $\frac{1}{2}$, where $f$ is any function that converges to $0$ as $n$ goes to $\infty$.
Proof Sketch.
We prove this result by contradiction: suppose there is an algorithm which can give a satisfactory ranking for all weights. We consider the case where the good and Byzantine voters are equal in number, as the case where the Byzantine voters are more numerous clearly reduces to it.
We then consider two instances: (1) the good voters vote according to weights $w$ and the Byzantine voters vote according to weights $w'$; (2) the Byzantine voters vote according to $w$ and the good voters vote according to $w'$. We claim that, when the two groups are equal in size, these instances are indistinguishable to any algorithm. Since the algorithm succeeds with probability greater than $\frac{1}{2}$ on any instance, the union bound implies there must be an output $\mu$ that is close to both $\pi$ and $\pi'$. We can then choose $w$ and $w'$ to be far from each other, so that no $\mu$ can be close to both, giving us a contradiction. ∎
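The indistinguishability step in the sketch can be made explicit. If the two voter groups are equal in size and the Byzantine group simply simulates honest BTL voters under the other weight vector, then in either instance a uniformly chosen voter asked to compare $(i, j)$ prefers $i$ with probability

```latex
\Pr[i \succ j] \;=\; \tfrac{1}{2}\,\frac{w_i}{w_i + w_j} \;+\; \tfrac{1}{2}\,\frac{w'_i}{w'_i + w'_j},
```

and swapping which half is Byzantine merely swaps the two terms. Hence the joint distribution of all observed votes is identical in the two instances.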
The detailed proof can be found in Appendix A.5.
Remark. This impossibility result consequently shows that the FBSR algorithm (cf. Theorem 8) is optimal in terms of the Byzantine fraction it can tolerate.
3.5 Comparison with Agarwal et al. [2020]
While the setting of Agarwal et al. [2020] is different from ours, as mentioned in Section 1.2, if we were to consider both works in the Byzantine voter setting, a comparison can be made by treating any edge that has a Byzantine voter as corrupted in their model. Here, to have a valid convergence proof, their algorithm needs the fraction of corrupted edges to be small; this permits only a vanishing Byzantine fraction for sparse graphs and a small constant fraction for denser graphs. Our algorithms can potentially tolerate a significantly higher corruption rate, approaching $\frac{1}{2}$.
Another way to compare our results with Agarwal et al. [2020] is to allocate comparisons so that their model has a single voter per edge and both models see an equal number of comparisons in expectation. We see that their algorithm does not converge in this regime, while our algorithms do. Furthermore, our algorithms can handle a corruption rate approaching $\frac{1}{2}$, while their algorithm handles a much smaller corruption rate. (They also propose an algorithm for denser graphs that can handle a higher corruption rate; however, that algorithm has worse time complexity and requires a denser graph.)
4 Experimental Results
In this section, we confirm our theoretical bounds with experiments conducted on synthetic and real data. While truly worst-case Byzantine strategies may be hard to find and can even be computationally infeasible, we consider several practically realizable strategies. We show the performance of Rank-Centrality against these strategies, supporting our results in Section 3.1. We then show that these strategies are far less effective against the BSR/FBSR algorithms, which give better rankings.
Experimental Settings: For the FBSR algorithm's filtering step to trigger, there must be entries whose deviation can exceed the threshold used in Algorithm 3; otherwise the FBSR algorithm runs exactly like the Rank-Centrality algorithm. While the threshold inequality holds asymptotically, at the moderate problem sizes used here it is satisfied only for very large $n$. However, as our proof was designed to work against any Byzantine strategy, and since the convergence derivation of Negahban et al. [2017] is fairly loose (for example, the union bound over sign combinations used to bound the sum of absolute terms), the FBSR algorithm empirically works well against a variety of strategies even with significantly smaller thresholds. We therefore applied the FBSR algorithm with two modifications (smaller thresholds and bucket sizes) to account for the smaller $n$ and $k$ values. All experimental results in Sections 3 and 4 are means over 10 random trials.
Synthetic Data: We fix the number of objects $n$, the number of voters, the number of comparisons per pair $k$, and draw the weights at random; the weights are then normalized so that $\sum_i w_i = 1$. We compare our algorithm with the Rank-Centrality algorithm on synthetic data. We consider four potential strategies used by the Byzantine voters:
(1) Fixed Order Vote: The Byzantine voters vote according to some random pre-determined order. This is a fairly simple strategy that does not even require the Byzantine voters to know the weights $w$.
(2) Opposite Vote: The Byzantine voters vote exactly the reverse of the weights. We showed in Section 3.1 that this strategy causes a failure for some weights.
(3) Opposite Vote Probabilistic: The Byzantine voters behave like good voters, except that their final answer is flipped: a Byzantine voter votes for object $i$ with probability $\frac{w_j}{w_i + w_j}$ and for object $j$ with probability $\frac{w_i}{w_i + w_j}$.
(4) Random Subset: For each pair, the Byzantine voters either behave like good voters (with probability $\frac{1}{2}$) or cast the opposite vote.
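The four synthetic strategies above can be sketched as a single dispatcher; this is a hedged illustration (the function name and the fixed_rank representation are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def byzantine_vote(i, j, w, strategy, fixed_rank=None):
    """Return 1 if the Byzantine voter votes for i over j, else 0.

    fixed_rank : dict object -> position in the pre-determined order
                 (only needed for the "fixed_order" strategy).
    """
    if strategy == "fixed_order":          # vote along a random pre-set order
        return int(fixed_rank[i] < fixed_rank[j])
    if strategy == "opposite":             # deterministically back the weaker object
        return int(w[i] < w[j])
    if strategy == "opposite_prob":        # BTL vote with the answer flipped
        return int(rng.random() < w[j] / (w[i] + w[j]))
    if strategy == "random_subset":        # behave well or opposite, 50/50 per pair
        if rng.random() < 0.5:
            return int(rng.random() < w[i] / (w[i] + w[j]))
        return int(w[i] < w[j])
    raise ValueError(strategy)

w = [4.0, 1.0]
fixed_rank = {0: 1, 1: 0}   # the pre-determined order ranks object 1 first
```

Only "opposite" and "opposite_prob" need knowledge of the weights; "fixed_order" works entirely without it, which is why it is the cheapest strategy for the adversary.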
Figures 3 and 3 demonstrate the performance of the Rank-Centrality algorithm and the FBSR algorithm against the aforementioned strategies. For Figure 3, we plot the relative $\ell_2$-error against the Byzantine fraction. Similar to our results in Section 3.1, we see that for most strategies the relative $\ell_2$-error of the Rank-Centrality algorithm grows linearly with the Byzantine fraction. The relative $\ell_2$-error for both the Fixed Order Vote and Opposite Vote strategies is fairly high for the Rank-Centrality algorithm, and the Fixed Order Vote dominates the relative $\ell_2$-error for the FBSR algorithm. However, even though the Fixed Order Vote is the best strategy against the FBSR algorithm, the FBSR algorithm still performs around two times better than the Rank-Centrality algorithm across all values of the Byzantine fraction.
We also confirm that our rankings are close to the actual rankings. To do this, we consider Kendall's Tau correlation [Kendall, 1938] between our ranking and the actual ranking. From Figure 3, we see that against the Rank-Centrality algorithm, the Opposite Vote strategy gives very low correlation values: when the Byzantine fraction is large, Kendall's Tau correlation is close to $0$ (essentially a random ranking). We also see that the FBSR algorithm gives better results for all strategies.
To further show the convergence of the algorithms in the Byzantine-BTL model for larger $n$, we plot the relative $\ell_2$-error and Kendall's Tau correlation against the number of objects in Figures 3 and 6 for the Fixed Order Vote strategy, over a range of values of $n$ with the remaining parameters fixed. We see that across a large range of the number of objects, both metrics remain roughly constant for the Rank-Centrality algorithm, supporting our hypothesis. On the other hand, as we increase the number of objects, the $\ell_2$-error of our algorithm gradually falls and its Kendall's Tau correlation gradually increases. The rather slow descent of the relative $\ell_2$-error is backed by Theorem 8, where the relative error decays only slowly with $n$.
Real Data: Experimentation with real data is not straightforward, since most datasets have neither voter labels nor voter queries of the form requested by the BSR algorithm (one voter is always asked multiple queries of the form $(u, q)$, where $u$ is fixed and $q$ varies over $Q$). To get around this issue, we use complete rankings. We consider the Sushi dataset, comprising voters ordering sushis from most preferred to least preferred; given a voter's preference order, we extract a pairwise vote for every pair of sushis. Since we are dealing with permutations here, we assume the Byzantine voters can use the following strategies:
(1) Fixed Order Vote / Opposite Vote: Same as previously defined, except that an entire permutation is given.
(2) Opposite Random Flips: Each Byzantine voter starts with the opposite permutation and then swaps some of the objects at random.
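Deriving pairwise votes from a full ranking, and the Opposite Random Flips strategy, can be sketched as follows (helper names are ours):

```python
import numpy as np

def pairwise_votes_from_ranking(order):
    """Convert a full preference order (object ids, most to least preferred)
    into a dict of pairwise votes: (i, j) -> 1 if i is preferred over j."""
    pos = {obj: r for r, obj in enumerate(order)}
    n = len(order)
    return {(i, j): int(pos[i] < pos[j])
            for i in range(n) for j in range(n) if i != j}

def opposite_with_flips(order, num_swaps, rng):
    """Byzantine strategy sketch: start from the reversed order, then swap
    num_swaps random pairs of positions."""
    rev = list(reversed(order))
    for _ in range(num_swaps):
        a, b = rng.integers(0, len(rev), size=2)
        rev[a], rev[b] = rev[b], rev[a]
    return rev

pair_votes = pairwise_votes_from_ranking([2, 0, 1])     # voter prefers 2 > 0 > 1
byz_order = opposite_with_flips([0, 1, 2], 0, np.random.default_rng(0))
```

Each complete ranking of $s$ objects thus yields $\binom{s}{2}$ consistent pairwise votes, which is how the permutation datasets are fed to the spectral algorithms.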
Since the Sushi dataset has many voters, we do not use the second modification mentioned above. We compute the true weights by applying the Rank-Centrality algorithm directly to all votes (thus the Rank-Centrality algorithm has a significant advantage, especially for smaller Byzantine fractions). Figures 6 and 6 show the performance of the Rank-Centrality algorithm and the BSR algorithm. We see that the BSR algorithm achieves lower relative $\ell_2$-errors and higher Kendall's Tau correlation for most Byzantine fractions and strategies. The comparatively small improvement in Kendall's Tau correlation is attributable to the rather small differences in the weights of the various sushis (5 of the sushis have nearly equal weights).
5 Conclusion and Further Work
We studied the potential disruption that Byzantine voters can cause to the widely-used Rank-Centrality algorithm and proposed fast solutions to counter it. Our algorithms and analysis techniques are quite non-trivial and indicate the need for developing this theory further. We require $k = \Omega(\log n)$ and $p = \Omega\!\left(\frac{\log n}{n}\right)$ for the success of the BSR and FBSR algorithms, whereas Negahban et al. [2017] only require $p = \Omega\!\left(\frac{\log n}{n}\right)$ and a constant number of comparisons per edge. An interesting direction would be to explore whether sub-logarithmic values of $k$ suffice. Alternatively, a lower bound on the number of votes per edge required for an algorithm to succeed with constant probability would also be an interesting direction.
References
- Agarwal et al. [2018] A. Agarwal, P. Patil, and S. Agarwal. Accelerated spectral ranking. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 70–79. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/agarwal18b.html.
- Agarwal et al. [2020] A. Agarwal, S. Agarwal, S. Khanna, and P. Patil. Rank aggregation from pairwise comparisons in the presence of adversarial corruptions. In H. D. III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 85–95. PMLR, 7 2020. URL https://proceedings.mlr.press/v119/agarwal20a.html.
- Baltrunas et al. [2010] L. Baltrunas, T. Makcinskas, and F. Ricci. Group recommendations with rank aggregation and collaborative filtering. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, page 119–126, New York, NY, USA, 2010. Association for Computing Machinery. ISBN 9781605589060. doi: 10.1145/1864708.1864733. URL https://doi.org/10.1145/1864708.1864733.
- Blanchard et al. [2017] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/f4b9ec30ad9f68f89b29639786cb62ef-Paper.pdf.
- Bradley and Terry [1952] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
- Brin and Page [1998] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Conference on World Wide Web 7, WWW7, page 107–117, NLD, 1998. Elsevier Science Publishers B. V.
- Caplin and Nalebuff [1991] A. Caplin and B. Nalebuff. Aggregation and social choice: A mean voter theorem. Econometrica, 59(1):1–23, 1991. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/2938238.
- Chen et al. [2013] X. Chen, P. N. Bennett, K. Collins-Thompson, and E. Horvitz. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, WSDM ’13, page 193–202, New York, NY, USA, 2013. Association for Computing Machinery. ISBN 9781450318693. doi: 10.1145/2433396.2433420. URL https://doi.org/10.1145/2433396.2433420.
- Chen and Suh [2015] Y. Chen and C. Suh. Spectral mle: Top-k rank aggregation from pairwise comparisons. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, page 371–380. JMLR.org, 2015.
- Chen et al. [2018] Y. Chen, L. Su, and J. Xu. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. SIGMETRICS Perform. Eval. Rev., 46(1):96, jun 2018. ISSN 0163-5999. doi: 10.1145/3292040.3219655. URL https://doi.org/10.1145/3292040.3219655.
- Cucuringu [2016] M. Cucuringu. Sync-rank: Robust ranking, constrained ranking and rank aggregation via eigenvector and sdp synchronization. IEEE Transactions on Network Science and Engineering, 3(1):58–79, 2016. doi: 10.1109/TNSE.2016.2523761.
- Dwork et al. [2001] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th International Conference on World Wide Web, WWW ’01, page 613–622, New York, NY, USA, 2001. Association for Computing Machinery. ISBN 1581133480. doi: 10.1145/371920.372165. URL https://doi.org/10.1145/371920.372165.
- El Mhamdi et al. [2018] E. M. El Mhamdi, R. Guerraoui, and S. Rouault. The hidden vulnerability of distributed learning in Byzantium. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3521–3530. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.press/v80/mhamdi18a.html.
- El-Mhamdi et al. [2021] E. M. El-Mhamdi, S. Farhadkhani, R. Guerraoui, A. Guirguis, L.-N. Hoang, and S. Rouault. Collaborative learning in the jungle (decentralized, byzantine, heterogeneous, asynchronous and nonconvex learning). In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 25044–25057. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/d2cd33e9c0236a8c2d8bd3fa91ad3acf-Paper.pdf.
- Ford [1957] L. R. Ford. Solution of a ranking problem from binary comparisons. The American Mathematical Monthly, 64(8):28–33, 1957. ISSN 00029890, 19300972. URL http://www.jstor.org/stable/2308513.
- Kendall [1938] M. G. Kendall. A NEW MEASURE OF RANK CORRELATION. Biometrika, 30(1-2):81–93, 06 1938. ISSN 0006-3444. doi: 10.1093/biomet/30.1-2.81. URL https://doi.org/10.1093/biomet/30.1-2.81.
- Li et al. [2020] J. Li, W. Abbas, and X. Koutsoukos. Byzantine resilient distributed multi-task learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 18215–18225. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/d37eb50d868361ea729bb4147eb3c1d8-Paper.pdf.
- Luce [1959] R. D. Luce. Individual Choice Behavior. John Wiley, 1959.
- Mohajer and Suh [2016] S. Mohajer and C. Suh. Active top-k ranking from noisy comparisons. 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 875–882, 2016.
- Negahban et al. [2017] S. Negahban, S. Oh, and D. Shah. Rank centrality: Ranking from pairwise comparisons. Operations Research, 65(1):266–287, 2017.
- Rajkumar and Agarwal [2014] A. Rajkumar and S. Agarwal. A statistical convergence perspective of algorithms for rank aggregation from pairwise data. In E. P. Xing and T. Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 118–126, Beijing, China, 22–24 Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/rajkumar14.html.
- Soufiani et al. [2014] H. A. Soufiani, D. Parkes, and L. Xia. Computing parametric ranking models via rank-breaking. In E. P. Xing and T. Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 360–368, Beijing, China, Jun 2014. PMLR. URL https://proceedings.mlr.press/v32/soufiani14.html.
- Suh et al. [2017] C. Suh, V. Y. F. Tan, and R. Zhao. Adversarial top-K ranking. IEEE Transactions on Information Theory, 63(4):2201–2225, Apr 2017. ISSN 0018-9448. doi: 10.1109/TIT.2017.2659660. URL https://doi.org/10.1109/TIT.2017.2659660.
- Sun et al. [2021] G. Sun, Y. Cong, J. Dong, Q. Wang, L. Lyu, and J. Liu. Data poisoning attacks on federated machine learning. IEEE Internet of Things Journal, 2021.
- Wu et al. [2021] Y. Wu, H. Chen, X. Wang, C. Liu, P. Nguyen, and Y. Yesha. Tolerating adversarial attacks and byzantine faults in distributed machine learning. In 2021 IEEE International Conference on Big Data (Big Data), pages 3380–3389, 2021. doi: 10.1109/BigData52589.2021.9671583.
Updated Checklist
1. For all authors...
   - (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes]
   - (b) Did you describe the limitations of your work? [Yes]
   - (c) Did you discuss any potential negative societal impacts of your work? [Yes]
   - (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
   - (a) Did you state the full set of assumptions of all theoretical results? [Yes]
   - (b) Did you include complete proofs of all theoretical results? [Yes]
3. If you ran experiments...
   - (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
   - (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
   - (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
   - (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
   - (a) If your work uses existing assets, did you cite the creators? [N/A]
   - (b) Did you mention the license of the assets? [N/A]
   - (c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
   - (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A]
   - (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
   - (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
   - (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
   - (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
Appendix A Appendix
A.1 A primer on Rank-Centrality
A.1.1 Rank-Centrality Convergence
The Rank-Centrality analysis proves the following theorem:
Theorem 10 (From Negahban et al. [2017]).
Given objects and a connected comparison graph , let each pair be compared for times with outcomes produced as per a BTL model with parameters . Then, for some positive constant and when , the following bound on the normalized error holds with probability at least :
where , and
which when extended to Erdős-Rényi random graphs takes the form of
Theorem 11 (From Negahban et al. [2017]).
Given objects, let the comparison graph be generated by selecting each pair to be in with probability independently of everything else. Each such chosen pair of objects is compared times with the outcomes of comparisons produced as per a BTL model with parameters . Then if and , the following bound on the error rate holds with probability at least :
where and
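For concreteness, the procedure these theorems analyse can be sketched in a few lines: build a random walk whose off-diagonal transition probabilities are the empirical win fractions scaled by the maximum degree, add lazy self-loops, and take the stationary distribution as the score vector. This is only an illustrative sketch of the spectral procedure, not the authors' implementation; the function name, the power-iteration solver, and the iteration count are our own choices.

```python
import numpy as np

def rank_centrality(wins, n_iter=5000):
    """Sketch of the Rank-Centrality random walk.

    wins[i, j] = empirical fraction of comparisons between i and j won by j
    (0 where the pair was never compared). Returns the stationary
    distribution, whose entries play the role of the BTL scores.
    """
    n = wins.shape[0]
    compared = (wins + wins.T) > 0
    d_max = compared.sum(axis=1).max()     # maximum degree of the comparison graph
    P = wins / d_max                       # step towards the winner of a random duel
    np.fill_diagonal(P, 0.0)
    P = P + np.diag(1.0 - P.sum(axis=1))   # lazy self-loops keep P row-stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):                # power iteration to the stationary distribution
        pi = pi @ P
    return pi / pi.sum()
```

On a fully compared triple where one object beats both others, the largest stationary mass lands on that object.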
A.1.2 Rank-Centrality Convergence Proof Sketch
The extension from Theorem 10 to Theorem 11 is straightforward and is unaffected by the presence of Byzantine voters. The Rank-Centrality algorithm’s convergence proof for Theorem 10 requires proving three lemmas (Lemmas 2, 3 and 4 of Negahban et al. [2017], restated as Lemmas 12, 13 and 14 below).
Lemma 12 (From Negahban et al. [2017]).
For any Markov chain with a reversible Markov chain , let be the distribution of the Markov chain when started with the initial distribution . Then,
where is the stationary distribution of , , and
The next lemma bounds the norm of the matrix:
Lemma 13 (From Negahban et al. [2017]).
For some constant , the error matrix satisfies
with probability at least .
The next lemma bounds the value of :
Lemma 14 (From Negahban et al. [2017]).
If and if the value of then,
Here we find that Lemmas 12 and 14 hold irrespective of whether there are Byzantine voters. Therefore we only need to find a substitute for Lemma 13. The proof of Lemma 13 is split into two parts; we mainly focus on the first part, where is sufficiently small and therefore is a sparse graph. Proving Lemma 13 for Erdős-Rényi graphs with lower degrees essentially boils down to proving that occurs with very low probability, where . In simpler terms, this implies that, with high probability, the sum of the absolute values of the deviations of the voters’ entries from the expected entries for a particular object is not large.
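The concentration claim above — that the summed absolute deviation of the voters' empirical entries from their expectations stays small with high probability — can be checked numerically. All parameters below (number of neighbours, votes per pair, the uniform range of probabilities) are arbitrary illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# For a fixed object, look at its n compared neighbours, each pair voted on
# k times, and measure the total absolute deviation of the empirical win
# fractions from the true (hypothetical) BTL probabilities.
n, k = 200, 50
p = rng.uniform(0.2, 0.8, size=n)    # hypothetical BTL win probabilities
p_hat = rng.binomial(k, p) / k       # empirical win fractions
total_dev = np.abs(p_hat - p).sum()  # each term is O(1/sqrt(k)), so the sum
                                     # stays far below the worst case of n
```

With k = 50 votes per pair, each term is on the order of 1/√k ≈ 0.14, so the sum concentrates around roughly 0.05·n rather than the worst-case n.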
A.2 Rank-Centrality Failure
Proof.
Negahban et al. [2017] define an inner product space as a space of -dimensional vectors endowed with
Similarly, they define as the 2-norm in and as the operator norm for a self-adjoint operator in . They also relate these norms to the corresponding norms in Euclidean space as
(3)
(4)
Finally Negahban et al. [2017] define the following matrices.
(1) : Diagonal matrix such that .
(2)
(3) : The rank-1 projection of .
(4)
Using the above results Negahban et al. [2017] show that:
(5)
where is the distribution of the Markov chain on after iterations, for large we have . By the definition of , it follows that . Here corresponds to the second-largest eigenvalue of the matrix, which by the Perron-Frobenius Theorem is . We can rearrange Equation 5 to get:
here . Choosing a large enough for which the Markov chain has converged, we get:
Lemma 15.
For the opposite-voting Byzantine strategy, we have
with probability
Proof.
We will first get an upper bound for the denominator. We know that:
These are essentially the maximum row and column sums. Since all of the entries in are the differences between the corresponding entries of and , we know that they will always lie between and . Therefore we can say that both the maximum column sum and the maximum row sum will be at most . By Chernoff’s bound, the probability of having a vertex with degree greater than is . Applying the union bound, we can say that . By setting the value of , we can say with high probability that . Therefore we can conclude that . Hence Proved. ∎
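The degree-concentration step used in this proof — all degrees of the Erdős-Rényi comparison graph cluster around their mean, so the maximum row/column sum scales with the maximum degree rather than with the number of objects — is easy to sanity-check numerically; the graph size and edge probability below are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# Degrees of an Erdos-Renyi graph G(n, p) concentrate around n*p, so the
# maximum row/column sum of any matrix supported on its edges scales with
# the maximum degree.
n, p = 2000, 0.05
upper = rng.random((n, n)) < p
upper = np.triu(upper, 1)   # keep each unordered pair once
adj = upper | upper.T       # symmetric adjacency matrix, no self-loops
deg = adj.sum(axis=1)
```

For these parameters the mean degree is n·p = 100, and all 2000 degrees land within a few standard deviations of it.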
Lemma 16.
For the opposite-voting Byzantine strategy, there exists a such that we have
with probability
Proof.
We now try to get a lower bound on the RHS of Equation 6. We consider weights:
Let us call the resulting vector from as . We can see that for to we have:
Given the above relation for we can say that
(Cauchy-Schwarz)
Since the first two terms are already known, we are primarily interested in getting a high-probability bound on the third term. We can see that we are only interested in entries in the adjacency matrix, each of which has a probability of being an edge. Since , we can conclude there will be at least edges with probability
Furthermore, it is easy to see that a single Byzantine voter amongst the voters will cause a deviation of . On the other hand, any good voter will, in expectation, not cause any deviation. Therefore, in expectation, the deviation for a randomly chosen voter will be . Since we are summing binomial random variables, we can say by the Chernoff bound that the total deviation contributed by the Byzantine voters, out of the voters, will be at least with probability
Therefore we can claim that:
Substituting in the above equation and considering larger we get:
Coming back to our objective of bounding we can say that:
We can also see that the maximum term in can be at most . Therefore we can conclude that will be at most , which for constant values of essentially comes out to .
Therefore we claim that:
with probability ∎
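The opposite-voting strategy analysed in this section is straightforward to simulate. The sketch below generates BTL votes on a fully compared set of objects, lets a fraction of Byzantine voters deterministically back the BTL-disfavoured object of every pair, and scores the result with a compact power-iteration version of Rank-Centrality. The function name and every parameter value are illustrative assumptions of ours; this is not the paper's experimental code.

```python
import numpy as np

def simulate(n=40, k=60, byz_frac=0.0, seed=0):
    """Empirical win matrix under BTL voting with a byz_frac fraction of
    opposite-voting Byzantine voters, followed by Rank-Centrality scoring."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.5, 1.5, size=n)          # hypothetical BTL weights
    wins = np.zeros((n, n))                    # wins[i, j]: fraction won by j
    n_good = round(k * (1 - byz_frac))
    n_byz = k - n_good
    for i in range(n):
        for j in range(i + 1, n):
            p = w[j] / (w[i] + w[j])           # BTL: probability j beats i
            votes_j = rng.binomial(n_good, p)  # good voters are stochastic
            votes_j += n_byz * (p < 0.5)       # Byzantine: always back the underdog
            wins[i, j] = votes_j / k
            wins[j, i] = 1.0 - wins[i, j]
    d_max = n - 1                              # complete comparison graph
    P = wins / d_max
    np.fill_diagonal(P, 0.0)
    P += np.diag(1.0 - P.sum(axis=1))          # lazy self-loops, row-stochastic
    pi = np.full(n, 1.0 / n)
    for _ in range(2000):                      # power iteration
        pi = pi @ P
    return w, pi / pi.sum()
```

With byz_frac = 0 the recovered scores track the true weights closely; raising byz_frac compresses score gaps and flips near-ties, which is exactly the effect the lemma quantifies.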
A.3 BSR Algorithm Analysis
We start off by proving Lemma 6.
Proof.
We initially prove a few lemmas to bound the probability that, for some , a voter will be at a distance of away from the mean.
Lemma 17.
For each good voter and for each , let , where is the vote cast by voter when queried about . Then is at distance away from the expected mean with probability at most .
Proof.
We can directly apply Hoeffding’s inequality as even after being multiplied by the range of still remains between and . Therefore we can conclude that with probability , does not deviate from the mean by more than . ∎
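The tail bound invoked in this proof is the standard two-sided Hoeffding inequality; the small helper below makes its quantitative content explicit, under the assumption that each vote lies in [0, 1] (the exact constants depend on the actual range of the votes in the lemma).

```python
import math

def hoeffding_tail(k, eps):
    """Two-sided Hoeffding bound: the probability that the mean of k
    independent votes in [0, 1] deviates from its expectation by at least
    eps is at most 2 * exp(-2 * k * eps**2)."""
    return 2.0 * math.exp(-2.0 * k * eps ** 2)
```

For example, with k = 100 votes and eps = 0.2 the bound is 2e^{-8} ≈ 6.7 × 10^{-4}, already negligible, and it decays exponentially in k.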
Corollary 18.
When , then we have the probability of being at most away from the mean .
Proof.
Trivially true using Lemma 17. ∎
Lemma 19.
If and then with probability
the predicted mean will be away from the observed mean for all .
Proof.
We define a voter to be in good range if it is away from the mean, otherwise, we say it is in bad range. We now make two claims:
(1) At least fraction of the good voters will be in good range with probability for all values of .
(2) With a probability of we will have , where is the number of Byzantine voters amongst the chosen voters.
Proof of Claim (1): From Corollary 18 we know that a voter will be in good range with probability . Therefore we can claim, by the Chernoff bound (setting and ), that for some the probability of having voters not in good range can be written as . However, for this to be applicable over all values of we need to multiply this by . This gives the statement of Claim (1). It should be noted that for Claim (1) to hold with high probability we need .
Proof of Claim (2): Since , we can apply Hoeffding’s inequality to conclude the statement of Claim (2).
By Claim (1), we can see that there are at least good voters that lie in good range with probability . By Claim (2), we can also see that will be at least with probability . Therefore, by a simple application of the union bound, we can conclude that there are at least good voters in good range with probability . By choosing the value of such that
(8)
we can see that at most good voters will be in good range with high probability if . Therefore we can set to satisfy Equation 8.
Since we know that at least of the voters lie in the good range, we can conclude that the median of the voters’ (good and Byzantine) outputs also lies in the good range. Therefore we can conclude that the sample median of the good voters is at most away from the predicted median.
While we have this result, what we ideally want is a good predictor for the expected mean. We have already shown that the predicted median is at distance from the sample median of the good voters. We will now show that the sample median of the good voters is also very close to the expected mean. For the sample median to be more than away from the expected mean, we would need there to be fewer than of the entries away from the expected mean. However, since we are assuming Claim (1) is in effect, we know that of the entries are in good range with high probability. We can therefore conclude that the predicted median is at most away from the expected mean. Therefore, if we consider entries that are away from the predicted median, this will include the entries that are away from the mean, encapsulating a large fraction of the voters while ensuring that any chosen voter is at most away from the expected mean. ∎
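The median-screening idea of this lemma — anchor on the median of the voters' empirical means, which stays in the good range whenever good voters form a majority, and flag voters far from it — can be sketched as follows. The function name, array layout, and toy tolerance are our own illustrative choices, not the paper's Algorithm 2.

```python
import numpy as np

def screen_voters(votes, delta):
    """votes: (m, k) 0/1 array, votes[v, t] = voter v's t-th vote on one
    fixed pair of objects. When good voters form a majority, the median of
    the per-voter means lies in the good range, so voters far from it are
    safe to flag."""
    means = votes.mean(axis=1)              # each voter's empirical win fraction
    anchor = np.median(means)               # robust against < 50% outliers
    return np.abs(means - anchor) <= delta  # True = keep this voter
```

With seven good voters whose means sit at 0.7 and three Byzantine voters pinned at 0, the median stays at 0.7 and exactly the Byzantine voters are flagged.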
We will now finally prove Lemma 6. As in the Rank-Centrality algorithm’s analysis, we will prove this in two parts. We know that , where is a diagonal matrix consisting only of the diagonal entries of . We can also see that . We now separately show that both and are .
Lemma 20.
Proof.
We will initially show that the non-diagonal matrix is bounded. We know that:
Since is a symmetric matrix, we can simply bound the maximal row sum of the absolute values of . To that end we define
where . Continuing from Equation (22) from Negahban et al. [2017] we get:
(9)
(10)
(11)
We need to show two things:
(1) The number of good voters that get eliminated is at most a constant fraction.
(2) The Byzantine voters that are not eliminated in any round cannot cause the second term to become very large.
The first term in Equation 11 will work out the same way as in the original analysis, up to constant factors, because we will show that only a constant fraction of the good voters can be removed with high probability.
Proof of (1):
We will now show that Line 13 in Algorithm 2 does not eliminate more than a constant fraction of the good voters. This is necessary to show a bound on the good voters’ term in Equation 11.
We will now calculate a bound on the maximum number of good out-of-bound (i.e., away from ) entries that can occur across all . We claim that this number is . This is because, by an argument similar to Claim (1), the Chernoff–Hoeffding bound implies that the probability of having this many out-of-bound entries is:
( and )
( for )
For the last equation to hold we need . In other words, we need ; this can clearly be achieved for all values of for large values of if . However, the probability computed above is for some . For it to be applicable over all we need to be very small, which is satisfied when . Since we have defined , we can see that this always holds.
We see that for a voter to be removed there need to be out-of-bound entries; therefore, when we remove a single voter, we will subsequently be removing 3 Byzantine voters. This ensures that at most good voters can be removed. Therefore, in total after the removal, the number of good voters is . Therefore, using the same equation from Negahban et al. [2017], with probability we can conclude that:
(12)
The above result comes directly from applying Hoeffding’s inequality and is similar to the result in Negahban et al. [2017], as we are only handling good agents here. We can see that for this result to hold with high probability we need , since . Therefore we conclude that:
(13)
Proof of (2): We will now show that, with high probability, the second term also remains small. We will do this in two parts, considering the voters that lie close to the mean and the voters that do not.
We initially only consider the cases where the voters lie within the range from . We can see that for each , the Byzantine entries of can only be at distance away from the mean, or else they cannot fall in the range. Therefore can be at most away from the mean. Therefore all we need to show is that , since . This is equivalent to
(14)
Now all we need to do is consider the cases where the voters do not lie within the range. We can see that we eliminate voters if, for a particular , they have many out-of-bound entries. Therefore we can claim that for any there are only entries that are not in the range, and each can be at most , so the maximum deviation will be at most . Since we know that , we can conclude that the maximum deviation will be . Using an approximate value for we get:
(15)
Therefore the second term also remains small, and we can conclude that:
(16)
The last equation follows from Equation 8. Combining Equations 13, 14 and 15, we get
However, since , we can get rid of the first term to get:
(17)
Combining Equations 12 and 16 with Equation 11, we get:
Since this is a sum of four terms, we analyse them individually. First, we see that the first two terms are clearly small because . For the third term, we see that by Equation 17, making the third term fairly small as well. Finally, the definition of ensures that the last term is small. Since we have to find a value for , we can set to be:
(18)
Since the third term is asymptotically smaller than the first two, we can remove it from consideration. By setting
we get
The Rank-Centrality analysis later takes into account the probability that all degrees lie between and . Therefore we get
Applying the union bound, we know that
(19)
Therefore we can say that:
with probability . Since for values of , we can say
with probability . Hence Proved. ∎
We will now prove that the diagonal entries of are bounded.
Lemma 21.
Proof.
From Equations (15) and (16) we know that
(20)
Therefore we can write
Since in the previous part we had proved a much stronger bound, wherein we had shown that , we can conclude by the triangle inequality that:
Hence Proved. ∎
We now finally prove Theorem 7. Similar to the Rank-Centrality convergence analysis, using the above results together with Lemmas 12 and 14 from Negahban et al. [2017], we can conclude that
where . Similar to Negahban et al. [2017] we can see that for an Erdős-Rényi graph with probability we have and . Substituting we get:
A.4 FBSR Algorithm Analysis
First, since we are dealing with , and since the proof of the Rank-Centrality algorithm already takes into account the probability that all degrees lie between and , we can say that the minimum degree will be at most . Therefore we can say that any bucket will have at least objects and at most objects. We know that:
(21)
Using the same analysis as above we can say that:
Substituting we get
We can also see that will be at most . Substituting in Equation 21 we get:
where is defined as
Using and , we get:
Finally using Equation 19 we get
The other steps are identical to the analysis of the BSR algorithm.
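The bucketing device that distinguishes FBSR from BSR — screening voters once per bucket of objects rather than once per object — starts from a random partition of the objects, which can be sketched as follows; the function name and parameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

def make_buckets(n_objects, bucket_size, seed=0):
    """Randomly partition object indices into buckets of at most
    bucket_size; FBSR-style screening is then run once per bucket instead
    of once per object, cutting the number of screening rounds."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_objects)
    return [order[s:s + bucket_size] for s in range(0, n_objects, bucket_size)]
```

Partitioning 10 objects into buckets of size 4 yields buckets of sizes 4, 4 and 2 that cover every object exactly once.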
A.5 Impossibility result proof
Proof.
We prove the result by contradiction. Suppose there is an algorithm such that for all input weights it returns output weights such that with probability , where converges to . We consider the case where . The cases where can be handled by making the Byzantine voters simply output irrespective of the queries; we can assume that the algorithm is aware of these voters and will therefore always ignore them. Therefore the problem reduces to the case.
Consider an instance (Instance 1) where is defined as follows:
(22)
Let us consider the case where the good voters are numbered and the Byzantine voters are numbered . We can use to obtain a such that with probability . We now outline the strategy that the Byzantine voters can use: the Byzantine voters give their results according to some , where:
(23)
In another instance (Instance 2), we consider the case where the good and Byzantine voters flip their numberings, i.e., the Byzantine voters are numbered from and the good voters are numbered from . Furthermore, let the input weights be and let the Byzantine voters vote according to . Based on the description of our instance and the existence of algorithm , with probability we will get a weight output from such that
(24)
But to any algorithm the above two instances are identical, and therefore with probability :
(25)
By a simple application of the union bound, we can therefore conclude that with non-zero probability both Equations 24 and 25 hold. We see that:
(Triangle Inequality)
(Equations 24 and 25)
(Equations 22 and 23)
For larger , the LHS converges to , while the RHS converges to 0, giving a contradiction. ∎
A.6 Broader Impact
Due to the more theoretical nature of our research, we do not see any potential negative societal impacts associated with the work.
A.7 Experimental Results and Code
We provide the detailed tabular results for the plots in Figures 3 and 6 in Tables 1, 2, 3, 4, 5 and 6, with error bounds for all the obtained results. The code for producing these can be found here.
Table 1
| Strategy | Algo | 0 | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 |
|---|---|---|---|---|---|---|---|---|
| FOV | FBSR | 0.02 ± 0.001 | 0.03 ± 0.002 | 0.04 ± 0.002 | 0.07 ± 0.004 | 0.11 ± 0.008 | 0.15 ± 0.011 | 0.23 ± 0.015 |
| FOV | RC | 0.02 ± 0.002 | 0.07 ± 0.002 | 0.14 ± 0.002 | 0.2 ± 0.005 | 0.27 ± 0.006 | 0.33 ± 0.008 | 0.4 ± 0.01 |
| OV | FBSR | 0.02 ± 0.002 | 0.02 ± 0.001 | 0.02 ± 0.001 | 0.02 ± 0.002 | 0.03 ± 0.002 | 0.04 ± 0.005 | 0.06 ± 0.007 |
| OV | RC | 0.02 ± 0.001 | 0.09 ± 0.002 | 0.17 ± 0.005 | 0.24 ± 0.004 | 0.31 ± 0.005 | 0.38 ± 0.007 | 0.46 ± 0.006 |
| OVP | FBSR | 0.02 ± 0.001 | 0.02 ± 0.001 | 0.03 ± 0.002 | 0.05 ± 0.002 | 0.07 ± 0.004 | 0.09 ± 0.004 | 0.13 ± 0.004 |
| OVP | RC | 0.02 ± 0.001 | 0.06 ± 0.003 | 0.1 ± 0.008 | 0.15 ± 0.01 | 0.2 ± 0.006 | 0.24 ± 0.01 | 0.29 ± 0.015 |
| RS | FBSR | 0.02 ± 0.001 | 0.02 ± 0.001 | 0.03 ± 0.002 | 0.04 ± 0.002 | 0.06 ± 0.004 | 0.09 ± 0.003 | 0.12 ± 0.004 |
| RS | RC | 0.02 ± 0.001 | 0.05 ± 0.002 | 0.09 ± 0.004 | 0.13 ± 0.005 | 0.17 ± 0.002 | 0.21 ± 0.005 | 0.25 ± 0.006 |
Table 2
| Strategy | Algo | 0 | 0.05 | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 |
|---|---|---|---|---|---|---|---|---|
| FOV | FBSR | 0.98 ± 0.003 | 0.97 ± 0.003 | 0.96 ± 0.003 | 0.92 ± 0.007 | 0.89 ± 0.012 | 0.85 ± 0.015 | 0.8 ± 0.018 |
| FOV | RC | 0.97 ± 0.003 | 0.92 ± 0.006 | 0.85 ± 0.01 | 0.77 ± 0.016 | 0.7 ± 0.014 | 0.62 ± 0.027 | 0.55 ± 0.026 |
| OV | FBSR | 0.98 ± 0.002 | 0.98 ± 0.002 | 0.98 ± 0.003 | 0.97 ± 0.002 | 0.97 ± 0.004 | 0.97 ± 0.004 | 0.96 ± 0.005 |
| OV | RC | 0.97 ± 0.002 | 0.97 ± 0.002 | 0.94 ± 0.021 | 0.89 ± 0.018 | 0.72 ± 0.052 | 0.49 ± 0.079 | 0.15 ± 0.107 |
| OVP | FBSR | 0.98 ± 0.003 | 0.98 ± 0.002 | 0.98 ± 0.003 | 0.97 ± 0.003 | 0.97 ± 0.002 | 0.97 ± 0.004 | 0.96 ± 0.004 |
| OVP | RC | 0.98 ± 0.002 | 0.97 ± 0.002 | 0.97 ± 0.004 | 0.96 ± 0.004 | 0.96 ± 0.003 | 0.95 ± 0.005 | 0.93 ± 0.005 |
| RS | FBSR | 0.98 ± 0.002 | 0.98 ± 0.002 | 0.97 ± 0.003 | 0.97 ± 0.004 | 0.97 ± 0.004 | 0.96 ± 0.003 | 0.95 ± 0.009 |
| RS | RC | 0.97 ± 0.002 | 0.97 ± 0.003 | 0.96 ± 0.006 | 0.94 ± 0.004 | 0.92 ± 0.008 | 0.88 ± 0.016 | 0.83 ± 0.027 |
Table 3
| Algo | BF | 50 | 90 | 130 | 170 | 210 | 250 |
|---|---|---|---|---|---|---|---|
| FBSR | 0.1 | 0.063 ± 0.009 | 0.046 ± 0.005 | 0.042 ± 0.003 | 0.039 ± 0.005 | 0.037 ± 0.004 | 0.034 ± 0.003 |
| FBSR | 0.2 | 0.133 ± 0.019 | 0.123 ± 0.011 | 0.106 ± 0.011 | 0.099 ± 0.008 | 0.1 ± 0.009 | 0.094 ± 0.006 |
| RC | 0.1 | 0.14 ± 0.011 | 0.14 ± 0.005 | 0.14 ± 0.005 | 0.14 ± 0.005 | 0.14 ± 0.007 | 0.14 ± 0.002 |
| RC | 0.2 | 0.27 ± 0.013 | 0.27 ± 0.005 | 0.26 ± 0.01 | 0.27 ± 0.006 | 0.27 ± 0.007 | 0.27 ± 0.005 |
Table 4
| Algo | BF | 50 | 90 | 130 | 170 | 210 | 250 |
|---|---|---|---|---|---|---|---|
| FBSR | 0.1 | 0.936 ± 0.008 | 0.949 ± 0.008 | 0.952 ± 0.008 | 0.958 ± 0.007 | 0.958 ± 0.004 | 0.963 ± 0.004 |
| FBSR | 0.2 | 0.867 ± 0.021 | 0.892 ± 0.017 | 0.899 ± 0.007 | 0.898 ± 0.014 | 0.902 ± 0.01 | 0.908 ± 0.008 |
| RC | 0.1 | 0.83 ± 0.017 | 0.85 ± 0.02 | 0.85 ± 0.012 | 0.85 ± 0.014 | 0.84 ± 0.015 | 0.84 ± 0.005 |
| RC | 0.2 | 0.67 ± 0.047 | 0.67 ± 0.027 | 0.69 ± 0.019 | 0.69 ± 0.021 | 0.69 ± 0.023 | 0.69 ± 0.01 |
Table 5
| Strategy | Algo | 0 | 0.05 | 0.1 | 0.15 | 0.2 |
|---|---|---|---|---|---|---|
| FOV | BSR | 0.0 ± 0.0 | 0.063 ± 0.013 | 0.102 ± 0.041 | 0.207 ± 0.062 | 0.267 ± 0.049 |
| FOV | RC | 0.0 ± 0.0 | 0.07 ± 0.019 | 0.15 ± 0.043 | 0.2 ± 0.047 | 0.24 ± 0.042 |
| OV | BSR | 0.0 ± 0.0 | 0.08 ± 0.0 | 0.076 ± 0.0 | 0.126 ± 0.0 | 0.169 ± 0.0 |
| OV | RC | 0.0 ± 0.0 | 0.11 ± 0.0 | 0.21 ± 0.0 | 0.29 ± 0.0 | 0.37 ± 0.0 |
| ORF | BSR | 0.0 ± 0.0 | 0.135 ± 0.002 | 0.067 ± 0.001 | 0.085 ± 0.002 | 0.117 ± 0.006 |
| ORF | RC | 0.0 ± 0.0 | 0.09 ± 0.003 | 0.16 ± 0.002 | 0.23 ± 0.003 | 0.29 ± 0.003 |
Table 6
| Strategy | Algo | 0 | 0.05 | 0.1 | 0.15 | 0.2 |
|---|---|---|---|---|---|---|
| FOV | BSR | 1.0 ± 0.0 | 0.942 ± 0.049 | 0.844 ± 0.116 | 0.742 ± 0.067 | 0.653 ± 0.084 |
| FOV | RC | 1.0 ± 0.0 | 0.92 ± 0.062 | 0.88 ± 0.056 | 0.8 ± 0.069 | 0.69 ± 0.153 |
| OV | BSR | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.956 ± 0.0 | 0.822 ± 0.0 | 0.511 ± 0.0 |
| OV | RC | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.96 ± 0.0 | 0.82 ± 0.0 | 0.38 ± 0.0 |
| ORF | BSR | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.991 ± 0.013 | 0.969 ± 0.018 | 0.893 ± 0.04 |
| ORF | RC | 1.0 ± 0.0 | 1.0 ± 0.0 | 0.96 ± 0.0 | 0.96 ± 0.013 | 0.85 ± 0.022 |