Decomposable Probability-of-Success Metrics in Algorithmic Search
Abstract
Previous studies have used a specific success metric within an algorithmic search framework to prove machine learning impossibility results. However, this specific success metric prevents us from applying these results to other forms of machine learning, e.g., transfer learning. To address this issue, we define decomposable metrics, a category of success metrics for search problems that can be expressed as a linear operation on a probability distribution. Using an arbitrary decomposable metric to measure the success of a search, we prove theorems which bound success in various ways, generalizing several existing results in the literature.
1 INTRODUCTION
Many machine learning tasks, such as classification, regression and clustering, can be reduced to search problems [Montañez, 2017]. Through this reduction, one can apply concepts from information theory to derive results about machine learning. To compare the success of different algorithms, or the expected probability of finding a desired element, Montañez defined a metric of success that averaged the probability of success over all iterations of an algorithm [Montañez, 2017]. While this metric has many applications, it is not appropriate for cases where the probability of success for a given iteration of an algorithm is required. An example of this is transfer learning, where the probability of success at the final step of the algorithm is more relevant than the average probability of success.
Building on this work, we define decomposability as a property of probability-of-success metrics and show that the expected per-query probability of success [Montañez, 2017] and more general probability-of-success metrics are decomposable. We then show that the results previously proven for the expected per-query probability of success hold for all decomposable probability-of-success metrics. Under this generalization, we can prove results related to the probability of success for specific iterations of a search rather than just uniformly averaged over the entire search, giving the results much broader applicability.
2 RELATED WORK
Several decades ago, Mitchell proposed that classification could be viewed as search, and reduced the problem of learning generalizations to a search problem within a hypothesis space [Mitchell, 1980, Mitchell, 1982]. Montañez subsequently expanded this idea into a formal search framework [Montañez, 2017].
Montañez showed that for a given algorithm with a fixed information resource, favorable target sets, or the target sets on which the algorithm would perform better than uniform random sampling, are rare. He did this by proving that the proportion of $b$-bit favorable problems has an exponentially decaying restrictive bound [Montañez, 2017]. He further showed that this scarcity of favorable problems exists even for $k$-sparse target sets.
Montañez et al. later defined bias, the degree to which an algorithm is predisposed to a fixed target, with respect to the expected per-query probability of success metric, and proved that there were a limited number of favorable information resources for a given bias [Montañez et al., 2019]. Using the search framework, they proved that an algorithm cannot be favorably biased towards many distinct targets simultaneously.
As machine learning grew in prominence, researchers began to probe what was possible within machine learning. Valiant considered learnability of a task as the ability to generate a program for performing the task without explicit programming of the task [Valiant, 1984]. By restricting the tasks to a specific context, Valiant demonstrated a set of tasks which were provably learnable.
Schaffer provided an early foundation for the idea of bounding the universal performance of an algorithm [Schaffer, 1994]. Schaffer analyzed generalization performance, the ability of a learner to classify objects outside of its training set, in a classification task. Using a baseline of uniform sampling from the classifiers, he showed that, over the set of all learning situations, a learner's generalization performance sums to zero, which makes generalization performance a conserved quantity.
Wolpert and Macready demonstrated that the historical performance of a deterministic optimization algorithm provides no a priori justification whatsoever for its continued use over any other alternative going forward [Wolpert and Macready, 1997], implying that there is no rational basis for preferring a thus-far better algorithm over its alternatives. Furthermore, just as there does not exist a single algorithm that performs better than random on all possible optimization problems, they proved that there also does not exist an optimization problem on which all algorithms perform better than average.
Continuing the application of prior knowledge to learning and optimization, Gülçehre and Bengio showed that the worse-than-chance performance of certain machine learning algorithms can be improved through learning with hints, namely, guidance using a curriculum [Gülçehre and Bengio, 2016]. So, while Wolpert’s results might make certain tasks seem futile and infeasible, Gülçehre’s empirical results show that there exist some alternate means through which we can utilize prior knowledge to get better results in both learning and optimization.
Others have worked towards meaningful bounds on algorithmic success through different approaches. Sterkenburg approached this concept from the perspective of Putnam, who originally claimed that a universal learning machine is impossible through the use of a diagonalization argument [Sterkenburg, 2019]. Sterkenburg follows up on this claim, attempting to find a universal inductive rule by exploring a measure which cannot be diagonalized. Even when attempting to evade Putnam’s original diagonalization, Sterkenburg is able to apply a new diagonalization that reinforces Putnam’s original claim of the impossibility of a universal learning machine.
There has also been work on proving learning bounds for specific problems. Kumagai and Kanamori analyzed the theoretical bounds of parameter transfer algorithms and self-taught learning [Kumagai and Kanamori, 2019]. By looking at the local stability, or the degree to which a feature is affected by shifting parameters, they developed a definition for parameter transfer learnability, which describes the probability of effective transfer.
2.1 Distinctions from Prior Work
The expected per-query probability of success metric previously defined in the algorithmic search framework [Montañez, 2017] tells us, for a given information resource, algorithm, and target set, how often (in expectation) our algorithm will successfully locate elements of the target set. While this metric is useful when making general claims about the performance of an algorithm or the favorability of an algorithm and information resource to the target set, it lacks the specificity to make claims about similar performance and favorability on a per-iteration basis. This trade-off calls for a more general metric that can be used to make both general and specific (per iteration) claims. For instance, in transfer learning tasks, the performance and favorability of the last pre-transfer iteration is more relevant than the overall expected per-query probability of success. The general probability of success, which we will define as a particular decomposable probability-of-success metric, is a tool through which we can make claims at such specific and relevant steps.
3 BACKGROUND
In this section, we will present definitions for the main framework that we will use throughout this paper.
3.1 The Search Framework
Montañez describes a framework which formalizes search problems in order to analyze search and learning algorithms [Montañez, 2017]. There are three components to a search problem. The first is the finite discrete search space, $\Omega$, which is the set of elements to be examined. Next is the target set, $T$, which is a nonempty subset of the search space that we are trying to find. Finally, we have an external information resource, $F$, which provides an evaluation of elements of the search space. Typically, there is a tight relationship between the target set and the external information resource, as the resource is expected to lead to or describe the target set in some way, such as the target set being elements which meet a certain threshold under the external information resource.
Within the framework, we have an iterative algorithm which seeks to find elements of the target set, shown in Figure 1. The algorithm is a black-box that has access to a search history and produces a probability distribution over the search space. At each step, the algorithm samples over the search space using the probability distribution, evaluates that element using the information resource, adds the result to the search history, and determines the next probability distribution. The abstraction of finding the next probability distribution as a black-box algorithm allows the search framework to work with all types of search problems.
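As a minimal sketch of this loop (our own illustration, not code from the original paper), the following Python mirrors Figure 1; `next_distribution` stands in for the black-box algorithm and `info_resource` for the external information resource, both hypothetical names:

```python
import random

def run_search(omega, info_resource, next_distribution, num_steps):
    """Run the iterative search loop of the algorithmic search framework.

    omega             -- list of search-space elements
    info_resource     -- function evaluating a sampled element (external resource F)
    next_distribution -- black box: maps the search history to a list of
                         probability weights over omega
    """
    history = []
    distributions = []  # the sequence P~ of distributions, one per query
    for _ in range(num_steps):
        P = next_distribution(history)                       # black box produces next distribution
        omega_i = random.choices(omega, weights=P, k=1)[0]   # sample the search space using P
        history.append((omega_i, info_resource(omega_i)))    # evaluate and record in history
        distributions.append(P)
    return distributions, history
```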

The ML-as-search framework is valuable because it provides a structure to understand and reason about different machine learning problems within the same formalism. For example, we can understand regression as a search through a space of possible regression functions, or parameter estimation as a search through possible vectors for a black-box process [Montañez, 2017]. Therefore, we can apply results about search problems to any machine learning problem we can cast into the search framework.
3.2 Expected Per-Query Probability of Success
In order to compare search algorithms, Montañez defined the expected per-query probability of success,
$$q(t, f) = \mathbb{E}_{\tilde{P}, H}\left[\frac{1}{|\tilde{P}|}\sum_{i=1}^{|\tilde{P}|} P_i(\omega \in t) \,\middle|\, f\right], \quad (3.1)$$
where $\tilde{P}$ is the sequence of probability distributions generated by the black box, $H$ is the search history, and $t$ and $f$ are the target set and information resource of the search problem, respectively [Montañez, 2017]. This metric of success is particularly useful because it can be shown that $q(t, f) = \mathbf{t}^{\top}\overline{P}_f$, where $\overline{P}_f$ is the average of the vector representation of the probability distribution from the search algorithm at each step, conditioned on an information resource $f$.
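The identity $q(t, f) = \mathbf{t}^{\top}\overline{P}_f$ can be checked numerically for a single realized sequence of distributions, where it holds by linearity. A small sketch of our own, with made-up distribution values:

```python
import numpy as np

# Vector representations of the distributions P_1, ..., P_|P~| produced
# during one search over a 4-element space (made-up values).
P_seq = np.array([
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.40, 0.40, 0.10],
    [0.05, 0.45, 0.45, 0.05],
])
t = np.array([0, 1, 1, 0])  # target function: elements 1 and 2 form the target set

# Per-query success averaged over the sequence ...
per_query = np.mean([P @ t for P in P_seq])
# ... equals t^T P-bar, where P-bar is the averaged distribution.
P_bar = P_seq.mean(axis=0)
assert np.isclose(per_query, t @ P_bar)
print(per_query)  # 0.7333...
```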
Measuring success using the expected per-query probability of success, Montañez demonstrated bounds on the success of any search algorithm [Montañez, 2017]. The Famine of Forte states that for a given algorithm, the proportion of target-information resource pairs yielding a success level above a given threshold is inversely related to the threshold. Thus, the greater the threshold for success, the fewer problems one can be successful on, regardless of the algorithm. The expected per-query probability of success can also be used to prove a version of the No Free Lunch theorems, demonstrating that all algorithms perform the same when averaged over all target sets and information resources, as is done in Theorem 1 of the current manuscript.
3.3 Bias
Using the search framework, Montañez defined a measure of bias between a distribution over information resources and a fixed target [Montañez et al., 2019]. For a distribution $\mathcal{D}$ over a collection of possible information resources $\mathcal{B}$, with $F \sim \mathcal{D}$, and a fixed $k$-hot target $\mathbf{t}$ ($k$-hot vectors are binary vectors of length $|\Omega|$ with exactly $k$ ones), the bias between the distribution and the target is defined as
$$\mathrm{Bias}(\mathcal{D}, \mathbf{t}) = \mathbb{E}_{\mathcal{D}}\left[\mathbf{t}^{\top}\overline{P}_F\right] - \frac{\|\mathbf{t}\|^2}{|\Omega|} \quad (3.2)$$
$$= \mathbf{t}^{\top}\,\mathbb{E}_{\mathcal{D}}\left[\overline{P}_F\right] - \frac{k}{|\Omega|} \quad (3.3)$$
$$= \mathbf{t}^{\top}\int_{\mathcal{B}} \overline{P}_f\,\mathcal{D}(f)\,\mathrm{d}f - \frac{k}{|\Omega|}. \quad (3.4)$$
Recall from above that $\overline{P}_F$ is the averaged probability distribution over $\Omega$ from a search, conditioned on the information resource $F$.
The bias term measures the performance of an algorithm in expectation (over a given distribution of information resources) compared to uniform sampling. Mathematically, this is computed by taking the difference between the expected value of the average performance of an algorithm and the performance of uniform sampling. The distribution $\mathcal{D}$ captures what information resources (e.g., datasets) one is likely to encounter.
For a non-mathematical example of the effect of bias, suppose we are searching for a parking space within a parking lot. If we randomly choose parking spaces to check, we are searching without bias. However, if we consider the location of the parking spaces, we may find that parking spaces furthest from the entrance are usually free, and could find an open parking space with a higher probability. Here, the information resource telling us the distance of each parking space from the entrance, together with our belief that parking spaces further from the entrance tend to be open, creates a distribution over possible parking spaces, favoring those that are further away to be checked first.
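A minimal numerical sketch of the bias computation (our own toy example, with made-up values): the bias is the expected performance of the algorithm's averaged distributions under a distribution over information resources, minus the uniform-sampling baseline $k/|\Omega|$.

```python
import numpy as np

# Averaged distributions P-bar_f for three possible information resources
# (made-up values), over a search space of size 4.
P_bar = {
    "f1": np.array([0.40, 0.30, 0.20, 0.10]),
    "f2": np.array([0.10, 0.20, 0.30, 0.40]),
    "f3": np.array([0.25, 0.25, 0.25, 0.25]),
}
D = {"f1": 0.5, "f2": 0.25, "f3": 0.25}   # distribution over information resources
t = np.array([1, 0, 0, 0])                # 1-hot target function

expected_perf = sum(D[f] * (t @ P_bar[f]) for f in D)
bias = expected_perf - t.sum() / len(t)   # subtract k/|Omega| = 1/4
print(bias)  # 0.5*0.4 + 0.25*0.1 + 0.25*0.25 - 0.25 = 0.0375
```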
4 PRELIMINARIES
In this section, we introduce a new property of success metrics called decomposability, which allows us to generalize concepts of success and bias. We provide a number of preliminary lemmata, with full proofs given in the Appendix.
4.1 Decomposability
We now give a formal definition for a decomposable probability-of-success metric, which will be used throughout the rest of the paper.
Definition 4.1.
A probability-of-success metric $\phi$ is decomposable if and only if there exists a $P_{\phi}$ such that
$$\phi(t, f) = \mathbf{t}^{\top} P_{\phi, f} = P_{\phi}(\omega \in t \mid f), \quad (4.1)$$
where $P_{\phi, f}$ is not a function of $t$, being conditionally independent of it given $f$.
As we stated previously, what makes the expected per-query probability of success particularly useful is that it can be represented as a linear function of a probability distribution. This definition allows us to reference any probability-of-success metric having this property.
As a first example, we show that the expected per-query probability of success is a decomposable probability-of-success metric.
Lemma 4.2 (Decomposability of the Expected Per-Query Probability of Success).
The expected per-query probability of success is decomposable, namely,
$$q(t, f) = \mathbf{t}^{\top}\overline{P}_f = \overline{P}(\omega \in t \mid f). \quad (4.2)$$
Our goal is to show that the theorems proved for the expected per-query probability of success hold for all decomposable metrics. Showing that the expected per-query probability of success is decomposable suggests that these theorems may be generalizable to any metrics sharing that property.
4.1.1 The General Probability of Success
While the expected per-query probability of success averages the probability of success over each of the queries in a search history, we may care more about a specific query in the search history, e.g., the final query of a sequence. Thus, we can generalize the expected per-query probability of success by replacing the averaging with an arbitrary distribution over the probability distributions in the search history. We define the General Probability of Success as
$$q_{\mathcal{D}}(t, f) = \mathbb{E}_{\tilde{P}, H}\left[\sum_{i=1}^{|\tilde{P}|} \mathcal{D}_i\, P_i(\omega \in t) \,\middle|\, f\right], \quad (4.3)$$
where each $P_i$ is a valid probability distribution on the search space and $\mathcal{D}_i$ is the weight allocated to the $i$th probability distribution in our sequence. This formula allows us to consider a wide variety of success metrics as being instances of the general probability of success metric. For example, the expected per-query probability of success is equivalent to setting $\mathcal{D}_i = \frac{1}{|\tilde{P}|}$ for all $i$, with $q_{\mathcal{D}}(t, f) = q(t, f)$. Similarly, a metric of success which only cares about the final query can be represented by letting $\mathcal{D}_i = \mathbf{1}[i = |\tilde{P}|]$, where $|\tilde{P}|$ is the length of the sequence of queries, so that the corresponding decomposed vector is the average of the distributions from the $|\tilde{P}|$-th iteration of our search.
It should be noted that $\mathcal{D}$ within the expectation will be random, being defined over the random number of steps within $\tilde{P}$. Our operative definition of the distribution, however, will allow us to generate the corresponding distribution for the needed number of steps, such as when we place all mass on the $|\tilde{P}|$-th (final) iteration of the search. With a slight abuse of notation, we thus let $\mathcal{D}$ signify both the process by which the distribution is generated as well as the particular distribution produced for a given number of steps.
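The following sketch (ours, with made-up distributions) contrasts two weighting schemes $\mathcal{D}$ for the same realized sequence: uniform weights recover the per-query average, while placing all mass on the final step scores only the last query.

```python
import numpy as np

P_seq = np.array([            # realized sequence of distributions (made up)
    [0.25, 0.25, 0.25, 0.25],
    [0.10, 0.40, 0.40, 0.10],
    [0.05, 0.45, 0.45, 0.05],
])
t = np.array([0, 1, 1, 0])    # target function

def general_success(P_seq, t, weights):
    """q_D for one realized sequence: sum_i D_i * P_i(omega in t)."""
    return sum(w * (P @ t) for w, P in zip(weights, P_seq))

n = len(P_seq)
uniform = [1.0 / n] * n                  # D_i = 1/|P~|: expected per-query metric
final_only = [0.0] * (n - 1) + [1.0]    # all mass on the final query

print(general_success(P_seq, t, uniform))     # ~0.733
print(general_success(P_seq, t, final_only))  # 0.9
```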
As the general probability of success provides a layer of abstraction above the expected per-query probability of success, if we prove that results about the expected per-query probability of success also hold for the general probability of success, we gain a more powerful tool set. To do so, we must first demonstrate that the general probability of success is a decomposable probability-of-success metric.
Lemma 4.3 (Decomposability of the General Probability of Success Metric).
The general probability of success is decomposable, namely,
$$q_{\mathcal{D}}(t, f) = \mathbf{t}^{\top} P_{q_{\mathcal{D}}, f} = P_{q_{\mathcal{D}}}(\omega \in t \mid f). \quad (4.4)$$
These lemmata allow us to apply later theorems about decomposable metrics to these two useful metrics. Given a metric of interest, performing a similar proof of decomposability will allow for the application of the subsequent theorems.
Lemma 4.4 (Decomposability closed under expectation).
Given a set $S$ of decomposable probability-of-success metrics and a distribution $\mathcal{W}$ over $S$, it holds that
$$\phi(t, f) = \mathbb{E}_{\phi' \sim \mathcal{W}}\left[\phi'(t, f)\right] \quad (4.5)$$
is also a decomposable probability-of-success metric.
Lemma 4.4 gives us an easy way to construct a new decomposable metric from a set of known decomposable metrics. Note that not every success metric is decomposable; we can create non-decomposable success metrics by taking non-convex combinations of decomposable probability-of-success metrics.
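As a sketch (ours) of why convexity matters here: a convex combination of the decomposed vectors $P_{\phi, f}$ is again a probability vector, while a non-convex combination with weights summing to one need not be, so the resulting "metric" can leave $[0, 1]$.

```python
import numpy as np

P_phi1 = np.array([0.7, 0.1, 0.1, 0.1])  # decomposed vectors of two metrics (made up)
P_phi2 = np.array([0.1, 0.1, 0.1, 0.7])

convex = 0.5 * P_phi1 + 0.5 * P_phi2      # nonnegative weights summing to 1
print(convex, convex.sum())               # still a valid probability vector

non_convex = 2.0 * P_phi1 - 1.0 * P_phi2  # weights sum to 1 but are not a distribution
print(non_convex, non_convex.sum())       # sums to 1 but has a negative entry
```

For instance, with target function $\mathbf{t} = (1, 0, 0, 0)$ the non-convex combination yields $\mathbf{t}^{\top}P = 1.3$, which is not a probability, so the combined metric cannot be a probability-of-success metric.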
4.2 Generalization of Bias
Our definition of decomposability allows us to redefine bias in terms of any decomposable metric, $\phi$. We replace $\overline{P}_F$ with $P_{\phi, F}$ and obtain
$$\mathrm{Bias}_{\phi}(\mathcal{D}, \mathbf{t}) = \mathbb{E}_{\mathcal{D}}\left[\mathbf{t}^{\top} P_{\phi, F}\right] - \frac{\|\mathbf{t}\|^2}{|\Omega|} \quad (4.6)$$
$$= \mathbf{t}^{\top}\,\mathbb{E}_{\mathcal{D}}\left[P_{\phi, F}\right] - \frac{k}{|\Omega|} \quad (4.7)$$
$$= \mathbf{t}^{\top}\int_{\mathcal{B}} P_{\phi, f}\,\mathcal{D}(f)\,\mathrm{d}f - \frac{k}{|\Omega|}. \quad (4.8)$$
Because $\phi$ is decomposable, $\mathbf{t}^{\top} P_{\phi, F}$ is equal to $\phi(t, F)$. This makes results about the bias particularly interesting, since they relate directly to any probability-of-success metric we create, so long as the metric is decomposable.
5 RESULTS
Montañez proved a number of results and bounds on the success of machine learning algorithms relative to the expected per-query probability of success, along with its corresponding definition of bias [Montañez, 2017, Montañez et al., 2019]. We now generalize these to apply to any decomposable probability-of-success metric, with full proofs given in the Appendix (available online).
5.1 No Free Lunch for Search
First, we prove a version of the No Free Lunch Theorems for any decomposable probability-of-success metric within the search framework.
Theorem 1 (No Free Lunch for Search and Machine Learning).
For any pair of search/learning algorithms $\mathcal{A}_1, \mathcal{A}_2$ operating on a discrete finite search space $\Omega$, any set of target sets $\tau$ closed under permutation, any set of information resources $\mathcal{B}$, and any decomposable probability-of-success metric $\phi$,
$$\sum_{t \in \tau}\sum_{f \in \mathcal{B}} \phi_{\mathcal{A}_1}(t, f) = \sum_{t \in \tau}\sum_{f \in \mathcal{B}} \phi_{\mathcal{A}_2}(t, f). \quad (5.1)$$
This means that performance, in terms of our decomposable probability-of-success metric, is conserved in the sense that increased performance of one algorithm over another on some information resource-target pair comes at the cost of a loss in performance elsewhere.
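This conservation can be checked numerically. In the sketch below (ours), each "algorithm" is represented directly by made-up decomposed vectors $P_{\phi, f}$, one per information resource; summing $\mathbf{t}^{\top} P_{\phi, f}$ over all size-$k$ targets (a permutation-closed set) gives the same total for both.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k, num_resources = 5, 2, 3

def random_alg():
    """One decomposed vector P_phi,f per information resource (made up)."""
    P = rng.random((num_resources, n))
    return P / P.sum(axis=1, keepdims=True)

alg1, alg2 = random_alg(), random_alg()
targets = [np.bincount(c, minlength=n)                     # all k-hot target functions:
           for c in itertools.combinations(range(n), k)]   # a permutation-closed set

def total(alg):
    return sum(t @ P for t in targets for P in alg)

print(total(alg1), total(alg2))  # equal: C(n-1, k-1) * num_resources = 4 * 3 = 12
```

The totals agree because each element of $\Omega$ appears in the same number of targets, so the sum depends only on the number of resources, not on the algorithm.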
5.2 The Fraction of Favorable Targets
Montañez proved that for a fixed information resource, a given algorithm will perform favorably relative to uniform random sampling on only a few target sets, under the expected per-query probability of success [Montañez, 2017]. We generalize this result with a decomposable probability-of-success metric and define a version of active information of expectations for decomposable metrics, $I_{\phi} = -\log_2 \frac{p}{\phi(t, f)}$. This transforms the ratio of success probabilities into bits, where $p = \frac{|t|}{|\Omega|}$ is the per-query probability of success for uniform random sampling with replacement. $I_{\phi}$ denotes the advantage $\phi$ has over uniform random sampling with replacement, in bits.
Theorem 2 (The Fraction of Favorable Targets).
Let $\tau = \{t \mid t \subseteq \Omega\}$, and let $\tau_b = \{t \mid t \in \tau, I_{\phi}(t, f) \geq b\}$ for a given decomposable probability-of-success metric $\phi$. Then for $b \geq 3$,
$$\frac{|\tau_b|}{|\tau|} \leq 2^{-b}. \quad (5.2)$$
Thus, the scarcity of $b$-bit favorable targets still holds under any decomposable probability-of-success metric.
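As an empirical sketch (ours) of this scarcity: enumerate every nonempty target set in a small space, compute $I_{\phi} = \log_2(\phi(t, f)/p)$ against a fixed, made-up decomposed vector, and compare the fraction achieving at least $b$ bits with the $2^{-b}$ bound.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(1)
n = 12
P_phi = rng.random(n)
P_phi /= P_phi.sum()          # decomposed vector for a fixed f (made up)

counts = {b: 0 for b in (3, 4, 5)}
num_targets = 0
for r in range(1, n + 1):
    for c in itertools.combinations(range(n), r):
        t = np.zeros(n)
        t[list(c)] = 1
        num_targets += 1
        phi, p = t @ P_phi, r / n
        I_phi = math.log2(phi / p)        # active information of expectations
        for b in counts:
            counts[b] += int(I_phi >= b)

for b, c in counts.items():
    print(b, c / num_targets, 2.0 ** -b)  # observed fraction vs. the 2^-b bound
```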
5.3 The Famine of Favorable Targets
Following up on the previous result, we can show a similar bound in terms of the success of a given algorithm, for targets of a fixed size.
Theorem 3 (The Famine of Favorable Targets).
For fixed $k \in \mathbb{N}$, fixed information resource $f$, and decomposable probability-of-success metric $\phi$, define
$$\tau = \{T \mid T \subseteq \Omega, |T| = k\}, \quad \text{and} \quad \tau_{q_{\min}} = \{T \mid T \in \tau, \phi(T, f) \geq q_{\min}\}.$$
Then,
$$\frac{|\tau_{q_{\min}}|}{|\tau|} \leq \frac{p}{q_{\min}}, \quad (5.3)$$
where $p = \frac{k}{|\Omega|}$.
Here, we compare success not against uniform sampling but against a fixed constant $q_{\min}$. This theorem thus upper bounds the number of targets for which the probability of success of the search is at least $q_{\min}$.
5.4 Famine of Forte
We generalize the Famine of Forte [Montañez, 2017], showing a bound that holds in the -sparse case using any decomposable probability-of-success metric.
Theorem 4 (The Famine of Forte).
Define
$$\tau_k = \{T \mid T \subseteq \Omega, |T| = k\}$$
and let $\mathcal{B}_m$ denote any set of binary strings, such that the strings are of length $m$ or less. Let
$$R = \{(T, F) \mid T \in \tau_k, F \in \mathcal{B}_m\}, \quad \text{and}$$
$$R_{q_{\min}} = \{(T, F) \mid T \in \tau_k, F \in \mathcal{B}_m, \phi(T, F) \geq q_{\min}\},$$
where $\phi(T, F)$ is the decomposable probability-of-success metric for algorithm $\mathcal{A}$ on problem $(\Omega, T, F)$. Then for any $m \in \mathbb{N}$,
$$\frac{|R_{q_{\min}}|}{|R|} \leq \frac{p}{q_{\min}}, \quad (5.4)$$
where $p = \frac{k}{|\Omega|}$.
This demonstrates that for any decomposable metric there is an upper bound on the proportion of problems an algorithm is successful on. Here, we measure success as being above a certain threshold with respect to a decomposable metric, and the upper bound is inversely related to this threshold.
5.5 Learning Under Dependence
While the previous theorems highlight cases where an algorithm is unlikely to succeed, we now consider the conditions that make an algorithm likely to succeed. To begin, we consider how the target and information resource can influence an algorithm’s success by generalizing the Learning Under Dependence theorem [Montañez, 2017].
Theorem 5 (Learning Under Dependence).
Define $\tau_k = \{T \mid T \subseteq \Omega, |T| = k\}$ and let $\mathcal{B}_m$ denote any set of binary strings (information resources), such that the strings are of length $m$ or less. Define $q$ as the expected decomposable probability of success under the joint distribution on $T \in \tau_k$ and $F \in \mathcal{B}_m$ for any fixed algorithm $\mathcal{A}$, namely,
$$q := \mathbb{E}_{T, F}\left[\phi(T, F)\right] = \Pr(\omega \in T),$$
where $\omega$ is distributed according to $P_{\phi, F}$. Then,
$$q \leq \frac{I(T; F) + D(P_T \,\|\, \mathcal{U}_T) + 1}{I_{\Omega}}, \quad (5.5)$$
where $I_{\Omega} = -\log_2 \frac{k}{|\Omega|}$, $D(P_T \,\|\, \mathcal{U}_T)$ is the Kullback-Leibler divergence between the marginal distribution on $T$ and the uniform distribution on $T$, and $I(T; F)$ is the mutual information. Alternatively, we can write
$$q \leq \frac{H(\mathcal{U}_T) - H(T \mid F) + 1}{I_{\Omega}}, \quad (5.6)$$
where $H(\mathcal{U}_T) = \log_2 \binom{|\Omega|}{k}$.
The value of $q$ defined here represents the expected single-query probability of success of an algorithm relative to a randomly selected target and information resource, distributed according to some joint distribution. The probability of success for a single query (marginalized over information resources) is equivalent to the expectation of the conditional probability of success, conditioned on the random information resource. Upper bounding this value shows that, regardless of the choice of decomposable probability-of-success metric, the probability of success depends on the amount of information regarding the target contained within the information resource, as measured by the mutual information.
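A numerical sketch (ours) of the quantities in the bound, for a made-up joint distribution over a handful of targets and information resources standing in for $\tau_k$ and $\mathcal{B}_m$:

```python
import math
import numpy as np

# Toy joint distribution over 3 targets (rows) and 2 information resources (cols).
joint = np.array([[0.30, 0.05],
                  [0.05, 0.30],
                  [0.15, 0.15]])
p_T, p_F = joint.sum(axis=1), joint.sum(axis=0)

# Mutual information I(T;F) in bits.
I_TF = sum(joint[i, j] * math.log2(joint[i, j] / (p_T[i] * p_F[j]))
           for i in range(3) for j in range(2))
# KL divergence of the target marginal from uniform over the 3 targets.
D_KL = sum(p * math.log2(p / (1 / 3)) for p in p_T)

omega_size, k = 16, 2
I_Omega = -math.log2(k / omega_size)          # I_Omega = -log2(k/|Omega|)
bound = (I_TF + D_KL + 1) / I_Omega
print(I_TF, D_KL, bound)  # any decomposable phi must satisfy q <= bound
```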
5.6 Famine of Favorable Information Resources
We now demonstrate the effect of the general bias term defined earlier on the probability of a success of an algorithm. We begin with a generalization of the Famine of Favorable Information Resources [Montañez et al., 2019].
Theorem 6 (Famine of Favorable Information Resources).
Let $\mathcal{B}$ be a finite set of information resources and let $t \subseteq \Omega$ be an arbitrary fixed $k$-size target set with corresponding target function $\mathbf{t}$. Define
$$\mathcal{B}_{q_{\min}} = \{f \mid f \in \mathcal{B}, \phi(t, f) \geq q_{\min}\},$$
where $\phi(t, f)$ is an arbitrary decomposable probability-of-success metric for algorithm $\mathcal{A}$ on search problem $(\Omega, t, f)$, and $q_{\min} \in (0, 1]$ represents the minimally acceptable probability of success. Then,
$$\frac{|\mathcal{B}_{q_{\min}}|}{|\mathcal{B}|} \leq \frac{p + \mathrm{Bias}(\mathcal{B}, \mathbf{t})}{q_{\min}}, \quad (5.7)$$
where $p = \frac{k}{|\Omega|}$ and $\mathrm{Bias}(\mathcal{B}, \mathbf{t})$ denotes the bias under the uniform distribution over $\mathcal{B}$.
This result demonstrates the mathematical effect of bias, of which we have previously provided one hypothetical example (car parking). Now, we can show that the bias of our expected information resources towards the target upper bounds the probability of a given information resource leading to a successful search.
5.7 Futility of Bias-Free Search
We can also use our definition of bias to generalize the Futility of Bias-Free Search [Montañez, 2017], which demonstrates the inability of an algorithm to perform better than uniform random sampling without bias, defined with respect to the expected per-query probability of success. Our generalization proves that the theorem holds for bias defined with respect to any decomposable probability-of-success metric.
Theorem 7 (Futility of Bias-Free Search).
For any fixed algorithm $\mathcal{A}$, fixed target $t \subseteq \Omega$ with corresponding target function $\mathbf{t}$, and distribution over information resources $\mathcal{D}$, if $\mathrm{Bias}(\mathcal{D}, \mathbf{t}) = 0$, then
$$\Pr(\omega \in t; \mathcal{A}) = p, \quad (5.8)$$
where $\Pr(\omega \in t; \mathcal{A})$ represents the expected decomposable probability of successfully sampling an element of $t$ using $\mathcal{A}$, marginalized over information resources $F \sim \mathcal{D}$, and $p = \frac{k}{|\Omega|}$ is the single-query probability of success under uniform random sampling.
This result demonstrates that, regardless of how we measure the success of an algorithm with respect to a decomposable metric, it cannot perform better than uniform random sampling without bias.
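The following sketch (ours, with made-up vectors) illustrates the collapse: the decomposed distributions differ per information resource, but because the bias is zero, the marginal probability of success equals the uniform baseline $p$.

```python
import numpy as np

t = np.array([1, 0, 0, 0])   # 1-hot target; p = k/|Omega| = 0.25
# Decomposed vectors for two equally likely information resources (made up):
# one favors the target, the other disfavors it by the same amount.
P_f1 = np.array([0.40, 0.20, 0.20, 0.20])
P_f2 = np.array([0.10, 0.30, 0.30, 0.30])
D = [0.5, 0.5]               # distribution over the two resources

marginal_success = D[0] * (t @ P_f1) + D[1] * (t @ P_f2)
bias = marginal_success - 0.25
print(bias)              # 0.0: the distribution over resources is unbiased
print(marginal_success)  # 0.25 = p, no better than uniform random sampling
```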
5.8 Famine of Favorable Biasing Distributions
Montañez proved that the percentage of minimally favorable distributions (biased over some threshold towards some specific target) is inversely proportional to the threshold value and directly proportional to the bias between the information resource and target function [Montañez, 2017]. We will show that this scarcity of favorable biasing distributions holds, in general, for bias under any decomposable probability-of-success metric.
Theorem 8 (Famine of Favorable Biasing Distributions).
Given a fixed target function $\mathbf{t}$, a finite set of information resources $\mathcal{B}$, and the set $\mathcal{P} = \{\mathcal{D} \mid \mathcal{D} \in \mathbb{R}^{|\mathcal{B}|}, \sum_{f} \mathcal{D}_f = 1\}$ of all discrete $|\mathcal{B}|$-dimensional simplex vectors (each defining a distribution over information resources),
$$\frac{\mu(\mathcal{G}_{\mathbf{t}, q_{\min}})}{\mu(\mathcal{P})} \leq \frac{p + \mathrm{Bias}(\mathcal{B}, \mathbf{t})}{p + q_{\min}}, \quad (5.9)$$
where $\mathcal{G}_{\mathbf{t}, q_{\min}} = \{\mathcal{D} \mid \mathcal{D} \in \mathcal{P}, \mathrm{Bias}(\mathcal{D}, \mathbf{t}) \geq q_{\min}\}$, $p = \frac{k}{|\Omega|}$, and $\mu$ is Lebesgue measure.
This result shows that the more bias there is between our set of information resources and the target function $\mathbf{t}$, the easier it is to find a minimally favorable distribution, and the higher the threshold for what qualifies as a minimally favorable distribution, the harder such distributions are to find. Thus, unless we suppose that we begin with a set of information resources already favorable towards our fixed target, finding a highly favorable distribution is difficult.
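As a Monte Carlo sketch (ours, with made-up per-resource success values): sampling simplex vectors uniformly via a flat Dirichlet distribution and measuring how often the induced bias clears the threshold $q_{\min}$, compared against the bound of Theorem 8.

```python
import numpy as np

rng = np.random.default_rng(2)
num_resources, n, k = 4, 8, 2
p = k / n

# Per-resource success values t^T P_phi,f (made up), one per information resource.
perf = rng.random(num_resources) * 0.6
avg_bias = perf.mean() - p                 # Bias(B, t): bias under the uniform distribution

q_min = 0.15
trials = 200_000
D = rng.dirichlet(np.ones(num_resources), size=trials)   # uniform on the simplex
favorable = np.mean(D @ perf - p >= q_min)  # fraction with Bias(D, t) >= q_min

bound = (p + avg_bias) / (p + q_min)
print(favorable, bound)                     # observed measure vs. theorem bound
```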
6 CONCLUSION
Casting machine learning problems as search provides a common formalism within which to prove bounds and impossibility results for a wide variety of learning algorithms and tasks. In this paper, we introduce a property of probability-of-success metrics called decomposability, and show that the expected per-query probability of success and the general probability of success are decomposable. To demonstrate the value of this property, we prove that a number of existing algorithmic search framework results continue to hold for all decomposable probability-of-success metrics. These results provide a number of useful insights: algorithmic performance is conserved with respect to all decomposable probability-of-success metrics, favorable targets are scarce under any decomposable probability-of-success metric, and without the generalized bias defined here, an algorithm cannot perform better than uniform random sampling.
The goal of this work is to offer additional machinery within the search framework, allowing for more general application. Concretely, we can develop decomposable probability-of-success metrics for problems concerned with the state of an algorithm at specific steps, and leverage existing results as a foundation for additional insight into those problems.
ACKNOWLEDGEMENTS
This work was supported by the Walter Bradley Center for Natural and Artificial Intelligence. We thank Dr. Robert J. Marks II (Baylor University) for providing support and feedback. We also thank Harvey Mudd College’s Department of Computer Science for their continued resources and support.
REFERENCES
- Fano and Hawkins, 1961 Fano, R. M. and Hawkins, D. (1961). Transmission of information: A statistical theory of communications. American Journal of Physics, 29(11):793–794.
- Gülçehre and Bengio, 2016 Gülçehre, Ç. and Bengio, Y. (2016). Knowledge matters: Importance of prior information for optimization. The Journal of Machine Learning Research, 17(1):226–257.
- Kumagai and Kanamori, 2019 Kumagai, W. and Kanamori, T. (2019). Risk bound of transfer learning using parametric feature mapping and its application to sparse coding. Machine Learning, 108:1975–2008.
- Mitchell, 1980 Mitchell, T. M. (1980). The need for biases in learning generalizations. Technical report, Computer Science Department, Rutgers University, New Brunswick, NJ.
- Mitchell, 1982 Mitchell, T. M. (1982). Generalization as Search. Artificial Intelligence, 18(2):203–226.
- Montañez, 2017 Montañez, G. D. (2017). The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm. In Systems, Man, and Cybernetics (SMC), 2017 IEEE International Conference on, pages 477–482. IEEE.
- Montañez, 2017 Montañez, G. D. (2017). Why Machine Learning Works. PhD thesis, Carnegie Mellon University.
- Montañez et al., 2019 Montañez, G. D., Hayase, J., Lauw, J., Macias, D., Trikha, A., and Vendemiatti, J. (2019). The Futility of Bias-Free Learning and Search. CoRR, abs/1907.06010.
- Sauer, 1972 Sauer, N. (1972). On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145–147.
- Schaffer, 1994 Schaffer, C. (1994). A Conservation Law for Generalization Performance. Machine Learning Proceedings 1994, 1:259–265.
- Sterkenburg, 2019 Sterkenburg, T. F. (2019). Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine. Erkenntnis, 84(3):633–656.
- Valiant, 1984 Valiant, L. (1984). A Theory of the Learnable. Communications of the ACM, 27:1134–1142.
- Wolpert and Macready, 1997 Wolpert, D. H. and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Trans. Evolutionary Computation, 1:67–82.
7 APPENDIX
Lemma 7.1, Lemma 7.2, Lemma 7.3, and Lemma 7.4 with their proofs are taken from [Montañez, 2017], with Lemma 7.4 being adapted for decomposable probability-of-success metrics.
7.1 Lemmata
Lemma 7.1 (Sauer-Shelah Inequality).
For $1 \leq k \leq n$, $\sum_{i=0}^{k}\binom{n}{i} \leq \left(\frac{en}{k}\right)^{k}$.
Proof.
Lemma 7.2 (Binomial Approximation).
for and .
Proof.
By the condition , we have
which implies . Therefore,
using the condition , which implies
Thus,
where the penultimate inequality follows from the Sauer-Shelah inequality [Sauer, 1972]. Dividing through by gives the desired result. ∎
Lemma 7.3 (Maximum Number of Satisfying Vectors).
Given an integer $n$, a set $S = \{\mathbf{s} \mid \mathbf{s} \in \{0, 1\}^{n}, \|\mathbf{s}\|_1 = k\}$ of all $n$-length $k$-hot binary vectors, a set $\mathcal{P}$ of discrete $n$-dimensional simplex vectors, and a fixed scalar threshold $\epsilon \in (0, 1]$, then for any fixed $\mathbf{P} \in \mathcal{P}$,
$$\frac{|\{\mathbf{s} \mid \mathbf{s} \in S, \mathbf{s}^{\top}\mathbf{P} \geq \epsilon\}|}{|S|} \leq \frac{k}{n\epsilon},$$
where $\mathbf{s}^{\top}\mathbf{P}$ denotes the vector dot product between $\mathbf{s}$ and $\mathbf{P}$.
Proof.
For $\epsilon \leq \frac{k}{n}$, the bound holds trivially. For $\epsilon > \frac{k}{n}$, let $\mathbf{s}$ be a random quantity that takes values uniformly in the set $S$. Then, for any fixed $\mathbf{P} \in \mathcal{P}$,
$$\frac{|\{\mathbf{s} \mid \mathbf{s} \in S, \mathbf{s}^{\top}\mathbf{P} \geq \epsilon\}|}{|S|} = \Pr\left(\mathbf{s}^{\top}\mathbf{P} \geq \epsilon\right).$$
Let $\mathbf{1}$ denote the all-ones vector. Under a uniform distribution on the random quantity $\mathbf{s}$, and because $\mathbf{P}$ does not change with respect to $\mathbf{s}$, we have
$$\mathbb{E}\left[\mathbf{s}^{\top}\mathbf{P}\right] = \mathbb{E}[\mathbf{s}]^{\top}\mathbf{P} = \frac{k}{n}\,\mathbf{1}^{\top}\mathbf{P} = \frac{k}{n},$$
since $\mathbf{P}$ must sum to $1$.
Noting that $\mathbf{s}^{\top}\mathbf{P} \geq 0$, we use Markov's inequality to obtain
$$\Pr\left(\mathbf{s}^{\top}\mathbf{P} \geq \epsilon\right) \leq \frac{\mathbb{E}\left[\mathbf{s}^{\top}\mathbf{P}\right]}{\epsilon} = \frac{k}{n\epsilon}.$$
∎
Lemma 7.4.
If $\phi$ is a decomposable probability-of-success metric, then $\Pr(\omega \in T) = \mathbb{E}_{T, F}\left[\phi(T, F)\right]$, where $\omega$ is distributed according to $P_{\phi, F}$.
Proof.
$\Pr(\omega \in T)$ is the probability that the random variable $\omega$ is in target $T$ over all values of $T$ and $F$, for random $\omega$ and for $(T, F)$ drawn from their joint distribution. Then,
$$\Pr(\omega \in T) = \mathbb{E}_{T, F}\left[\Pr(\omega \in T \mid T, F)\right] = \mathbb{E}_{T, F}\left[P_{\phi}(\omega \in T \mid F)\right] = \mathbb{E}_{T, F}\left[\phi(T, F)\right],$$
where the first equality makes use of the law of iterated expectation, the second follows from the conditional independence of $\omega$ and $T$ given $F$, and the final equality follows from the definition of decomposability. ∎
Proof of Lemma 4.2.
By definition,
$$q(t, f) = \mathbb{E}_{\tilde{P}, H}\left[\frac{1}{|\tilde{P}|}\sum_{i=1}^{|\tilde{P}|} P_i(\omega \in t) \,\middle|\, f\right] = \mathbf{t}^{\top}\, \mathbb{E}_{\tilde{P}, H}\left[\frac{1}{|\tilde{P}|}\sum_{i=1}^{|\tilde{P}|} P_i \,\middle|\, f\right] = \mathbf{t}^{\top}\overline{P}_f.$$
∎
Proof of Lemma 4.3.
Observe that
$$q_{\mathcal{D}}(t, f) = \mathbb{E}_{\tilde{P}, H}\left[\sum_{i=1}^{|\tilde{P}|} \mathcal{D}_i\, P_i(\omega \in t) \,\middle|\, f\right] = \mathbf{t}^{\top}\, \mathbb{E}_{\tilde{P}, H}\left[\sum_{i=1}^{|\tilde{P}|} \mathcal{D}_i\, P_i \,\middle|\, f\right] = \mathbf{t}^{\top} P_{q_{\mathcal{D}}, f}.$$
Since $\mathcal{D}$ is a probability distribution, $P_{q_{\mathcal{D}}, f}$ is an expectation of convex combinations of probability distributions, and is therefore itself a probability distribution that is not a function of $t$.
∎
Proof of Lemma 4.4.
By decomposability of each $\phi' \in S$ and linearity of expectation,
$$\phi(t, f) = \mathbb{E}_{\phi' \sim \mathcal{W}}\left[\phi'(t, f)\right] = \mathbb{E}_{\phi' \sim \mathcal{W}}\left[\mathbf{t}^{\top} P_{\phi', f}\right] = \mathbf{t}^{\top}\, \mathbb{E}_{\phi' \sim \mathcal{W}}\left[P_{\phi', f}\right].$$
The vector $\mathbb{E}_{\phi' \sim \mathcal{W}}\left[P_{\phi', f}\right]$ is a convex combination of probability distributions, and is therefore itself a probability distribution that is not a function of $t$. Thus $\phi$ is decomposable.
∎
7.2 Proofs of Theorems
Proof of Theorem 1.
Note that the closed-under-permutation condition implies $\sum_{t \in \tau} \mathbf{t} = c\,\mathbf{1}$ for some constant $c$. Thus,
$$\sum_{t \in \tau}\sum_{f \in \mathcal{B}} \phi_{\mathcal{A}_1}(t, f) = \sum_{f \in \mathcal{B}}\left(\sum_{t \in \tau} \mathbf{t}\right)^{\top} P_{\phi, f, \mathcal{A}_1} = c\sum_{f \in \mathcal{B}} \mathbf{1}^{\top} P_{\phi, f, \mathcal{A}_1} = c\,|\mathcal{B}| = c\sum_{f \in \mathcal{B}} \mathbf{1}^{\top} P_{\phi, f, \mathcal{A}_2} = \sum_{t \in \tau}\sum_{f \in \mathcal{B}} \phi_{\mathcal{A}_2}(t, f),$$
where the first and final equalities follow from the definition of decomposability. ∎
Proof of Theorem 2.
First, by the definition of active information of expectations, $I_{\phi} \geq b$ implies that $\phi(t, f) \geq p\,2^{b}$, since $I_{\phi} = \log_2 \frac{\phi(t, f)}{p}$. Thus,
(7.1) |
For $b \geq 3$ and $p > 2^{-b}$, we would have $\phi(t, f) > 1$ for all elements of $\tau_b$ (making the set empty) and the theorem follows immediately. Thus, we assume $p \leq 2^{-b}$ for the remainder.
Proof of Theorem 3.
Let $\tau_{q_{\min}} = \{T \mid T \in \tau, \phi(T, f) \geq q_{\min}\}$. For brevity, we will allow $\mathbf{T}$ to also denote its corresponding target set, letting the context make clear whether the target set or target function is meant. Then, for $T$ drawn uniformly from $\tau$,
$$\frac{|\tau_{q_{\min}}|}{|\tau|} = \Pr\left(\phi(T, f) \geq q_{\min}\right) \leq \frac{\mathbb{E}_{T}\left[\phi(T, f)\right]}{q_{\min}},$$
where the final step follows from Markov's inequality. By decomposability of $\phi$ and linearity of expectation, we have
$$\mathbb{E}_{T}\left[\phi(T, f)\right] = \mathbb{E}_{T}\left[\mathbf{T}^{\top} P_{\phi, f}\right] = \mathbb{E}_{T}\left[\mathbf{T}\right]^{\top} P_{\phi, f} = \frac{k}{|\Omega|}\,\mathbf{1}^{\top} P_{\phi, f} = \frac{k}{|\Omega|} = p.$$
∎
Proof of Theorem 4.
We begin by defining the set of all $|\Omega|$-length target functions with exactly $k$ ones, namely, $\tau_k$. As in Theorem 3, we again allow $\mathbf{t}$ to also denote its corresponding target set. For each of these targets, we have $|\mathcal{B}_m|$ information resources. The total number of search problems is therefore
$$|\tau_k|\,|\mathcal{B}_m|. \quad (7.6)$$
We seek to bound the proportion of possible search problems for which $\phi(t, f) \geq q_{\min}$ for any threshold $q_{\min}$. Thus,
$$\frac{|R_{q_{\min}}|}{|R|} = \frac{1}{|\mathcal{B}_m|}\sum_{f \in \mathcal{B}_m} \frac{|\{t \mid t \in \tau_k, \phi(t, f) \geq q_{\min}\}|}{|\tau_k|} \quad (7.7)$$
$$\leq \frac{|\{t \mid t \in \tau_k, \phi(t, f^*) \geq q_{\min}\}|}{|\tau_k|}, \quad (7.8)$$
where $f^*$ denotes the arg sup of the expression. Therefore,
$$\frac{|R_{q_{\min}}|}{|R|} \leq \frac{|\{t \mid t \in \tau_k, \mathbf{t}^{\top} P_{\phi, f^*} \geq q_{\min}\}|}{|\tau_k|},$$
where the equality follows from decomposability of $\phi$, and $P_{\phi, f^*}$ represents the $|\Omega|$-length probability vector defined by $P_{\phi}(\cdot \mid f^*)$. By Lemma 7.3, we have
$$\frac{|\{t \mid t \in \tau_k, \mathbf{t}^{\top} P_{\phi, f^*} \geq q_{\min}\}|}{|\tau_k|} \leq \frac{k}{|\Omega|\,q_{\min}} = \frac{p}{q_{\min}}, \quad (7.9)$$
proving the result for finite information resources.
∎
Proof of Theorem 5.
This proof loosely follows that of Fano’s Inequality [Fano and Hawkins, 1961], being a reversed generalization of it. Let . Using the chain rule for entropy to expand in two different ways, we get
By definition, , and by the data processing inequality . Thus,
Define . Then,
We let , being the entropy of the uniform distribution over -sparse target sets in . Therefore,
Using the definitions of conditional entropy and , we get
which implies
Examining , we see it captures how much entropy of is due to the randomness of . We upper bound this by its maximum value of 1 and obtain
and substitute for to obtain the first result, noting that specifies a proper probability distribution by the linearity and boundedness of the expectation. To obtain the second form, use the definitions and . ∎
Proof of Theorem 6.
We seek to bound the proportion of successful search problems for which $\phi(t, f) \geq q_{\min}$ for any threshold $q_{\min}$. Let $F$ be drawn uniformly from $\mathcal{B}$. Then,
$$\frac{|\mathcal{B}_{q_{\min}}|}{|\mathcal{B}|} = \Pr\left(\phi(t, F) \geq q_{\min}\right).$$
By decomposability, we have
$$\Pr\left(\phi(t, F) \geq q_{\min}\right) = \Pr\left(\mathbf{t}^{\top} P_{\phi, F} \geq q_{\min}\right).$$
Applying Markov's inequality and the definition of bias, we obtain
$$\Pr\left(\mathbf{t}^{\top} P_{\phi, F} \geq q_{\min}\right) \leq \frac{\mathbb{E}_{F}\left[\mathbf{t}^{\top} P_{\phi, F}\right]}{q_{\min}} = \frac{\mathrm{Bias}(\mathcal{B}, \mathbf{t}) + \frac{k}{|\Omega|}}{q_{\min}} = \frac{p + \mathrm{Bias}(\mathcal{B}, \mathbf{t})}{q_{\min}}.$$
∎
Proof of Theorem 7.
Let $\mathcal{B}$ be the space of possible information resources. Then,
$$\Pr(\omega \in t; \mathcal{A}) = \mathbb{E}_{\mathcal{D}}\left[\Pr(\omega \in t \mid F)\right].$$
Since we are considering the decomposable probability of success for algorithm $\mathcal{A}$ on $t$ using information resource $F$, we have
$$\Pr(\omega \in t \mid F) = \mathbf{t}^{\top} P_{\phi, F}.$$
Also note that $\mathbb{E}_{\mathcal{D}}\left[\mathbf{t}^{\top} P_{\phi, F}\right] = \mathrm{Bias}(\mathcal{D}, \mathbf{t}) + \frac{k}{|\Omega|}$, by the definition of bias. Making these substitutions, we obtain
$$\Pr(\omega \in t; \mathcal{A}) = \mathrm{Bias}(\mathcal{D}, \mathbf{t}) + \frac{k}{|\Omega|} = 0 + p = p.$$
∎
Proof of Theorem 8.
This result follows from the proof of the Famine of Favorable Biasing Distributions given by Montañez et al. [Montañez et al., 2019], but using the generalized form of bias instead. No other changes to the proof are needed. ∎